This title appears in the Scientific Report 2018
Detecting disaster before it strikes: On the challenges of automated building and testing in HPC environments
Personal Name(s): Feld, Christian (Corresponding author); Geimer, Markus; Hermanns, Marc-André; Saviankou, Pavel; Visser, Anke; Mohr, Bernd
Contributing Institute: Jülich Supercomputing Center; JSC
Imprint: 2018
Conference: 12th International Parallel Tools Workshop, Stuttgart (Germany), 2018-09-17 - 2018-09-18
Document Type: Talk (non-conference)
Research Program: Computational Science and Mathematical Methods
JuSER Publication Portal
Abstract: Software reliability is one of the cornerstones of any successful user experience. Software needs to build up users' trust in its fitness for a specific purpose. Software failures undermine this trust and add to user frustration that will ultimately lead users to abandon the software. Even beyond user expectations of the robustness of a software package, today's scientific software is more than a temporary research prototype; it also forms the bedrock for successful scientific research in the future. A well-defined software engineering process that includes automated builds and tests is a key enabler for keeping software reliable in an agile scientific environment and should be of vital interest to any scientific software development team. While automated builds and deployment as well as systematic software testing have become common practice when developing software in industry, these techniques are rarely used for scientific software, including tools. Potential reasons are that (1) in contrast to computer scientists, domain scientists from other fields usually never get exposed to such techniques during their training, (2) building up the necessary infrastructure is often considered overhead that distracts from the real science, (3) interdisciplinary research teams are still rare, and (4) high-performance computing systems and their programming environments are less standardized, so that published recipes often cannot be applied without heavy modification. In this work, we present the various challenges we encountered while setting up an automated building and testing infrastructure for the Score-P, Scalasca, and Cube projects. We outline our current approaches, alternatives that have been considered, and the remaining open issues that still need to be addressed. We also present early results regarding the benefits of using this infrastructure, which should result in better software quality and thus, ultimately, an improved user experience.
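The abstract's core idea of an automated build-and-test pipeline can be illustrated with a minimal sketch. This is not the actual Score-P/Scalasca/Cube infrastructure; the step names and commands are placeholder assumptions standing in for the usual configure/build/test sequence of an HPC tools project.

```python
# Minimal sketch of an automated build-and-test driver (hypothetical;
# not the pipeline described in the talk). Each step is a shell command;
# the pipeline stops at the first failure, since later steps depend on
# the artifacts produced by earlier ones.
import subprocess


def run_pipeline(steps):
    """Run each (name, command) step in order; stop at the first failure.

    Returns a list of (name, returncode) results for the steps that ran.
    """
    results = []
    for name, cmd in steps:
        proc = subprocess.run(cmd, shell=True,
                              capture_output=True, text=True)
        results.append((name, proc.returncode))
        if proc.returncode != 0:
            break  # abort: subsequent steps would be meaningless
    return results


if __name__ == "__main__":
    # Stand-ins for the typical ./configure && make && make check cycle.
    steps = [
        ("configure", "true"),
        ("build",     "true"),
        ("test",      "true"),
    ]
    for name, rc in run_pipeline(steps):
        print(f"{name}: {'ok' if rc == 0 else 'FAILED'}")
```

In a real HPC setting, each step would additionally select a compiler/MPI module environment, and the driver would run nightly on each target machine, reporting results to a dashboard.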