This title appears in the Scientific Report 2023. Please use the identifier http://dx.doi.org/10.5281/ZENODO.10228432 in citations.
JUBE (v2.6.1)
| Field | Value |
|---|---|
| Personal Name(s): | Breuer, Thomas (Corresponding author); Wellmann, Julia; Souza Mendes Guimarães, Filipe; Himmels, Carina; Luehrs, Sebastian |
| Contributing Institute: | Jülich Supercomputing Center; JSC |
| Imprint: | 2023 |
| DOI: | 10.5281/ZENODO.10228432 |
| Document Type: | Software |
| Research Program: | SiVeGCS; Cross-Domain Algorithms, Tools, Methods Labs (ATMLs) and Research Groups |
| Edition: | 2.6.1 |
Benchmarking a computer system usually involves numerous tasks, including several runs of different applications. Configuring, compiling, and running a benchmark suite on several platforms, together with the accompanying tasks of result verification and analysis, requires considerable administrative effort and produces a large amount of data, which has to be analysed and collected in a central database. Without a benchmarking environment, all these steps have to be performed by hand. For each benchmark application, the benchmark data is written out in a certain format that enables the benchmarker to deduce the desired information. This data can be parsed by automatic pre- and post-processing scripts that extract the relevant information and store it more compactly for manual interpretation. The JUBE workflow and benchmarking environment provides a script-based framework to easily create benchmark sets, run those sets on different computer systems, and evaluate the results. It is actively developed by the Jülich Supercomputing Centre of Forschungszentrum Jülich, Germany.
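To illustrate the workflow described above, here is a minimal sketch of a JUBE input file, based on the structure of JUBE's documented XML format (the benchmark name, parameter values, and echoed output are invented for this example): a `parameterset` defines values that JUBE expands into separate runs, a `step` executes the workload, a `patternset` parses the output, and a `result` table collects the extracted values.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jube>
  <benchmark name="hello_world" outpath="bench_run">
    <!-- Each comma-separated value produces its own benchmark run -->
    <parameterset name="param_set">
      <parameter name="msg">Hello,World</parameter>
    </parameterset>
    <!-- Execution step: runs once per parameter combination -->
    <step name="execute">
      <use>param_set</use>
      <do>echo "result: $msg"</do>
    </step>
    <!-- Pattern for extracting the value back out of stdout -->
    <patternset name="pattern">
      <pattern name="out" type="string">result: $jube_pat_wrd</pattern>
    </patternset>
    <analyser name="analyse">
      <use>pattern</use>
      <analyse step="execute">
        <file>stdout</file>
      </analyse>
    </analyser>
    <!-- Collect extracted values into a result table -->
    <result>
      <use>analyse</use>
      <table name="result_table">
        <column>msg</column>
        <column>out</column>
      </table>
    </result>
  </benchmark>
</jube>
```

Such a file would be executed with `jube run hello.xml`; `jube result -a bench_run` then runs the analysis and prints the result table, covering the create/run/evaluate cycle the abstract describes.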