This title appears in the Scientific Report 2015.
Exploring exascale avenues for Lattice QCD
Personal Name(s): Pleiter, Dirk (Corresponding author)
Contributing Institute: Jülich Supercomputing Center (JSC)
Imprint: 2015
Conference: 596. WE-Heraeus-Seminar, Bad Honnef (Germany), 2015-09-07 - 2015-09-09
Document Type: Conference Presentation
Research Program: Supercomputer Facility
Numerical simulations of theories describing the interaction of elementary particles are a key approach for understanding the fundamental forces in nature. In particular, the investigation of the strong interaction requires computer simulations due to its non-perturbative nature. Quantum Chromodynamics (QCD), the theory believed to describe the strong interaction, can be formulated on a discrete space-time lattice, hence Lattice QCD, and simulated using Monte Carlo techniques. Progress in the field of Lattice QCD depends heavily on the increased availability of high-performance computing resources.

While Lattice QCD applications have been able to exploit various massively parallel high-performance computing architectures in the past, on the path towards exascale they too will have to face the challenges imposed by future technology roadmaps. After reaching the limit of constant power density while shrinking transistor size, i.e. with the breakdown of Dennard scaling, the number of execution pipelines continues to increase, such that even for Lattice QCD applications the limit of parallelism achievable through domain decomposition is getting closer. As Moore's scaling continues to hold and performance per device increases, the need for fast memory technologies grows. This will lead to deeper memory hierarchies, which applications need to exploit efficiently. Finally, applications are affected by degrading reliability at the system level, and thus the need for managing fault tolerance arises.

In this talk we will review future roadmaps for this application domain and analyze the resulting requirements in terms of high-performance computing resources. In the second part of the talk we will investigate the consequences of the above-mentioned challenges for Lattice QCD applications and explore possible solutions and mitigation strategies.
Here we will take into account recent experience with the design of architectures optimized for Lattice QCD applications, such as QPACE, and report on results using unconventional architectures based on the processing-in-memory paradigm.
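The limit of domain-decomposition parallelism mentioned in the abstract can be illustrated with a minimal sketch (not part of the talk; the function name and extents are chosen for illustration only): splitting a fixed global lattice across more ranks shrinks each rank's local sub-lattice, so the halo — the surface sites that must be exchanged with neighbouring ranks — takes up an ever larger share of the local volume, and communication eventually dominates computation.

```python
# Illustrative sketch: surface-to-volume scaling of a 4D lattice
# sub-domain under strong scaling. A one-site-deep halo is assumed.

def halo_fraction(local_extent: int, dims: int = 4) -> float:
    """Fraction of a rank's sites lying on the boundary of its
    local_extent**dims sub-lattice (one-site-deep halo)."""
    volume = local_extent ** dims
    interior = max(local_extent - 2, 0) ** dims
    return (volume - interior) / volume

# Halving the local extent per dimension (i.e. using 2**4 = 16 times
# more ranks in 4D) sharply raises the communication share:
for L in (32, 16, 8, 4):
    print(f"local extent {L:2d}^4: halo fraction = {halo_fraction(L):.2f}")
```

For a 32^4 local volume roughly a quarter of the sites lie on the surface; at 4^4 almost all of them do, which is the regime in which further domain decomposition stops paying off.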