Enormous growth in computing power and advances in parallel algorithms are enabling the realistic simulation of complex physical systems. Computer simulations -- that is, high-accuracy virtual models of the real world -- have begun to replace expensive or dangerous experiments. They even allow experimentation with systems and processes that are not open to real experiments, such as cosmological, economic, or sociological systems.
Computer simulation is quickly becoming a universal methodology. Examples include weather prediction, climate modeling, astrophysics, turbulence, combustion, biomedical technology, financial engineering, materials science, environmental modeling, and waste management. Other strategic fields are protein folding, macromolecule and drug design, quantum chemistry, reactive fluid flow, logistics, plasma and fusion physics, aerodynamics, superconductivity, string-theoretical problems, and quantum chromodynamics.
The seminar has focused on simulation as a tool for computational science and engineering applications. To be useful, such simulations must be based on accurate mathematical descriptions of the underlying processes, and thus they involve mathematical formulations such as partial differential equations or integral equations. Scientific simulations require the numerical solution of these problems and therefore consume enormous resources in both processing power and storage. Even more computing power is needed when the simulation is only a component within a more complex task. This happens, e.g., when an engineering design is optimized automatically: a full simulation run must then be performed within each iteration of the optimization algorithm.
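The cost multiplication caused by embedding a simulation inside an optimization loop can be sketched as follows. This is a toy illustration, not any specific method discussed at the seminar; the `simulate` function is a hypothetical stand-in for an expensive PDE solver, and the gradient-descent optimizer with finite-difference gradients is one assumed choice of algorithm.

```python
# Toy sketch: design optimization with a simulation in the inner loop.
# Every optimizer iteration triggers full simulation runs, so the total
# cost is (simulations per iteration) x (iterations) x (cost per run).

def simulate(design: float) -> float:
    """Hypothetical stand-in for an expensive solver: returns a quality
    measure (e.g. drag) for a given design parameter."""
    return (design - 3.0) ** 2 + 1.0

def optimize(initial: float, steps: int = 50,
             h: float = 1e-6, lr: float = 0.1) -> float:
    """Gradient descent with a finite-difference gradient: two full
    simulation runs per iteration, 100 runs total for 50 iterations."""
    x = initial
    for _ in range(steps):
        grad = (simulate(x + h) - simulate(x)) / h  # two simulation calls
        x -= lr * grad
    return x

best = optimize(0.0)  # converges toward the optimal design at 3.0
```

Each of the 50 iterations here costs two simulation runs; with a real solver taking hours per run, this multiplication is exactly why optimization drives the demand for computing power far beyond that of a single simulation.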
Despite rapid progress over the past three decades, the practical use of high performance simulation and its applications will face several severe obstacles within the next decade. Desirable, realistic models are still too compute-intensive for current processing technology. While the fastest computers today may be able to handle simulations with at most 10^9 - 10^11 degrees of freedom and perform in the range of a few teraflops (10^12 operations per second), the next generation of models will require up to three orders of magnitude more computing power. Current roadmaps predict the availability of petaflop systems, capable of 10^15 operations per second, by the end of the current decade. Such systems are necessarily massively parallel.
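A back-of-envelope calculation makes the gap concrete. The operation counts per degree of freedom and the number of time steps below are assumed round numbers chosen only for illustration; the degrees-of-freedom and machine-speed figures are those quoted above.

```python
# Rough cost estimate for a large time-dependent simulation.
# Assumed workload parameters (illustrative only):
dofs = 1e11          # degrees of freedom (upper end quoted in the text)
ops_per_dof = 1e3    # assumed operations per unknown per time step
steps = 1e4          # assumed number of time steps

total_ops = dofs * ops_per_dof * steps   # 1e18 operations in total

teraflop = 1e12      # operations per second, today's fastest machines
petaflop = 1e15      # operations per second, predicted systems

days_on_teraflop = total_ops / teraflop / 86400    # roughly 11.6 days
minutes_on_petaflop = total_ops / petaflop / 60    # roughly 16.7 minutes
```

Under these assumptions a single run shrinks from weeks to minutes, which is what makes the three-orders-of-magnitude step to petaflop systems a qualitative rather than merely quantitative change.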
The seminar talks have covered topics including:
- scalable parallel simulation algorithms
- numerical methods
- the architecture of scalable massively parallel systems
- multiple levels of parallelism, from instruction or task level to message passing in a networked cluster
- devising algorithms and implementation techniques capable of tolerating the latency and bandwidth restrictions of future petaflop systems
- software engineering techniques for computational science and engineering applications
- problem solving environments
- handling the complexity of multi-physics models
- validation and verification of large scale simulations
- alternatives to silicon-based computing
The seminar has brought together researchers from across the disciplines who are involved in all aspects of high performance simulation and who face the challenges of future petaflop simulations. The cross-disciplinary discussion, covering the hardware and software architecture of the next generation of supercomputers but with an emphasis on the design of new algorithms, tools, and programming techniques, has been especially fruitful. Even more interdisciplinary collaboration will be necessary to exploit such systems efficiently and to manage the enormous complexity of current and future scientific simulation problems.
The results of the seminar will be published in book form. Draft versions of the four planned, multi-authored chapters are available for download.
- Krister Aahlander (Uppsala University, SE)
- Martin Berzins (University of Utah - Salt Lake City, US)
- Kurt Binder (Universität Mainz, DE)
- Martin Bücker (RWTH Aachen, DE) [dblp]
- Hans-Joachim Bungartz (TU München, DE) [dblp]
- Craig C. Douglas (University of Kentucky, US)
- Andreas Frommer (Universität Wuppertal, DE) [dblp]
- Michael Gerndt (TU München, DE) [dblp]
- Robert J. Harrison (Oak Ridge National Laboratory, US) [dblp]
- Magne Haveraaen (University of Bergen, NO)
- Friedel Hoßfeld (Jülich Supercomputing Centre, DE)
- Frank Hülsemann (Cerfacs - Toulouse, FR)
- Elizabeth Jessup (University of Colorado - Boulder, US)
- Christopher R. Johnson (University of Utah - Salt Lake City, US) [dblp]
- Markus Kowarschik (Universität Erlangen-Nürnberg, DE)
- Hans Petter Langtangen (University of Oslo, NO)
- Malin Ljungberg (Uppsala University, SE)
- Arnd Meyer (TU Chemnitz, DE)
- Hans Munthe-Kaas (University of Bergen, NO)
- Aasmund Odegard (Simula Research Laboratory - Lysaker, NO)
- Leif Persson (Swedish Defence Research Agency - Umeå, SE)
- Christoph Pflaum (Universität Würzburg, DE)
- Thomas Pohl (Universität Erlangen-Nürnberg, DE)
- Padma Raghavan (Pennsylvania State University - University Park, US) [dblp]
- Ulrich Rüde (Universität Erlangen-Nürnberg, DE) [dblp]
- Klas Samuelsson (Fraunhofer Chalmers - Göteborg, SE)
- Karsten Scholtyssik (Jülich Supercomputing Centre, DE)
- Joakim Sundnes (Simula Research Laboratory - Lysaker, NO)
- Michael Thuné (Uppsala University, SE)
- Stefan Turek (TU Dortmund, DE) [dblp]
- Gerhard Zumbusch (Universität Jena, DE)