
Dagstuhl Seminar 06071

Architectures and Algorithms for Petascale Computing

(February 12 – 17, 2006)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/06071

Organizers




Press Room

Press Release

Press Reviews

  • Video by Saarländischer Rundfunk (Aktueller Bericht): "Computer-Tagung im Nordsaarland"
    DivX [14.9 MB], WMV [13.8 MB] (Author: Jürgen Rinner; February 16, 2006)
  • Radio broadcast by Radio Berlin-Brandenburg: "Supercomputer - wozu sind sie gut?"
    MP3, rbb-online (Author: Thomas Prinzler; February 21, 2006)

Motivation

This seminar will focus on high-end simulation as a tool for computational science and engineering applications. To be useful tools for science, such simulations must be based on accurate mathematical descriptions of the underlying processes, and thus they begin with mathematical formulations such as partial differential equations or integral equations.

Because of the ever-growing complexity of scientific and engineering problems, computational needs continue to increase rapidly. But most of the currently available hardware, software, systems, and algorithms are primarily focused on business applications or smaller-scale scientific and engineering problems, and cannot meet the high-end computing needs of cutting-edge scientific and engineering work. This seminar is concerned with peta-scale scientific computations, which are highly computation- and data-intensive and cannot be satisfied in today's typical cluster environments. The target hosts for these tools are systems comprising thousands to tens of thousands of processors. By the end of the decade, such systems are expected to reach a performance of one petaflop, that is, 10^15 floating point operations per second.

The rapid progress over the past three decades in high performance simulation has recently been facing an increasing number of obstacles that are so fundamental that no single solution is in sight. Instead, only a combination of approaches seems to promise the successful transition to petascale simulation.

  • Petaflop systems are necessarily massively parallel. Many simulation codes currently in use (e.g. commercial finite element packages) hardly scale on parallel systems at all, and even specifically developed programs cannot be expected to scale to the tens of thousands of processing units that petascale systems will employ.
  • Achieving a significant fraction of peak processor performance has become increasingly difficult, especially on commodity processors, because the so-called memory wall limits efficiency: compute speed is not matched by the memory performance of such systems. Mainstream computer and CPU architecture is hitting severe limits that may be most noticeable in high performance computing (but not only there).
  • Further improvements in latency and bandwidth are hitting fundamental limits. At a 10 GHz clock rate, light travels only 3 cm in vacuum per cycle, yet a petaflop system may physically measure 100 m across, so latencies of several thousand clock cycles are unavoidable for such a system.
  • Similarly, a petaflop system would ideally have an aggregate memory bandwidth that, if carried on buses of 128-bit width clocked at 1 GHz, would require in excess of a million such buses operating in parallel (a back-of-envelope sketch follows this list). Not only latency, therefore, but also the available bandwidth may become a severe bottleneck.
  • New and innovative hard- and software architectures will be required, but it will not be sufficient to design solutions on the system level alone: additionally, the design (and implementation) of the algorithms must be revised and adapted, and new latency-tolerant, bandwidth-optimized algorithms must be invented for petascale systems.
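
The latency and bandwidth estimates in the list above can be checked with simple arithmetic. The following small C program is a back-of-envelope sketch, not material from the seminar; in particular, the figure of 16 bytes of memory traffic per floating point operation (two 8-byte operands per flop) is our assumption, chosen because it makes the aggregate bandwidth consistent with the "million buses" estimate.

    #include <stdio.h>

    int main(void) {
        const double c            = 3.0e8;   /* speed of light in vacuum, m/s */
        const double cpu_clock_hz = 10.0e9;  /* assumed 10 GHz CPU clock      */
        const double machine_m    = 100.0;   /* assumed physical extent, m    */

        /* Distance light travels in one clock cycle, and the resulting
           unavoidable one-way latency across the machine, in cycles.   */
        double m_per_cycle   = c / cpu_clock_hz;         /* 0.03 m = 3 cm */
        double cycles_across = machine_m / m_per_cycle;  /* ~3333 cycles  */

        const double flops          = 1.0e15; /* one petaflop per second         */
        const double bytes_per_flop = 16.0;   /* ASSUMPTION: two 8-byte operands */
        const double bus_bytes      = 16.0;   /* 128-bit bus width               */
        const double bus_clock_hz   = 1.0e9;  /* 1 GHz bus clock                 */

        /* Per-bus bandwidth and the number of buses needed to carry the
           aggregate memory traffic of a petaflop machine.               */
        double bus_bw = bus_bytes * bus_clock_hz;        /* 16 GB/s per bus */
        double buses  = flops * bytes_per_flop / bus_bw; /* ~1.0e6 buses    */

        printf("light travel per cycle: %.1f cm\n", m_per_cycle * 100.0);
        printf("one-way latency across the machine: %.0f cycles\n", cycles_across);
        printf("128-bit buses required: %.2e\n", buses);
        return 0;
    }

On these assumptions the program prints roughly 3 cm, 3333 cycles, and 1.0e6 buses, matching the figures quoted above.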

The proposed seminar will focus on developing solutions for current and future problems in high-end computing. Specifically, these are:

  • innovative hard- and software architectures for petascale computing
  • scalable parallel simulation algorithms, whose complexity must depend only linearly (or almost linearly) on the problem size
  • scalable massively parallel systems and architectures
  • simultaneously using multiple granularity levels of parallelism, from instruction or task level to message passing in a networked cluster
  • devising algorithms and implementation techniques capable of tolerating the latency and bandwidth restrictions of petaflop systems (see the sketch after this list)
  • tools and techniques for improving the usability of such systems
  • possible alternatives to silicon-based computing
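
One classic family of latency-tolerant techniques is the overlap of communication with computation. The following C/MPI fragment is a minimal sketch of this pattern for a 1D Jacobi-style update with periodic neighbours; the array size, initialization, and update rule are illustrative assumptions, not an algorithm presented at the seminar.

    #include <mpi.h>
    #include <stdio.h>

    #define N 1024  /* local points per rank; size chosen for illustration only */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double u[N + 2], unew[N + 2];  /* u[0] and u[N+1] are halo cells */
        for (int i = 0; i < N + 2; i++) u[i] = (double)rank;

        int left  = (rank + size - 1) % size;  /* periodic neighbours */
        int right = (rank + 1) % size;
        MPI_Request reqs[4];

        /* Post the halo exchange first ... */
        MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&u[N + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
        MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[2]);
        MPI_Isend(&u[N],     1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[3]);

        /* ... then hide the network latency behind the interior update,
           which needs no remote data. */
        for (int i = 2; i <= N - 1; i++)
            unew[i] = 0.5 * (u[i - 1] + u[i + 1]);

        /* Wait only where the boundary stencil requires halo values. */
        MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
        unew[1] = 0.5 * (u[0] + u[2]);
        unew[N] = 0.5 * (u[N - 1] + u[N + 1]);

        if (rank == 0) printf("unew[1] = %f\n", unew[1]);
        MPI_Finalize();
        return 0;
    }

Posting the nonblocking MPI_Irecv/MPI_Isend calls before the interior loop gives the network time to complete the transfers while the processor does useful work, so only the two boundary cells pay for the wire latency.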

Participants
  • Steven F. Ashby (LLNL - Livermore, US)
  • David A. Bader (Georgia Institute of Technology - Atlanta, US) [dblp]
  • Peter Bastian (Universität Heidelberg, DE) [dblp]
  • Martin Berzins (University of Utah - Salt Lake City, US)
  • Christian Bischof (RWTH Aachen, DE) [dblp]
  • Rupak Biswas (NASA - Moffett Field, US)
  • Marian Bubak (AGH University of Science & Technology - Krakow, PL) [dblp]
  • Hans-Joachim Bungartz (TU München, DE) [dblp]
  • Jack Joseph Dongarra (University of Tennessee, US) [dblp]
  • Jan Eitzinger (Universität Erlangen-Nürnberg, DE) [dblp]
  • Al Geist (Oak Ridge National Laboratory, US)
  • Michael Gerndt (TU München, DE) [dblp]
  • Omar Ghattas (Univ. of Texas at Austin, US) [dblp]
  • Dominik Göddeke (TU Dortmund, DE) [dblp]
  • William D. Gropp (Argonne National Laboratory, US) [dblp]
  • Manish Gupta (IBM TJ Watson Research Center - Yorktown Heights, US)
  • John Gurd (University of Manchester, GB)
  • Georg Hager (Universität Erlangen-Nürnberg, DE) [dblp]
  • Bruce Hendrickson (Sandia National Labs - Albuquerque, US) [dblp]
  • Frank Hülsemann (EDF - Clamart, FR)
  • David Keyes (Columbia University - New York, US) [dblp]
  • Uwe Küster (Universität Stuttgart, DE)
  • Hans Petter Langtangen (University of Oslo, NO)
  • Steven Parker (University of Utah, US)
  • Rolf Rabenseifner (Universität Stuttgart, DE)
  • Padma Raghavan (Pennsylvania State University - University Park, US) [dblp]
  • Ulrich Rüde (Universität Erlangen-Nürnberg, DE) [dblp]
  • Robert Schreiber (HP - Palo Alto, US)
  • Horst D. Simon (Lawrence Berkeley National Laboratory, US)
  • Peter Sloot (VU University Amsterdam, NL)
  • Marc Snir (University of Illinois - Urbana, US) [dblp]
  • Linda Stals (Australian National University, AU)
  • Thomas Sterling (CalTech - Pasadena, US)
  • Erich Strohmaier (Lawrence Berkeley National Laboratory, US)
  • Jesper Larsson Traff (NEC Europe - St. Augustin, DE)
  • Stefan Turek (TU Dortmund, DE) [dblp]
  • Christoph Überhuber (TU Wien, AT)
  • Gabriel Wittum (Universität Heidelberg, DE) [dblp]
  • Hans Zima (CalTech - Pasadena, US)