Dagstuhl Seminar 03211

Adaptivity in Parallel Scientific Computing

(May 18 – May 23, 2003)


Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/03211

Organizers




Summary

Although progress in parallel and distributed methodologies for scientific computing has been quite remarkable during the past years, this area of computer science remains active, especially in topics concerning the relationship between performance and aspects such as the irregularity of applications and algorithms, the adaptive characteristics of software and hardware, the heterogeneity of hardware platforms, and the flexibility of programming environments. Recent research activities include the development of complex hardware architectures, including storage hierarchies and heterogeneous (parallel and distributed) computing platforms with large numbers of processors, as well as irregular applications that involve complex domain decomposition and a hierarchical, adaptive, and multi-level organization of computation and data structures.
The corresponding irregular algorithms comprise applications with sparse, block-structured, or adaptive data structures, as well as applications with irregular, runtime-dependent computation and control structures.

Over time, several techniques in hardware and algorithm design, such as storage hierarchies and hierarchical domain decomposition, have been introduced to improve the performance of scientific applications on sequential machines. However, the simulation of large irregular problems still requires the use of parallel and distributed environments. The irregular and dynamically changing runtime behavior makes an efficient parallel realization difficult, since the memory access patterns and the evolution of dynamic structures cannot be determined a priori and therefore cannot be planned statically. Consequently, an efficient parallel implementation of this class of problems requires flexible programming environments as well as techniques to improve scalability.

This seminar was a forum that brought together researchers working in different areas of parallel scientific computing and its applications to the solution of scientific and industrially oriented problems. It provided a fertile environment for the participants to meet and exchange ideas, as well as to foster future research collaborations.
Of particular interest was the exchange of experiences from interdisciplinary research projects. Topics covered by this seminar included:

  • parallel numerical algorithms
  • parallel implementation of irregular applications
  • algorithms for memory hierarchies with enhanced locality of memory access
  • libraries for supporting parallel scientific computing
  • mixed task and data parallel executions on large parallel machines
  • performance analysis, evaluation, and prediction
  • compiler transformations for increasing the locality of memory references
  • dynamic load balancing techniques
  • partitioning and scheduling strategies
  • heterogeneous computing (cluster and grid computing)
  • combination of different programming models for heterogeneous parallel machines

During the seminar, a number of presentations led to the formulation of interesting open questions, followed by discussions on the optimal integration of adaptivity at various levels of technology in application, algorithm, and system development. In the following paragraphs, we summarize a few concepts and ideas for new approaches, methodologies, and future directions that emerged from the various talks and discussions.

In recent years, research in modeling and simulation has become increasingly important for a wide variety of scientific and engineering disciplines. It addresses the need to develop a safe, dependable, and effective information environment, as well as the need to expand basic research in revolutionary fields of vital importance to our society. As a result, the research community is now faced with new challenges, such as incorporating additional physics, length scales, and time scales into models for adaptivity, higher fidelity, and resolution, or processing variable amounts of data from distributed datasets, which in turn place significant demands on software design and hardware implementation. Therefore, there is a need to explore and devise a flexible, robust, and effective vertical integration strategy for the advanced development of scientific applications. This integration is expected to facilitate an effective fusion of advances in application algorithms with those in programming environments, system software, and hardware capabilities, in order to enable terascale modeling and simulation.

The "Tinker-toy Parallel Programming", an interesting approach to building scientific applications via an aggregation of multiple, light-weight toolkits has been introduced during the seminar presentations. A solution to one possible drawback of such an approach, its limited support for adaptive computations, has also been proposed using "Zoltan" - a tool that provides support for adaptive, parallel scientific computations, and easy development for dynamic and adaptive simulations.

The rapid development of the emerging technology of cluster and grid computing suggests a need for a dynamic distribution of work and data that can adapt to the runtime behavior of the algorithm. A solution based on "task pools" was proposed, and its design, implementation, and evaluation were presented. In this approach, tasks are dynamically distributed to different processors (within the nodes of an SMP, or among the nodes of a cluster of SMPs), and each task specifies the computations to be performed and provides the appropriate data.
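
A minimal sketch of the task-pool idea for the workers of one SMP node, using POSIX threads, is given below; the task structure and the example task function are hypothetical placeholders, and the implementations presented at the seminar use more refined pool organizations (e.g., distributed pools with task stealing to reduce lock contention):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* A task carries the computation to perform and the data it needs. */
    typedef struct task {
        void (*run)(void *arg);
        void *arg;
        struct task *next;
    } task_t;

    /* A central pool shared by all worker threads of one SMP node. */
    typedef struct {
        task_t *head;
        int done;                  /* set when no more tasks will arrive */
        pthread_mutex_t lock;
        pthread_cond_t nonempty;
    } task_pool_t;

    static void pool_init(task_pool_t *p) {
        p->head = NULL; p->done = 0;
        pthread_mutex_init(&p->lock, NULL);
        pthread_cond_init(&p->nonempty, NULL);
    }

    /* Insert a task; workers may call this too, creating child tasks,
       which is what makes the scheme suitable for adaptive computations. */
    static void pool_put(task_pool_t *p, void (*run)(void *), void *arg) {
        task_t *t = malloc(sizeof *t);
        t->run = run; t->arg = arg;
        pthread_mutex_lock(&p->lock);
        t->next = p->head; p->head = t;
        pthread_cond_signal(&p->nonempty);
        pthread_mutex_unlock(&p->lock);
    }

    /* Worker loop: fetch and execute tasks until the pool is drained. */
    static void *worker(void *vp) {
        task_pool_t *p = vp;
        for (;;) {
            pthread_mutex_lock(&p->lock);
            while (p->head == NULL && !p->done)
                pthread_cond_wait(&p->nonempty, &p->lock);
            task_t *t = p->head;
            if (t) p->head = t->next;
            pthread_mutex_unlock(&p->lock);
            if (!t) return NULL;               /* pool drained and closed */
            t->run(t->arg);
            free(t);
        }
    }

    /* Example task: process one (hypothetical) subdomain index. */
    static void process_subdomain(void *arg) {
        printf("processing subdomain %d\n", *(int *)arg);
    }

    int main(void) {
        enum { NTHREADS = 4, NTASKS = 16 };
        task_pool_t pool; pool_init(&pool);
        int ids[NTASKS];
        for (int i = 0; i < NTASKS; i++) { ids[i] = i; pool_put(&pool, process_subdomain, &ids[i]); }

        pthread_t th[NTHREADS];
        for (int i = 0; i < NTHREADS; i++) pthread_create(&th[i], NULL, worker, &pool);

        pthread_mutex_lock(&pool.lock);        /* close the pool, wake waiting workers */
        pool.done = 1;
        pthread_cond_broadcast(&pool.nonempty);
        pthread_mutex_unlock(&pool.lock);

        for (int i = 0; i < NTHREADS; i++) pthread_join(th[i], NULL);
        return 0;
    }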

Some interesting presentations focused on improving the performance of irregular parallel applications by addressing sources of load imbalance at all levels of irregular behavior (related to the problem, the algorithm, or systemic factors).
A general-purpose tool for dynamic loop scheduling, which addresses stochastic load variations from a range of sources, was also introduced.
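
The basic mechanism behind dynamic loop scheduling can be sketched as follows: idle threads repeatedly claim the next chunk of iterations from a shared counter, so threads whose iterations happen to be cheap automatically take on more work. The fixed chunk size and the placeholder iteration body below are assumptions; tools such as the one presented adapt the chunk size to the observed load (e.g., factoring-style schemes) rather than keeping it constant:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define N 1000            /* total loop iterations                  */
    #define CHUNK 16          /* fixed chunk size (real tools adapt it) */
    #define NTHREADS 4

    static atomic_int next_iter = 0;          /* shared loop counter */
    static double result[N];

    /* Placeholder for an iteration whose cost varies at run time. */
    static void do_iteration(int i) { result[i] = (double)i * i; }

    /* Each thread claims the next chunk of iterations until none remain. */
    static void *self_schedule(void *unused) {
        (void)unused;
        for (;;) {
            int start = atomic_fetch_add(&next_iter, CHUNK);
            if (start >= N) break;
            int end = start + CHUNK < N ? start + CHUNK : N;
            for (int i = start; i < end; i++)
                do_iteration(i);
        }
        return NULL;
    }

    int main(void) {
        pthread_t th[NTHREADS];
        for (int i = 0; i < NTHREADS; i++) pthread_create(&th[i], NULL, self_schedule, NULL);
        for (int i = 0; i < NTHREADS; i++) pthread_join(th[i], NULL);
        printf("result[%d] = %.0f\n", N - 1, result[N - 1]);
        return 0;
    }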

A number of interesting discussions took place regarding recent advances in cluster and grid computing enabled by the successful migration of parallel programs (via checkpointing and fault tolerance). In the future, the migration of parallel programs will allow parallel applications to "surf" the grid and adapt dynamically to its changing environment.

A few interesting contributions presented challenges in BSP algorithm design, programming, and software engineering that must be addressed to support adaptivity in scientific computations. Moreover, a few novel ideas and original concepts on language support for irregular problems and adaptivity were introduced. The audience was delighted to discuss, during the talks as well as during pleasant evening get-togethers, some of the possible breakthroughs that could evolve from these ideas.
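
For readers unfamiliar with the BSP model, the following sketch shows a single superstep (local computation, one-sided communication, global synchronization) in the style of the classic Oxford BSPlib C interface. The reduction performed and the constants are illustrative assumptions, not code from the contributions, and details of the interface may differ between BSP implementations:

    #include <bsp.h>       /* Oxford BSPlib-style interface (assumed available) */
    #include <stdio.h>

    #define NPROCS 4

    /* One BSP superstep: every process computes a local partial result,
       communicates it to process 0, and the global barrier (bsp_sync)
       ends the superstep before process 0 combines the contributions.  */
    static void spmd_main(void) {
        bsp_begin(NPROCS);
        int p = bsp_nprocs(), s = bsp_pid();

        double partial[NPROCS];
        bsp_push_reg(partial, p * (int)sizeof(double));  /* remotely writable */
        bsp_sync();

        double local = (double)(s + 1);                  /* placeholder local computation */
        bsp_put(0, &local, partial, s * (int)sizeof(double), sizeof(double));
        bsp_sync();                                      /* end of superstep */

        if (s == 0) {
            double sum = 0.0;
            for (int i = 0; i < p; i++) sum += partial[i];
            printf("global sum = %.1f\n", sum);
        }
        bsp_pop_reg(partial);
        bsp_end();
    }

    int main(int argc, char **argv) {
        bsp_init(spmd_main, argc, argv);
        spmd_main();
        return 0;
    }

The appeal of BSP in this context is that the superstep structure makes cost and load imbalance explicit, which is also what makes adaptivity within the model a challenge.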

In conclusion, the seminar's presentations and discussions addressed many complex issues, including application requirements for adaptivity in space and time, as well as requirements for improving the capacity to use resources effectively in heterogeneous environments. The seminar topics spanned and integrated the work of many research areas: from irregular scientific applications to adaptive algorithms, programming models and tools, problem solving environments for cluster and grid computing, and others.
We believe that these contributions, together with the talks and many interesting discussions, will inspire the participants to continue their research efforts towards an integrated view of adaptivity, allowing them to make significant contributions to the advancement of science.


Participants
  • Mahadevan Balasubramaniam (Mississippi State University, US)
  • Ioana Banicescu (Mississippi State University, US)
  • Olav Beckmann (Imperial College London, GB)
  • Ricolindo Carino (Mississippi State University, US)
  • Anthony Chronopoulos (The University of Texas - San Antonio, US)
  • Marco Danelutto (University of Pisa, IT)
  • Eladio D. Gutierrez (University of Malaga, ES)
  • Bruce Hendrickson (Sandia National Labs - Albuquerque, US) [dblp]
  • Judith Hippold (TU Chemnitz, DE)
  • Christoph W. Kessler (Linköping University, SE) [dblp]
  • Matthias Korch (Universität Bayreuth, DE)
  • Matthias Kühnemann (TU Chemnitz, DE)
  • Philipp Lucas (Universität des Saarlandes, DE)
  • John O'Donnell (University of Glasgow, GB)
  • Gabriel Oksa (Slovak Academy of Sciences - Bratislava, SK)
  • Dana Petcu (West University of Timisoara, RO)
  • Thomas Rauber (Universität Bayreuth, DE) [dblp]
  • Gudula Rünger (TU Chemnitz, DE) [dblp]
  • Carsten Scholtes (Universität Bayreuth, DE)
  • Henk Sips (TU Delft, NL)
  • Alexander Tiskin (University of Warwick - Coventry, GB) [dblp]
  • Virginia Torczon (College of William and Mary - Williamsburg, US)
  • Dick van Albada (University of Amsterdam, NL)
  • Helmut Weberpals (TU Hamburg-Harburg, DE)
  • Wolf Zimmermann (Martin-Luther-Universität Halle-Wittenberg, DE)

Keywords
  • distributed computing
  • BSP
  • scientific algorithms
  • task pools
  • cluster
  • grid computing