
Dagstuhl Seminar 9616

Loop Parallelization

(April 15 – 19, 1996)


Organizers
  • Ch. Lengauer
  • H. Zima
  • L. Thiele
  • M. Wolfe


As parallelism emerges as a viable and important concept of computer technology, the automatic parallelization of loops is becoming increasingly important and receiving increased attention from researchers. The reasons are (1) that programming parallel computers by hand is impractical in all but the simplest applications and (2) that, by exploiting the parallelism in nested loops, potentially large numbers of processors can be utilized easily and a speed-up of orders of magnitude can be attained.

Methods of loop parallelization have been developed in two research communities: regular array design and parallelizing compilation.
Researchers in regular array design impose regularity constraints on loop nests in order to apply a geometric model in which the set of all parallelizations of the source loop nest can be characterized, the quality of each member of the set can be assessed and an optimal choice from the set can be made automatically. That is, regular array design methods identify optimal parallelizations of regular loop nests.
Researchers in parallelizing compilation are interested in faster methods than those of regular array design and, therefore, often apply heuristics to attain reasonable but not necessarily optimal parallelizations. Parallelizing compilation methods can often cope with less regular loop nests but do not, in general, produce provably optimal results.
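The geometric model used in regular array design can be illustrated with a classic example (a minimal sketch, not taken from the seminar report): a doubly nested loop with dependence distance vectors (1, 0) and (0, 1) admits the "hyperplane" schedule t = i + j, under which all iterations on one anti-diagonal wavefront are independent and may execute in parallel.

```python
# Illustrative sketch (hypothetical example, not from the seminar):
# the loop nest
#     for i in range(1, n):
#         for j in range(1, n):
#             a[i][j] = a[i-1][j] + a[i][j-1]
# has dependence distance vectors (1, 0) and (0, 1).  The hyperplane
# schedule t = i + j respects both dependences, so each wavefront
# (anti-diagonal) consists of mutually independent iterations.

def wavefront_schedule(n):
    """Group the iterations (i, j) of an n x n nest by time step t = i + j."""
    steps = {}
    for i in range(1, n):
        for j in range(1, n):
            steps.setdefault(i + j, []).append((i, j))
    return [steps[t] for t in sorted(steps)]

def run_wavefront(n):
    """Execute the dependent loop nest wavefront by wavefront."""
    a = [[1] * n for _ in range(n)]
    for front in wavefront_schedule(n):
        # every iteration in `front` is independent of the others,
        # so this inner loop could be distributed across processors
        for i, j in front:
            a[i][j] = a[i - 1][j] + a[i][j - 1]
    return a
```

Because every dependence advances t by at least one, executing the fronts in order of t yields the same result as the original sequential loop nest.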

The primary goal of this seminar was to intensify communication between the two communities. In recent years, the methods used in both communities have increasingly converged on the theory of linear algebra and linear programming.
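The shared linear-algebraic view can be sketched as follows (an illustrative assumption, not a method prescribed by the seminar): a linear schedule given by a vector s is legal for a loop nest exactly when s · d ≥ 1 for every dependence distance vector d, a condition that is itself a set of linear constraints amenable to linear programming.

```python
# Hypothetical sketch of the linear legality condition for schedules:
# a schedule vector s is legal iff s . d >= 1 for every dependence
# distance vector d, i.e. each source iteration is scheduled strictly
# before the iteration that depends on it.

def is_legal_schedule(s, distances):
    """Check the linear legality condition s . d >= 1 for all d."""
    return all(sum(si * di for si, di in zip(s, d)) >= 1
               for d in distances)
```

For the distance vectors (1, 0) and (0, 1), the schedule (1, 1) satisfies both constraints, while (1, -1) violates the second; an optimizing method would search this constraint polyhedron for a schedule minimizing, e.g., total latency.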

Questions discussed at the seminar included:

  • Algorithms that yield optimal parallelizations are usually computationally complex. For what applications is this (not) a serious restriction? For what applications do heuristic algorithms yield better performance, and what are the heuristics?
  • Loop parallelization methods have yielded static parallelism in the past. How can they be made more dynamic, e.g., for the treatment of while loops or of irregular data structures?
  • What parallel programmable computer architectures should the research in loop parallelization aim at?
  • What do the users of parallelizing compilers expect from loop parallelization?
  • What are the special requirements on design methods for multi-media applications?
  • How can memory management in parallelized programs be made more efficient?

The 41 participants of the workshop came from 9 countries: 14 from the US (funded by the National Science Foundation), 10 from France, 9 from Germany and 8 from other European countries. The organizers would like to thank everyone who has helped to make this workshop a success.



Related Seminars
  • Dagstuhl Seminar 18111: Loop Optimization (2018-03-11 – 2018-03-16)