Dagstuhl Seminar 9616

Loop Parallelization

(Apr 15 – Apr 19, 1996)


  • Ch. Lengauer
  • H. Zima
  • L. Thiele
  • M. Wolfe


As parallelism emerges as a viable and important aspect of computer technology, the automatic parallelization of loops is becoming increasingly important and is receiving growing attention from researchers. The reasons are (1) that programming parallel computers by hand is impractical in all but the simplest applications, and (2) that, by exploiting the parallelism in nested loops, potentially large numbers of processors can be utilized easily and a speed-up of orders of magnitude can be attained.

Methods of loop parallelization have been developed in two research communities: regular array design and parallelizing compilation.
Researchers in regular array design impose regularity constraints on loop nests in order to apply a geometric model in which the set of all parallelizations of the source loop nest can be characterized, the quality of each member of the set can be assessed, and an optimal choice from the set can be made automatically. That is, regular array design methods identify optimal parallelizations of regular loop nests.
Researchers in parallelizing compilation aim for faster methods than those used in regular array design and, therefore, often apply heuristics to attain reasonable but not necessarily optimal parallelizations. Parallelizing compilation methods can often cope with less regular loop nests but, in general, do not produce provably optimal results.
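The geometric approach can be illustrated with a small sketch (a hypothetical example, not taken from the seminar). In the loop nest below, each iteration (i, j) depends on (i-1, j) and (i, j-1), giving the dependence vectors (1, 0) and (0, 1). The linear schedule t = i + j assigns a strictly later time step to every dependent iteration, so all iterations on one anti-diagonal are mutually independent and could run in parallel; the sketch executes them in this wavefront order and checks that the result matches the sequential loop:

```python
N = 6

def sequential():
    # original loop nest: A[i][j] = A[i-1][j] + A[i][j-1]
    A = [[1] * N for _ in range(N)]
    for i in range(1, N):
        for j in range(1, N):
            A[i][j] = A[i - 1][j] + A[i][j - 1]
    return A

def wavefront():
    # hyperplane schedule t = i + j: iterate over anti-diagonals;
    # all (i, j) with i + j = t are independent of one another
    A = [[1] * N for _ in range(N)]
    for t in range(2, 2 * N - 1):
        for i in range(max(1, t - N + 1), min(t, N)):
            j = t - i
            A[i][j] = A[i - 1][j] + A[i][j - 1]
    return A

assert sequential() == wavefront()
```

The schedule t = i + j is legal because it increases along both dependence vectors; choosing among all such legal linear schedules is exactly the kind of optimization that the linear-programming formulations mentioned below address.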

The primary goal of this seminar was to intensify communication between the two communities. In recent years, the methods used in both communities have increasingly converged on the theory of linear algebra and linear programming.

Questions discussed at the seminar included:

  • Algorithms that yield optimal parallelizations are usually computationally complex. For what applications is this (not) a serious restriction? For what applications do heuristic algorithms yield better performance, and what are the heuristics?
  • Loop parallelization methods have yielded static parallelism in the past. How can they be made more dynamic, e.g., for the treatment of while loops or of irregular data structures?
  • What parallel programmable computer architectures should the research in loop parallelization aim at?
  • What do the users of parallelizing compilers expect from loop parallelization?
  • What are the special requirements on design methods for multi-media applications?
  • How can memory management in parallelized programs be made more efficient?

The 41 participants of the workshop came from 9 countries: 14 from the US (funded by the National Science Foundation), 10 from France, 9 from Germany, and 8 from other European countries. The organizers would like to thank everyone who helped to make this workshop a success.


Related Seminars
  • Dagstuhl Seminar 18111: Loop Optimization (2018-03-11 - 2018-03-16) (Details)