April 18–23, 1999, Dagstuhl Seminar 99161

Instruction-Level Parallelism and Parallelizing Compilation


D. Arvind (Edinburgh), K. Ebcioglu (IBM Yorktown Heights), Ch. Lengauer (Passau), K. Pingali (Ithaca), R. Schreiber (HP, Palo Alto)



Dagstuhl-Seminar-Report 237


The aim of this seminar is to bring together two research areas which have developed side by side with little exchange of results: instruction-level parallelism (ILP) and parallelizing compilation (PC). The seminar will provide a forum for the exchange of ideas and experiences in parallelization methods. Both areas deal with similar issues, such as

  1. dependence analysis,
  2. synchronous vs. asynchronous parallelism,
  3. static vs. dynamic parallelization,
  4. speculative execution.

However, the different levels of abstraction at which the parallelization takes place call for different techniques and impose different optimization criteria.

In instruction-level parallelism, the parallelism is by nature invisible to the programmer, since it is infused into program parts which are atomic at the level of the programming language. The emphasis is on driving the parallelization process by the availability of architectural resources. Static parallelization has been targeted at very long instruction word (VLIW) architectures and dynamic parallelization at superscalar architectures. Heuristics are applied to achieve good but, in general, suboptimal performance.

In parallelizing compilation, parallelism visible at the level of the programming language must be exposed. The programmer usually aids the parallelization process with program annotations or by putting the program to be parallelized into a certain syntactic form. The emphasis has been on static parallelization methods. One can apply either heuristics or an optimizing algorithm to search for the best performance. Resource limitations can be taken into account during the search, or they can be imposed in a later step, e.g., through tiling or partitioning.
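As a small illustration of imposing resource limits in a later step, the following C sketch (not from the seminar materials; the function names and the tile size `T` are illustrative choices) shows a loop nest whose iterations are independent, and an equivalent tiled restructuring of the kind a parallelizing compiler might generate so that each block fits a resource budget such as cache capacity:

```c
#include <assert.h>

#define N 64
#define T 16  /* tile size: a resource-driven choice, e.g. cache capacity */

/* Reference: untiled loop nest. The (i,j) iterations are independent,
   so a parallelizing compiler may reorder or block them freely. */
static void scale_ref(double a[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 2.0 * a[i][j];
}

/* Tiled version: the same computation, restructured into TxT blocks.
   The dependence analysis justifying the reordering happens first;
   the resource limit (tile size) is imposed as a separate, later step. */
static void scale_tiled(double a[N][N]) {
    for (int ii = 0; ii < N; ii += T)
        for (int jj = 0; jj < N; jj += T)
            for (int i = ii; i < ii + T; i++)
                for (int j = jj; j < jj + T; j++)
                    a[i][j] = 2.0 * a[i][j];
}
```

Because the iterations carry no dependences, both versions compute identical results; only the traversal order, and hence the resource behavior, differs.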

The following questions should be discussed in the seminar:

  • What are the respective issues in the treatment of the problems listed above?
  • How can PC further relieve the programmer of the burden of considering parallelism -- or should it?
  • In which areas of ILP and PC can optimizing search algorithms for parallelism be useful in practice?
  • One focus of PC in the past has been affine dependences between subscripted variables. In ILP, affinity does not play a central role. Can PC learn from ILP's experience in the treatment of non-affine dependences?
  • What can ILP learn from PC's experience in compiling for asynchronous parallelism?
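To make the affine vs. non-affine distinction in the questions above concrete, here is a toy C sketch (an illustration under our own naming, not from the seminar): the first loop has an affine subscript with a constant dependence distance, which static dependence analysis can resolve exactly; the second uses an indirect subscript, where conflicts depend on run-time values, so static analysis must be conservative and dynamic or speculative ILP-style techniques apply instead.

```c
#include <assert.h>

enum { N = 8 };

/* Affine dependence: s[i] reads s[i-1], a constant distance of 1.
   A parallelizing compiler can prove this loop-carried dependence
   at compile time and, e.g., recognize the loop as a scan. */
static void prefix_sum(const int *x, int *s, int n) {
    s[0] = x[0];
    for (int i = 1; i < n; i++)
        s[i] = s[i - 1] + x[i];
}

/* Non-affine (indirect) subscript: whether two iterations touch the
   same element of a[] depends on the run-time contents of idx[], so
   no exact static dependence test exists; run-time or speculative
   disambiguation is needed to parallelize safely. */
static void indirect_add(int *a, const int *idx, const int *v, int n) {
    for (int i = 0; i < n; i++)
        a[idx[i]] += v[i];
}
```

The same source-level contrast reappears at the instruction level as the memory-disambiguation problem between loads and stores.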


In the series Dagstuhl Reports each Dagstuhl Seminar and Dagstuhl Perspectives Workshop is documented. The seminar organizers, in cooperation with the collector, prepare a report that includes contributions from the participants' talks together with a summary of the seminar.



Dagstuhl's Impact

Please inform us when a publication is published as a result of your seminar. These publications are listed in the category Dagstuhl's Impact and are presented on a special shelf on the ground floor of the library.


Furthermore, a comprehensive peer-reviewed collection of research papers can be published in the series Dagstuhl Follow-Ups.