

Dagstuhl Seminar 99161

Instruction-Level Parallelism and Parallelizing Compilation

(April 18 – April 23, 1999)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/99161

Organizers
  • Ch. Lengauer (Passau)
  • D. Arvind (Edinburgh)
  • K. Ebcioglu (IBM Yorktown Heights)
  • K. Pingali (Ithaca)
  • R. Schreiber (HP, Palo Alto)




Motivation

The aim of this seminar is to bring together two research areas that have developed side by side with little exchange of results: instruction-level parallelism (ILP) and parallelizing compilation (PC). The seminar will provide a forum for the exchange of ideas and experiences in parallelization methods. Both areas deal with similar issues, such as

  1. dependence analysis,
  2. synchronous vs. asynchronous parallelism,
  3. static vs. dynamic parallelization,
  4. speculative execution.

However, the different levels of abstraction at which the parallelization takes place call for different techniques and impose different optimization criteria.

In instruction-level parallelism, the parallelism is by nature invisible to the programmer, since it is infused into program parts which are atomic at the level of the programming language. The emphasis is on driving the parallelization process by the availability of architectural resources. Static parallelization has been targeted at very long instruction word (VLIW) architectures and dynamic parallelization at superscalar architectures. Heuristics are applied to achieve good but, in general, suboptimal performance.

In parallelizing compilation, parallelism visible at the level of the programming language must be exposed. The programmer usually aids the parallelization process with program annotations or by putting the program to be parallelized in a certain syntactic form. The emphasis has been on static parallelization methods. One can apply either heuristics or an optimizing algorithm to search for best performance. Resource limitations can be taken into account during the search, or they can be imposed in a later step, e.g., through tiling or partitioning.

The following questions should be discussed in the seminar:

  • What are the respective issues in the treatment of the problems listed above?
  • How can PC further unburden the programmer from considering parallelism, or should it?
  • In which areas of ILP and PC can optimizing search algorithms for parallelism be useful in practice?
  • One focus of PC in the past has been affine dependences between subscripted variables. In ILP, affinity does not play a central role. Can PC learn from ILP's experience in the treatment of non-affine dependences?
  • What can ILP learn from PC's experience in compiling for asynchronous parallelism?

Participants
  • Ch. Lengauer (Passau)
  • D. Arvind (Edinburgh)
  • K. Ebcioglu (IBM Yorktown Heights)
  • K. Pingali (Ithaca)
  • R. Schreiber (HP, Palo Alto)