Dagstuhl Seminar 99161

Instruction-Level Parallelism and Parallelizing Compilation

(Apr 18 – Apr 23, 1999)

Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/99161

Organizers
  • Ch. Lengauer (Passau)
  • D. Arvind (Edinburgh)
  • K. Ebcioglu (IBM Yorktown Heights)
  • K. Pingali (Ithaca)
  • R. Schreiber (HP, Palo Alto)

Motivation

The aim of this seminar is to bring together two research areas that have developed side by side with little exchange of results: instruction-level parallelism (ILP) and parallelizing compilation (PC). The seminar will provide a forum for the exchange of ideas and experiences in parallelization methods. Both areas deal with similar issues, such as

  1. dependence analysis (see the sketch after this list),
  2. synchronous vs. asynchronous parallelism,
  3. static vs. dynamic parallelization,
  4. speculative execution.
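
As a minimal sketch of issue 1 (the names are hypothetical, not from the seminar text), consider two C loops that a dependence analysis must distinguish:

    void example(double *a, double *b, int n)
    {
        /* Loop-carried flow dependence: iteration i reads the value
           written in iteration i-1, so the iterations cannot simply
           run in parallel. */
        for (int i = 1; i < n; i++)
            a[i] = a[i - 1] * 2.0;

        /* No loop-carried dependence: the iterations are independent,
           so PC may run them in parallel and ILP may overlap their
           instructions. */
        for (int i = 0; i < n; i++)
            b[i] = a[i] * 2.0;
    }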

However, the different levels of abstraction at which the parallelization takes place call for different techniques and impose different optimization criteria.

In instruction-level parallelism, the parallelism is by nature invisible to the programmer, since it is infused in program parts which are atomic at the level of the programming language. The emphasis is on driving the parallelization process by the availability of architectural resources. Static parallelization has been targeted at very long instruction word (VLIW) architectures and dynamic parallelization at superscalar architectures. Heuristics are applied to achieve good but, in general, suboptimal performance.
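
A minimal sketch of this invisibility (the function and variable names are hypothetical): a single source statement, atomic to the programmer, that nevertheless contains instructions a VLIW or superscalar machine can issue simultaneously.

    double dot2(double a, double x, double b, double y)
    {
        /* The two multiplies are mutually independent, so a two-issue
           machine can execute them in the same cycle:

               cycle 0:   mul t1, a, x   |   mul t2, b, y
               cycle 1:   add r, t1, t2

           A VLIW compiler finds this schedule statically; a superscalar
           core finds it dynamically. The source never exposes it. */
        return a * x + b * y;
    }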

In parallelizing compilation, parallelism that is visible at the level of the programming language must be exposed. The programmer usually aids the parallelization process with program annotations or by putting the program to be parallelized into a certain syntactic form. The emphasis has been on static parallelization methods. One can apply either heuristics or an optimizing algorithm to search for the best performance. Resource limitations can be taken into account during the search, or they can be imposed in a later step, e.g., through tiling or partitioning.
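
As a hedged illustration (the OpenMP-style annotation and the tile size are assumptions, not part of the seminar text): a loop whose parallelism the programmer exposes with an annotation, followed by a tiled variant in which resource limits are imposed in a later step.

    #define N    1024
    #define TILE 64    /* assumed tile size, e.g. chosen to fit a cache */

    void scale(double a[N][N], double s)
    {
        /* Programmer annotation exposing the loop's parallelism. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] *= s;
    }

    /* The same computation with tiling imposed afterwards: iterating
       over TILE x TILE blocks respects resource limitations such as
       cache capacity. (N is a multiple of TILE, so no edge cases.) */
    void scale_tiled(double a[N][N], double s)
    {
        for (int ii = 0; ii < N; ii += TILE)
            for (int jj = 0; jj < N; jj += TILE)
                for (int i = ii; i < ii + TILE; i++)
                    for (int j = jj; j < jj + TILE; j++)
                        a[i][j] *= s;
    }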

The following questions should be discussed in the seminar:

  • What are the respective issues in the treatment of the problems listed above?
  • How can PC further unburden the programmer from considering parallelism, or should it?
  • In which areas of ILP and PC can optimizing search algorithms for parallelism be useful in practice?
  • One focus of PC in the past has been affine dependences between subscripted variables. In ILP, affinity does not play a central role. Can PC learn from ILP's experience in the treatment of non-affine dependences? (The sketch after this list contrasts the two cases.)
  • What can ILP learn from PC's experience in compiling for asynchronous parallelism?
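
A small sketch of the contrast raised in the fourth question (names hypothetical): an affine subscript whose dependence PC can resolve exactly at compile time, next to a non-affine one that calls for the runtime or speculative techniques familiar from ILP.

    void affine_vs_nonaffine(double *a, const int *idx, double c, int n)
    {
        /* Affine subscripts i and i-1: the dependence distance is the
           constant 1, so static analysis resolves it exactly. */
        for (int i = 1; i < n; i++)
            a[i] = a[i - 1] + c;

        /* Non-affine subscript idx[i]: the access pattern is data
           dependent, so static analysis must be conservative; runtime
           tests or speculation can recover the parallelism. */
        for (int i = 0; i < n; i++)
            a[idx[i]] = a[i] + c;
    }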
