

Dagstuhl-Seminar 9036

Functional Languages: Optimization for Parallelism

(Sep 03 – Sep 07, 1990)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/9036

Organizers
  • C. Hankin
  • R. Wilhelm



Overview

The seminar emphasised four main areas:

  1. Static analysis and program transformation
  2. Abstract machines and compilation
  3. Architectures to support the parallel evaluation of functional languages
  4. Pragmatics for the control of evaluation order

The majority of the talks concerned the first two topics.

(1) Static analysis and program transformation

Few of the static analysis techniques which have been proposed for functional languages have been concerned with the discovery of opportunities for parallel evaluation or the control of parallel evaluation. The seminar provided a setting for interaction between the theorists working on analysis techniques and implementors. As a result of discussion, a number of analyses were identified as being of interest; these include:

  • Detection of sharing between parallel threads of computation
  • Compile-time analysis of lifetimes
  • An analysis to remove redundant sparks in GRIP-like architectures
  • Usage analysis
  • An analysis to support compile-time load balancing

A more detailed list, which was compiled by Thomas Johnsson, is included in the next section. Another important issue that was raised was how such analyses could be combined.
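
As a flavour of what such an analysis might compute, the following is a minimal, illustrative sketch (not code from the seminar; all names are invented) of a usage count over a toy lambda-calculus. A count of zero or one for a variable is the kind of fact a usage analysis can feed to a compiler, for instance to omit a thunk update or a redundant spark.

```haskell
-- A toy lambda-calculus with named variables (illustrative only).
data Expr
  = Var String
  | App Expr Expr
  | Lam String Expr

-- Count the syntactic uses of a free variable: a crude stand-in for
-- the usage analyses listed above.
uses :: String -> Expr -> Int
uses v (Var w)   = if v == w then 1 else 0
uses v (App f a) = uses v f + uses v a
uses v (Lam w b) = if v == w then 0 else uses v b  -- w shadows v
```

For example, `uses "x" (App (Var "x") (Var "x"))` is 2, signalling that the binding of `x` is shared and its result worth updating in place.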

In contrast, several speakers reported on experiments in program transformation directed at better exploitation of parallel hardware. The essence of this approach is a library of higher-order functions (e.g. scan) which are suited to implementation on a particular parallel machine model, together with a transformation algebra for transforming general programs into the required form. This is extremely important work in the light of the concluding remarks in the Dagstuhl-Seminar-Report.
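
The transformation idea can be sketched in a few lines of Haskell. The names here are illustrative, and the standard `scanl'` stands in for whatever scan skeleton a particular parallel machine model would actually provide; the point is only that an explicitly recursive program is rewritten into an instance of a skeleton for which an efficient parallel implementation (e.g. parallel prefix) is known.

```haskell
import Data.List (scanl')

-- A general sequential program: running totals by explicit recursion.
runningTotals :: [Int] -> [Int]
runningTotals = go 0
  where
    go _   []     = []
    go acc (x:xs) = let acc' = acc + x in acc' : go acc' xs

-- The same program after transformation into skeleton form: an
-- instance of scan, replaceable by a parallel prefix computation.
runningTotals' :: [Int] -> [Int]
runningTotals' = tail . scanl' (+) 0
```

A transformation algebra of the kind reported at the seminar would justify the step from `runningTotals` to `runningTotals'` by equational reasoning rather than by hand.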

(2) Abstract machines and compilation

While there is wide variance between the level of abstraction employed in the different abstract machines, it is possible to identify some emerging trends.

Four of the talks focussed on Term Graph Rewriting, a rewriting formalism invented by Barendregt and co-workers which explicitly captures sharing. The advantage of this approach is that it is readily formalisable; since it is an abstraction of the mechanisms involved in many graph reduction machines, it can provide a formal basis for reasoning about the correctness of analysis and transformation techniques.
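
The sharing that Term Graph Rewriting makes explicit is easy to observe in any lazy language implemented by graph reduction. The following Haskell fragment (an illustration, not from the report) binds a traced redex once; under graph reduction both occurrences of `x` are pointers to the same node, so the trace message appears only once however many times `x` is used.

```haskell
import Debug.Trace (trace)

-- Both uses of x below refer to one graph node, so the traced
-- multiplication is reduced exactly once and its result shared.
shared :: Int
shared = let x = trace "reducing x" (6 * 7) in x + x
```

In a term rewriting system without sharing, the same program would be modelled as `(6 * 7) + (6 * 7)` and the redex would be reduced twice.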

Certain considerations, such as copying of graph structure, are not easily describable in Term Graph Rewriting. At a lower level, much of the abstract machine work is based on refinements of the G-machine, which has become established as a standard. The most important refinements are "spinelessness" and "taglessness": the former involves caching the spine of the graph on the stack so that the whole expression graph does not have to be rebuilt at each step; the latter avoids the need for some indirection by storing code pointers with values rather than type tags.

(3) Architectures

The key discussion between the architects continues to be the fine-grain versus coarse-grain dispute. The experience of the MIT MONSOON system gives encouraging evidence to support the fine-grain approach, but offset against this is the early experience with Glasgow's GRIP, which requires throttling and a more coarse-grain approach. Some of the talks on program transformation suggest that this issue can be hidden from the high-level user. Which class of architecture is best is clearly dependent on the likely job mix.

(4) Pragmatics

Finally, it is becoming apparent that achieving high performance from any parallel architecture will involve significant programmer intervention. The form of intervention may be guidance to a program transformation system or the insertion of control annotations. The need for annotations should come as no surprise given our experience of imperative languages. However, the situation is different here: the annotations are semantics-preserving, and there is room for optimism that they could be automatically generated by smart static analysis tools.
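
In modern terms, GHC's spark annotations `par` and `pseq` are an example of such semantics-preserving annotations; the sketch below uses today's names and library (`GHC.Conc`) rather than any particular system discussed at the seminar. Erasing the annotations leaves a function that computes exactly the same result, which is why a smart analysis could in principle insert them automatically.

```haskell
import GHC.Conc (par, pseq)

-- par sparks its first argument for possible parallel evaluation;
-- pseq fixes evaluation order. Neither changes the value computed:
-- stripped of annotations, this is the naive Fibonacci function.
pfib :: Int -> Int
pfib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = pfib (n - 1)
    y = pfib (n - 2)
```

Whether the sparked work is actually taken up in parallel is left to the runtime, which is precisely the throttling question raised by the GRIP experience above.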


Participants
  • C. Hankin
  • R. Wilhelm