

Dagstuhl Seminar 18111

Loop Optimization

(March 11 – 16, 2018)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/18111

Organizers
  • Sebastian Hack (Universität des Saarlandes, DE)
  • Paul H. J. Kelly (Imperial College London, GB)
  • Christian Lengauer (Universität Passau, DE)



Summary

Motivation

Loop optimization is at the heart of effective program optimization – even if the source language is too abstract to contain loop constructs explicitly, as, e.g., in a functional style or a domain-specific language. Loops provide a major opportunity to improve the performance of a program because they represent compactly a large volume of accessed data and executed instructions. Since the clock frequency of processors has stopped growing (the end of Dennard scaling), the only way to accelerate the execution of programs is to increase their throughput with a compiler: by increasing parallelism and improving data locality. Increasingly, the performance goal is not only execution speed but also throughput, power efficiency, or a combination of these and other criteria. This puts loop optimization at the center of performance optimization.

Context

The quick and easy way to optimize a loop nest, still frequently used in practice, is to restructure the source program, e.g., by permuting, tiling, or skewing the loop nest. Besides being laborious and error-prone, this approach favors modifications that are easy to recognize and carry out but that need not be the most suitable choice. A much better approach is to search automatically for optimization options in a mathematical model of the iteration space, in which all options are equally detectable and the quality of each option can be assessed precisely.
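
As a concrete illustration of such manual restructuring (the example is mine, not the report's), here is a loop nest before and after tiling; the tile size T is a hand-picked assumption of the kind an automatic, model-based search would instead determine:

    #define N 1024
    #define T 64    /* tile size: hand-picked here and assumed to divide N */

    /* Original nest: B is traversed column-wise, with poor cache locality. */
    void transpose(double A[N][N], double B[N][N]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                A[i][j] = B[j][i];
    }

    /* Manually tiled variant: a T x T block of A and B fits in cache, so
       each cache line of B is fully consumed while it is resident instead
       of being evicted between uses. */
    void transpose_tiled(double A[N][N], double B[N][N]) {
        for (int ii = 0; ii < N; ii += T)
            for (int jj = 0; jj < N; jj += T)
                for (int i = ii; i < ii + T; i++)
                    for (int j = jj; j < jj + T; j++)
                        A[i][j] = B[j][i];
    }

Writing and maintaining the four-deep tiled nest by hand is exactly the laborious, error-prone work that the automatic approach removes.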

Recently, the polyhedral compilation community has produced a set of robust and powerful libraries that contain a variety of algorithms for the manipulation of Presburger sets, including all standard polyhedral compilation techniques. These libraries can be incorporated into program analyses to make other compiler optimizations more precise and powerful – for example, in optimizers and code generators for domain-specific languages or in aggressive optimizers for high-performance computing.
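
To give a flavor of what such a library offers, here is a minimal sketch using isl, one widely used Presburger-set library (the report names no specific library, so the choice of isl is an assumption). It builds the iteration domain of a triangular loop nest as a parametric Presburger set:

    #include <isl/ctx.h>
    #include <isl/set.h>

    int main(void) {
        isl_ctx *ctx = isl_ctx_alloc();

        /* Iteration domain of
               for (i = 0; i < n; i++)
                   for (j = 0; j <= i; j++)
                       S(i, j);
           as a Presburger set with the symbolic parameter n. */
        isl_set *domain = isl_set_read_from_str(ctx,
            "[n] -> { S[i, j] : 0 <= i < n and 0 <= j <= i }");

        isl_set_dump(domain);   /* print the set for inspection */

        isl_set_free(domain);
        isl_ctx_free(ctx);
        return 0;
    }

On such sets the library can compute exact dependences, lexicographic optima, and new schedules; the point is that the loop nest has become a mathematical object.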

Polyhedral loop optimization imposes strict constraints on the structure of the loop nest and may require a computationally expensive program analysis based on integer linear programming. The optimization problems become much simpler when information available at load or run time can be exploited, i.e., when the optimization is done just in time. Also, the search for the best optimization can be supported by other techniques, e.g., auto-tuning, machine learning, or genetic algorithms. While these techniques are all fully automatic, engineering software with robust performance characteristics requires programmers to have some explicit control over data distribution and communication costs. However, manually optimized code is far too complicated to maintain. Thus, a major research area concerns the design of tools that allow developers to guide or direct analysis (e.g., via dependence summaries or domain-specific code generation) and optimization (e.g., via directives, sketches, and abstractions for schedules and data partitioning).
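
The directive-based end of this spectrum is familiar from OpenMP, used below as a stand-in (the report does not commit to a particular directive language): the programmer asserts that iterations are independent and guides their distribution, while the compiler carries out the transformation:

    /* The directive asserts that the iterations are independent and
       requests a static distribution of iterations over threads. */
    void axpy(int n, double *y, const double *x, double a) {
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < n; i++)
            y[i] += a * x[i];
    }

The directive is a promise from the programmer, not a verified fact; tools that check or infer such annotations are part of the research area described above.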

Goals

The goal of this seminar was to create a new synergy in loop optimization research by bringing together representatives of the major schools of thought in this field. The key unifying idea is to formulate loop optimization as a mathematical problem, by characterizing the optimization space and the objectives with respect to a suitable model.

One school focuses on reasoning about scheduling and parallelization using a geometric, "polyhedral" model of iteration spaces, which supports powerful tools for measuring parallelism, locality, and communication – but which is quite limited in its applicability.
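
A small, self-contained illustration of this geometric view (the example is mine, not the report's): in the stencil below, neither loop is parallel as written, but the affine schedule (i, j) -> (i + j, j) skews the iteration space so that each wavefront t = i + j consists of mutually independent points:

    #define MAX(a, b) ((a) > (b) ? (a) : (b))
    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    /* Original nest: S(i, j) depends on S(i-1, j) and S(i, j-1),
       so neither the i loop nor the j loop is parallel as written. */
    void stencil(int N, int M, double A[N][M]) {
        for (int i = 1; i < N; i++)
            for (int j = 1; j < M; j++)
                A[i][j] = A[i-1][j] + A[i][j-1];
    }

    /* Skewed ("wavefront") version: for fixed t = i + j, every point
       depends only on wavefront t - 1, so the j loop may run in parallel. */
    void stencil_wavefront(int N, int M, double A[N][M]) {
        for (int t = 2; t <= N + M - 2; t++)
            for (int j = MAX(1, t - N + 1); j <= MIN(M - 1, t - 1); j++) {
                int i = t - j;
                A[i][j] = A[i-1][j] + A[i][j-1];
            }
    }

Finding such schedules automatically, and bounding the parallelism and communication they expose, is what the polyhedral tooling is for; the price is that loop bounds and accesses must be affine.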

Another major school treats program optimization as program synthesis, for example by equational rewriting, generating a potentially large space of variants which can be pruned with respect to properties like load balance and locality. This approach has flourished in certain application domains, but also suffers from problems with generalization.

A third family of loop optimization approaches tackles program optimization through program generation and symbolic evaluation. Generative approaches, such as explicit staging, support programmers in taking explicit control over implementation details at a high level of abstraction.
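
A deliberately crude C sketch of the generative idea (real staged systems offer typed, hygienic code generation rather than string printing): because the trip count is fixed at generation time, the generator emits specialized, fully unrolled code:

    #include <stdio.h>

    /* Toy program generator: emits a fully unrolled axpy kernel for a
       trip count n that is known when the generator runs. */
    void emit_unrolled_axpy(int n) {
        printf("void axpy_%d(double *y, const double *x, double a) {\n", n);
        for (int i = 0; i < n; i++)
            printf("    y[%d] += a * x[%d];\n", i, i);
        printf("}\n");
    }

    int main(void) {
        emit_unrolled_axpy(4);   /* prints a specialized 4-iteration kernel */
        return 0;
    }

Staging makes the two execution times explicit: decisions such as unroll factors become ordinary program logic at generation time rather than compiler heuristics.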

The seminar explored the interplay of these various loop optimization techniques and fostered communication in the wide-ranging research community of model-based loop optimization. Participants represented not only the various loop optimization approaches but also application domains in high-performance computing.

Conclusions

The seminar succeeded in making the participants aware of common goals and of the relations between the different approaches. Consensus emerged on the potential and importance of tensor contractions and tensor comprehensions as an intermediate representation. There was also some excitement about connecting classical dependence-based optimization with newly emerging ideas for automatically deriving parallel algorithms from sequentially dependent code. Guided automatic search and inference turned out to be a dominant theme. Another important insight was that the optimization criteria currently in use are often too coarse-grained and do not deliver satisfactory performance; more precise hardware models are needed to guide optimization. This will require closer collaboration with the performance modeling and engineering community.
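
For concreteness (my example, not the report's): a tensor contraction sums over shared indices, with matrix multiplication as the canonical instance. An intermediate representation built on contractions exposes the entire iteration space and reduction structure to the optimizer at once, instead of three opaque loops:

    /* Matrix multiplication as a tensor contraction over the shared
       index k:  C[i][j] = sum_k A[i][k] * B[k][j]. */
    void contract(int n, double C[n][n],
                  const double A[n][n], const double B[n][n]) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double acc = 0.0;
                for (int k = 0; k < n; k++)
                    acc += A[i][k] * B[k][j];
                C[i][j] = acc;
            }
    }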

It was agreed that publications and collaborations fueled by the seminar will acknowledge Schloss Dagstuhl.

Copyright Sebastian Hack, Paul H. J. Kelly, and Christian Lengauer

Participants
  • Cédric Bastoul (University of Strasbourg, FR) [dblp]
  • Barbara M. Chapman (Stony Brook University, US) [dblp]
  • Shigeru Chiba (University of Tokyo, JP) [dblp]
  • Charisee Chiw (University of Chicago, US) [dblp]
  • Philippe Clauss (University of Strasbourg, FR) [dblp]
  • Albert Cohen (ENS - Paris, FR) [dblp]
  • James W. Demmel (University of California - Berkeley, US) [dblp]
  • Johannes Doerfert (Universität des Saarlandes, DE) [dblp]
  • Andi Drebes (University of Manchester, GB) [dblp]
  • Paul Feautrier (ENS - Paris, FR) [dblp]
  • Stefan Ganser (Universität Passau, DE) [dblp]
  • Tobias Grosser (ETH Zürich, CH) [dblp]
  • Armin Größlinger (Universität Passau, DE) [dblp]
  • Sebastian Hack (Universität des Saarlandes, DE) [dblp]
  • Julian Hammer (Universität Erlangen-Nürnberg, DE) [dblp]
  • Frank Hannig (Universität Erlangen-Nürnberg, DE) [dblp]
  • Alexandra Jimborean (Uppsala University, SE) [dblp]
  • Paul H. J. Kelly (Imperial College London, GB) [dblp]
  • Sriram Krishnamoorthy (Pacific Northwest National Lab. - Richland, US) [dblp]
  • Michael Kruse (ENS - Paris, FR) [dblp]
  • Roland Leißa (Universität des Saarlandes, DE) [dblp]
  • Christian Lengauer (Universität Passau, DE) [dblp]
  • Fabio Luporini (Imperial College London, GB) [dblp]
  • Benoit Meister (Reservoir Labs, Inc. - New York, US) [dblp]
  • Lawrence Mitchell (Imperial College London, GB) [dblp]
  • Madan Musuvathi (Microsoft Research - Redmond, US) [dblp]
  • Victor Nicolet (University of Toronto, CA) [dblp]
  • Philip Pfaffe (KIT - Karlsruher Institut für Technologie, DE) [dblp]
  • Antoniu Pop (University of Manchester, GB) [dblp]
  • Louis-Noël Pouchet (Colorado State University - Fort Collins, US) [dblp]
  • Jonathan Ragan-Kelley (University of California - Berkeley, US) [dblp]
  • P. (Saday) Sadayappan (Ohio State University - Columbus, US) [dblp]
  • Jun Shirako (Georgia Institute of Technology - Atlanta, US) [dblp]
  • Andreas Simbürger (Universität Passau, DE) [dblp]
  • Daniele G. Spampinato (Carnegie Mellon University - Pittsburgh, US) [dblp]
  • Michel Steuwer (University of Glasgow, GB) [dblp]
  • Tianjiao Sun (Imperial College London, GB) [dblp]
  • Nicolas T. Vasilache (Facebook - New York, US) [dblp]
  • Richard M. Veras (Louisiana State Univ. - Baton Rouge, US) [dblp]
  • Sven Verdoolaege (Facebook - Paris, FR) [dblp]
  • Ayal Zaks (Technion - Haifa, IL) [dblp]

Related Seminars
  • Dagstuhl Seminar 9616: Loop Parallelization (1996-04-15 – 1996-04-19)

Classification
  • optimization / scheduling
  • programming languages / compiler
  • software engineering

Keywords
  • Autotuning
  • dependence analysis
  • just-in-time (JIT)
  • loop parallelization
  • parallel programming
  • polyhedron model