Dagstuhl Seminar 22101

Tensor Computations: Applications and Optimization

(Mar 06 – Mar 11, 2022)


Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/22101

Organizers

  • Paolo Bientinesi (University of Umeå, SE)
  • David Ham (Imperial College London, GB)
  • Furong Huang (University of Maryland - College Park, US)
  • Paul H. J. Kelly (Imperial College London, GB)
  • P. (Saday) Sadayappan (University of Utah - Salt Lake City, US)

Summary

Linear relationships between quantities are one of the most fundamental and pervasive phenomena in mathematics, science and computing. While matrices encode linear relationships between exactly two quantities, tensors are an abstraction representing linear relationships between multiple variables. Tensor computations therefore provide an abstract language for computations that span an enormous range of application domains, including machine learning, quantum information systems, simulations based on solving partial differential equations, computational chemistry and beyond. The tensor abstraction enriches our understanding of the structure of computations, and exposes common challenges and solutions that cut across different research communities.
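
For concreteness, here is a minimal illustration (ours, not from the seminar report) of how a third-order tensor generalises the matrix-vector product to a multilinear map, written with NumPy's einsum:

  import numpy as np

  A = np.random.rand(4, 5)              # matrix: relates two index sets, i and j
  x = np.random.rand(5)
  y = np.einsum('ij,j->i', A, x)        # ordinary matrix-vector product

  T = np.random.rand(3, 4, 5)           # third-order tensor: relates indices i, j, k
  u = np.random.rand(4)
  v = np.random.rand(5)
  w = np.einsum('ijk,j,k->i', T, u, v)  # contract two modes: a multilinear map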

While the mathematics of tensors is well-developed and extensively applied across all of these applications and beyond, there is far less commonality in the software abstractions and tools deployed to execute tensor computations. This is in stark contrast to matrix computations, where common abstractions and stable interfaces have led to widely used tools that bring high performance across diverse application domains.

This Seminar explored this challenge, and made significant progress towards establishing foundations for common implementations -- embodying the substantial body of knowledge on high-performance tensor computation strategies in common software libraries and domain-specific program generation tools.
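
As an illustrative sketch of that body of knowledge (our example, with assumed shapes, not code from the Seminar): one widely used strategy, often called TTGT (transpose-transpose-GEMM-transpose), lowers a tensor contraction onto the stable matrix interface so that a single BLAS GEMM call does the heavy lifting:

  import numpy as np

  I, J, K, M = 6, 7, 8, 9
  A = np.random.rand(I, K, J)   # contraction: C[i, j, m] = sum_k A[i, k, j] * B[k, m]
  B = np.random.rand(K, M)

  A_mat = A.transpose(0, 2, 1).reshape(I * J, K)   # transpose, then flatten to a matrix
  C = (A_mat @ B).reshape(I, J, M)                 # one GEMM, then restore the tensor shape

  assert np.allclose(C, np.einsum('ikj,km->ijm', A, B))

The cost of the explicit transposition is one of the trade-offs that such libraries and code generators must weigh.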

The Seminar began with five tutorial lectures, offered by the organisers in partnership with selected leading figures in some of the relevant communities. We began by mapping some of the diverse terminology. We then provided tutorials exposing the quantitative and qualitative diversity in how different communities use tensor computations -- aiming to build a common understanding of key concepts, notations, and building blocks. We focused on the following application areas:

  1. Quantum physics and chemistry
  2. Mesh-based discretisations for the solution of partial differential equations
  3. Machine learning

The final tutorial reviewed the challenge of establishing unifying software tools, highlighting the enormous body of work that has been done within application areas.

The second phase of the Seminar consisted of more detailed presentations from the participants. These included motivating applications, but focused on the fundamental computational workloads, methods, and performance challenges. Building on this, we also had contributions focused on implementation -- low-level performance considerations, algorithmic proposals, compiler algorithms, and compiler infrastructure.

In the third phase of the Seminar, we separated into three teams. One explored benchmarking and datasets. Another made substantial progress on proof-of-concept implementation work connecting the high-level TensorLy library for tensor decompositions in machine learning to lower-level tensor-vector products -- achieving a considerable performance advantage. Finally, there was a major and continuing effort to define a common domain-specific language and compiler representation for tensor contractions that supports both high-level optimisations and the use of high-performance low-level libraries.
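
As a rough sketch of the shape of that proof-of-concept (illustrative only, using TensorLy's public API rather than the working group's actual code): the high-level library drives a decomposition, while the performance-critical inner kernel is a mode-n tensor-times-vector product that could be delegated to an optimised low-level implementation:

  import numpy as np
  import tensorly as tl
  from tensorly.decomposition import parafac
  from tensorly.tenalg import mode_dot

  tl.set_backend('numpy')                  # an optimised backend could be plugged in here

  X = tl.tensor(np.random.rand(10, 12, 14))
  weights, factors = parafac(X, rank=5)    # high-level tensor decomposition

  v = np.random.rand(12)
  Y = mode_dot(X, v, mode=1)               # mode-1 tensor-vector product, result shape (10, 14)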

This 2022 seminar built on progress made at an earlier seminar with the same title, held in March 2020, which was very heavily impacted by the coronavirus pandemic. This seminar was also affected, to a lesser extent -- with a reduced number of on-site participants, partly compensated by very useful engagement with researchers joining online, albeit from distant timezones.

This seminar benefited from broader engagement with application domains -- partly as a result of the work that was done on the tutorials, which we hope to publish in due course. It also benefited from deeper engagement with developers of high-performance building blocks. Finally, we initiated a new and continuing effort to define a common domain-specific language and a common intermediate representation for code generation tools.

Copyright Paolo Bientinesi, David Ham, Furong Huang, Paul H. J. Kelly, and P. (Saday) Sadayappan

Motivation

Tensors are higher-dimensional analogs of matrices, and represent a key data abstraction for many applications in computational science and data science. Widely used shared infrastructure exists for linear algebra; for tensor computations, in contrast, there is no consensus on standard building blocks. This Dagstuhl Seminar aims to bring together users and performance optimization specialists to build such foundations.

Tensor computations are important in a wide range of application domains, including, among others:

  • Physics – notably in quantum information theory and tensor network models in quantum many-body systems
  • Chemistry – notably in electronic structure calculations, for example using coupled-cluster methods
  • Mechanics – notably in numerical methods for solution of partial differential equations
  • Machine learning – notably both as a language for deep learning and as a framework for multidimensional data analysis

The development of a common language for tensor contractions, tensor networks, tensor decompositions, and the associated numerical methods is yielding deep insight and cross-fertilization. Furthermore, several concurrent efforts have targeted the development of libraries, frameworks, and domain-specific compilers to support the rising demand for high-performance tensor computations.
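
A small, hedged illustration (ours, not from the proposal) of how an einsum-style index notation already serves as such a shared language across domains -- the same primitive expresses a chemistry-style contraction and a machine-learning-style one:

  import numpy as np

  # Coupled-cluster-flavoured contraction: R[a,b,i,j] = sum_{c,d} V[a,b,c,d] * T2[c,d,i,j]
  V  = np.random.rand(4, 4, 4, 4)
  T2 = np.random.rand(4, 4, 3, 3)
  R  = np.einsum('abcd,cdij->abij', V, T2)

  # Deep-learning-flavoured contraction: apply a weight matrix to batched activations
  X = np.random.rand(32, 10, 64)            # (batch, tokens, features)
  W = np.random.rand(64, 16)
  Y = np.einsum('btf,fh->bth', X, W)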

This seminar aims to realize the potential of an emerging recognition of the common foundations that underpin tensor computations across these very diverse domains. There is a huge opportunity in coordination among the various communities: the development of high-performance libraries and code generation frameworks for tensor computations can be shaped by improved interaction with the research communities that develop applications using tensors as their key data abstraction.

This seminar builds on Seminar 20111 (March 2020) of the same name, which operated on a reduced scale due to the coronavirus pandemic. It will bring together researchers whose focus is the application of tensor computations, and researchers developing software infrastructure for efficient tensor computation primitives, including experts in high-performance computing, high-performance machine learning, compiler optimization, and in numerical methods across the spectrum of application areas.

A very fruitful exchange of ideas is anticipated, with discussions on the variety of needs and use-cases for tensor computations and the challenges/opportunities in the development of high-performance software to satisfy those needs.

Copyright Paolo Bientinesi, David Ham, Furong Huang, Paul H. J. Kelly, and P. (Saday) Sadayappan

Participants
On-site
  • Cem Bassoy (Fraunhofer IOSB - Ettlingen, DE)
  • Paolo Bientinesi (University of Umeå, SE)
  • Simon Bonér (University of Umeå, SE)
  • Albert Cohen (Google - Paris, FR) [dblp]
  • Jeremy E. Cohen (CNRS - IRISA - Rennes, FR)
  • Teodoro Collin (MIT - Cambridge, US)
  • Jutho Haegeman (Ghent University, BE)
  • David Ham (Imperial College London, GB) [dblp]
  • Paul H. J. Kelly (Imperial College London, GB) [dblp]
  • Thomas Koehler (University of Glasgow, GB)
  • Lawrence Mitchell (Durham University, GB) [dblp]
  • Christos Psarras (RWTH Aachen, DE)
  • Norman Rink (Google DeepMind - London, GB)
  • P. (Saday) Sadayappan (University of Utah - Salt Lake City, US) [dblp]
  • Paul Springer (NVIDIA Corp. - Santa Clara, US)
  • Edward Stow (Imperial College London, GB)
  • Volker Tresp (Siemens - München, DE) [dblp]
  • Bora Uçar (ENS - Lyon, FR) [dblp]
  • Carsten Uphoff (Intel Deutschland GmbH - Feldkirchen, DE)
  • Edward Valeev (Virginia Tech - Blacksburg, US)
  • Sophia Vorderwuelbecke (Imperial College London, GB)
  • Connor Ward (Imperial College London, GB)
Remote
  • Muthu Manikandan Baskaran (Qualcomm Technologies - New York, US)
  • Charisee Chiw (Google - San Francisco, US) [dblp]
  • Nadav Cohen (Tel Aviv University, IL)
  • Edoardo Di Napoli (Jülich Supercomputing Centre, DE)
  • Rong Ge (Duke University - Durham, US)
  • Johnnie Gray (Caltech - Pasadena, US)
  • Vinod Grover (NVIDIA - Redmond, US) [dblp]
  • Furong Huang (University of Maryland - College Park, US)
  • Katharina Kormann (Uppsala University, SE)
  • Jean Kossaifi (NVIDIA - Redmond, US)
  • Jiajia Li (Pacific Northwest National Lab. - Richland, US)
  • Devin Matthews (SMU - Dallas, US)
  • Luke Panayi (Imperial College London, GB)
  • Vivek Srikumar (University of Utah - Salt Lake City, US) [dblp]
  • Edwin Miles Stoudenmire (Flatiron Institute - New York, US)
  • Richard M. Veras (University of Oklahoma - Norman, US) [dblp]
  • Qi (Rose) Yu (University of California - San Diego, US)
  • Pan Zhang (Chinese Academy of Sciences - Beijing, CN)

Related Seminars
  • Dagstuhl Seminar 20111: Tensor Computations: Applications and Optimization (2020-03-08 - 2020-03-13)

Classification
  • Computational Engineering, Finance, and Science
  • Machine Learning
  • Mathematical Software

Keywords
  • compilers
  • computational science
  • linear algebra
  • machine learning
  • numerical methods