Dagstuhl Seminar 20111

Tensor Computations: Applications and Optimization

(Mar 08 – Mar 13, 2020)


Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/20111

Organizers
  • Paolo Bientinesi
  • Furong Huang
  • Paul H. J. Kelly
  • P. (Saday) Sadayappan


Motivation

Tensors are higher-dimensional analogs of matrices and represent a key data abstraction for many applications in computational science and data science. In contrast to the wide availability of high-performance numerical libraries for matrix computations on diverse hardware platforms, only limited software infrastructure exists today for high-performance tensor computations.
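
To make the abstraction concrete, here is a minimal sketch in plain NumPy (chosen only for illustration; no library discussed at the Seminar is implied) of how tensor contractions generalize matrix multiplication to higher-order operands:

    import numpy as np

    A = np.random.rand(4, 5)            # a matrix is an order-2 tensor
    B = np.random.rand(5, 6)
    C = np.einsum("ik,kj->ij", A, B)    # matrix multiplication: contract over k

    T = np.random.rand(4, 5, 6)         # an order-3 tensor
    M = np.random.rand(6, 7)
    U = np.einsum("ijk,kl->ijl", T, M)  # contract the third mode of T with M
    print(C.shape, U.shape)             # (4, 6) (4, 5, 7)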

Recent research developments have resulted in the formulation of many machine learning algorithms in terms of tensor computations. Tensor computations have also emerged as fundamental building blocks for many algorithms in data science and computational science. Therefore, several concurrent efforts have targeted the development of libraries, frameworks, and domain-specific compilers to support the rising demand for high-performance tensor computations. However, there is currently very little coordination among the various groups of developers. Further, the groups developing high-performance libraries/frameworks for tensor computations are still rather disconnected from the research community that develops applications using tensors as a key data abstraction.

The main goal of this Dagstuhl Seminar is to bring together two communities: first, researchers from disciplines that develop applications centered around tensor computations, and second, researchers who develop software infrastructure for efficient tensor computation primitives. Invitees from the former group will include experts in machine learning and data analytics, as well as computational scientists developing tensor-based applications. Invitees from the latter group will include experts in compiler optimization and experts in numerical methods.

A fruitful exchange of ideas across these four research communities is anticipated, with discussions on the variety of needs and use cases for tensor computations, and on the challenges and opportunities in developing high-performance software to satisfy those needs.

Copyright Paolo Bientinesi, Furong Huang, Paul H. J. Kelly, and P. (Saday) Sadayappan

Summary

This Seminar was planned for 40 participants, but due to travel restrictions resulting from Covid-19, only 15 were able to attend in person, though several key talks were delivered via teleconferencing. As a result, the Seminar was very focused and very productive. Some aspects were nevertheless lost, in particular the in-person representation of the full breadth of the applications communities.

It was very evident from the presentations and lively discussions at the Seminar that the field of "Tensor Computations" is vibrant, multi-faceted, interdisciplinary, and fundamental to progress in a diverse range of important areas, which is driving researchers in different fields to search for common foundations and common tools.

One of the communities with an interest in tensor computations can be described as "classical" computational science, focusing, for example, on partial differential equations in fluid dynamics and on electronic structure computations in chemistry and materials science. Tensor contractions have been identified as a powerful way of representing the computational structure in the architecture of compilers for domain-specific languages serving these communities. Exploiting this algebraic intermediate representation in the compiler has enabled important performance optimizations far beyond the scope of conventional compilers based on loop nests and polyhedral techniques.
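
As one hedged illustration of what a contraction-level representation can see (written in plain NumPy, not in any of the domain-specific compilers discussed at the Seminar): a chain of contractions can be reassociated to reduce the operation count, a transformation that analysis at the level of individual loop nests rarely discovers.

    import numpy as np

    A = np.random.rand(200, 200)
    B = np.random.rand(200, 200)
    v = np.random.rand(200)

    # (A @ B) @ v costs O(n^3), whereas A @ (B @ v) costs O(n^2).
    # einsum_path reports the contraction order chosen for "ij,jk,k->i"
    # together with an estimate of the floating-point operations saved.
    path, info = np.einsum_path("ij,jk,k->i", A, B, v, optimize="optimal")
    print(info)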

Another major community is primarily concerned with tensor decomposition, that is, finding low-rank approximations of tensors. This is fundamental to data analytics and machine learning applications. Tensor factorization also provides a powerful framework for deep learning and representation learning, and offers a promising strategy for weight compression in convolutional neural networks.
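
As a minimal sketch of what such a decomposition computes, here is a rank-r CP (canonical polyadic) model, T[i,j,k] ≈ Σ_r A[i,r] B[j,r] C[k,r], fitted by the textbook alternating-least-squares iteration in plain NumPy (illustrative only; this is not the algorithm of any particular library):

    import numpy as np

    def cp_als(T, r, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        I, J, K = T.shape
        A, B, C = (rng.standard_normal((n, r)) for n in (I, J, K))
        for _ in range(iters):
            # Each factor update is a linear least-squares solve: an MTTKRP
            # (matricized tensor times Khatri-Rao product) multiplied by the
            # pseudoinverse of the Hadamard product of the other factors'
            # Gram matrices.
            A = np.einsum("ijk,jr,kr->ir", T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = np.einsum("ijk,ir,kr->jr", T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = np.einsum("ijk,ir,jr->kr", T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C

    # Fit an exactly rank-3 tensor; the residual should shrink toward zero.
    G = [np.random.rand(n, 3) for n in (6, 7, 8)]
    T = np.einsum("ir,jr,kr->ijk", *G)
    A, B, C = cp_als(T, r=3)
    print(np.linalg.norm(T - np.einsum("ir,jr,kr->ijk", A, B, C)))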

Tensor contractions, in the form of tensor networks, are enormously important as a tool for understanding and computation in particle and quantum physics. Indeed, mapping the connections between these topics, as exposed through the structure of the tensor network representation, offers an exciting frontier with the potential to underpin these different disciplines with a common language and shared software.
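
For a flavour of how a tensor network reduces to concrete contractions, here is a small NumPy sketch computing the inner product of two matrix product states site by site, so that the cost stays polynomial in the bond dimension (the site-tensor layout (left bond, physical index, right bond) is an assumed convention for illustration):

    import numpy as np

    def mps_inner(As, Bs):
        """Inner product <A|B> of two matrix product states whose site tensors
        have shape (left_bond, physical, right_bond), with bond dimension 1
        at both ends of the chain."""
        E = np.ones((1, 1))  # environment: the contraction of all sites so far
        for A, B in zip(As, Bs):
            # E[a,b] * conj(A)[a,p,c] * B[b,p,d] -> E'[c,d]
            E = np.einsum("ab,apc,bpd->cd", E, A.conj(), B)
        return E[0, 0]

    # Squared norm of a random 4-site state (physical dim 2, bond dim 3).
    bonds = [1, 3, 3, 3, 1]
    As = [np.random.rand(bonds[i], 2, bonds[i + 1]) for i in range(4)]
    print(mps_inner(As, As))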

Shaped in part by the set of participants who were able to attend, the Seminar developed a focus on tensor contractions, recognising that these provide a foundation for implementing numerical methods for tensor decompositions. Revisiting tensor decompositions is a key topic to be addressed in any follow-up to this Seminar.

A major focus for progress was identified in the characterization of safety and correctness properties: ensuring that tensor contraction expressions are well-formed and meaningful. A related topic identified as critical concerns how structure is captured, represented, and used. This is not only conceptually valuable, but also provides a pathway to exploiting block, band, and symmetry structure when generating efficient code.
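
As a toy sketch of the kind of well-formedness check meant here (the einsum-style specification syntax mirrors NumPy's, but the checker itself is hypothetical): every index must have a single consistent extent across all operands, and every output index must be bound by some input.

    def check_contraction(spec, *shapes):
        """Validate an einsum-style spec such as "ij,jk->ik" against operand
        shapes; return the output shape, or raise a descriptive error."""
        inputs, _, output = spec.partition("->")
        operands = inputs.split(",")
        if len(operands) != len(shapes):
            raise ValueError(f"{len(operands)} index groups but {len(shapes)} operands")
        extents = {}
        for subs, shape in zip(operands, shapes):
            if len(subs) != len(shape):
                raise ValueError(f"'{subs}' has rank {len(subs)} but shape is {shape}")
            for idx, n in zip(subs, shape):
                if extents.setdefault(idx, n) != n:
                    raise ValueError(f"index '{idx}' bound to both {extents[idx]} and {n}")
        unbound = set(output) - set(extents)
        if unbound:
            raise ValueError(f"output indices {sorted(unbound)} appear in no input")
        return tuple(extents[i] for i in output)

    print(check_contraction("ij,jk->ik", (4, 5), (5, 6)))  # (4, 6)
    check_contraction("ij,jk->ik", (4, 5), (7, 6))         # raises: j is both 5 and 7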

An open question remains: how to capture, track, and exploit the properties of tensors with unstructured (i.e., data-dependent) sparsity.

A key outcome of the Seminar was the recognition of massive replication of effort in software development: many tools and libraries are being re-developed within different communities, which fail to share techniques and experience in high-performance implementation. The aim of this Seminar was to address this lack of a cohesive, coherent community effort to develop computational building blocks. The results of this effort are being realized in the form of a "white paper", offering a manifesto for how to bridge the disciplinary divides and realize the potential of tensor computations in the future.

Copyright Paolo Bientinesi, David Ham, Furong Huang, Paul H. J. Kelly, Christian Lengauer and Saday Sadayappan

Participants
  • Peter Braam (University of Oxford, GB) [dblp]
  • Charisee Chiw (Galois - Portland, US) [dblp]
  • Jeremy E. Cohen (CNRS - IRISA - Rennes, FR)
  • Anna Engels-Putzka (DLR - Köln, DE)
  • Jutho Haegeman (Ghent University, BE)
  • David Ham (Imperial College London, GB) [dblp]
  • Koby Hayashi (Georgia Institute of Technology - Atlanta, US)
  • Paul H. J. Kelly (Imperial College London, GB) [dblp]
  • Christian Lengauer (Köln, DE) [dblp]
  • Lawrence Mitchell (Durham University, GB) [dblp]
  • Norman Rink (TU Dresden, DE)
  • Volker Tresp (Siemens AG - München, DE) [dblp]
  • Richard M. Veras (Louisiana State Univ. - Baton Rouge, US) [dblp]
  • Frank Verstraete (Ghent University, BE) [dblp]
  • Sophia Vorderwuelbecke (Imperial College London, GB)

Related Seminars
  • Dagstuhl Seminar 22101: Tensor Computations: Applications and Optimization (2022-03-06 - 2022-03-11)

Classification
  • data structures / algorithms / complexity
  • programming languages / compiler

Keywords
  • Compilers
  • numerical methods
  • linear algebra
  • machine learning
  • computational science