Dagstuhl-Seminar 26102

Tensor Factorizations Meet Probabilistic Circuits

(March 1 – March 6, 2026)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/26102

Organizers
  • Grigorios Chrysos (University of Wisconsin - Madison, US)
  • Robert Peharz (TU Graz, AT)
  • Volker Tresp (LMU München, DE)
  • Antonio Vergari (University of Edinburgh, GB)

Motivation

A number of recent successes in sub-fields of AI and ML are due to exploiting structured low-rank representations. These can be attributed mainly to two research communities: one working on tensor factorizations (TFs) and the other on probabilistic circuits (PCs).

For the former, prominent examples include the use of low-rank tensor factorizations for scaling large language models (LLMs), such as low-rank adapters and structured matrices, and the adoption of compact polynomial representations as powerful inductive biases. Furthermore, tensor networks are widely used to solve and accelerate problems in physics and quantum computing. For the latter, PCs have emerged in the last decade to provide tractable probabilistic inference with guarantees, which is especially important in safety-critical applications, as well as a compositional way to automate probabilistic inference and to enable reliable neuro-symbolic AI.
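To make the first point concrete, here is a minimal sketch of a low-rank adapter in NumPy (all names and dimensions are illustrative assumptions, not taken from the seminar text): a dense weight update is replaced by two thin rank-r factors, shrinking the number of trainable parameters.

    import numpy as np

    # Minimal low-rank adapter sketch (illustrative dimensions): the dense
    # d_out x d_in update to a frozen weight W is replaced by two thin
    # factors A (d_out x r) and B (r x d_in) with small rank r.
    rng = np.random.default_rng(0)
    d_in, d_out, r = 1024, 1024, 8

    W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
    A = rng.standard_normal((d_out, r)) * 0.01  # trainable low-rank factor
    B = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor

    x = rng.standard_normal(d_in)
    y = W @ x + A @ (B @ x)  # adapted forward pass; A @ B is never materialized

    dense = d_out * d_in
    lowrank = r * (d_out + d_in)
    print(f"trainable-parameter ratio: {lowrank / dense:.4f}")  # ~0.016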

Techniques from both communities rely on the same core principles: structured computational graphs encoding low-rank tensors, which are key to scaling computations to high dimensions and to providing closed-form solutions for many quantities of interest. Despite this common ground, each community has developed its own syntax, graphical representations, and jargon to describe very similar techniques, representations, and algorithms. This limits scientific advancement and interdisciplinary collaboration. For example, only recently has an initial connection between tensor networks and circuits been established, revealing that the two communities have been developing similar methods independently. We believe that connecting these communities, and giving them a single stage to present and discuss their respective advancements in depth, can further propel breakthroughs and foster cross-pollination in AI and ML, and that a week-long seminar for experts from both communities is the perfect venue to exchange perspectives, discuss recent advancements, and build strong bridges for interdisciplinary research.
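The shared principle can be seen in a few lines of NumPy (a hedged illustration under assumed shapes, not a construction from the seminar text): a rank-R CP factorization of a joint probability tensor is exactly a shallow probabilistic circuit, namely a mixture of fully factorized distributions, so marginals come out in closed form without ever materializing the full tensor.

    import numpy as np

    # Rank-R CP factorization of a 3-way joint probability tensor, viewed as
    # a shallow probabilistic circuit: a sum node (mixture weights w) over
    # products of per-variable categorical leaves.
    rng = np.random.default_rng(1)
    R, K = 4, 3                   # rank / mixture components, states per variable

    w = rng.dirichlet(np.ones(R))  # sum-node weights (mixture weights)
    def leaf():                    # per-component categorical over one variable
        return rng.dirichlet(np.ones(K), size=R)   # shape (R, K), rows sum to 1
    f1, f2, f3 = leaf(), leaf(), leaf()

    # Full joint P[x1, x2, x3] = sum_r w[r] * f1[r,x1] * f2[r,x2] * f3[r,x3]
    P = np.einsum("r,ra,rb,rc->abc", w, f1, f2, f3)

    # Tractable marginal P(x1): each unused leaf marginalizes to 1, i.e. we
    # evaluate the circuit with f2 and f3 replaced by constants.
    p_x1 = np.einsum("r,ra->a", w, f1)
    assert np.allclose(p_x1, P.sum(axis=(1, 2)))
    print(p_x1, p_x1.sum())        # a valid distribution over x1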

We will emphasize the theoretical connections between TFs and PCs, as well as opportunities across both fields, and provide space for members of the different communities to align their vocabularies and share ideas. Both hierarchical tensor factorizations and PCs have been introduced as alternative representations of probabilistic graphical models, and the connection between certain circuits and factorizations has been hinted at in some works. However, they differ in how they are applied: TFs are usually used in tasks where a ground-truth tensor to approximate is available or where a dimensionality-reduction problem can be formulated (a.k.a. tensor sketching), whereas PCs are usually learned from data in the same spirit in which generative models are trained. Similar to TFs, however, modern PC representations are overparameterized and usually encoded as a collection of tensors so as to leverage parallelism and modern deep learning frameworks (see the sketch after the questions below). This raises the following questions, which we plan to address during this Dagstuhl Seminar:
  • (A) What are the formal connections between PCs and TFs?
  • (B) How can we extend TFs (and PCs) to handle non-linear problems commonly encountered in ML and AI?
  • (C) Under which properties are TFs and PCs provably sufficient to tractably compute queries of interest?
  • (D) How far can we scale low-rank representations on modern software and hardware?
  • (E) How can we harness recent advancements in TFs and PCs for reliable probabilistic reasoning, e.g., in the field of neuro-symbolic AI?
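As a rough illustration of such tensorized evaluation (shapes and names are our assumptions, not any specific library's API): a whole layer of sum nodes, evaluated on a batch of inputs in log-space, reduces to one broadcasted log-sum-exp.

    import numpy as np

    # A tensorized PC sum layer: n_out sum nodes over n_in children, applied
    # to a batch of inputs at once via broadcasting and log-sum-exp.
    rng = np.random.default_rng(2)
    batch, n_in, n_out = 256, 32, 16

    log_inputs = rng.standard_normal((batch, n_in))   # child log-densities
    logits = rng.standard_normal((n_out, n_in))
    log_w = logits - np.logaddexp.reduce(logits, axis=1, keepdims=True)  # rows sum to 1

    # log sum_j w[i,j] * exp(log_inputs[b,j]) for all nodes i and batch items b
    z = log_inputs[:, None, :] + log_w[None, :, :]    # (batch, n_out, n_in)
    log_outputs = np.logaddexp.reduce(z, axis=2)      # (batch, n_out)
    print(log_outputs.shape)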

Copyright Grigorios Chrysos, Robert Peharz, Volker Tresp, and Antonio Vergari

Classification
  • Artificial Intelligence
  • Machine Learning

Keywords
  • tensor factorizations
  • probabilistic circuits
  • tractable models
  • low-rank representations