https://www.dagstuhl.de/22161
April 18 – 22, 2022, Dagstuhl Seminar 22161
Recent Advancements in Tractable Probabilistic Inference
Organizers
Priyank Jaini (Google – Toronto, CA)
Kristian Kersting (TU Darmstadt, DE)
Antonio Vergari (University of Edinburgh, GB)
Max Welling (University of Amsterdam, NL)
For information about this Dagstuhl Seminar, please contact
Jutka Gasiorowski for administrative matters
Andreas Dolzmann for scientific matters
Documents
List of participants
Shared documents
Schedule of the Dagstuhl Seminar [pdf]
Motivation
AI and ML systems are increasingly being deployed in real-world scenarios — from healthcare, to finance, to policy making — to support human decision makers. As such, they are expected to reliably and flexibly support decisions by reasoning in the presence of uncertainty. Probabilistic inference provides a principled way to carry out this reasoning process over models that encode complex representations of the world as probability distributions. While we would like to have guarantees on the quality of the answers that these probabilistic models provide, we also expect them to be expressive enough to capture the intricate dependencies of the world they try to represent. Research on tractable probabilistic inference and modeling investigates precisely how a sensible trade-off between reliability and flexibility can be realized in these challenging scenarios.
Traditionally, research on representations and learning for tractable inference has drawn on very different fields, each contributing its own perspective. These include automated reasoning, probabilistic modeling, statistical and Bayesian inference, and deep learning. More recent trends include the emerging fields of tractable neural density estimators, such as autoregressive models and normalizing flows; probabilistic circuits, such as sum-product networks and probabilistic sentential decision diagrams; and approximate inference routines with guarantees on the quality of the approximation.
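To make "tractable inference" concrete, here is a minimal sketch (not part of the seminar material; the tiny two-variable circuit, its parameters, and all names are illustrative assumptions) of how a sum-product network answers marginal queries exactly: because sum nodes mix distributions over the same variables and product nodes factorize over disjoint ones, any marginal can be computed in a single bottom-up pass by setting marginalized leaves to 1.

import math

def bernoulli_leaf(var, p):
    # Leaf distribution over a binary variable; returns 1.0 when `var`
    # is absent from the evidence, i.e. when it is marginalized out.
    def evaluate(evidence):
        if var not in evidence:
            return 1.0
        return p if evidence[var] else 1.0 - p
    return evaluate

def product_node(children):
    # Product nodes multiply children defined over disjoint sets of variables.
    return lambda evidence: math.prod(c(evidence) for c in children)

def sum_node(weights, children):
    # Sum nodes take a weighted mixture of children over the same variables.
    return lambda evidence: sum(w * c(evidence) for w, c in zip(weights, children))

# An illustrative circuit over two binary variables X and Y:
#   0.6 * Bern(X; 0.9) * Bern(Y; 0.2)  +  0.4 * Bern(X; 0.3) * Bern(Y; 0.7)
circuit = sum_node(
    [0.6, 0.4],
    [product_node([bernoulli_leaf("X", 0.9), bernoulli_leaf("Y", 0.2)]),
     product_node([bernoulli_leaf("X", 0.3), bernoulli_leaf("Y", 0.7)])],
)

print(circuit({"X": True, "Y": False}))  # joint P(X=1, Y=0)
print(circuit({"X": True}))              # exact marginal P(X=1), same single pass

Under these assumed parameters the two calls return 0.468 and 0.66, each computed in time linear in the circuit size, which is the kind of inference guarantee the seminar is about.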
The main goal of this Dagstuhl Seminar is to provide a common forum for researchers working in these seemingly “disparate” areas to discuss recent advancements in reliable, efficient inference over expressive probabilistic models and to tackle open problems such as:
i) How can we design and learn expressive probabilistic models that guarantee tractable inference? How can we trade off reliability and expressiveness in a principled way?
ii) How can probabilistic models robustly reason about the world and safely generalize over unknown states of the world?
iii) What challenges do practitioners of probabilistic modeling face in their applications, and how can we democratize the use of reliable and efficient probabilistic inference?
iv) How can we effectively exploit the structure and symmetries in the world and in our models to efficiently perform inference or obtain reliable approximations?
We hope that the discussions around these topics can be turned into a vision document that not only summarizes the current state of the art in these diverse fields, but also reconciles them to serve as an inspirational guide for a new generation of researchers just approaching the broader field of probabilistic AI and ML.
To offer diverse perspectives at the seminar, we thus aim to include participants from the many recently emerging fields of tractable neural density estimators, such as autoregressive models and normalizing flows; deep tractable probabilistic circuits, such as sum-product networks, probabilistic sentential decision diagrams, and cutset networks; as well as approximate inference routines with guarantees on the quality of the approximation.
Motivation text licensed under Creative Commons BY 4.0
Priyank Jaini, Kristian Kersting, Antonio Vergari, and Max Welling
Classification
- Artificial Intelligence
- Machine Learning
Keywords
- Generative Models
- Deep Learning
- Probabilistic Models
- Graphical Models
- Tractable Inference