Dagstuhl Seminar 23072
Challenges and Perspectives in Deep Generative Modeling
(Feb 12 – Feb 17, 2023)
Organizers
- Vincent Fortuin (University of Cambridge, GB)
- Yingzhen Li (Imperial College London, GB)
- Stephan Mandt (University of California - Irvine, US)
- Kevin Murphy (Google - Mountain View, US)
Contact
- Michael Gerke (for scientific matters)
- Susanne Bach-Bernhard (for administrative matters)
Deep generative models, such as variational autoencoders, generative adversarial networks, normalizing flows, energy-based models, and diffusion probabilistic models, have attracted much research interest and promise to impact diverse areas such as chemistry, art, robotics, and compression. However, compared to supervised learning frameworks, their impact on real-world applications has remained limited. What can we do as a research community to promote their widespread adoption in industry and the sciences? We believe that the adoption of generative modeling in practical contexts is hindered by several currently overlooked challenges. In this Dagstuhl Seminar, we aim to assess the state of the art in deep generative modeling in its practical context. We thereby hope to highlight challenges that might otherwise be ignored by the research community and to showcase potentially impactful directions for future research.
We believe that some important challenges include:
- Developing methods for assessing the quality of generated data
- Extending the scope of current models and architectures to incorporate domain knowledge, constraints, etc.
- Enhancing the scalability and speed of current methods of training, posterior inference, and generation
- Improving the reproducibility and/or interpretability of learned latent representations, e.g., to satisfy legal, fairness, or technological standards
To ground these theoretical challenges in practical contexts, this seminar will focus on the following application areas:
- Generative models for text, speech, images, and video.
- Generative modeling of scientific data. Specifically, we will consider applications in physics simulation, molecular synthesis, bioinformatics, and medicine. Challenges include incorporating scientific domain knowledge, specific data structures, and data sparsity.
- Neural data compression. While recent research suggests that neural video and image codecs have great potential to revolutionize current standards, many open problems remain, including out-of-distribution robustness, fast parallel implementations, perceptual quality evaluation, and standardization.
- Anomaly and distribution shift detection. As models that learn the data distribution, deep generative models should be useful for detecting outlier samples or changes in the data distribution. In practice, however, generative models remain ill-suited for these tasks and often underperform alternatives such as self-supervised methods.
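The likelihood-based detection idea behind the last bullet can be sketched in a few lines: fit a density model to inlier data, then flag test points whose log-likelihood falls below a threshold calibrated on the training scores. The sketch below uses a diagonal Gaussian as a stand-in density model and synthetic 2-D data (both are illustrative assumptions; a deep generative model would supply the learned likelihood instead).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "inlier" data: a 2-D standard-normal blob (illustrative stand-in).
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))

# Fit a simple density model: a diagonal Gaussian. A deep generative model
# (flow, VAE, diffusion model) would replace this step with a learned density.
mu = train.mean(axis=0)
var = train.var(axis=0)

def log_likelihood(x):
    """Per-sample log-likelihood under the fitted diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

# Calibrate the decision threshold at the 1st percentile of training scores.
threshold = np.percentile(log_likelihood(train), 1)

# A point near the training blob scores above the threshold (inlier);
# a far-away point scores below it and is flagged as an anomaly.
print(log_likelihood(np.array([[0.1, -0.2]])) >= threshold)  # inlier
print(log_likelihood(np.array([[6.0, 6.0]])) >= threshold)   # outlier
```

The seminar bullet's caveat applies even to this toy setup: likelihood thresholds calibrated in-distribution can behave unintuitively for high-dimensional deep models, which is precisely the open problem the seminar highlights.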
By bringing together researchers working on both applied and theoretical aspects of generative modeling across application domains, we hope to identify commonly occurring problems and general-purpose solutions. Beyond the traditional talks, the seminar will include social and group activities to foster exchange among participants.
Participants
- Robert Bamler (Universität Tübingen, DE)
- Ryan Cotterell (ETH Zürich, CH)
- Sina Däubener (Ruhr-Universität Bochum, DE)
- Gerard de Melo (Hasso-Plattner-Institut, Universität Potsdam, DE)
- Sophie Fellenz (RPTU - Kaiserslautern, DE)
- Asja Fischer (Ruhr-Universität Bochum, DE)
- Vincent Fortuin (University of Cambridge, GB)
- Thomas Gärtner (Technische Universität Wien, AT)
- Matthias Kirchler (Hasso-Plattner-Institut, Universität Potsdam, DE)
- Marius Kloft (RPTU - Kaiserslautern, DE)
- Yingzhen Li (Imperial College London, GB)
- Christoph Lippert (Hasso-Plattner-Institut, Universität Potsdam, DE)
- Stephan Mandt (University of California - Irvine, US)
- Laura Manduchi (ETH Zürich, CH)
- Eric Nalisnick (University of Amsterdam, NL)
- Björn Ommer (LMU München, DE)
- Rajesh Ranganath (NYU Courant Institute of Mathematical Sciences, US)
- Maja Rudolph (Bosch Center for AI - Pittsburgh, US)
- Alexander Rush (Cornell University - Ithaca, US)
- Lucas Theis (Google - London, GB)
- Karen Ullrich (Meta - New York, US)
- Jan-Willem van de Meent (University of Amsterdam, NL)
- Guy Van den Broeck (UCLA, US)
- Julia Vogt (ETH Zürich, CH)
- Yixin Wang (University of Michigan - Ann Arbor, US)
- Florian Wenzel (Amazon Web Services - Tübingen, DE)
- Frank Wood (University of British Columbia - Vancouver, CA)
Classification
- Artificial Intelligence
- Computer Vision and Pattern Recognition
- Machine Learning
Keywords
- deep generative models
- machine learning for science
- neural compression
- out-of-distribution detection