Dagstuhl Seminar 23072

Challenges and Perspectives in Deep Generative Modeling

( Feb 12 – Feb 17, 2023 )


Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/23072

Motivation

Deep generative models, such as variational autoencoders, generative adversarial networks, normalizing flows, energy-based models, and diffusion probabilistic models, have attracted much research interest and promise to impact diverse areas such as chemistry, art, robotics, and compression. However, compared to supervised learning frameworks, their impact on real-world applications has remained limited. What can we do as a research community to promote their widespread adoption in industry and the sciences? We believe that the practical adoption of generative modeling is hindered by several currently overlooked challenges. In this Dagstuhl Seminar, we aim to assess the state of the art in deep generative modeling in its practical context. We hope to thereby highlight challenges that might otherwise be ignored by the research community and showcase potentially impactful directions for future research.

We believe that some important challenges include:

  • Developing methods for assessing the quality of generated data
  • Extending the scope of current models and architectures to incorporate domain knowledge, constraints, etc.
  • Enhancing the scalability and speed of current methods of training, posterior inference, and generation
  • Improving the reproducibility and/or interpretability of learned latent representations, e.g., to satisfy legal, fairness, or technological standards

To ground these theoretical challenges in practical contexts, this seminar will focus on the following application areas:

  • Generative models for text, speech, images, and video.
  • Generative modeling of scientific data. Specifically, we will consider applications in physics simulation, molecular synthesis, bioinformatics, and medicine. Challenges include incorporating scientific domain knowledge, specific data structures, and data sparsity.
  • Neural data compression. While recent research suggests that neural video and image codecs have great potential to revolutionize current standards, many open problems remain, including out-of-distribution robustness, fast parallelism, evaluating perceptual quality, and standardization.
  • Anomaly and distribution shift detection. As models that learn the data distribution, deep generative models should be useful for detecting outlier samples or changes in the data distribution. Unfortunately, generative models are still ill-suited for these tasks and often underperform compared to, say, self-supervised methods.
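The likelihood-based detection idea in the last bullet can be sketched with a toy stand-in for a deep generative model: fit a density model to in-distribution data, then flag samples whose log-likelihood falls below a threshold. This is a minimal illustration, not any specific method from the seminar; the diagonal Gaussian, helper names, and the 1% threshold are all illustrative assumptions.

```python
import numpy as np

def fit_gaussian(x):
    """Fit a diagonal Gaussian density to in-distribution data.

    A toy stand-in for a trained deep generative model (illustrative only).
    """
    mu = x.mean(axis=0)
    var = x.var(axis=0) + 1e-6  # small floor for numerical stability
    return mu, var

def log_likelihood(x, mu, var):
    """Per-sample log-density under the fitted diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=1)

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 2))    # in-distribution data
inliers = rng.normal(0.0, 1.0, size=(100, 2))   # same distribution
outliers = rng.normal(8.0, 1.0, size=(100, 2))  # shifted distribution

mu, var = fit_gaussian(train)
# Threshold at the 1st percentile of training log-likelihoods
# (i.e., roughly a 1% false-positive rate on in-distribution data).
threshold = np.quantile(log_likelihood(train, mu, var), 0.01)

def is_anomaly(x):
    return log_likelihood(x, mu, var) < threshold

print("inliers flagged: ", is_anomaly(inliers).mean())
print("outliers flagged:", is_anomaly(outliers).mean())
```

The seminar's point is that while this recipe is conceptually appealing, deep generative models in practice can assign surprisingly high likelihoods to out-of-distribution data, which is exactly why the topic remains an open challenge.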

By bringing together researchers working on both applied and theoretical aspects of generative modeling across application domains, we hope to identify commonly occurring problems and general-purpose solutions. Beyond traditional talks, the workshop will be accompanied by social and group activities to foster exchange among participants.

Copyright Vincent Fortuin, Yingzhen Li, Stephan Mandt, and Kevin Murphy

Classification
  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Machine Learning

Keywords
  • deep generative models
  • machine learning for science
  • neural compression
  • out-of-distribution detection