
Dagstuhl Seminar 23072

Challenges and Perspectives in Deep Generative Modeling

(Feb 12 – Feb 17, 2023)







Deep generative models, such as variational autoencoders, generative adversarial networks, normalizing flows, energy-based models, and diffusion probabilistic models, have attracted much research interest and promise to impact diverse areas such as chemistry, art, robotics, and compression. However, compared to supervised learning frameworks, their impact on real-world applications has remained limited. What can we do as a research community to promote their widespread adoption in industry and the sciences? We believe that the use of generative modeling in practical contexts is hindered by several currently overlooked challenges. In this Dagstuhl Seminar, we aim to assess the state of the art in deep generative modeling in its practical context. We hope to thereby highlight challenges that might otherwise be ignored by the research community and showcase potentially impactful directions for future research.
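As one concrete illustration of the model families named above (our own sketch, not part of the seminar description), the forward corruption process of a diffusion probabilistic model admits a closed-form one-step sampler: with a variance schedule beta_t, the marginal q(x_t | x_0) is Gaussian with mean sqrt(alpha_bar_t) x_0. The schedule values below are common defaults, not anything prescribed by the seminar:

```python
import numpy as np

# Sketch of the diffusion forward (noising) process. The linear
# beta schedule and T = 1000 are illustrative defaults.
rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # variance schedule
alpha_bars = np.cumprod(1.0 - betas)     # alpha_bar_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in one step, without iterating over s < t."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

x0 = np.ones(4)
x_early = q_sample(x0, t=10)      # still close to the data
x_late = q_sample(x0, t=T - 1)    # nearly pure noise, since alpha_bar_T ~ 0
```

Training a diffusion model then amounts to learning to invert this process step by step; generation runs the learned reverse chain from noise.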

We believe that some important challenges include:

  • Developing methods for assessing the quality of generated data
  • Enhancing the scope of current models and architectures to incorporate domain knowledge, constraints, etc.
  • Enhancing the scalability and speed of current methods of training, posterior inference, and generation
  • Improving the reproducibility and/or interpretability of learned latent representations, e.g., to satisfy legal, fairness, or technological standards
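The first challenge, assessing the quality of generated data, is often approached by comparing distribution statistics of real and generated samples. As a toy stand-in (our illustration; practical metrics such as FID compare Gaussians fitted to deep network features, not raw 1-D samples), the Fréchet distance between two univariate Gaussians has the closed form (m1 - m2)^2 + (s1 - s2)^2:

```python
import numpy as np

# Toy quality score: 1-D Frechet distance between Gaussians fitted
# to real and generated samples. Purely illustrative.
def frechet_1d(real, fake):
    m1, s1 = real.mean(), real.std()
    m2, s2 = fake.mean(), fake.std()
    return (m1 - m2) ** 2 + (s1 - s2) ** 2

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, 10_000)
good = rng.normal(0.05, 1.0, 10_000)  # close to the data distribution
bad = rng.normal(2.0, 0.5, 10_000)    # mode-shifted, low-variance samples

score_good = frechet_1d(real, good)
score_bad = frechet_1d(real, bad)
```

A lower score indicates generated samples whose fitted statistics better match the real data; the better generator above scores lower, as expected.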

To ground these theoretical challenges in practical contexts, this seminar will focus on the following application areas:

  • Generative models for text, speech, images, and video.
  • Generative modeling of scientific data. Specifically, we will consider applications in physics simulation, molecular synthesis, bioinformatics, and medicine. Challenges include incorporating scientific domain knowledge, specific data structures, and data sparsity.
  • Neural data compression. While recent research has shown that neural video and image codecs have great potential to revolutionize current standards, many open problems remain, including out-of-distribution robustness, fast parallelism, evaluating perceptual quality, and standardization.
  • Anomaly and distribution shift detection. As models that learn the data distribution, deep generative models should be useful in detecting outlier samples or changes in the data distribution. Unfortunately, generative models are still ill-suited for these tasks and currently underperform alternatives such as self-supervised methods.
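The basic idea behind the last item can be sketched in a few lines (our own toy example, not the seminar's method): fit a density model to in-distribution data and flag samples whose log-likelihood falls below a threshold. A plain Gaussian stands in for the deep generative model here; as the text notes, in practice likelihoods from deep models often fail to separate in- from out-of-distribution data this cleanly:

```python
import numpy as np

# Likelihood-based anomaly detection with a Gaussian density model
# as a stand-in for a deep generative model. Illustrative only.
rng = np.random.default_rng(2)
train = rng.normal(0.0, 1.0, 5000)  # in-distribution data

mu, sigma = train.mean(), train.std()

def log_likelihood(x):
    """Log-density of the fitted Gaussian at x."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# Threshold chosen so ~1% of training data would be flagged.
threshold = np.quantile(log_likelihood(train), 0.01)

inlier_score = log_likelihood(0.3)    # typical sample: above threshold
outlier_score = log_likelihood(6.0)   # far-out sample: below threshold
```

Samples scoring below the threshold are flagged as anomalies; distribution shift detection applies the same idea to batches rather than single points.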

By aiming to bring together researchers working on both applied and theoretical aspects of generative modeling across application domains, we hope to identify commonly occurring problems and general-purpose solutions. Beyond the traditional talks, the workshop will be accompanied by social and group activities to foster exchange among participants.

Copyright Vincent Fortuin, Yingzhen Li, Stephan Mandt, and Kevin Murphy


Classification

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Machine Learning

Keywords

  • deep generative models
  • machine learning for science
  • neural compression
  • out-of-distribution detection