Despite the recent success of large-scale deep learning, these systems still fall short in reliability and trustworthiness. They often cannot estimate their own uncertainty in a calibrated way, encode meaningful prior knowledge, or reason about their environments to avoid catastrophic failures. Since its inception, Bayesian deep learning (BDL) has held the promise of achieving these desiderata by combining the solid statistical foundations of Bayesian inference with the practically successful engineering solutions of deep learning, providing a principled mechanism for adding the benefits of Bayesian learning to deep neural networks.
In practice, however, BDL methods often fall short of this promise and underdeliver in real-world impact. This stems from fundamental challenges such as the computation of approximate posteriors, the unavailability of flexible priors, and the lack of appropriate testbeds and benchmarks. To make matters worse, numerous misconceptions about the scope of Bayesian methods lead researchers to expect more than they can get out of Bayes, while overlooking simpler and cheaper non-Bayesian alternatives such as the bootstrap, post-hoc uncertainty scaling, and conformal prediction. Such overexpectation followed by underdelivery can cause researchers to lose faith in the Bayesian approach, something we ourselves have witnessed in the past.
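To illustrate why these alternatives are attractive baselines, here is a minimal sketch of split conformal prediction for regression. The toy data, the least-squares point predictor standing in for a neural network, and all variable names are hypothetical; the key point is that a calibrated prediction interval is obtained from held-out residuals alone, with no posterior inference.

```python
# Minimal sketch of split conformal prediction (toy data; the linear
# predictor is a stand-in for any trained model, e.g. a neural network).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regression data: y = 2x + noise
x = rng.uniform(0.0, 1.0, 200)
y = 2.0 * x + rng.normal(0.0, 0.1, 200)

# Split into a fitting set and a disjoint calibration set
x_fit, y_fit = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]

# Fit any point predictor on the fitting set
slope, intercept = np.polyfit(x_fit, y_fit, 1)

def predict(v):
    return slope * v + intercept

# Conformity scores: absolute residuals on the calibration set
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile giving ~90% marginal coverage for a new point
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Distribution-free prediction interval for a new input
x_new = 0.5
interval = (predict(x_new) - q, predict(x_new) + q)
```

The coverage guarantee is marginal and distribution-free, requiring only exchangeability of calibration and test points, which is precisely the kind of cheap, assumption-light uncertainty quantification a Bayesian approach must outperform to justify its extra cost.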
So, what exactly is the role of Bayes in the modern age of AI, where many of the original promises of Bayes are being (or at least appear to be) unlocked simply by scaling? Non-Bayesian approaches now seem to solve many problems that Bayesians once dreamt of solving with Bayesian methods. We therefore believe it is timely and important to rethink and redefine the promises and challenges of Bayesian approaches, to elucidate which Bayesian methods might prevail against their non-Bayesian competitors, and to identify key application areas where Bayes can shine.
By bringing together researchers from diverse communities, including machine learning, statistics, and deep learning practice, in a personal and interactive seminar environment featuring debates, round tables, and brainstorming sessions, we hope to discuss and answer these questions from a variety of angles and to chart a path for future research that innovates, enhances, and strengthens the real-world impact of Bayesian deep learning.
- Artificial intelligence
- Machine learning
- Bayesian machine learning
- Deep learning
- Foundation models
- Uncertainty estimation
- Model selection