
Dagstuhl Seminar 01301

Inference Principles and Model Selection

(Jul 22 – Jul 27, 2001)

Inference and induction denote the process of inferring an underlying dependence from empirical observations. They have been of interest to philosophy and scientific endeavour since ancient times. A number of statistical frameworks have been developed to model this process. Examples include the Bayesian approach of retaining all plausible models and averaging them according to a posterior distribution, and the Occam's-razor approach of searching for the simplest explanation of the observations, as implemented by MDL (minimum description length) and Vapnik-Chervonenkis theory. Despite the superficial differences, these frameworks share common ideas, which makes it worthwhile to record a snapshot of where we are, where we want to go, and how we plan to get there.
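
The contrast between these two principles can be sketched on a toy model-selection problem. The sketch below is purely illustrative and is not taken from the seminar material: it fits polynomials of increasing degree to noisy quadratic data, scores each degree with a BIC-style two-part code length (a standard approximation to MDL), and also forms crude Bayesian-style posterior weights over the degrees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a quadratic trend plus Gaussian noise (an illustrative
# assumption, not an example from the seminar).
x = np.linspace(-1, 1, 40)
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.1, x.size)

def rss(degree):
    """Residual sum of squares of a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return float(resid @ resid)

n = x.size
degrees = range(6)
# Two-part code length approximated by BIC: data cost + model cost.
score = {d: n * np.log(rss(d) / n) + (d + 1) * np.log(n) for d in degrees}
best = min(score, key=score.get)  # MDL-style pick: the simplest adequate model

# Bayesian-style model averaging: weight each degree by exp(-score/2),
# a crude posterior proxy under a uniform prior over degrees.
w = np.exp([-(score[d] - score[best]) / 2 for d in degrees])
w /= w.sum()

print("selected degree:", best)
print("posterior-style weights:", np.round(w, 3))
```

Under the MDL view only the degree with the shortest description is kept, while the Bayesian view retains all degrees weighted by `w`; both penalize complexity, just in different currencies.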

At the same time, technological applications of induction in machine learning systems have been explored extensively from an algorithmic perspective, highlighting issues that had typically not been the concern of philosophy. Practical inference must cope with noise in the data and with overfitting, i.e. extracting more structure from the data than the data actually support. Algorithms have to select a model from a large set of potential interpretations of the data. Model averaging, noise robustness, overfitting, capacity, and other concepts play a central role in many of the theories.
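
Overfitting in this sense is easy to exhibit numerically. The following sketch (again a hypothetical setup, not the seminar's own example) fits polynomials of growing degree to noisy samples of sin(πx) and selects a model on a held-out validation split: training error keeps falling with degree, but validation error eventually rises as the fit starts to chase noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of sin(pi x), split into training and validation halves
# (an illustrative setup, not from the seminar text).
x = rng.uniform(-1, 1, 60)
y = np.sin(np.pi * x) + rng.normal(0, 0.2, x.size)
xtr, xva = x[:30], x[30:]
ytr, yva = y[:30], y[30:]

def mse(degree, xs, ys):
    """Mean squared error on (xs, ys) of a fit to the training half."""
    coeffs = np.polyfit(xtr, ytr, degree)
    return float(np.mean((ys - np.polyval(coeffs, xs)) ** 2))

train_err = {d: mse(d, xtr, ytr) for d in range(13)}
val_err = {d: mse(d, xva, yva) for d in range(13)}
chosen = min(val_err, key=val_err.get)  # model selection by held-out error

print("chosen degree:", chosen)
```

Training error alone would always favour the most flexible model, while the held-out error flags the point where the extra flexibility stops paying off.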

The aims of the seminar revolve around deepening our understanding of the following questions:

  • Can the different formalizations of inference be placed in a broader framework and perhaps seen as different views of a unified theory?
  • Do the recent developments shed new light on the question of induction as studied historically?
  • Are there notions of inference studied in philosophy that machine learning has overlooked?

The workshop focuses on the long-term perspective of Machine Learning and its impact on Computer Science, Statistics, Mathematics and Philosophy, rather than on the latest implementations or sophisticated technical details. Participants are encouraged to stimulate the discussion with a single slide that contains what they consider the crucial open problem, insight, or idea.

Focus Topics, Tutorials and Contributions:

Each half day will be devoted to one topic, starting with a tutorial. Attendees will then have the opportunity to contribute to discussions, give short impromptu talks, or present open problems. Additional (demand-driven) sessions may be held in the evenings. The following sessions, each with a one-hour tutorial, have been planned:

Foundations of Inference

  • Bayesian Inference
  • Model Averaging and PAC-Bayesian inference
  • Statistical Mechanics Approaches
  • Structural Risk Minimization
  • Density Estimation
  • Online learning
  • Open inference problems in bioinformatics
  • Regularization theory
  • Reinforcement Learning

  • Shunichi Amari (RIKEN - Wako, JP)
  • Peter L. Bartlett (Biowulf Technologies - Berkeley, US) [dblp]
  • Stephane Boucheron (Université Paris Sud, FR) [dblp]
  • Olivier Bousquet (Ecole Polytechnique - Palaiseau, FR) [dblp]
  • Mikio Braun (Universität Bonn, DE)
  • Joachim M. Buhmann (ETH Zürich, CH) [dblp]
  • Stephane Canu (INSA - St-Etienne-du-Rouvray, FR)
  • Olivier Chapelle (Biowulf Technologies - Paris, FR)
  • A. Philip Dawid (University College London, GB) [dblp]
  • Joachim Denzler (Universität Jena, DE) [dblp]
  • André Elisseeff (Biowulf Technologies - New York, US)
  • Zoubin Ghahramani (University College London, GB) [dblp]
  • Yves Grandvalet (Technical University of Compiegne, FR)
  • Isabelle Guyon (ClopiNet - Berkeley, US) [dblp]
  • Ralf Herbrich (Microsoft Research UK - Cambridge, GB)
  • Matthias Hild (CalTech - Pasadena, US)
  • Günter Hotz (Universität des Saarlandes, DE) [dblp]
  • Jürgen Jost (MPI für Mathematik in den Naturwissenschaften, DE) [dblp]
  • Ron Meir (Technion - Haifa, IL)
  • Shahar Mendelson (Australian National University - Canberra, AU)
  • Wolfram Menzel (KIT - Karlsruher Institut für Technologie, DE)
  • Sebastian Mika (Fraunhofer Institut - Berlin, DE)
  • Sayan Mukherjee (MIT, US)
  • Klaus-Robert Müller (Fraunhofer Institut - Berlin, DE) [dblp]
  • Noboru Murata (Waseda University - Tokyo, JP)
  • Gunnar Rätsch (Fraunhofer Institut - Berlin, DE) [dblp]
  • Helge Ritter (Universität Bielefeld, DE) [dblp]
  • Volker Roth (Universität Bonn, DE) [dblp]
  • Jürgen Schmidhuber (IDSIA - Manno, CH) [dblp]
  • Bernhard Schölkopf (MPI für biologische Kybernetik - Tübingen, DE) [dblp]
  • Alexander J. Smola (Australian National University, AU) [dblp]
  • Naftali Tishby (The Hebrew University of Jerusalem, IL) [dblp]
  • Alexandre Tsybakov (UPMC - Paris, FR) [dblp]
  • Vladimir Vapnik (AT&T - Middletown, US) [dblp]
  • Vladimir Vovk (Royal Holloway University of London, GB)
  • Manfred Warmuth (University of California - Santa Cruz, US)
  • Chris J. Watkins (Royal Holloway University of London, GB)
  • Jason Weston (Biowulf Technologies - New York, US) [dblp]
  • Christopher Williams (University of Edinburgh, GB) [dblp]
  • Robert C. Williamson (Australian National University, AU) [dblp]
  • Hugo Zaragoza (Microsoft Research UK - Cambridge, GB)