
Dagstuhl Seminar 09181

Sampling-based Optimization in the Presence of Uncertainty

(Apr 26 – Apr 30, 2009)

There are numerous industrial optimization problems in manufacturing, transportation and logistics, security, energy modeling, finance and insurance, and the sciences where decisions have to be evaluated by a process that generates a noisy result. The process might be a discrete-event simulation, a Monte Carlo evaluation of a complex function, or a physical experiment (e.g., how many cancer cells were killed by a particular compound?). There might be a small number of discrete decisions (the location of an emergency response facility, the design of a compound, or a set of labor work rules), or a large vector of decision variables (the allocation of a fleet of vehicles, choosing a set of research projects, or allocating assets among investments). There are applications in virtually any area of business, government, science, and engineering, and algorithms to support decisions in these diverse environments are urgently needed.

This Dagstuhl seminar focused primarily on problems where each measurement is expensive (for example, some computer models can take a day or more to produce a single data point), so the number of samples that can be generated is rather limited. When the goal is to efficiently identify an optimal (or at least a very good) solution, the search for good solutions and the collection of information to guide the search are tightly coupled. It is necessary to strike a balance between collecting information (exploration or global search) and making decisions that appear to be the best given what we know (exploitation or local search). This is particularly true when measurements are expensive (long simulations, field experiments).

Because of its wide-ranging applications, sampling-based optimization has been addressed by different communities with different methods and from slightly different perspectives. Currently, communities are largely tied to problem categories (e.g., finite vs. infinite number of alternatives; discrete vs. continuous decision variables; desired statement at termination). This Dagstuhl seminar brought together researchers from statistical ranking and selection; experimental design and response-surface modeling; stochastic programming; approximate dynamic programming; optimal learning; and the design and analysis of computer experiments, with the goal of attaining a much better mutual understanding of the commonalities and differences of the various approaches to sampling-based optimization, and of taking first steps toward an overarching theory encompassing many of the topics above.
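The exploration/exploitation balance described above can be illustrated with a small sketch. The following Python toy is an illustrative assumption, not a method from the seminar: it allocates a limited sampling budget across a handful of noisy alternatives using a simple upper-confidence-bound rule, exploiting alternatives with high sample means while granting rarely sampled ones an exploration bonus, and finally selects the alternative with the best sample mean (a basic ranking-and-selection setting).

```python
import math
import random

def ucb_select(k=5, budget=200, seed=0):
    """Spend a limited sampling budget on k noisy alternatives and
    return the index with the best sample mean.

    Hypothetical toy problem: alternative i has true mean i, observed
    through unit-variance Gaussian noise, so alternative k-1 is best.
    """
    rng = random.Random(seed)
    true_means = [float(i) for i in range(k)]
    counts = [0] * k      # samples drawn per alternative
    sums = [0.0] * k      # running sum of observations

    def sample(i):
        counts[i] += 1
        sums[i] += rng.gauss(true_means[i], 1.0)

    # Initialization: one sample per alternative (pure exploration).
    for i in range(k):
        sample(i)

    for t in range(k, budget):
        # Sample mean (exploitation) plus a bonus that is large for
        # rarely sampled alternatives (exploration).
        ucb = [sums[i] / counts[i]
               + math.sqrt(2.0 * math.log(t + 1) / counts[i])
               for i in range(k)]
        sample(max(range(k), key=lambda i: ucb[i]))

    # Final decision: the alternative with the best sample mean.
    return max(range(k), key=lambda i: sums[i] / counts[i])
```

On this toy problem the rule quickly concentrates most of the budget on the best alternative; with smaller gaps between the true means, or noisier measurements, the budget has to spread out more before the sample means separate.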

Overall, the seminar was a great success and offered many possibilities for cooperation. It was generally agreed that such a workshop should be repeated in two years' time.

  • Thomas Bartz-Beielstein (FH Köln, DE) [dblp]
  • Peter A. N. Bosman (CWI - Amsterdam, NL) [dblp]
  • Jürgen Branke (University of Warwick, GB) [dblp]
  • Xi-Ren Cao (HKUST - Kowloon, HK)
  • Chun-Hung Chen (George Mason Univ. - Fairfax, US)
  • Stephen E. Chick (INSEAD - Fontainebleau, FR)
  • Marc Deisenroth (University of Cambridge, GB) [dblp]
  • Alexander Forrester (University of Southampton, GB)
  • Peter Frazier (Princeton University, US) [dblp]
  • Michael Fu (University of Maryland - College Park, US)
  • Peter W. Glynn (Stanford University, US)
  • Genetha Gray (Sandia Nat. Labs - Livermore, US)
  • Liu (Jeff) Hong (The Hong Kong Univ. of Science & Technology, HK)
  • Paul B. Kantor (Rutgers University - New Brunswick, US)
  • Jack P. C. Kleijnen (Tilburg University, NL)
  • Joachim Kunert (TU Dortmund, DE)
  • Pierre L'Ecuyer (University of Montréal, CA) [dblp]
  • Shie Mannor (Technion - Haifa, IL) [dblp]
  • Stephan Meisel (TU Braunschweig, DE) [dblp]
  • David P. Morton (University of Texas - Austin, US)
  • Barry L. Nelson (Northwestern University - Evanston, US) [dblp]
  • Arnold Neumaier (Universität Wien, AT) [dblp]
  • Marek Petrik (University of Massachusetts - Amherst, US)
  • Juta Pichitlamken (Kasetsart University - Bangkok, TH)
  • Warren Buckler Powell (Princeton University, US) [dblp]
  • Günter Rudolph (TU Dortmund, DE) [dblp]
  • Thomas J. Santner (Ohio State University, US)
  • Matthias Seeger (Universität des Saarlandes, DE)
  • Anna Syberfeldt (University of Skövde, SE)
  • Matt Taddy (University of Chicago, US)
  • W.C.M. (Wim) van Beers (Tilburg University, NL)

  • Artificial Intelligence
  • Simulation
  • Algorithms
  • Optimization
  • Soft Computing

  • optimal learning
  • optimization in the presence of uncertainty
  • sequential experimental design
  • ranking and selection
  • random