16. – 21. Januar 2005, Dagstuhl Seminar 05031
Algorithms for Optimization with Incomplete Information
The purpose of this seminar was to bring together leading specialists working on algorithms for optimisation when the decision maker has only partial information. While problem descriptions in the different approaches to optimisation with incomplete information are quite similar, solution concepts and solution methods may differ considerably. Traditionally, the stochastic programming community has focussed on problems where all uncertainty is due to the fact that concrete realizations are unknown, while the probability distributions from which they stem are fully known. The quality of a solution is typically measured in an average-case sense. In contrast, the online optimisation community assumes no particular probability model, so the focus is traditionally on worst-case analysis. Recently, new developments have narrowed the gap between the two communities. Robust optimisation replaces the assumption of a known probability distribution with an assumption about the range of possible values. Stochastic scheduling combines ideas from the competitive analysis of algorithms with stochastic models of the demands. The typical assumption in stochastic programming that decisions do not influence the underlying probability distribution can usually no longer be maintained in stochastic scheduling.
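The contrast between the stochastic and the robust view can be made concrete on a toy one-stage problem. The following sketch (illustrative only, not from the seminar; all numbers are made up) solves the same newsvendor-style decision twice: once maximising expected profit under a known demand distribution, and once maximising worst-case profit when only the range of demand values is known.

```python
# Toy contrast between the stochastic-programming view (known distribution,
# optimise the average case) and the robust view (only a range of possible
# values, optimise the worst case) on a newsvendor-style problem:
# order q units at cost c, sell at price p, demand d is uncertain.

c, p = 1.0, 3.0                      # illustrative unit cost and price
demands = [10, 20, 30]               # possible demand realizations
probs = [0.2, 0.5, 0.3]              # known distribution (stochastic view only)

def profit(q, d):
    """Profit when q units were ordered and demand turns out to be d."""
    return p * min(q, d) - c * q

candidates = range(0, 41)

# Stochastic view: maximise expected profit over the known distribution.
q_stoch = max(candidates,
              key=lambda q: sum(pr * profit(q, d) for pr, d in zip(probs, demands)))

# Robust view: maximise the worst-case profit over the demand range.
q_robust = max(candidates,
               key=lambda q: min(profit(q, d) for d in demands))

print(q_stoch, q_robust)             # prints "20 10"
```

The stochastic solution orders for the likely demand, while the robust solution hedges against the lowest realization; that the two answers differ on even this small instance is exactly the gap between the two modelling assumptions.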
To familiarize the different communities with each other's ways of thinking, basic concepts, and basic research questions, the seminar started with four one-hour overview talks. These were delivered by Jiri Sgall (Online optimisation), Andrzej Ruszczynski (Stochastic Programming), Garud Iyengar (Robust Optimisation), and Marc Uetz (Stochastic Scheduling).
The regular program consisted of 38 thirty-minute talks, which can be classified into the following subgroups: Robust and minimax optimisation (Sim, Dupacova); Two- and multistage stochastic optimisation (Hochreiter, Dye, Sen, Stougie, Tomasgard); Assessing quality of solutions (Morton, Rambau); Approximation (Higle, Swamy, van der Vlerk); Algorithmic approaches using game theory and nonlinear programming (Lorenz, Steinbach, Kleywegt, Bastin, Norkin); Applications in communications and robotics (Erlebach, Fekete, Epstein, Richter); Dynamic stochastic optimisation (Weiss, Philpott, Nino-Mora); Average-case competitive analysis (Fujiwara, Vredefeld); Competitiveness analysis (van Stee, Schäfer, Ebenlendr, Zhang, Skutella); Risk issues (Dentcheva, Eichhorn); Stochastic online scheduling (Megow, Schulz, Krumke); Probabilistic criteria (Henrion, Hoogeveen).
To assess the results of this seminar, an open discussion about the different views and perceptions of optimisation with incomplete information was held on Thursday afternoon. The results of this discussion can be summarized as follows:
What the communities have in common is:
- The desire for optimality.
- The desire for more efficient algorithms, i.e. better/faster results.
- The fact that solutions that require clairvoyance are not implementable.
- The necessity of comparing the non-clairvoyant solution to the ideal clairvoyant solution by either taking differences (value of perfect information) or ratios (competitive ratio).
- The distinction between individual solutions and solution rules (policies).
- The necessity of approximation.
- The interest in complexity issues.
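The two ways of comparing a non-clairvoyant solution to the clairvoyant one, differences and ratios, can be illustrated on the classic ski-rental problem (a toy example, not from the seminar): renting costs 1 per day, buying costs B once, and the number of ski days n is unknown in advance. The break-even online rule rents until day B and then buys, and its cost ratio against the clairvoyant optimum is at most 2 - 1/B.

```python
# Toy illustration of comparing a non-clairvoyant (online) algorithm to the
# clairvoyant optimum on the ski-rental problem. Renting costs 1 per day,
# buying costs B once; the number of ski days n is unknown in advance.

def offline_cost(n, B):
    """Clairvoyant optimum: rent throughout, or buy immediately."""
    return min(n, B)

def online_cost(n, B):
    """Break-even rule: rent for the first B - 1 days, buy on day B."""
    if n < B:
        return n              # skiing stopped before the purchase
    return (B - 1) + B        # rented B - 1 days, then bought

B = 10
# Ratio comparison (competitive ratio): worst case over all instance lengths.
worst_ratio = max(online_cost(n, B) / offline_cost(n, B) for n in range(1, 100))
# Difference comparison (value of perfect information), here for one instance.
regret_at_B = online_cost(B, B) - offline_cost(B, B)

print(worst_ratio, regret_at_B)   # prints "1.9 9", i.e. ratio 2 - 1/B
```

The same pair of algorithms thus yields either a ratio guarantee (competitive ratio 2 - 1/B) or a difference guarantee (regret B - 1), which is exactly the distinction drawn in the list above.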
What distinguishes the communities is:
- The way uncertainty is modelled (from sets of possible values via probability distributions to families of probability distributions).
- The frequency of decision making (once in a while versus online).
- The objective (to look at worst cases or average cases, to include risk, chance constraints, etc.).
- The class of problems (general, such as multistage LP, QP, or MIP, or specialised, such as scheduling, packing, or sequencing).
- The way information is revealed (fixed observation times versus uncertainty about when and if ever information will be available).
- The view on risk.
Some participants brought up their individual views on the topic. It was felt that the advantage of probabilistic modelling lies in the sound concept of probability, developed over centuries, and the clear way to obtain and process information (samples). On the other hand, the assumption that a probability model governs the data process is not always fulfilled, or the available information is so poor that range sets are all we have. Also, in long-term models it is unrealistic to assume that probability distributions do not change over time. Adaptive algorithms, in the broad sense, are a way to circumvent this difficulty.
This inspired a discussion about bridges and possible collaboration between the communities. Already existing bridges that were cited include: minimax and robust approaches, stochastic competitiveness analysis, certain stochastic dynamic models, and complexity studies in stochastic optimisation. The need for more real-world data and problems was expressed, as well as the unanimous wish to study in more detail the special problem classes presented at this seminar.