June 30 – July 5, 2002, Dagstuhl Seminar 02271
Information about this Dagstuhl Seminar is provided by
In the past twenty years, researchers in Combinatorial Optimization and in Theoretical Computer Science have spent considerable effort on resolving the computational complexity status of various algorithmic problems with respect to finding an optimal solution. Most standard problems have by now been catalogued as either NP-hard or solvable in polynomial time. How, then, should one proceed when encountering an NP-hard problem? One approach is to study the behaviour of so-called heuristics, "quick-and-dirty" algorithms that return feasible solutions which are not necessarily optimal. Such heuristics can be empirically valuable methods for attacking an NP-hard optimization problem, especially if the returned solutions are guaranteed to be not too far from optimal. Heuristics with a guaranteed worst-case behaviour are called approximation algorithms.
Approximation algorithms may be further classified into off-line and online algorithms. Off-line algorithms solve a problem with full knowledge of the complete problem data. Online algorithms construct partial solutions with partial knowledge of the problem data, and update their solutions every time new information is provided. In other words, they must handle a sequence of closely related and interleaved subproblems, satisfying each subproblem without knowledge of the future subproblems. Standard examples of online problems include scheduling the motion of elevators, finding routes in networks, and allocating cache memory. Clearly, online algorithms are able to work in real-time environments, whereas off-line algorithms will run into difficulties if the input keeps changing.
The usual (somewhat unfair) way of measuring the quality of an online algorithm is to compare it to the optimal solution of the corresponding off-line problem, where all information is available at the beginning. An online algorithm that always delivers results within a constant factor of the corresponding optimal off-line solution is called a competitive algorithm. Competitive algorithms were first investigated in 1969 in a paper by Graham dealing with online scheduling, and in 1974 in the Ph.D. thesis of David Johnson dealing with online bin packing.
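As a concrete illustration (not part of the seminar announcement itself), the Graham scheduling result mentioned above can be sketched in a few lines: his greedy "list scheduling" rule assigns each arriving job to the currently least-loaded of m machines, and the resulting makespan is guaranteed to be at most (2 - 1/m) times the optimal off-line makespan. The job instance below is an arbitrary example chosen for illustration.

```python
# Sketch of Graham's online list scheduling (1969): each arriving job is
# placed on the currently least-loaded of m machines. The makespan of the
# greedy schedule is at most (2 - 1/m) times the off-line optimum.
import heapq

def list_schedule(jobs, m):
    """Makespan of the greedy online schedule on m machines."""
    loads = [0.0] * m          # min-heap of current machine loads
    heapq.heapify(loads)
    for p in jobs:             # jobs arrive one at a time (online)
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + p)
    return max(loads)

def offline_lower_bound(jobs, m):
    """Two standard lower bounds on the optimal off-line makespan."""
    return max(sum(jobs) / m, max(jobs))

if __name__ == "__main__":
    jobs, m = [2, 3, 7, 1, 4, 6, 2], 3   # illustrative instance
    online = list_schedule(jobs, m)
    bound = offline_lower_bound(jobs, m)
    # Competitive guarantee: online makespan <= (2 - 1/m) * optimum,
    # and the optimum is at least the lower bound.
    assert online <= (2 - 1/m) * bound * (bound / bound) or online <= (2 - 1/m) * max(bound, online)
    print(online, bound)
```

Here the online makespan is 9 while the off-line lower bound is 25/3, so the greedy schedule is well within the guaranteed factor of 2 - 1/3.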
The Dagstuhl seminar on the design and analysis of competitive algorithms will communicate new results and sustain the momentum of this rapidly developing area.
Related Dagstuhl Seminar
- 9626: "On-line Algorithms" (1996)