07.09.14 - 12.09.14, Seminar 14372

Analysis of Algorithms Beyond the Worst Case

This seminar description was published on our web pages before the seminar and was used in the invitation to the seminar.


The theory of algorithms has traditionally focused on worst-case analysis. This focus has led to both a deep theory and many beautiful and useful algorithms. However, there are a number of important problems and algorithms for which worst-case analysis does not provide useful or empirically accurate results. For example, worst-case analysis suggests that the simplex method is an exponential-time algorithm for linear programming, while in fact it runs in near-linear time on almost all inputs of interest. The reason is that worst-case inputs are often rather contrived and hardly ever occur in practical applications. As a consequence, for many problems the classical theory is not able to classify algorithms meaningfully according to their performance.

Only in recent years has a paradigm shift towards a more realistic and robust algorithmic theory been initiated. The development of such a theory hinges on finding models that measure the performance of an algorithm not only by its worst-case behavior but rather by its behavior on "typical" inputs.

In this seminar, we will discuss various recent theoretical models and results that go beyond worst-case analysis. These models fall into two main categories: either they assume some randomness in the input (for instance, probabilistic and smoothed analysis or semirandom input models), or they consider only inputs that satisfy certain deterministic properties that mitigate the worst case (for instance, planted and stable solutions or fixed-parameter tractability). A standard formalization of the first kind is sketched below.
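To illustrate the first category, smoothed analysis (in the sense of Spielman and Teng) lets an adversary fix a worst-case input, which is then perturbed by random noise before it is handed to the algorithm. For an algorithm A with running time T_A on inputs of size n and Gaussian perturbations of standard deviation sigma, the smoothed complexity can be written as follows; this is a textbook-style formulation added here for illustration, with the notation T_A and sigma introduced for this sketch rather than taken from the seminar material:

% Smoothed complexity of algorithm A: the adversary picks \bar{x},
% which is then perturbed by Gaussian noise g of magnitude \sigma.
\[
  \mathrm{smoothed}_A(n,\sigma)
    \;=\;
  \max_{\bar{x}\in[-1,1]^n}\;
  \mathbb{E}_{g\sim\mathcal{N}(0,\sigma^2 I_n)}
  \bigl[\,T_A(\bar{x}+g)\,\bigr].
\]

For large sigma this approaches an average-case measure, while for sigma tending to zero it recovers worst-case analysis; results such as the polynomial smoothed complexity of the simplex method are statements about this quantity.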

The goal of this seminar is to help consolidate this line of research and to foster collaborations among researchers working in the different branches of analysis of algorithms beyond the worst case.