Dagstuhl Seminar 14372

Analysis of Algorithms Beyond the Worst Case

(Sep 07 – Sep 12, 2014)

Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/14372

Organizers

  • Maria-Florina Balcan (Carnegie Mellon University, US)
  • Bodo Manthey (University of Twente, NL)
  • Heiko Röglin (Universität Bonn, DE)
  • Tim Roughgarden (Stanford University, US)


Motivation

The theory of algorithms has traditionally focused on worst-case analysis. This focus has led to both a deep theory and many beautiful and useful algorithms. However, there are a number of important problems and algorithms for which worst-case analysis does not provide useful or empirically accurate results. For example, worst-case analysis suggests that the simplex method is an exponential-time algorithm for linear programming, while in fact it runs in near-linear time on almost all inputs of interest. The reason is that worst-case inputs are often rather contrived and hardly ever occur in practical applications. As a consequence, for many problems the classical theory cannot classify algorithms meaningfully according to their performance.

Only in recent years has a paradigm shift towards a more realistic and robust algorithmic theory been initiated. The development of such a theory hinges on finding models that measure the performance of an algorithm not only by its worst-case behavior but rather by its behavior on "typical" inputs.

In this seminar, we will discuss various recent theoretical models and results that go beyond worst-case analysis. These models fall into two main categories: either they assume some randomness in the input (for instance, probabilistic and smoothed analysis or semi-random input models), or they consider only inputs that satisfy certain deterministic properties that mitigate the worst case (for instance, planted and stable solutions or fixed-parameter tractability).

The goal of this seminar is to help consolidate the research and foster collaborations among researchers working in the different branches of analysis of algorithms beyond the worst case.


Summary

The theory of algorithms has traditionally focused on worst-case analysis. This focus has led to both a deep theory and many beautiful and useful algorithms. There are, however, a number of important problems and algorithms for which worst-case analysis does not provide useful or empirically accurate results. For example, worst-case analysis suggests that the simplex method is an exponential-time algorithm for linear programming, while in fact it runs in near-linear time on almost all inputs of interest. Worst-case analysis ranks all deterministic caching algorithms equally, while in almost all applications some algorithms (like least-recently-used) are consistently superior to others (like first-in-first-out).
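
The caching claim can be checked on synthetic data. The following Python sketch is a toy illustration, not taken from the seminar material: it counts cache misses of LRU and FIFO on a trace with skewed page popularity, where the trace length, cache size, and popularity weights are arbitrary choices.

# Toy comparison of LRU and FIFO page replacement (hypothetical example).
from collections import OrderedDict, deque
import random

def misses_lru(trace, capacity):
    cache, misses = OrderedDict(), 0
    for page in trace:
        if page in cache:
            cache.move_to_end(page)           # mark as most recently used
        else:
            misses += 1
            if len(cache) == capacity:
                cache.popitem(last=False)     # evict least recently used page
            cache[page] = True
    return misses

def misses_fifo(trace, capacity):
    cache, queue, misses = set(), deque(), 0
    for page in trace:
        if page not in cache:
            misses += 1
            if len(cache) == capacity:
                cache.discard(queue.popleft())  # evict oldest inserted page
            cache.add(page)
            queue.append(page)
    return misses

random.seed(1)
pages = list(range(1, 101))
weights = [1.0 / i for i in pages]            # skewed (Zipf-like) popularity
trace = random.choices(pages, weights=weights, k=20000)
print("LRU misses: ", misses_lru(trace, 10))
print("FIFO misses:", misses_fifo(trace, 10))

On such skewed traces LRU typically incurs noticeably fewer misses than FIFO, even though both policies have the same worst-case competitive ratio.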

The problem is that worst-case analysis does not take into account that worst-case inputs are often rather contrived and hardly ever occur in practical applications. As a consequence, for many problems the classical theory cannot classify algorithms meaningfully according to their performance. Even worse, for some important problems it recommends algorithms that perform badly in practice over algorithms that work well in practice, only because the artificial worst-case performance of the latter is bad.

Only in recent years has a paradigm shift towards a more realistic and robust algorithmic theory been initiated. The development of such a theory hinges on finding models that measure the performance of an algorithm not only by its worst-case behavior but rather by its behavior on typical inputs. However, for a given optimization problem it is usually impossible to rigorously define the notion of a "typical input", because what such an input looks like depends on the concrete application and on other parameters that are hard to pin down. The key to building a rigorous and realistic theory is hence not to define exactly what a typical input looks like, but to identify common properties that are shared by real-world inputs. Once such properties are identified, one can in many cases explain why certain heuristics work well in practice while others do not. The next step is then to look for algorithmic means to exploit these properties explicitly in order to obtain algorithms that perform better in practice because they are not tailored to unrealistic worst-case inputs.

Many different models that go beyond classical worst-case theory have been suggested. These models can be divided into two main categories: either they are based on the assumption that inputs are to some extent random (probabilistic analysis), or they consider only inputs that satisfy certain deterministic properties that mitigate the worst case.

Probabilistic Analysis

Average-case analysis is probably the first thought that springs to mind when mentioning probabilistic input models. In such an analysis, one considers the expected performance on random inputs. Starting in the seventies, many algorithms that showed a remarkable performance in practice have been analyzed successfully on random inputs. This includes algorithms for classical optimization problems, such as the traveling salesman problem, or the simplex method for linear programming.
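
In symbols, writing T_A for the cost of an algorithm A, average-case analysis fixes a distribution D_n over inputs of size n (the notation here is generic, not tied to a specific problem from the seminar) and studies the expected cost:

  T_{\mathrm{avg}}(n) = \mathbb{E}_{x \sim \mathcal{D}_n}\left[ T_A(x) \right].

The choice of D_n (uniformly random points, random linear programs, Erdős–Rényi graphs, and so on) is precisely what the criticism in the next paragraph targets.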

While average-case analysis has been successfully applied to many problems, a major concern is that, for many problems, inputs chosen completely at random have little in common with inputs arising in practice. Just as a random TV screen produced by static noise has nothing to do with a typical TV picture, a set of random points does not resemble, for instance, a realistic clustering instance with mostly well-separated clusters.

Smoothed Analysis. To overcome the drawbacks of average-case and worst-case analysis, the notion of smoothed analysis was suggested by Spielman and Teng in 2001. In this model, inputs are generated in two steps: first, an adversary chooses an arbitrary instance, and then this instance is slightly perturbed at random. The smoothed performance of an algorithm is defined to be the worst expected performance the adversary can achieve. This model can be viewed as a less pessimistic worst-case analysis in which the randomness rules out pathological worst-case instances that are rarely observed in practice but dominate the worst-case analysis. If the smoothed running time of an algorithm is low (i.e., the algorithm is efficient in expectation on any perturbed instance) and inputs are subject to a small amount of random noise, then one is unlikely to encounter an instance on which the algorithm performs poorly. In practice, random noise can stem from measurement errors, numerical imprecision, or rounding errors. It can also model arbitrary influences that we cannot quantify exactly but for which there is also no reason to believe that they are adversarial.

The framework of smoothed analysis was originally invented to explain the practical success of the simplex method for linear programming. Spielman and Teng analyzed linear programs in which each coefficient is perturbed by Gaussian noise with standard deviation σ. They showed that the smoothed running time of the simplex method is bounded polynomially in the input size and 1/σ. Hence, even if the amount of randomness is small, the expected running time of the simplex method is polynomially bounded. Since its invention, smoothed analysis has attracted a great deal of attention, and it has been applied in a variety of contexts, e.g., multi-objective optimization, local search, clustering, and online algorithms. By now smoothed analysis is widely accepted as a realistic alternative to worst-case analysis.
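
In symbols, the smoothed running time of an algorithm A under Gaussian perturbations is commonly defined as (with inputs normalized so that the perturbation magnitude σ is measured relative to the input, one common convention)

  T_{\mathrm{smooth}}(n,\sigma) = \max_{\hat{x}\,:\,|\hat{x}|=n} \; \mathbb{E}_{g \sim \mathcal{N}(0,\sigma^2 I)}\left[ T_A(\hat{x} + g) \right],

and A is said to have polynomial smoothed complexity if T_smooth(n, σ) is bounded by a polynomial in n and 1/σ, which is exactly the type of bound Spielman and Teng proved for the simplex method.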

Semi-random Models. Semi-random input models can be considered as analogues of smoothed analysis for graph problems, and they even predate smoothed analysis by a couple of years. There is a variety of semi-random graph models that go beyond the classical Erdős–Rényi random graphs. In most of these models, graphs are generated by a noisy adversary -- an adversary whose decisions (whether or not to insert a particular edge) have some small probability of being reversed. Another well-studied class of semi-random models are planted models, in which a solution (e.g., an independent set or a partitioning of the vertices into color classes) is chosen first, and then edges are added randomly or by an adversary of limited power in such a way that the given solution stays a valid solution for the given problem. Similar to smoothed analysis, semi-random models were invented in order to better understand the complexity of NP-hard graph problems, because Erdős–Rényi random graphs often do not reflect the instances one encounters in practice -- many graph problems are quite easy to solve on such random graphs.
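
As a concrete, if stylized, illustration of a planted model, the following Python sketch generates a planted-partition instance; the parameters p and q and the two-part structure are illustrative choices, not a specific model discussed at the seminar.

# Sketch of a planted-partition instance (hypothetical parameters): the
# planted bipartition is the intended solution, and a monotone adversary
# could afterwards add edges inside the parts (or delete cross edges)
# without invalidating it as a minimum bisection.
import random

def planted_partition(n, p, q, seed=0):
    """Vertices 0..n-1 are split evenly into two parts; each pair inside a
    part is joined with probability p, each pair across parts with
    probability q < p."""
    rng = random.Random(seed)
    label = [i % 2 for i in range(n)]            # planted bipartition
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            prob = p if label[u] == label[v] else q
            if rng.random() < prob:
                edges.add((u, v))
    return label, edges

labels, edges = planted_partition(n=200, p=0.30, q=0.05)
print(len(edges), "edges; planted parts of size", labels.count(0), labels.count(1))

The algorithmic question in such models is to recover the planted solution despite the adversary's limited interference.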

Deterministic Input Models

Smoothed analysis and semi-random models are multi-purpose frameworks that do not require much information about what typical inputs for the optimization problem at hand look like. If more information is available, it makes sense to identify structural properties of typical inputs that make them easier to solve than general inputs. There are well-known examples of this approach, such as the TSP, which becomes easier (in terms of approximation) when restricted to inputs in which the distances satisfy the triangle inequality. Also in computational geometry it is a common phenomenon that problems become easier if one assumes that no angles are too small or that not too many objects overlap in the same region.

In recent years there has been increased interest in more sophisticated deterministic input models, in particular for clustering problems. Balcan, Blum, and Gupta introduced and exploited the so-called (1+α, ε)-approximation-stability property of data in the context of clustering. This assumption is motivated by the observation that in many clustering applications there is usually one correct but unknown target clustering, and the goal is to find a clustering that is close to this target clustering and misclassifies only a few objects. On the other hand, in the common mathematical formulation of clustering problems, a potential function is defined that assigns a value to each clustering, and a clustering is computed that approximately optimizes this potential function (exact optimization is usually NP-hard). This approach makes sense only if clusterings that approximately optimize the potential function are close to the target clustering. Hence, an implicit assumption underlying this approach is that every clustering that approximately optimizes the objective function is close to the desired target clustering.

Balcan et al. made this assumption explicit: a clustering instance satisfies the (1+α, ε)-approximation-stability assumption if in every (1+α)-approximation of the potential function at most an ε-fraction of all objects is misclassified compared to the target clustering. Balcan et al. showed that clustering instances with this property are easier to solve than general instances. Specifically, they showed how to get ε-close to the target even for values of α for which finding a (1+α)-approximation is NP-hard. Voevodski et al. have shown that this approach leads to very efficient and accurate algorithms (with improved performance over previous state-of-the-art algorithms) for clustering biological datasets.
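
In symbols, with C_T the target clustering, OPT the optimum value of the potential function, and dist(C, C_T) the fraction of objects clustered differently from the target under the best matching of cluster labels, (1+α, ε)-approximation-stability requires that every clustering C satisfies

  \mathrm{cost}(\mathcal{C}) \le (1+\alpha)\cdot\mathrm{OPT} \;\Longrightarrow\; \mathrm{dist}(\mathcal{C},\mathcal{C}_T) \le \varepsilon.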

Bilu and Linial, and later Awasthi, Blum, and Sheffet, considered instances of clustering problems that are perturbation-resilient in the sense that small perturbations of the metric space do not change the optimal solution. They argue that interesting instances of clustering problems are stable, and they prove that the assumption of stability renders clustering polynomial-time solvable. Balcan and Liang further relaxed this assumption to require only that the optimal solution after the perturbation is close to the optimal solution for the unperturbed instance.
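
One common way to state γ-perturbation resilience (for some γ ≥ 1; the exact formulation varies slightly across the papers cited above) is that every perturbed distance function d' that is pointwise within a factor γ of the original d induces the same optimal clustering C*(·):

  d(u,v) \le d'(u,v) \le \gamma \cdot d(u,v) \ \ \text{for all } u,v \quad\Longrightarrow\quad \mathcal{C}^{*}(d') = \mathcal{C}^{*}(d).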

These results have triggered a significant amount of work over the past years on clustering and, more generally, machine learning problems, including subsequent works that propose new, related stability conditions (e.g., the "proximity condition" of Kannan and Kumar). These works demonstrate that identifying properties of real-world inputs can be extremely beneficial for our understanding of algorithmic problems.

Program of the Seminar

The program of the seminar consisted of 23 talks, including the following survey talks:

  • Preprocessing of NP-hard problems, Uriel Feige;
  • Approximation-stability and Perturbation-stability, Avrim Blum;
  • Computational Feasibility of Clustering under Clusterability Assumptions, Shai Ben-David;
  • Parametrizing the easiness of machine learning problems, Sanjoy Dasgupta;
  • Linear Algebra++: Adventures and Unsupervised Learning, Sanjeev Arora.

The rest of the talks were 30-minute presentations on recent research of the participants. The time between lunch and the afternoon coffee break was mostly left open for individual discussions and collaborations in small groups. One open-problem session was organized.

One of the main goals of the seminar was to foster collaborations among the researchers working in the different branches of analysis of algorithms as sketched above. This is particularly important because at the moment the two communities dealing with probabilistic analysis and deterministic input models are largely disjoint. The feedback provided by the participants shows that the goals of the seminar, namely to circulate new ideas and create new collaborations, were met to a large extent.

The organizers and participants wish to thank the staff and the management of Schloss Dagstuhl for their assistance and support in the arrangement of a very successful and productive event.

Copyright Maria-Florina Balcan, Bodo Manthey, Heiko Röglin, and Tim Roughgarden

Participants
  • Spyros Angelopoulos (UPMC - Paris, FR) [dblp]
  • Sanjeev Arora (Princeton University, US) [dblp]
  • Maria-Florina Balcan (Carnegie Mellon University, US) [dblp]
  • Shai Ben-David (University of Waterloo, CA) [dblp]
  • Markus Bläser (Universität des Saarlandes, DE) [dblp]
  • Avrim Blum (Carnegie Mellon University, US) [dblp]
  • Karl Bringmann (ETH Zürich, CH) [dblp]
  • Kamiel Cornelissen (University of Twente, NL) [dblp]
  • Radu Curticapean (Universität des Saarlandes, DE) [dblp]
  • Sanjoy Dasgupta (University of California - San Diego, US) [dblp]
  • Matthias Englert (University of Warwick - Coventry, GB) [dblp]
  • Michael Etscheid (Universität Bonn, DE) [dblp]
  • Uriel Feige (Weizmann Institute - Rehovot, IL) [dblp]
  • Tobias Friedrich (Universität Jena, DE) [dblp]
  • Anupam Gupta (Carnegie Mellon University, US) [dblp]
  • Marvin Künnemann (MPI für Informatik - Saarbrücken, DE) [dblp]
  • Tengyu Ma (Princeton University, US) [dblp]
  • Konstantin Makarychev (Microsoft Corporation - Redmond, US) [dblp]
  • Yury Makarychev (TTIC - Chicago, US) [dblp]
  • Bodo Manthey (University of Twente, NL) [dblp]
  • Matthias Mnich (Universität Bonn, DE) [dblp]
  • Heiko Röglin (Universität Bonn, DE) [dblp]
  • Tim Roughgarden (Stanford University, US) [dblp]
  • Or Sheffet (Harvard University - Cambridge, US) [dblp]
  • Christian Sohler (TU Dortmund, DE) [dblp]
  • Ruth Urner (Carnegie Mellon University, US) [dblp]
  • Sergei Vassilvitskii (Google Inc. - Mountain View, US) [dblp]

Classification
  • data structures / algorithms / complexity
  • optimization / scheduling

Keywords
  • analysis of algorithms
  • probabilistic analysis
  • smoothed analysis
  • clustering
  • realistic input models
  • robustness