##### Dagstuhl Seminar 13331

### Exponential Algorithms: Algorithms and Complexity Beyond Polynomial Time

##### (August 11 – 16, 2013)

##### Organizers

- Thore Husfeldt (IT University of Copenhagen, DK)
- Ramamohan Paturi (University of California - San Diego, US)
- Gregory B. Sorkin (London School of Economics, GB)
- Ryan Williams (Stanford University, US)

##### Contact

- Annette Beyer (for administrative questions)

##### Program

Computational complexity has demonstrated that thousands of important computational problems, spanning the sciences, are intimately linked: either they all have polynomial-time algorithms, or none does. Nearly all researchers believe that these problems do not all have low time complexity, formalised in the claim that the complexity classes P and NP are different. However, these problems must be solved, one way or another.

Approaches include approximation algorithms and average-case analysis; here we consider algorithms faster than exhaustive search but slower than polynomial time. We are interested in extending the techniques for designing such algorithms, and in developing a theory of computational complexity for this regime of computation.

The basic algorithmic techniques to avoid exhaustive search, now consolidated in the field's first textbook, are still being extended and refined. There is a steady pace of progress in polynomial-space dynamic programming algorithms, the fast zeta transform, measure-and-conquer methods, algebraic approaches, and algorithms for satisfiability.

The seminar is also concerned with the study of parameterized complexity. Although this field originates from the purely qualitative concept of fixed-parameter tractability, many recent results concern the dependence of the exponential running time on the parameter. This includes very active research on kernel sizes and various lower bounds.

A developing theory of exponential-time computation has revealed a finer-grained notion of problem complexity than NP-completeness. Of particular interest to the seminar are various exponential-time hypotheses (ETH, SETH, #ETH) and their relationship to circuit complexity, parameterized complexity, communication complexity, the Unique Games conjecture, data structures, and collapses of the polynomial hierarchy.

The immediate goal of this seminar is to bring together experts working in exponential-time algorithms, parameterized algorithms and complexity, and traditional branches of complexity theory, in a forum where they can communicate the latest and most significant developments in their respective areas, keeping in mind the broader goal of understanding the emerging relationships between the three areas. Additionally, the seminar will foster progress within each area, providing an environment for forming new collaborations and continuing old ones.

### Background

Computational complexity has demonstrated that thousands of important computational problems, spanning the sciences, are intimately linked: either they all have polynomial time algorithms, or none does. Nearly all researchers believe that P ≠ NP, and that these problems do not all have low time complexity. However, they must be solved, one way or another, which means relaxing the requirements for "solving" a problem. One natural requirement to relax is the running time. Problems are often solved in practice by algorithms with worst-case exponential time complexity. It is of interest to find the *fastest* algorithm for a given problem, be it polynomial, exponential, or something in between.

This relaxation has revealed a finer-grained notion of problem complexity than NP-completeness. By definition all NP-complete problems are equivalent as far as the existence of polynomial time algorithms is concerned. However, the exact time complexities of these problems can be very different, just as their best approximation ratios can vary.

**Algorithms for satisfiability** represent well the progress in the field and the questions that arise. The theory of NP-completeness says that the Circuit Sat problem and 3-Sat are polynomial time equivalent. However, from the exact, exponential time perspective, the two problems look radically different.

For 3-Sat (and k-Sat in general), algorithms faster than the exhaustive search of 2^n assignments have been known for many years and are continually improved. The analysis of the randomized PPSZ algorithm for 3-Sat has recently been improved to O(1.308^n), so currently the best known algorithm for this problem is also very simple. The best known deterministic algorithm runs in time O(1.331^n), and is obtained by derandomizing earlier local search algorithms. A very natural DPLL-type algorithm for Formula Sat in fact has good performance on linear size formulas. All of these results represent major conceptual contributions.
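The local-search family of algorithms mentioned above can be illustrated by Schöning-style randomized local search: restart from a random assignment, and repeatedly flip a variable from a random unsatisfied clause. The sketch below is ours (function name, restart count, and walk length are simplified for illustration, not the tuned parameters of the cited results):

```python
import random

def schoening_3sat(clauses, n, tries=200):
    """Schöning-style randomized local search for 3-SAT.
    clauses: list of tuples of nonzero ints (DIMACS-style literals,
    positive = variable true, negative = variable false).
    Returns a satisfying assignment (dict var -> bool) or None."""
    for _ in range(tries):
        # random restart: fresh uniform assignment
        assign = {v: random.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(3 * n):  # short walk, O(n) steps per restart
            unsat = [c for c in clauses
                     if not any((lit > 0) == assign[abs(lit)] for lit in c)]
            if not unsat:
                return assign
            # flip a random variable of a random unsatisfied clause
            lit = random.choice(random.choice(unsat))
            assign[abs(lit)] = not assign[abs(lit)]
    return None
```

The derandomized O(1.331^n) algorithm replaces the random choices above with covering codes and deterministic branching, but the walk structure is the same.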

No such progress has been made for general Circuit Sat. In fact, such results would have major implications in circuit complexity: even a 1.99^n · poly(m) time algorithm for satisfiability of circuits with n inputs and m gates would imply exponential-size circuit *lower bounds* for explicit problems. Between 3-Sat and Circuit Sat there are also intermediate problems, such as CNF-Sat, that have resisted all efforts to produce an O(1.99^n) time algorithm.

**The basic algorithmic techniques** to avoid exhaustive search are now consolidated in the field's first textbook (Fomin and Kratsch, *Exact Exponential Algorithms*, Springer 2010), though they are still being extended and refined. For example, there is now a general framework for making various exponential time dynamic programming algorithms, such as the standard algorithms for Knapsack and Subset Sum, run in polynomial space. The fast zeta transform, which plays a central role in the implementation of inclusion-exclusion algorithms, continues to be actively researched. And "measure-and-conquer" methods for analyzing branching/backtracking algorithms continue to be enhanced.
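As an illustration, the fast zeta transform mentioned above admits the standard O(2^n · n) in-place implementation over the subset lattice, in place of the naive O(3^n) summation (a sketch; the function name is ours):

```python
def fast_zeta(f):
    """In-place fast zeta transform on the subset lattice.
    f: list of length 2^n indexed by bitmask S; afterwards
    f[S] = sum of the original f[T] over all subsets T of S.
    Runs in O(2^n * n) time via one pass per coordinate (Yates's method),
    versus O(3^n) for summing over all subset pairs directly."""
    n = len(f).bit_length() - 1
    for i in range(n):                 # one pass per ground-set element
        for s in range(len(f)):
            if s & (1 << i):           # fold in the value without element i
                f[s] += f[s ^ (1 << i)]
    return f
```

For example, transforming the all-ones vector yields f[S] = 2^|S|, the number of subsets of S.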

However, many other powerful techniques have been explored only recently. One idea is to find combinatorial structures (such as matchings) by looking for corresponding algebraic objects (such as polynomials). The idea dates to Edmonds if not Tutte, but was introduced to exponential-time path and packing problems by Koutis, leading to a 2^k · poly(n) algorithm for finding a k-path in a graph and a breakthrough O(1.67^n) time algorithm for finding a Hamiltonian path, improving on a bound that had stood for some 50 years.
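The O(1.67^n) Hamiltonicity algorithm itself is intricate, but the older inclusion-exclusion idea it improves on fits in a few lines: count length-(n−1) walks restricted to each vertex subset, with alternating signs. The sketch below (our own function name) counts directed Hamiltonian paths in O(2^n · poly(n)) time and polynomial space:

```python
from itertools import combinations

def count_hamiltonian_paths(adj):
    """Count directed Hamiltonian paths by inclusion-exclusion:
    sum over vertex subsets S of (-1)^(n-|S|) times the number of
    walks with n vertices that stay inside S.  Walks missing some
    vertex are cancelled by the signs; only Hamiltonian paths survive.
    adj: n x n 0/1 adjacency matrix.  Uses only polynomial space."""
    n = len(adj)
    total = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            # DP: cnt[v] = number of walks of current length ending at v
            cnt = {v: 1 for v in S}
            for _ in range(n - 1):
                cnt = {v: sum(cnt[u] for u in S if adj[u][v]) for v in S}
            total += (-1) ** (n - k) * sum(cnt.values())
    return total
```

On the triangle K3 every ordering of the three vertices is a path, so the count is 3! = 6; each undirected Hamiltonian path is counted twice (once per direction).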

Other open problems in the field have been attacked by intricate, dedicated analyses; for example, there is now an algorithm for scheduling partially ordered jobs in O((2−ε)^n) time.

**Parameterized complexity** is a closely related field that also investigates exponential time computation. Fundamentally, the field is interested in the dichotomy between algorithms that admit running times of the form f(k) · poly(n) (called fixed-parameter tractability) and those that do not, leading to qualitative hardness notions like W[1]-hardness. The field continues to make great progress, with the parameterized tractability of many fundamental problems only now being settled: the first fixed-parameter algorithms for finding topological minors and for the multicut problem were discovered just recently.
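The flavor of an f(k) · poly(n) running time is captured by the textbook branching algorithm for Vertex Cover (a generic illustration of fixed-parameter tractability, not one of the specific results above):

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by its edge list has a vertex
    cover of size <= k, via the classic 2^k branching: any cover must
    contain an endpoint of every edge, so pick an uncovered edge and
    branch on its two endpoints.  The search tree has depth at most k,
    giving O(2^k * |E|) time -- exponential only in the parameter k."""
    if not edges:
        return True          # every edge is covered
    if k == 0:
        return False         # an edge remains but the budget is spent
    u, v = edges[0]
    for pick in (u, v):      # try putting each endpoint into the cover
        rest = [(a, b) for (a, b) in edges if pick not in (a, b)]
        if vertex_cover(rest, k - 1):
            return True
    return False
```

For instance, a triangle has no vertex cover of size 1 but does have one of size 2, while a star is covered by its center alone.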

However, many recent results in that area are interested in determining the (typically exponential) growth rate of the function f, instead of just establishing its existence. For example, a recent, very successful focus of parameterized complexity is the existence of problem *kernels* of polynomial size, or their absence under assumptions from classical computational complexity. In another direction, very strong lower bounds for algorithms parameterized by treewidth can now be shown under hypotheses about the exponential time complexity of Sat.

**A quantitative theory** of the computational complexity of hard problems would address questions such as why 3-Sat can be solved in O(1.4^n) time but CNF-Sat apparently cannot. Ideally, we could hope for a characterization of the exact complexity of NP-complete problems, perhaps under some plausible assumptions. There is a growing body of work on the exact complexity of NP-complete problems which draws heavily from parameterized complexity theory. The *Exponential Time Hypothesis* (ETH), which posits that 3-Sat cannot be solved in 2^{o(n)} time, has given a strong explanatory framework for why some classes of problems admit improved algorithms while others are resistant. The results surrounding ETH show that if 3-Sat could be solved in subexponential time, then many other NP problems would also have subexponential algorithms.

Another compelling conjecture is the *Strong Exponential Time Hypothesis* (SETH) that CNF Satisfiability cannot be solved in 1.999^n time on formulas with n variables and cn clauses (for sufficiently large c). SETH has implications for k-Sat, other graph problems, and parameterized computation. There is less consensus about the truth of SETH; nevertheless, studying its implications will help better understand what makes CNF so difficult. A counting version of the hypothesis, #ETH, has recently been introduced to study the exponential time complexity of counting problems, such as the permanent and the Tutte polynomial.

**Connections to other fields** are being discovered as well, such as the importance of exponential time algorithms to the study of lower bounds in circuit complexity, as mentioned above.

For another example, a celebrated recent result in the complexity of approximation algorithms exhibits an exp(O(n^ε)) time approximation algorithm for Khot's Unique Games problem. This suggests that approximating unique games is a significantly easier task than solving NP-hard problems such as 3-Sat. The key to the algorithm is a new type of graph decomposition based on spectral methods. This decomposition method may well lead to more developments in exponential algorithms.

Furthermore, there are surprising connections between SETH and various other well-studied questions from other areas such as communication complexity and the 3-Sum hypothesis used in computational geometry and data structures. The instance compressibility notion introduced in the study of kernelisation turns out to be connected to the construction of hash functions.

These results show that increased attention to exponential time algorithms leads to progress of the highest caliber in well-established areas of the theory of computation.

##### Participants

- Per Austrin (KTH Royal Institute of Technology, SE)
- Hans L. Bodlaender (Utrecht University, NL)
- Marek Cygan (University of Warsaw, PL)
- Holger Dell (University of Wisconsin - Madison, US)
- Andrew Drucker (MIT - Cambridge, US)
- Michael Elberfeld (ICSI - Berkeley, US)
- Fedor V. Fomin (University of Bergen, NO)
- Serge Gaspers (UNSW - Sydney, AU)
- Pinar Heggernes (University of Bergen, NO)
- Timon Hertli (ETH Zürich, CH)
- Thore Husfeldt (IT University of Copenhagen, DK)
- Kazuo Iwama (Kyoto University, JP)
- Petteri Kaski (Aalto University, FI)
- Eun Jung Kim (University Paris-Dauphine, FR)
- Joachim Kneis (RWTH Aachen, DE)
- Mikko Koivisto (University of Helsinki, FI)
- Lukasz Kowalik (University of Warsaw, PL)
- Dieter Kratsch (University of Metz, FR)
- Stefan Kratsch (TU Berlin, DE)
- Alexander S. Kulikov (Steklov Institute - St. Petersburg, RU)
- Daniel Lokshtanov (University of Bergen, NO)
- Shachar Lovett (University of California - San Diego, US)
- Dániel Marx (Hungarian Academy of Sciences - Budapest, HU)
- Matthias Mnich (Universität des Saarlandes, DE)
- Jesper Nederlof (Utrecht University, NL)
- Rolf Niedermeier (TU Berlin, DE)
- Ramamohan Paturi (University of California - San Diego, US)
- Peter Rossmanith (RWTH Aachen, DE)
- Rahul Santhanam (University of Edinburgh, GB)
- Saket Saurabh (The Institute of Mathematical Sciences, IN)
- Dominik Alban Scheder (Aarhus University, DK)
- Stefan Schneider (University of California - San Diego, US)
- Kazuhisa Seto (The University of Electro-Communications - Tokyo, JP)
- Gregory B. Sorkin (London School of Economics, GB)
- Ewald Speckenmeyer (Universität Köln, DE)
- David Steurer (Cornell University, US)
- Stefan Szeider (TU Wien, AT)
- Navid Talebanfard (Aarhus University, DK)
- Suguru Tamaki (Kyoto University, JP)
- Virginia Vassilevska Williams (Stanford University, US)
- Magnus Wahlström (Royal Holloway University of London, GB)
- Ryan Williams (Stanford University, US)

##### Classification

- data structures / algorithms / complexity

##### Keywords

- Exponential time
- exact algorithms
- satisfiability
- parameterized computation