https://www.dagstuhl.de/10441

### October 31 – November 5, 2010, Dagstuhl-Seminar 10441

# Exact Complexity of NP-hard Problems

## Organizers

Thore Husfeldt (Lund University, SE)

Dieter Kratsch (University of Metz, FR)

Ramamohan Paturi (University of California – San Diego, US)

Gregory B. Sorkin (London School of Economics, GB)


## Summary

A decade before NP-completeness became the lens through which Computer Science views computationally hard problems, beautiful algorithms were discovered that are much better than exhaustive search, for example Bellman’s 1962 dynamic programming treatment of the Traveling Salesman problem and Ryser’s 1963 inclusion–exclusion formula for the permanent.
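Ryser’s formula illustrates how inclusion–exclusion beats brute force: the permanent, naively a sum over n! permutations, becomes an alternating sum over 2^n column subsets, computable in O*(2^n) time. A minimal Python sketch (function name ours):

```python
from itertools import combinations

def permanent_ryser(a):
    """Ryser's inclusion-exclusion formula for the permanent of an n x n
    matrix: perm(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i sum_{j in S} a[i][j].  Runs in O(2^n * n^2) time,
    versus O(n * n!) for the defining sum over permutations."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for i in range(n):
                prod *= sum(a[i][j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

For example, `permanent_ryser([[1, 2], [3, 4]])` returns 1·4 + 2·3 = 10.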

Today we know that all NP-hard problems are unlikely to admit polynomial-time algorithms, yet NP-hard problems must be solved, in some way, for everything from manufacturing and supply-chain optimization to railroad timetabling. Approaches include approximation algorithms, heuristics, average-case analysis, and exact exponential-time algorithms: all are essential. While all NP-complete problems are equivalent from the polynomial-time perspective, their exponential-time properties vary widely. Which problems are easiest or hardest? What are the promising algorithmic techniques? What are the connections with parametrized complexity? How fast an algorithm can we hope to find? What about complexity lower bounds?

Work addressing such questions, from both the algorithmic and the complexity-theoretic sides, has become known as exact complexity. Despite significant progress, the area is still fairly new and many fundamental problems remain open. Where the approximation algorithms field, for example, has unifying algorithmic techniques such as LP rounding and semidefinite programming, and hardness techniques from probabilistically checkable proofs and the Unique Games conjecture, much exact algorithms work is still specific to a particular NP-complete problem: powerful unified techniques are just emerging.

Exciting new results and directions have been established by scientists on several continents, with important contributions coming from young researchers such as Williams and Traxler. The purpose of this seminar was to accelerate developments in this late-blooming field. Below, we outline several new results and promising directions.

The Tutte polynomial of an n-vertex, m-edge graph can trivially be evaluated in time O*(2^m), but no vertex-parameterized algorithm is obvious. The Potts (q-coloring) partition function can trivially be evaluated in time O*(q^n), but it is not obvious whether the dependence on q can be removed. The Fortuin–Kasteleyn model from statistical physics generalizes both, and a breakthrough result of Björklund, Husfeldt, Kaski, and Koivisto [FOCS 2006, STOC 2007, FOCS 2008] shows how to evaluate it using the inclusion–exclusion method in time O*(2^n). It is an intriguing question how far these techniques can be extended.
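The flavor of these O*(2^n) inclusion–exclusion algorithms can be seen on the simpler question of q-colorability: a graph is q-colorable exactly when its vertices can be covered by q independent sets, and the number of such covers is an alternating sum over vertex subsets. A hedged Python sketch (function and variable names ours), using a bitmask recurrence to tabulate the number i(S) of independent subsets of each vertex subset S:

```python
def is_q_colorable(n, edges, q):
    """Inclusion-exclusion test for q-colorability in O*(2^n) time:
    G is q-colorable iff
        sum over S subset of V of (-1)^{n-|S|} * i(S)^q  >  0,
    where i(S) counts independent subsets of S (including the empty set).
    Vertices are 0..n-1; edges is a list of pairs."""
    nbr = [0] * n  # neighbor bitmasks
    for u, v in edges:
        nbr[u] |= 1 << v
        nbr[v] |= 1 << u
    # i[S] via the recurrence i(S) = i(S \ {v}) + i(S \ N[v]) for v in S:
    # independent subsets of S either avoid v, or contain v and avoid N(v).
    i = [0] * (1 << n)
    i[0] = 1
    for S in range(1, 1 << n):
        v = (S & -S).bit_length() - 1  # lowest vertex in S
        i[S] = i[S & ~(1 << v)] + i[S & ~(nbr[v] | (1 << v))]
    total = 0
    for S in range(1 << n):
        sign = -1 if (n - bin(S).count("1")) % 2 else 1
        total += sign * i[S] ** q
    return total > 0
```

For instance, a triangle fails the test for q = 2 (the alternating sum is 0) but passes for q = 3, where the sum comes out to 6, the number of proper 3-colorings.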

Recently, the color-coding technique of Alon, Yuster, and Zwick [JACM 1995] has been extended by introducing algebraic structures that yield faster fixed-parameter tractable algorithms. Koutis [ICALP 2008] uses “vector coding” for a randomized O*(2^(3k/2)) algorithm for the k-Path problem, and Williams [IPL 2009] improves this to O*(2^k). Such algorithms from group algebra are a promising direction for further exploration.
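For contrast with these algebraic refinements, the original color-coding idea is easy to sketch: color the vertices randomly with k colors, search for a "colorful" path (all colors distinct) by dynamic programming over color sets, and repeat to drive down the error probability. A minimal Python sketch (names and trial count ours), with one-sided error: it may miss a k-path but never reports a spurious one.

```python
import random

def has_k_path(adj, k, trials=300):
    """Color-coding test for a simple path on k vertices.
    adj[v] is the list of neighbors of vertex v.  A single trial finds an
    existing k-path with probability at least k!/k^k, since it succeeds
    whenever the path's vertices happen to receive distinct colors."""
    n = len(adj)
    for _ in range(trials):
        color = [random.randrange(k) for _ in range(n)]
        # reach[v]: color sets of colorful paths ending at v
        reach = [{frozenset([color[v]])} for v in range(n)]
        for _ in range(k - 1):  # extend paths one vertex at a time
            nxt = [set() for _ in range(n)]
            for u in range(n):
                for cs in reach[u]:
                    for v in adj[u]:
                        if color[v] not in cs:
                            nxt[v].add(cs | {color[v]})
            reach = nxt
        if any(reach):  # some colorful k-vertex path survived
            return True
    return False
```

The DP state is a (vertex, color set) pair rather than a (vertex, vertex set) pair, which is the whole point: 2^k states per vertex instead of 2^n.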

Branch-and-reduce is one of the most frequently used methods for solving NP-hard problems, but current analyses of such algorithms may be overly pessimistic. Fomin, Grandoni and Kratsch [ICALP 2005, SODA 2006] used a measure-and-conquer framework to establish simple and fast algorithms for the Minimum Dominating Set and Maximum Independent Set problems. This and related methods, such as Eppstein’s quasiconvex analysis [SODA 2004] and Scott and Sorkin’s linear programming method [Random 2005], have become indispensable, but a need remains for further improvements.
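As a toy illustration of the branch-and-reduce paradigm (not the measure-and-conquer algorithms cited above), a Maximum Independent Set solver can combine one reduction rule with one branching rule. A hedged Python sketch (names ours):

```python
def mis_size(adj_sets, vertices=None):
    """Branch-and-reduce sketch for Maximum Independent Set.
    adj_sets[v] is the set of neighbors of vertex v.

    Reduction rule: a vertex of degree <= 1 in the remaining graph can
    always be put into some maximum independent set, so take it.
    Branching rule: on a maximum-degree vertex v, either discard v, or
    take v and discard all of its neighbors."""
    if vertices is None:
        vertices = set(range(len(adj_sets)))
    if not vertices:
        return 0
    deg = {v: len(adj_sets[v] & vertices) for v in vertices}
    for v in vertices:  # reduction: no branching needed for degree <= 1
        if deg[v] <= 1:
            return 1 + mis_size(adj_sets, vertices - ({v} | adj_sets[v]))
    v = max(vertices, key=lambda u: deg[u])  # branch on max degree
    return max(mis_size(adj_sets, vertices - {v}),
               1 + mis_size(adj_sets, vertices - ({v} | adj_sets[v])))
```

Branching on a high-degree vertex makes the "take v" branch shrink the instance quickly; measure-and-conquer analyses sharpen the running-time bound for exactly this kind of recursion by weighting vertices according to their degrees rather than merely counting them.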

Faster algorithms, notably for Maximum Independent Set, have resulted from computer-produced graphical reductions and case analysis. Can these automated techniques be put on a more general theoretical level, and improved? Can similar automation be applied to logic-based branching rules such as the “clause learning” of Kulikov and Kutzkov [CSR 2007]? What about lower bounds on such local methods?

The meeting was attended by 46 researchers, the maximum possible modulo some last-minute cancellations. The organizers are grateful to all who came, and regret that, due to a gratifyingly high acceptance rate, others who would have contributed could not be invited. The participants came from around the globe, predominantly from Europe as usual for this field, but this time also with a good showing from the US.

## Dagstuhl-Seminar Series

- 13331: "Exponential Algorithms: Algorithms and Complexity Beyond Polynomial Time" (2013)
- 08431: "Moderately Exponential Time Algorithms" (2008)

## Classification

- Data Structures / Algorithms / Complexity

## Keywords

- Complexity
- Algorithms
- NP-hard Problems
- Exponential Time
- SAT
- Graphs