June 10 – 15, 2012, Dagstuhl Seminar 12241

Data Reduction and Problem Kernels


Michael R. Fellows (Charles Darwin University – Darwin, AU)
Jiong Guo (Universität des Saarlandes, DE)
Dániel Marx (Hungarian Academy of Sciences – Budapest, HU)
Saket Saurabh (University of Bergen, NO)


Dagstuhl Report, Volume 2, Issue 6


Preprocessing (data reduction or kernelization) is used universally in almost every practical computer implementation that aims to deal with an NP-hard problem. The history of preprocessing, such as applying reduction rules to simplify truth functions, can be traced back to the origins of computer science, notably the 1950s work of Quine, and much more. A modern example showing the striking power of efficient preprocessing is the commercial integer linear programming solver CPLEX. The goal of a preprocessing subroutine is to efficiently solve the "easy parts" of a problem instance and to shrink it to its computationally difficult "core" structure (the problem kernel of the instance).

How can we measure the efficiency of such a kernelization subroutine? For a long time, the mathematical analysis of polynomial-time preprocessing algorithms was neglected. The basic reason for this anomalous development of theoretical computer science was the following: if we start with an instance I of an NP-hard problem and seek an efficient (polynomial-time) subroutine that replaces I with an equivalent instance I' with |I'| < |I|, then success would imply P = NP, since repeated application of such a subroutine would solve the problem in polynomial time. This discouraged efforts in this research direction, from a mathematically-powered point of view. The situation regarding the systematic, mathematically sophisticated investigation of preprocessing subroutines has changed drastically with the advent of parameterized complexity, where the issues are naturally framed. More specifically, we ask for upper bounds on the size of the reduced instance as a function of a parameter of the input, assuming a polynomial-time reduction/preprocessing algorithm.
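Formally, in the standard textbook formulation (the size-bound function g below is our notation, since the summary above does not fix one), this reads as follows:

    A kernelization for a parameterized problem $L \subseteq \Sigma^{*} \times \mathbb{N}$
    is a polynomial-time algorithm mapping each instance $(I, k)$ to an instance
    $(I', k')$ such that
    \[
        (I, k) \in L \iff (I', k') \in L,
        \qquad |I'| \le g(k), \qquad k' \le g(k)
    \]
    for some computable function $g$; if $g$ can be chosen polynomial (linear),
    the problem is said to admit a polynomial (linear) kernel.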

A typical example is the famous Nemhauser-Trotter kernel for the Vertex Cover problem, showing that a kernel with at most 2k vertices can be obtained, where k is the requested maximum size of a solution. A large number of results have been obtained in the past years, and research in this area is growing rapidly, not only in the number of papers appearing in top theoretical computer science and algorithms conferences and journals, but also in the range of techniques. Important recent developments include new lower-bound techniques showing (under complexity-theoretic assumptions) that certain problems cannot have kernels below certain sizes; meta-results showing that large classes of problems, including many problems on planar graphs, all have small (e.g., linear) kernels; and matroid-based techniques for obtaining randomized kernels.
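To make the notion concrete, here is a minimal sketch of a kernelization for Vertex Cover in Python. It implements the simple classical Buss reduction rules rather than the Nemhauser-Trotter kernel mentioned above (which requires an LP or matching computation); the adjacency-set representation and the function name are our own choices. After the rules are applied exhaustively, a YES-instance retains at most k^2 edges and k^2 + k vertices.

    def buss_kernel(adj, k):
        """Buss kernelization for Vertex Cover(G, k).

        adj: dict mapping each vertex to the set of its neighbours (undirected).
        Returns (adj', k', forced), where adj' has at most k'*k' edges,
        or None if (G, k) is a NO-instance.
        """
        adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
        forced = set()   # vertices that belong to every cover of size <= k
        changed = True
        while changed:
            changed = False
            for v in list(adj):
                if not adj[v]:                  # Rule 1: drop isolated vertices
                    del adj[v]
                    changed = True
                elif len(adj[v]) > k:           # Rule 2: degree > k forces v
                    forced.add(v)               # into every size-<=k cover
                    for u in adj[v]:
                        adj[u].discard(v)
                    del adj[v]
                    k -= 1
                    changed = True
                    if k < 0:
                        return None             # budget exhausted: NO-instance
        edges = sum(len(ns) for ns in adj.values()) // 2
        if edges > k * k:                       # max degree <= k, so a size-k
            return None                         # cover hits at most k^2 edges
        return adj, k, forced

For example, buss_kernel({1: {2, 3}, 2: {1}, 3: {1}}, 1) forces vertex 1 into the cover and returns an empty kernel with remaining budget 0.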

Kernelization is a vibrant and rapidly developing area. The meeting aimed at consolidating the results achieved in recent years, discussing future research directions, further exploring the application potential of kernelization algorithms, and giving the participants excellent opportunities to engage in joint research and discussions on open problems and future directions.

The main highlights of the workshop were talks presenting solutions to two central open problems in the area of kernelization.

The seminar consisted of twenty-two talks, a session on open questions, and informal discussions among the participants. The organizers selected the talks so as to have comprehensive lectures giving an overview of the main topics as well as presentations of new research results. Each day consisted of talks and free time for informal gatherings among the participants. On the fourth day of the seminar we celebrated the 60th birthday of Mike Fellows, one of the founders of parameterized complexity. On this day we had several talks on the origins, history, and current developments of the field of parameterized complexity.

Classification

  • Data Structures / Algorithms / Complexity

Keywords

  • Preprocessing
  • Fixed-parameter tractability
  • Parameterized algorithmics


All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the Dagstuhl Reports series. Together with the collector of the seminar, the organizers compile a report that summarizes the contributions of the authors and complements them with an overall summary.



There is furthermore the possibility to publish a comprehensive collection of peer-reviewed papers in the Dagstuhl Follow-Ups series.
