http://www.dagstuhl.de/12241

June 10–15, 2012, Dagstuhl Seminar 12241

Data Reduction and Problem Kernels

Organizers

Michael R. Fellows (Charles Darwin University – Darwin, AU)
Jiong Guo (Universität des Saarlandes, DE)
Dániel Marx (Hungarian Academy of Sciences – Budapest, HU)
Saket Saurabh (University of Bergen, NO)


Documents

Dagstuhl Report, Volume 2, Issue 6
List of Participants
Shared Documents

Summary

Preprocessing (data reduction or kernelization) is used universally in almost every practical computer implementation that aims to deal with an NP-hard problem. The history of preprocessing, such as applying reduction rules to simplify truth functions, can be traced back to the origins of computer science, for example to Quine's work in the 1950s. A modern example showing the striking power of efficient preprocessing is the commercial integer linear programming solver CPLEX. The goal of a preprocessing subroutine is to solve the “easy parts” of a problem instance efficiently and to shrink it to its computationally difficult “core” structure (the problem kernel of the instance).

How can we measure the efficiency of such a kernelization subroutine? For a long time, the mathematical analysis of polynomial-time preprocessing algorithms was neglected. The basic reason for this anomalous development of theoretical computer science was the following: if we start with an instance I of an NP-hard problem and seek an efficient (polynomial-time) subroutine that replaces I with an equivalent instance I' with |I'| < |I|, then success would imply P = NP, since iterating such a subroutine would solve the problem outright in polynomial time. This discouraged efforts in this research direction from a mathematical point of view. The situation regarding the systematic, mathematically sophisticated investigation of preprocessing subroutines changed drastically with the advent of parameterized complexity, where the issues are naturally framed. More specifically, we ask for upper bounds on the size of the reduced instance as a function of a parameter of the input, assuming a polynomial-time reduction/preprocessing algorithm.
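To make this concrete, the standard definition (textbook material, not specific to this seminar) is the following: a kernelization for a parameterized problem L is a polynomial-time algorithm that maps an instance (I, k) to an instance (I', k') such that

    (I, k) ∈ L  ⇔  (I', k') ∈ L,   with |I'| ≤ f(k) and k' ≤ f(k)

for some computable function f. When f is polynomially bounded, the problem is said to admit a polynomial kernel.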

A typical example is the famous Nemhauser-Trotter kernel for the Vertex Cover problem, which shows that a kernel with at most 2k vertices can be obtained, where k is the requested maximum size of a solution. A large number of results have been obtained in recent years, and research in this area is growing rapidly, not only in the number of papers appearing in top theoretical computer science and algorithms conferences and journals, but also in the range of techniques. Important recent developments include new lower-bound techniques showing (under complexity-theoretic assumptions) that certain problems cannot have kernels below certain sizes; meta-results showing that large classes of problems, including many problems on planar graphs, all have small (e.g., linear) kernels; and matroid-based techniques for obtaining randomized kernels.
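The Nemhauser-Trotter kernel itself relies on linear programming; as a self-contained illustration of how such reduction rules look in practice, here is a minimal sketch of the even simpler classical Buss kernelization for Vertex Cover, which yields an O(k^2)-vertex kernel. The adjacency-dict representation and the function name are ours, chosen purely for illustration.

    # A sketch of Buss's kernelization for Vertex Cover, assuming the graph
    # is given as an adjacency dict {vertex: set of neighbors}. This is a
    # simpler (O(k^2)-vertex) kernel than the LP-based Nemhauser-Trotter
    # 2k-vertex kernel discussed above.

    def buss_kernel(adj, k):
        """Reduce (G, k) to an equivalent Vertex Cover instance (G', k'),
        or return None if (G, k) is provably a no-instance."""
        adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
        changed = True
        while changed:
            changed = False
            for v in list(adj):
                if len(adj[v]) > k:
                    # Rule 1: a vertex of degree > k lies in every cover of
                    # size <= k (otherwise all of its > k neighbors would).
                    for u in adj[v]:
                        adj[u].discard(v)
                    del adj[v]
                    k -= 1
                    changed = True
                    if k < 0:  # forced more vertices than the budget allows
                        return None
                elif not adj[v]:
                    # Rule 2: isolated vertices cover nothing; delete them.
                    del adj[v]
                    changed = True
        # After exhaustive reduction every remaining vertex has degree
        # between 1 and k, so any yes-instance has at most k^2 edges.
        if sum(len(ns) for ns in adj.values()) // 2 > k * k:
            return None
        return adj, k

For example, on a star with center c and k+1 leaves, Rule 1 puts c into the cover (decreasing the budget to k-1) and Rule 2 then removes the now-isolated leaves, leaving an empty kernel.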

Kernelization is a vibrant and rapidly developing area. The meeting aimed to consolidate the results achieved in recent years, to discuss future research directions, to explore further the application potential of kernelization algorithms, and to give the participants excellent opportunities to engage in joint research and discussions on open problems and future directions.

The main highlights of the workshop were talks presenting solutions to two major open problems in the area of kernelization.

The seminar consisted of twenty-two talks, a session on open questions, and informal discussions among the participants. The organizers selected the talks so as to combine comprehensive lectures giving an overview of the main topics with communications of new research results. Each day consisted of talks and free time for informal gatherings among participants. On the fourth day of the seminar we celebrated the 60th birthday of Mike Fellows, one of the founders of parameterized complexity; on this day we had several talks on the origins, history, and current developments of the field.

Classification

  • Data Structures / Algorithms / Complexity

Keywords

  • Preprocessing
  • Fixed-parameter tractability
  • Parameterized algorithmics

Book exhibition

Books from the participants of the current seminar are exhibited in the library, ground floor, during the seminar week.

Documentation

Each Dagstuhl Seminar and Dagstuhl Perspectives Workshop is documented in the series Dagstuhl Reports. The seminar organizers, in cooperation with the collector, prepare a report that includes contributions from the participants' talks together with a summary of the seminar.


Publications

Furthermore, a comprehensive peer-reviewed collection of research papers can be published in the series Dagstuhl Follow-Ups.

Dagstuhl's Impact

Please inform us when a publication appears as a result of your seminar. These publications are listed in the category Dagstuhl's Impact and are presented on a special shelf on the ground floor of the library.
