https://www.dagstuhl.de/14121

### March 16–21, 2014, Dagstuhl Seminar 14121

# Computational Complexity of Discrete Problems

## Organizers

Anna Gál (University of Texas – Austin, US)

Michal Koucký (Charles University – Prague, CZ)

Oded Regev (New York University, US)

Rüdiger Reischuk (Universität Lübeck, DE)

## Documents

Dagstuhl Report, Volume 4, Issue 3

Motivation text

List of participants

Dagstuhl's Impact: documents available

Program of the Dagstuhl Seminar [pdf]

## Summary

### Introduction and goals

Computational complexity aims to determine what is efficiently solvable and what is not in various computational models. This means providing upper and lower bounds on the necessary resources such as time, space, or communication, and establishing connections among the different models.

There are intricate connections between complexity measures in different computational models. For instance, circuit size is closely related to computation time, whereas circuit depth and branching program size are closely related to computation space. Breaking the current barriers of our understanding in any of these models would have major consequences in several of the other models as well.

Investigating the connections between the various computational models and subareas of computational complexity has already led to many exciting results. In recent years several novel techniques have been introduced in computational complexity, resulting in a number of breakthroughs, some of which are still actively investigated. In particular, information-theoretic techniques have led to tremendous progress in our understanding of communication complexity, such as new methods to compress interactive communication, very efficient ways to immunize protocols against corruption of the communication by an adversary, and a better understanding of so-called direct product questions. This progress in turn led to progress in our understanding of the streaming model, in which one needs to process massive amounts of received data without being able to store it. Semi-definite programming, a technique originally used in optimization and in the design of approximation algorithms, has led to a tight and very elegant characterization of quantum query complexity. In the area of hardness of approximation, new approaches to prove the Unique Game Conjecture (which is one of the most central open questions in the area) have been suggested. Finally, a recent breakthrough separation of the class NEXP (nondeterministic exponential time) from the class ACC$^0$ (bounded depth circuits with counting) rests on a new technique that derives a lower bound for a non-uniform model from an upper bound on satisfiability in the uniform setting; this technique opens up a new range of possible connections between uniform and non-uniform models.

The seminar "Computational Complexity of Discrete Problems" has evolved out of the series of seminars entitled "Complexity of Boolean Functions," a topic that has been covered at Dagstuhl on a regular basis since the foundation of this research center. A salient feature of the current research in computational complexity is the integration of ideas from different subareas of computational complexity and from other fields in computer science and mathematics. By organizing a generic seminar on computational complexity we have aimed to attract researchers from those various subareas and foster further fruitful interactions.

### Organization of the meeting

43 researchers from around the world participated in the seminar, including a substantial number of young researchers. Each day from Monday to Thursday, we started with a longer talk surveying recent results in specific areas chosen beforehand. We had the following survey talks:

- Shubhangi Saraf: Recent developments in arithmetic circuits
- Subhash Khot: On the unique games conjecture and the approximation resistance of predicates
- Mark Braverman: Recent progress on interactive error correction: an overview
- Ankur Moitra: Extended formulations and information complexity

Additionally, on Friday we started with a survey on recent progress in algorithms for matrix multiplication, presented by Chris Umans. The survey talks were followed by shorter talks by other participants. Afternoons were reserved for discussions in impromptu groups. In the late afternoon on Monday, Tuesday, and Thursday we had several additional short talks. On Wednesday evening we organized a rump session where everyone could present an open problem or announce a new result. One of the open problems from this session, on the relationship between information cost and communication complexity, presented by Omri Weinstein, was very recently resolved.

### Topics covered by the seminar

The talks of the workshop fit into several subareas of computational complexity. We summarize the talks next. Detailed abstracts of the talks can be found at the end of this report.

### Circuit complexity

One of the goals in circuit complexity is to prove strong lower bounds on the size of circuits computing explicit functions. Even in the case of bounded-depth circuits, the known lower bounds deteriorate quickly with depth. Oded Goldreich discussed approaches to prove strong lower bounds of almost the type $2^{\Omega(n)}$ in such a setting by focusing on certain kinds of multilinear functions.

Another approach to proving lower bounds was presented by Anup Rao, who showed new lower bounds for bounded-depth circuits with arbitrary gates when the fan-in of gates is strictly smaller than n.

Valentine Kabanets considered the interplay between Boolean formulas and the harmonic analysis of the functions they compute. He showed that functions represented by sub-quadratic formulas over the basis AND, OR, and NOT have constrained Fourier coefficients. Among other things, this fact leads to new learning algorithms for such functions.

Shubhangi Saraf reviewed recent progress towards separating Valiant's classes VP and VNP, the arithmetic analogues of P and NP.

In his talk, Eric Allender focused on another aspect of circuit complexity, providing improved upper bounds on the level of the counting hierarchy in which certain problems involving arithmetic circuits lie.

Besides proving lower bounds, several talks also focused on algorithmic aspects of circuits. Kristoffer Arnsfelt Hansen discussed the circuit complexity of several graph problems when the graphs have bounded cut-width, and Swastik Kopparty showed an efficient way of indexing irreducible polynomials over finite fields, which may serve as a useful tool in designing efficient arithmetic circuits.

Amir Yehudayoff studied the growth rate of symmetric polynomials with possible applications in pseudorandomness.

### Communication complexity and its applications

The classical theory of error-correcting codes mainly addresses the question of one-way communication over an unreliable channel. In communication complexity, the main issue is to minimize the amount of communication between two interacting parties whose goal is to evaluate some joint function of their respective inputs. In this scenario the communication goes both ways. Mark Braverman gave a summary of results on error-correcting techniques when the two parties communicate over an unreliable channel.

Pavel Pudlák presented approaches to constructing good error-correcting codes for interactive communication (so-called tree codes) based on properties of certain matrices.

Another popular research topic in communication complexity is information complexity. This topic was discussed by Omri Weinstein. He showed a new technique to interactively estimate the amount of information leaked by the two players about their inputs during a two-party protocol. This might have applications to secure communication.

Hartmut Klauck presented an interplay between quantum and classical communication, and established that in certain settings quantum communication can be replaced by classical messages.

Mike Saks provided a surprisingly simple protocol for a certain class of functions in the number-on-the-forehead multi-party model.

Ankur Moitra presented a survey of recent results on the extended formulation approach to solving hard combinatorial problems. In this context he also successfully applied techniques from communication complexity.

Communication complexity is a major tool in the analysis of data-stream algorithms, which process huge data sets while using only a small amount of memory. David Woodruff presented a surprising fundamental result showing that a large class of streaming algorithms can be simulated using only linear sketches of the data stream. This could simplify the design of data-stream algorithms.

Amit Chakrabarti considered a model for processing large data streams with the help of an untrusted but powerful helper (e.g., a cloud service). He discussed a relationship between this model and Arthur-Merlin communication protocols.

### Inapproximability

When we lack efficient algorithms for various NP-complete problems, we may try to solve them approximately. In some cases even that is hard, as demonstrated by Prahladh Harsha in his talk on the inapproximability of hypergraph coloring.

On the other hand, Johan Håstad presented a new algorithm for finding a satisfying assignment of a CNF formula when all clauses in the formula can simultaneously be satisfied by a majority of their literals. When the formula does not have this property, the problem becomes NP-complete.

Irit Dinur discussed her results on testing whether a given function is a direct product of some function, with applications to parallel repetition, and Eli Ben-Sasson explained his result on constructing linear-size probabilistically checkable proofs (PCPs) that can be checked using $n^\epsilon$ queries.

### Pseudorandomness

The construction of pseudorandom generators for Boolean circuits is currently reasonably well understood. However, in non-Boolean settings, such as multi-output functions or arithmetic circuits, we still lack a good understanding of the problem. Ronen Shaltiel presented pseudorandom generators with optimal seed length for multi-output functions computed by polynomial-size circuits, and Amnon Ta-Shma presented a new construction of hitting-set generators for low-degree polynomials.

A central problem for which we know a very efficient randomized algorithm but no deterministic one is testing whether a polynomial is identically zero. Meena Mahajan considered this problem for polynomials represented by arithmetic formulas that read each variable at most three times, and provided a deterministic algorithm for it.
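The "very efficient randomized algorithm" alluded to above is random evaluation, justified by the Schwartz-Zippel lemma: a nonzero polynomial of total degree d over a field of size p vanishes at a random point with probability at most d/p. A minimal black-box sketch (function names are illustrative, not from the talk):

```python
import random

def is_probably_zero(poly, num_vars, trials=20, prime=2**31 - 1):
    """Randomized polynomial identity test via the Schwartz-Zippel
    lemma: evaluate the black-box polynomial at random points modulo
    a large prime.  A nonzero polynomial of total degree d survives
    one random evaluation with probability at most d/prime, so any
    nonzero evaluation is a definitive witness of nonzeroness."""
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(num_vars)]
        if poly(point) % prime != 0:
            return False  # witness found: definitely nonzero
    return True  # identically zero with high probability

# (x + y)^2 and x^2 + 2xy + y^2 agree as polynomials, so their
# difference is identically zero; xy - 1 is not.
zero_poly = lambda p: (p[0] + p[1])**2 - (p[0]**2 + 2*p[0]*p[1] + p[1]**2)
nonzero_poly = lambda p: p[0]*p[1] - 1
```

Derandomizing this test, i.e., finding a deterministic procedure of comparable efficiency for general arithmetic formulas, remains open; the talk's result achieves it for the restricted read-three model.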

Amir Shpilka presented a new algorithm for the closely related problem of testing whether two polynomials are the same up to a linear transformation of variables.

### Other models

Harry Buhrman presented a new model of computation, catalytic space, in which, in addition to the usual limited work space, we have an essentially unlimited amount of extra space that is, however, already full of data that must be preserved. He exhibited the surprising power of this extra space, which allows one to compute functions that we do not know how to compute using the limited work space alone.
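A toy sketch can illustrate the flavor of the model (this is an illustrative example of computing through a borrowed register, not Buhrman's actual construction): the auxiliary register holds arbitrary unknown data, we temporarily fold it into the computation, and then cancel its contribution so it is restored exactly:

```python
def catalytic_product(a, b, aux, out=0):
    """Toy catalytic-style computation of a*b.  The register 'aux'
    starts with arbitrary contents aux0 that we must hand back
    unchanged; we never copy it, only modify and later undo."""
    aux += a          # borrow the catalytic register
    out += aux * b    # out now holds (aux0 + a) * b
    aux -= a          # restore aux to its original contents aux0
    out -= aux * b    # cancel the unknown aux0 * b term -> out = a*b
    return out, aux
```

Regardless of the initial contents of `aux`, the unknown terms cancel: the output is `a*b` and `aux` is returned exactly as it was found, which is the defining constraint of catalytic computation.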

Matthias Krause discussed the issue of cryptographic authentication by devices with limited resources that cannot evaluate standard cryptographic primitives such as RSA and AES. He proposed solutions for such situations and reported on an actual implementation.

Jaikumar Radhakrishnan considered the bit-probe complexity of a data structure for storing sets (set-membership problem). He presented a very elegant and more efficient solution for this problem.

Thomas Thierauf presented an algorithm to compute the number of perfect matchings in K_5-free graphs. For general graphs this problem is #P-hard.

Till Tantau talked about parallel algorithms in the context of fixed-parameter tractability. He defined the relevant notions and presented parallel algorithms in that setting.

Chris Umans presented an overview of recent progress on matrix multiplication.

### Conclusion

Understanding the computational complexity of various problems is the primary goal of the theory of computing. In the past several years there has been tremendous progress in various areas of complexity, for example in communication complexity, arithmetic circuit complexity, and derandomization. This progress brings us closer to the goal of understanding computation. Yet, as we have seen, new relevant concepts and models keep emerging, e.g., information cost and catalytic computation. Despite all the progress that has been achieved since our previous meeting three years ago, and in light of the new developments, there is a general consensus among the participants of the seminar that there is still a long way ahead of us before we gain a good understanding of the limits of efficient computation and resolve many of the central problems in computational complexity.

We would like to thank the staff at Dagstuhl who -- as usual -- provided marvelous surroundings, making this a successful meeting with ample space for undisturbed interactions between the participants.

**Summary text license**

Creative Commons BY 3.0 Unported license

Anna Gál, Michal Koucký, Oded Regev, and Rüdiger Reischuk

## Dagstuhl-Seminar Series

- 21121: "Computational Complexity of Discrete Problems" (2021)
- 19121: "Computational Complexity of Discrete Problems" (2019)
- 17121: "Computational Complexity of Discrete Problems" (2017)
- 11121: "Computational Complexity of Discrete Problems" (2011)
- 08381: "Computational Complexity of Discrete Problems" (2008)
- 06111: "Complexity of Boolean Functions" (2006)
- 04141: "Complexity of Boolean Functions" (2004)
- 02121: "Complexity of Boolean Functions" (2002)
- 99441: "Complexity of Boolean Functions" (1999)
- 9711: "Complexity of Boolean Functions" (1997)
- 9235: "Complexity and Realization of Boolean Functions" (1992)

## Classification

- Data Structures / Algorithms / Complexity

## Keywords

- Computational complexity
- Discrete problems
- Turing machines
- Circuits
- Pseudorandomness
- Derandomization
- Computational learning
- Communication complexity
- Query complexity
- Hardness of approximation