Dagstuhl-Seminar 14391

Algebra in Computational Complexity

(Sep 21 – Sep 26, 2014)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/14391

Organisatoren
  • Manindra Agrawal (Indian Institute of Technology - Kanpur, IN)
  • Valentine Kabanets (Simon Fraser University - Burnaby, CA)
  • Thomas Thierauf (Hochschule Aalen, DE)
  • Christopher Umans (CalTech - Pasadena, US)

Motivation

At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples.

The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds. The so-called chasm at depth 4, quite recently improved to a chasm at depth 3, suggests that the restricted models now being considered are not so far from ones that would lead to a general result.

There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). Representation theory has emerged as an important tool in three separate lines of work: the Geometric Complexity Theory approach to P vs. NP and circuit lower bounds, the effort to resolve the complexity of matrix multiplication, and a framework for constructing locally testable codes. Coding theory (which has found many applications within computational complexity) has seen several algebraic innovations in recent years, including multiplicity codes, matching vector codes, and new lower bounds.

The seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics and this seminar can play an important role in educating a diverse community about the latest new techniques, spurring further progress.


Summary

The seminar brought together almost 50 researchers covering a wide spectrum of complexity theory. The focus on algebraic methods showed the great importance of such techniques for theoretical computer science. We had 25 talks, most of them lasting about 40 minutes, leaving ample room for discussions. In the following we describe the major topics of discussion in more detail.

Circuit Complexity

Circuit complexity, an area of fundamental importance to complexity theory, was one of the main topics of the seminar. It remains a major challenge to prove strong upper and lower bounds, but the speakers reported impressive progress in various directions.

Or Meir talked about one of the major open problems in complexity theory: proving super-logarithmic lower bounds on the depth of circuits, that is, separating the log-depth circuit class NC^1 from P (polynomial time). Karchmer, Raz, and Wigderson suggested an approach to this problem: their KRW conjecture states that the circuit depth of two functions f and g adds up when we consider the composed function g ∘ f, and they showed that the conjecture implies a separation of NC^1 from P. In his talk, Or Meir presented a natural step in this direction, which lies between what is known and the original conjecture: he showed that an analogue of the conjecture holds for the composition of a function with a universal relation. The main technical tool is information complexity, used to analyze certain communication problems.
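
For reference, the conjecture in its usual formulation (notation added here, not taken from the report): for Boolean functions f and g,

    % KRW conjecture, stated informally: circuit depth is (roughly) additive
    % under composition.
    \[
      \mathsf{D}(g \circ f) \;\approx\; \mathsf{D}(g) + \mathsf{D}(f),
    \]
    % where (g \circ f)(X) = g(f(X_1), ..., f(X_n)) for an n x m Boolean matrix X
    % with rows X_1, ..., X_n, and \mathsf{D}(h) is the minimum depth of a
    % fan-in-2 Boolean circuit computing h.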

A core theme in circuit complexity is depth reduction: very roughly, techniques to reduce the depth of a given circuit without increasing its size too much. The classic work of Valiant, Skyum, Berkowitz, and Rackoff shows that any polynomial-size arithmetic circuit has an equivalent circuit of polynomial size and depth O(log^2 n), where n is the number of input variables. Further impetus was given by Agrawal and Vinay, who pushed the depth reduction to constant depth, thereby establishing the chasm at depth 4: exponential lower bounds for circuits of depth 4 already give such bounds for general circuits. This was further improved by Koiran and by Tavenas.
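
In its commonly quoted form (parameters recalled from the literature and therefore hedged, not taken from the report), the depth-4 chasm reads as follows for an n-variate polynomial f of degree d:

    % Depth reduction to depth 4 (Agrawal-Vinay, refined by Koiran and Tavenas),
    % stated informally for circuits of size poly(n):
    \[
      f \text{ has an arithmetic circuit of size } n^{O(1)}
      \;\Longrightarrow\;
      f \text{ has a homogeneous } \Sigma\Pi\Sigma\Pi \text{ circuit of size }
      n^{O(\sqrt{d})} \text{ with bottom fan-in } O(\sqrt{d}).
    \]
    % Hence an n^{\omega(\sqrt{d})} lower bound against such depth-4 circuits,
    % for an explicit polynomial, would give super-polynomial lower bounds for
    % general arithmetic circuits.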

In his talk, Ramprasad Saptharishi gave a slightly different proof of Tavenas' depth reduction, which allowed him to apply the technique to homogeneous formulas and constant-depth formulas as well.

Chandan Saha presented a very strong result: an exponential lower bound for homogeneous depth-4 circuits that comes close to the chasm barrier. His techniques also yield exponential lower bounds for certain nonhomogeneous depth-3 circuits. Having the parameters so close to the bounds coming from depth reduction makes these results particularly exciting.

Depth reduction is also a crucial ingredient in Pascal Koiran's talk. He presented a new version of the tau-conjecture, concerning Newton polygons of bivariate polynomials. The tau-conjecture was originally stated by Shub and Smale:
the number of integer roots of a univariate polynomial should be polynomially bounded in the size of the smallest straight-line program computing it.
Pascal Koiran proposed a new version of the tau-conjecture in his talk:
when a bivariate polynomial is expressed as a sum of products of sparse polynomials, the number of edges of its Newton polygon is polynomially bounded in the size of such an expression.

If this new conjecture is true, then the permanent polynomial cannot be computed by polynomial-size arithmetic circuits.
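
Written out symbolically (my paraphrase; the precise parameters are hedged), the new conjecture says:

    % Tau-conjecture for Newton polygons: if a bivariate polynomial is written as
    \[
      f(x,y) \;=\; \sum_{i=1}^{k} \prod_{j=1}^{m} f_{i,j}(x,y),
    \]
    % where each f_{i,j} has at most t monomials, then the Newton polygon of f
    % has at most poly(k, m, t) edges.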

Spurred by the depth-reduction results, there has been great recent progress on Polynomial Identity Testing (PIT), in particular for depth-3 and depth-4 circuits and for arithmetic branching programs. The most ambitious goal here is to come up with a hitting-set construction for a specific model. A hitting set is a set of instances such that every non-zero polynomial in the model has a non-root in the set. This solves the PIT problem in the black-box model.
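
As background (not part of the report): the baseline randomized solution to black-box PIT simply evaluates the unknown polynomial at random points, and a hitting set derandomizes exactly this step. A minimal Python sketch, where the black box, field size, and example polynomial are illustrative choices of mine:

    import random

    # Schwartz-Zippel style randomized black-box PIT: over the prime field F_p,
    # a nonzero n-variate polynomial of total degree <= d vanishes at a uniformly
    # random point with probability at most d/p.
    def randomized_pit(black_box, n_vars, degree, p=2**31 - 1, trials=20):
        """Return True if the black-box polynomial appears identically zero mod p."""
        assert 0 < degree < p  # per-trial error probability is at most degree/p
        for _ in range(trials):
            point = [random.randrange(p) for _ in range(n_vars)]
            if black_box(point) % p != 0:
                return False   # found a non-root: the polynomial is certainly nonzero
        return True            # zero at every sample: identically zero w.h.p.

    # Illustrative black box: x0*x3 - x1*x2 (four variables, degree 2).
    poly = lambda x: x[0] * x[3] - x[1] * x[2]
    print(randomized_pit(poly, n_vars=4, degree=2))   # prints False (nonzero)

A hitting set for a circuit class replaces the random points by a fixed, explicitly constructed set of points on which every non-zero polynomial from the class is guaranteed to have a non-root, turning the test above into a deterministic one.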

Rohit Gurjar and Arpita Korwar gave a joint talk on PIT for read-once arithmetic branching programs. They presented a new technique called basis isolating weight assignment. These weight assignments yield a hitting set in quasi-polynomial time.

Michael Forbes considered the question whether the known quasi-polynomial-time hitting-set constructions can be improved to polynomial time. He showed that in the case of depth-3 powering circuits (sums of powers of linear polynomials) one can obtain a hitting set of size poly(s)^{log log s} for circuits of size s, which is pretty close to resolving the black-box identity testing problem for this class in polynomial time.

Swastik Kopparty showed the computational equivalence of factoring multivariate polynomials and PIT. For both problems we have efficient randomized algorithms, and the question whether these algorithms can be derandomized is central in arithmetic complexity. Swastik established that derandomizing one problem is equivalent to derandomizing the other.

Valiant introduced the arithmetic analogues of the classes P and NP. Very roughly, the class VP contains all multivariate polynomials that can be computed (non-uniformly) by polynomial-size arithmetic circuits, and the class VNP contains all multivariate polynomials that have coefficients computable by VP-circuits. The question whether VP is different from VNP plays the role of the P vs. NP question in algebraic complexity theory. Valiant showed that the permanent is complete for VNP, but for VP only artificially constructed polynomials were known to be complete. In her talk, Meena Mahajan described several natural polynomials complete for VP, based on the notion of graph homomorphism polynomials.
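
For concreteness, the standard definitions (not specific to the talk): the permanent and the exponential-sum characterization of VNP read

    % The permanent, Valiant's VNP-complete polynomial:
    \[
      \operatorname{per}_n(X) \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} x_{i,\sigma(i)} .
    \]
    % A family (f_n) of polynomially bounded degree lies in VNP iff it is an
    % exponential sum over a VP family (g_n):
    \[
      f_n(x) \;=\; \sum_{e \in \{0,1\}^{q(n)}} g_n(x, e),
      \qquad q(n) \text{ polynomially bounded.}
    \]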

Eric Allender defined a class called ΛP which is in some sense dual to VP. Over finite fields, VP can be characterized by SAC^1, the class of logarithmic-depth, polynomial-size semi-unbounded fan-in circuits (with bounded fan-in multiplication gates and unbounded fan-in addition gates). Eric defined the dual class ΛP in the same way, but with unbounded fan-in multiplication gates and bounded fan-in addition gates. He showed new characterizations of the complexity classes ACC^1 and TC^1 based on ΛP.

Klaus-Jörn Lange defined a completeness notion for families of languages, called dense completeness. He showed that the context-free languages are densely complete in SAC^1 via many-one AC^0 reductions.

Complexity

Ryan Williams once again demonstrated the fruitful interplay between algorithms and complexity. In his famous ACC paper, he showed how to use fast algorithms for circuit satisfiability to prove lower bounds against the class ACC. In his present talk, Ryan reversed the direction and showed how to exploit techniques from complexity theory to obtain faster algorithms for the all-pairs shortest paths problem (APSP). He improved the previously best running time of n^3/log^2 n to n^3/2^{Ω(√(log n))}. The big question here is whether one can improve the running time to n^{3-ε} for some ε > 0. A crucial role in the new algorithm is played by the polynomial method of Razborov and Smolensky, originally conceived for proving low-depth circuit lower bounds.

Michal Koucký talked about a model of computation he calls catalytic computation. In this model, a machine has only limited memory available, but additionally has access to an almost unlimited amount of disk space, the catalytic memory. This disk, however, is already full of data. The machine has read-write access to the disk and may modify its contents, but at the end of the computation the content of the catalytic memory has to be back in its original state. The question is whether the catalytic memory is of any use at all. Michal showed that a logspace-bounded machine with catalytic memory can do all of nondeterministic logspace. Hence, surprisingly, the catalytic memory really helps, unless L = NL.
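
A toy illustration (my own, not the construction from the talk) of why a full memory can still be useful: register updates of the form "r += v" are invertible, so a program can route intermediate values through full registers and undo every change before it halts. A minimal Python sketch:

    # Toy illustration of the catalytic idea (not the actual L vs. NL result):
    # we add x*y into a clean output register while borrowing a catalytic
    # register whose unknown initial content must be restored at the end.
    def add_product_using_catalytic(out, aux, x, y):
        """Return (out + x*y, aux); aux is borrowed during the computation and restored."""
        start = aux            # kept only for the assertion; the model never copies it
        aux += x               # (1) borrow aux; it now holds start + x
        out += aux * y         # (2) out gains (start + x) * y
        aux -= x               # (3) undo step (1); aux holds start again
        out -= aux * y         # (4) out loses start * y; the net gain is x*y
        assert aux == start    # catalytic memory restored
        return out, aux

    print(add_product_using_catalytic(out=0, aux=12345, x=6, y=7))  # (42, 12345)

The actual construction is, of course, far more involved; the point here is only that every update to the full memory can be undone, which is what makes it usable at all.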

Amnon Ta-Shma talked about the problem of approximating the eigenvalues of stochastic Hermitian matrices. In an earlier paper he had shown that this is possible in probabilistic logspace in the quantum model of computation, i.e., in BQL. In this talk, Amnon asked whether this is also possible in probabilistic logspace in the classical world, i.e., in BPL. He showed how to achieve approximations with constant accuracy. To bring the problem into BPL, one would have to approximate the eigenvalues with polynomially small accuracy; this remains open for now.

Venkatesan Guruswami considered the following promise version of the satisfiability problem: given a k-SAT instance with the promise that there is an assignment satisfying at least t out of the k literals in each clause, can one efficiently find a satisfying assignment? Because 3-SAT is NP-hard, the promise problem is NP-hard for t ≤ k/3. On the other hand, 2-SAT is efficiently solvable, and extensions of the 2-SAT algorithm show that the promise problem is efficiently solvable for t ≥ k/2. Venkatesan showed a sharp borderline: the promise problem is NP-hard for t < k/2. The proof uses part of the PCP machinery.

Communication Complexity

Amir Yehudayoff talked about communication complexity in the number-on-the-forehead model. He considered the disjointness problem: there are k players, each holding a set of numbers from [n]. A player can see the sets of all other players, but not his own. The task of the players is to determine whether there is a number common to all sets. Amir showed a lower bound of order n/4^k on the deterministic communication complexity. This is quite remarkable, since it nearly matches the known upper bound, which is of order k^2 n/2^k.

Arkadev Chattopadhyay talked about a communication model in which the inputs are distributed among the vertices of an undirected graph. The vertices correspond to processors, and each processor can send messages only to its neighbors in the graph. Arkadev showed lower bounds on the communication cost of computing certain functions in this model.

Rahul Santhanam considered a communication model called a compression game. There are two players, Alice and Bob. Alice receives the whole input x and is computationally bounded, in this case by AC^0[p] for some prime p. Bob has no information about x and is computationally unbounded. The communication cost of a function f is the number of bits Alice sends to Bob until they agree on the value f(x). Rahul showed a lower bound on the communication complexity of the Mod_q function, for any prime q ≠ p.

Coding Theory

Error-correcting codes, particularly those constructed from polynomials, lie at the heart of many significant results in computational complexity. Usually, error-correcting codes are studied with respect to the Hamming distance; another model is that of random errors. In his talk, Amir Shpilka considered the behaviour of Reed-Muller codes in the Shannon model of random errors. He showed that Reed-Muller codes of either low or high degree achieve (with high probability) the capacity of the binary erasure channel.

David Zuckerman talked on the relatively new concept of non-malleable codes which was introduced by Dziembowski, Pietrzak, and Wichs in 2010. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. Non-malleable codes provide an elegant algorithmic solution to the task of protecting hardware functionalities against “tampering attacks”. David showed how to construct efficient non-malleable codes in the so-called C-split-state model that achieve constant rate and exponentially small error.
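
For reference, the definition of Dziembowski, Pietrzak, and Wichs, paraphrased from memory (so the details are hedged): a coding scheme (Enc, Dec) is non-malleable with respect to a family F of tampering functions if

    % For every f in F there is a distribution D_f over messages plus a special
    % symbol "same", independent of the encoded message m, such that
    \[
      \mathrm{Dec}\bigl(f(\mathrm{Enc}(m))\bigr)
      \;\approx\;
      \begin{cases}
        m, & d = \mathsf{same},\\
        d, & \text{otherwise},
      \end{cases}
      \qquad d \sim D_f,
    \]
    % where \approx denotes statistical closeness.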

Game Theory

Steve Fenner considered the following two-player game on a finite partially ordered set (poset) S: the players take turns picking an element x of S, which removes x together with all y ≥ x from S. The player who empties the poset wins. Daniel Grier showed that determining the winner of a poset game is PSPACE-complete. Steve considered the black-white version of the game, where each player and each element of S is assigned a color, black or white, and each player is only allowed to remove elements of their own color. He showed that this black-white version of the poset game is also PSPACE-complete. This is the first PSPACE-hardness result known for a purely numerical game. Another interesting result was that the game NimG, a generalization of both Nim and Geography, is polynomial-time solvable when restricted to undirected bipartite graphs, whereas NimG is known to be PSPACE-complete for general graphs, both directed and undirected.
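
To make the rules concrete, here is a small brute-force solver (illustrative only; the problem is PSPACE-complete, so this is exponential) that decides which player wins a poset game; the example poset is made up:

    from functools import lru_cache

    # Brute-force poset-game solver: a position is the frozenset of remaining
    # elements; a move picks x and removes every y >= x (including x itself);
    # a player who faces the empty poset has no move and loses.
    def first_player_wins(elements, leq):
        """leq(a, b) must return True iff a <= b in the partial order."""
        @lru_cache(maxsize=None)
        def wins(position):
            for x in position:
                rest = frozenset(y for y in position if not leq(x, y))
                if not wins(rest):
                    return True    # this move leaves the opponent in a losing position
            return False           # every move (or no move at all) loses
        return wins(frozenset(elements))

    # Hypothetical example: divisors of 12 greater than 1, ordered by divisibility.
    print(first_player_wins([2, 3, 4, 6, 12], lambda a, b: b % a == 0))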

Bill Gasarch talked about a variant of classical Nim where there is a single pile of stones and a given set {a_1, a_2, ..., a_k} of numbers. A move consists of choosing a number a_i from the set and removing a_i stones from the pile. The first player who cannot move loses the game. This game has already been well studied. Bill considered an extension where each player starts out with a certain number of dollars and has to spend a_i dollars to remove a_i stones. He presented some surprising results on the winning conditions for the extended game.
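
The same brute-force approach adapts to the money variant; the rule that a player who cannot afford any legal move loses is my reading of the description above, and the concrete instance is made up:

    from functools import lru_cache

    # Brute-force solver for single-pile Nim with budgets: removing a stones
    # costs a dollars, and a player without an affordable legal move loses.
    def first_player_wins_with_money(stones, my_dollars, their_dollars, moves):
        @lru_cache(maxsize=None)
        def wins(pile, mine, theirs):
            for a in moves:
                if a <= pile and a <= mine and not wins(pile - a, theirs, mine - a):
                    return True    # an affordable move puts the opponent in a losing position
            return False
        return wins(stones, my_dollars, their_dollars)

    # Hypothetical instance: moves {1, 3, 4}, 10 stones, both players start with 6 dollars.
    print(first_player_wins_with_money(10, 6, 6, moves=(1, 3, 4)))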

Cryptography

Farid Ablayev generalized classical universal hashing to the quantum setting. He defined the concept of a quantum hash generator and offered a design that allows one to build a large number of different quantum hash functions. One of the important points here is to use only few quantum bits. Farid proved that his construction is optimal with respect to the number of qubits needed.

Matthias Krause talked about approaches for designing authentication protocols for ultra-lightweight devices such as RFID chips. He proposed a new approach based on key stream generators as the main building block.

Conclusion

As is evident from the list above, the talks ranged over a broad assortment of subjects with the underlying theme of using algebraic and combinatorial techniques. It was a very fruitful meeting and has hopefully initiated new directions in research. Several participants specifically mentioned that they appreciated the particular focus on a common class of techniques (rather than end results) as a unifying theme of the workshop. We look forward to our next meeting!

Copyright Manindra Agrawal, Valentine Kabanets, Thomas Thierauf, and Christopher Umans

Participants
  • Farid Ablayev (Kazan State University, RU) [dblp]
  • Manindra Agrawal (Indian Institute of Technology - Kanpur, IN) [dblp]
  • Eric Allender (Rutgers University - Piscataway, US) [dblp]
  • Vikraman Arvind (The Institute of Mathematical Sciences, IN) [dblp]
  • Markus Bläser (Universität des Saarlandes, DE) [dblp]
  • Andrej Bogdanov (The Chinese University of Hong Kong, HK) [dblp]
  • Harry Buhrman (CWI - Amsterdam, NL) [dblp]
  • Sourav Chakraborty (Chennai Mathematical Institute, IN) [dblp]
  • Arkadev Chattopadhyay (TIFR Mumbai, IN) [dblp]
  • Stephen A. Fenner (University of South Carolina - Columbia, US) [dblp]
  • Michael A. Forbes (University of California - Berkeley, US) [dblp]
  • Lance Fortnow (Georgia Institute of Technology - Atlanta, US) [dblp]
  • Anna Gál (University of Texas - Austin, US) [dblp]
  • William Gasarch (University of Maryland - College Park, US) [dblp]
  • Frederic Green (Clark University - Worcester, US) [dblp]
  • Rohit Gurjar (Indian Institute of Technology - Kanpur, IN) [dblp]
  • Venkatesan Guruswami (Carnegie Mellon University, US) [dblp]
  • Valentine Kabanets (Simon Fraser University - Burnaby, CA) [dblp]
  • Marek Karpinski (Universität Bonn, DE) [dblp]
  • Neeraj Kayal (Microsoft Research India - Bangalore, IN) [dblp]
  • Pascal Koiran (ENS - Lyon, FR) [dblp]
  • Swastik Kopparty (Rutgers University - Piscataway, US) [dblp]
  • Arpita Korwar (Indian Institute of Technology - Kanpur, IN) [dblp]
  • Michal Koucký (Charles University - Prague, CZ) [dblp]
  • Matthias Krause (Universität Mannheim, DE) [dblp]
  • Klaus-Jörn Lange (Universität Tübingen, DE) [dblp]
  • Sophie Laplante (University of Paris VII, FR) [dblp]
  • Meena Mahajan (The Institute of Mathematical Sciences, IN) [dblp]
  • Or Meir (Institute of Advanced Study - Princeton, US) [dblp]
  • Peter Bro Miltersen (Aarhus University, DK) [dblp]
  • Natacha Portier (ENS - Lyon, FR) [dblp]
  • Atri Rudra (SUNY - Buffalo, US) [dblp]
  • Chandan Saha (Indian Institute of Science - Bangalore, IN) [dblp]
  • Rahul Santhanam (University of Edinburgh, GB) [dblp]
  • Ramprasad Saptharishi (Microsoft Research India - Bangalore, IN) [dblp]
  • Uwe Schöning (Universität Ulm, DE) [dblp]
  • Ronen Shaltiel (University of Haifa, IL) [dblp]
  • Amir Shpilka (Technion - Haifa, IL) [dblp]
  • Florian Speelman (CWI - Amsterdam, NL) [dblp]
  • Amnon Ta-Shma (Tel Aviv University, IL) [dblp]
  • Thomas Thierauf (Hochschule Aalen, DE) [dblp]
  • Jacobo Torán (Universität Ulm, DE) [dblp]
  • Christopher Umans (CalTech - Pasadena, US) [dblp]
  • Nikolay K. Vereshchagin (Moscow State University, RU) [dblp]
  • Ryan Williams (Stanford University, US) [dblp]
  • Amir Yehudayoff (Technion - Haifa, IL) [dblp]
  • David Zuckerman (University of Texas - Austin, US) [dblp]

Related Seminars
  • Dagstuhl-Seminar 02421: Algebraic Methods in Quantum and Classical Models of Computation (2002-10-13 - 2002-10-18) (Details)
  • Dagstuhl-Seminar 04421: Algebraic Methods in Computational Complexity (2004-10-10 - 2004-10-15) (Details)
  • Dagstuhl-Seminar 07411: Algebraic Methods in Computational Complexity (2007-10-07 - 2007-10-12) (Details)
  • Dagstuhl-Seminar 09421: Algebraic Methods in Computational Complexity (2009-10-11 - 2009-10-16) (Details)
  • Dagstuhl-Seminar 12421: Algebraic and Combinatorial Methods in Computational Complexity (2012-10-14 - 2012-10-19) (Details)
  • Dagstuhl-Seminar 16411: Algebraic Methods in Computational Complexity (2016-10-09 - 2016-10-14) (Details)
  • Dagstuhl-Seminar 18391: Algebraic Methods in Computational Complexity (2018-09-23 - 2018-09-28) (Details)
  • Dagstuhl-Seminar 22371: Algebraic and Analytic Methods in Computational Complexity (2022-09-11 - 2022-09-16) (Details)
  • Dagstuhl-Seminar 24381: Algebraic and Analytic Methods in Computational Complexity (2024-09-15 - 2024-09-20) (Details)

Classification
  • data structures / algorithms / complexity

Keywords
  • Computational Complexity
  • Algebra
  • Circuits
  • Lower Bounds
  • Coding Theory