https://www.dagstuhl.de/18391

September 23–28, 2018, Dagstuhl Seminar 18391

Algebraic Methods in Computational Complexity

Organizers

Markus Bläser (Universität des Saarlandes, DE)
Valentine Kabanets (Simon Fraser University – Burnaby, CA)
Jacobo Torán (Universität Ulm, DE)
Christopher Umans (Caltech – Pasadena, US)

Documents

Dagstuhl Report, Volume 8, Issue 9

Summary

The seminar brought together more than 40 researchers covering a wide spectrum of complexity theory. The focus on algebraic methods showed the great importance of such techniques for theoretical computer science. We had 24 talks, most of them lasting about 45 minutes, leaving ample room for discussions. We also had a much appreciated rump session on Tuesday evening in which Antonina Kolokolova, Bill Gasarch, Lance Fortnow, Chandan Saha, William Hoza, Neeraj Kayal, and Arpita Korwar presented open questions. In the following we describe the major topics of discussion in more detail.

Circuit Complexity

This is an area of fundamental importance to complexity theory. Circuits, studied from many different perspectives, were one of the main topics of the seminar. Eric Allender gave an overview of the Minimum Circuit Size Problem (MCSP): given the truth table of a Boolean function, what is the size of the smallest circuit computing it? He presented interesting results showing that certain low complexity classes cannot be reduced to the problem of computing superlinear approximations of circuit size.
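
As a concrete (and purely illustrative) reference point, the following Python sketch solves MCSP by brute force for very small n, representing truth tables as bitmask integers and searching over AND/OR/NOT circuits by gate count. No efficient algorithm is known, and none is suggested here; the helper names are ours.

    def variable_tables(n):
        """Truth tables (bitmask ints over the 2^n inputs) of x_0, ..., x_{n-1}."""
        tabs = []
        for i in range(n):
            t = 0
            for j in range(1 << n):
                if (j >> i) & 1:
                    t |= 1 << j
            tabs.append(t)
        return tabs

    def mcsp(target, n, max_gates=8):
        """Smallest number of AND/OR/NOT gates computing `target`, or None."""
        mask = (1 << (1 << n)) - 1
        by_cost = {0: set(variable_tables(n))}
        best = dict.fromkeys(by_cost[0], 0)
        if target in best:
            return 0
        for g in range(1, max_gates + 1):
            new = {~t & mask for t in by_cost[g - 1]}          # NOT gates
            for g1 in range(g):                                 # AND / OR gates
                for a in by_cost[g1]:
                    for b in by_cost[g - 1 - g1]:
                        new.add(a & b)
                        new.add(a | b)
            by_cost[g] = {t for t in new if t not in best}
            best.update(dict.fromkeys(by_cost[g], g))
            if target in best:
                return g
        return None

    print(mcsp(0b0110, 2))   # XOR of x0 and x1 needs 4 gates in this basis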

Arithmetic circuits and formulas form a computation model that uses + and × as operations, computing polynomials rather than Boolean functions. Nutan Limaye presented a depth hierarchy theorem for this model, showing that there is a polynomial computed by a multilinear formula of depth D+1 and polynomial size such that any multilinear formula of depth D computing it must have exponential size.
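
To fix the model, here is a small illustrative sympy sketch (not from the talk): a formula is a tree of + and × gates over variables and constants, and it is multilinear when no variable appears with degree above one.

    import operator
    from functools import reduce
    import sympy as sp

    def eval_formula(node, xs):
        """Evaluate a formula given as nested ('+'|'*'|'var'|'const', ...) tuples."""
        op, args = node[0], node[1:]
        if op == 'var':
            return xs[args[0]]
        if op == 'const':
            return sp.Integer(args[0])
        vals = [eval_formula(c, xs) for c in args]
        return sum(vals) if op == '+' else reduce(operator.mul, vals)

    def is_multilinear(poly, xs):
        p = sp.Poly(sp.expand(poly), *xs)
        return all(max(m) <= 1 for m in p.monoms())

    xs = sp.symbols('x0 x1 x2')
    # a depth-2 multilinear formula: (x0 + x1) * (x2 + 1)
    f = ('*', ('+', ('var', 0), ('var', 1)), ('+', ('var', 2), ('const', 1)))
    p = eval_formula(f, xs)
    print(sp.expand(p), is_multilinear(p, xs))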

Chandan Saha considered a further restriction: depth-three circuits C computing a polynomial f = T_1 + T_2 + ... + T_s, where each T_i is a product of d linear forms in n variables. He presented a randomized algorithm that reconstructs non-degenerate homogeneous depth-three circuits of this form, for the case n > (3d)^2, given black-box access to f. The algorithm runs in time polynomial in n, d, and s.
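
The following toy sketch (illustrative only; the parameters and names are ours) just pins down the circuit class and what black-box access means: the reconstruction algorithm may query f at points of its choice and must recover the linear forms from such queries alone.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, s = 8, 3, 2      # toy sizes; the reconstruction result needs n > (3d)^2

    # term i is a product of d homogeneous linear forms, each a weight vector w
    forms = [[rng.integers(-3, 4, n) for _ in range(d)] for _ in range(s)]

    def f(x):
        """Black-box access: only evaluations of f at chosen points are allowed."""
        value = 0
        for term in forms:
            prod = 1
            for w in term:
                prod *= int(w @ x)
            value += prod
        return value

    print(f(rng.integers(-3, 4, n)))   # one black-box query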

Meena Mahajan considered depth-2 circuits of polynomial size built from linear threshold functions. She surveyed the landscape below these circuits and presented a new result concerning decision lists.

Algebraic Complexity

Several presentations discussed the complexity of problems over algebraic structures.

In his talk, Nitin Saxena considered the problem of testing whether a set F of polynomials, given as algebraic circuits, is algebraically dependent. He showed that this problem lies in AM ∩ coAM, solving an open question from 2007.
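
Saxena's result concerns circuits over finite fields, where small characteristic is the difficult case. Over fields of characteristic zero, the classical Jacobian criterion already gives an efficient randomized test: f_1, ..., f_k are algebraically dependent if and only if their Jacobian matrix has rank less than k over the rational function field. A small sympy illustration of the criterion:

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    # f3 = f1^2 - 4*f2, so the three polynomials are algebraically dependent
    fs = [x + y, x * y, (x + y)**2 - 4 * x * y]
    J = sp.Matrix([[sp.diff(f, v) for v in (x, y, z)] for f in fs])
    print(J.rank() < len(fs))   # True: rank 2 < 3 certifies a dependence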

V. Arvind presented problems related to the minimum-weight codeword problem and to the existence of non-trivial automorphisms moving few vertices in graphs or hypergraphs. He discussed the parameterized complexity of these and related algebraic problems.

Josh Alman gave an interesting talk on Matrix Multiplication (MM). He surveyed the two main approaches to MM algorithms, the laser method of Strassen and the group-theoretic approach of Cohn and Umans, and defined a generalization that subsumes both. He then explained how to obtain lower bounds against MM algorithms designed within these frameworks.
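
For context (classical material, not the talk's contribution), Strassen's algorithm is the starting point of the laser method: it multiplies 2x2 block matrices with 7 recursive multiplications instead of 8, giving the O(n^2.81) bound. A standard sketch, assuming the dimension is a power of 2:

    import numpy as np

    def strassen(A, B):
        n = A.shape[0]
        if n == 1:
            return A * B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen(A11 + A22, B11 + B22)   # the 7 products
        M2 = strassen(A21 + A22, B11)
        M3 = strassen(A11, B12 - B22)
        M4 = strassen(A22, B21 - B11)
        M5 = strassen(A11 + A12, B22)
        M6 = strassen(A21 - A11, B11 + B12)
        M7 = strassen(A12 - A22, B21 + B22)
        C = np.empty_like(A)
        C[:h, :h] = M1 + M4 - M5 + M7
        C[:h, h:] = M3 + M5
        C[h:, :h] = M2 + M4
        C[h:, h:] = M1 - M2 + M3 + M6
        return C

    A = np.random.randint(-5, 6, (4, 4))
    B = np.random.randint(-5, 6, (4, 4))
    assert (strassen(A, B) == A @ B).all()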

Rohit Gurjar studied the class of matrices A for which the lattice L(A), formed by all integral vectors in the null-space of A, has only polynomially many near-shortest vectors. He proved that this is the case when A is totally unimodular (all sub-determinants are 0, +1, or -1). As a consequence, he obtained a deterministic polynomial identity testing (PIT) algorithm for any polynomial of the form det(∑_i x_i A_i) where the matrices A_i have rank 1.
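
For comparison, the classical randomized identity test for this family simply evaluates the determinant at random points (Schwartz-Zippel); Gurjar's contribution is to derandomize this in the rank-1 case. A sketch of the randomized baseline only:

    import numpy as np
    import sympy as sp

    rng = np.random.default_rng(1)

    def random_rank1(n):
        u = rng.integers(-3, 4, n)
        v = rng.integers(-3, 4, n)
        return np.outer(u, v)

    def pit_random(As, trials=20, R=10**6):
        """Schwartz-Zippel: is det(sum_i x_i A_i) the zero polynomial?"""
        for _ in range(trials):
            xs = rng.integers(1, R, len(As))
            M = sum(int(x) * A for x, A in zip(xs, As))
            if sp.Matrix(M.tolist()).det() != 0:   # exact integer determinant
                return False   # a non-root witnesses that the polynomial is nonzero
        return True            # zero at all random points: almost surely zero

    As = [random_rank1(4) for _ in range(6)]
    print(pit_random(As))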

Pseudo-Randomness and Derandomization

Derandomization is an area with tight connections between lower bounds and algorithms: strong enough circuit lower bounds can be used to construct pseudorandom generators, which in turn can be used to deterministically simulate randomized algorithms. A central question in derandomization is whether randomized logspace (RL) equals deterministic logspace (L). To show that RL = L, it suffices to construct explicit pseudorandom generators that fool polynomial-size read-once (oblivious) branching programs (roBPs). There were two talks related to this question. Michael Forbes presented a method to obtain an explicit PRG with seed length O(log^3 n) for polynomial-size roBPs reading their bits in an unknown order. William Hoza gave an explicit hitting set generator for read-once branching programs with known variable order. As a corollary of this construction, every RL algorithm that uses r random bits can be simulated by an NL algorithm that uses only O(r/log^c n) nondeterministic bits, where c is an arbitrarily large constant. Another consequence is that any RL algorithm with small success probability ε can be simulated deterministically in space O(log^{3/2} n + log n · log log(1/ε)).
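
As a toy illustration of the objects involved (the generator below is deliberately naive and is not one of the talks' constructions), a read-once branching program reads each bit once while moving through layers of bounded width, and a generator fools it if the acceptance probability under the generator's outputs is close to the uniform one:

    from itertools import product

    def run_bp(layers, bits, accept):
        state = 0
        for layer, b in zip(layers, bits):
            state = layer[state][b]
        return state in accept

    def accept_prob(layers, accept, strings):
        return sum(run_bp(layers, s, accept) for s in strings) / len(strings)

    # width-2 roBP computing the parity of 4 bits (state flips on a 1-bit)
    layers = [{0: {0: 0, 1: 1}, 1: {0: 1, 1: 0}}] * 4
    accept = {1}
    uniform = list(product((0, 1), repeat=4))
    # toy generator: stretch a 2-bit seed (a, b) to (a, b, a, a ^ b)
    G = [(a, b, a, a ^ b) for a, b in product((0, 1), repeat=2)]
    print(accept_prob(layers, accept, uniform),  # 0.5 under uniform bits
          accept_prob(layers, accept, G))        # also 0.5: this toy G happens
                                                 # to fool parity exactly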

A hitting set for a model of algebraic computation is a set of points such that every non-zero polynomial computable in the model has a non-root in the set; an explicit hitting set solves the Polynomial Identity Testing (PIT) problem for that model. Ramprasad Saptharishi showed that by barely improving on the trivial hitting set of size (s+1)^n, even for n-variate, degree-s, size-s algebraic circuits, one could get an almost complete derandomization of PIT.
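
The trivial hitting set referred to is the grid {0, 1, ..., s}^n: a nonzero polynomial of individual degree at most s in each variable cannot vanish on the whole grid. A brute-force sketch of PIT from this hitting set:

    from itertools import product

    def hits(black_box_poly, n, s):
        """Search the grid {0, ..., s}^n for a non-root of the black box."""
        for point in product(range(s + 1), repeat=n):
            if black_box_poly(point) != 0:
                return point      # witness that the polynomial is nonzero
        return None               # polynomial is identically zero

    # example: p(x, y) = x*y - y is nonzero, with individual degrees <= 1
    print(hits(lambda p: p[0] * p[1] - p[1], n=2, s=1))   # e.g. (0, 1)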

In a second talk, William Hoza discussed the possibility of derandomizing an algorithm using randomness from the input itself. For a language L with a bounded-error randomized algorithm running in space S and time n · poly(S), he gave a randomized algorithm for L with the same time and space resources that uses only O(S) random bits; the algorithm has low failure probability on all but a negligible fraction of inputs of each length.

Andrej Bogdanov considered the problem of extracting true randomness from a set of biased dice (Santha-Vazirani sources). He presented a recent result completely classifying the non-trivial randomness sources of this type into: non-extractable ones, those extractable from polynomially many samples, and those extractable from logarithmically many samples (in the inverse of the error).
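
As a toy illustration (the extractor below is one naive candidate, not the classification from the talk, and the dice here are i.i.d. rather than adversarial), one can sample a biased die and watch the bias of a candidate extractor decay with the number of samples:

    import random

    def biased_die(faces=6, delta=0.1):
        """A 6-sided die whose face 0 is overweighted by delta."""
        p = [1 / faces + delta] + [(1 - 1 / faces - delta) / (faces - 1)] * (faces - 1)
        return random.choices(range(faces), weights=p)[0]

    def extract(k):
        """Candidate extractor: parity of the sum of k die rolls."""
        return sum(biased_die() for _ in range(k)) % 2

    for k in (1, 4, 16, 64):
        est = sum(extract(k) for _ in range(20000)) / 20000
        print(k, round(abs(est - 0.5), 3))   # bias shrinks as k grows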

Coding Theory

Error-correcting codes and other kinds of codes, particularly those constructed from polynomials, such as Reed-Solomon and Reed-Muller codes, lie at the heart of many significant results in computational complexity. This is an area where the interplay between different parts of complexity theory, such as the analysis of algebraic structures and derandomization, becomes especially fruitful.

Greatly improving previously known constructions for odd-size alphabets, Michal Koucky presented a construction of quasi-Gray codes of dimension n and length 3^n over the ternary alphabet {0,1,2}, with worst-case read complexity O(log n) and write complexity 2. This generalizes to arbitrary odd-size alphabets. The results were obtained via a novel application of algebraic tools together with the principles of catalytic computation.
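
For contrast (classical material, not the construction from the talk), the binary reflected Gray code changes one position per step, so its write complexity is 1, but computing the successor may need to read all n positions; the point of the talk is achieving O(log n) reads over odd alphabets:

    def gray(n):
        """Yield the 2^n words of the binary reflected Gray code."""
        for i in range(1 << n):
            yield i ^ (i >> 1)

    for w in gray(3):
        print(format(w, '03b'))   # consecutive words differ in one bit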

Noga Ron-Zewi presented a very recent result showing that folded Reed-Solomon codes achieve list-decoding capacity with constant list sizes, independent of the block length. She explained that multiplicity codes exhibit similar behavior, and used this to obtain capacity-achieving locally list-decodable codes with query complexity significantly lower than in previous constructions.
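
To fix notation, a sketch of folded Reed-Solomon encoding over a small prime field (the parameters are chosen arbitrarily): the message is a low-degree polynomial, a codeword is its list of evaluations, and folding groups m consecutive evaluations along powers of a generator into one symbol. The list-decoding algorithms themselves are well beyond a few lines.

    p, g, m = 97, 5, 2          # prime field F_97, generator g of F_97^*, fold m

    def encode_folded_rs(msg):
        """msg: coefficients of a polynomial of degree < len(msg) over F_p."""
        evals = [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
                 for x in (pow(g, j, p) for j in range(p - 1))]
        # fold m consecutive evaluations into a single symbol
        return [tuple(evals[j:j + m]) for j in range(0, p - 1, m)]

    print(encode_folded_rs([3, 1, 4])[:4])   # first few folded symbols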

In the first of his two talks, Amnon Ta-Shma presented binary error-correcting codes with relative distance (1-ε)/2 and relative rate ε^{2+o(1)}; previous explicit constructions had rate about ε^3. The main tool in the construction is parity samplers, and he explained how to obtain better explicit parity samplers using a variant of the zig-zag product.
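
A toy version of the idea behind distance amplification with parity samplers (with random subsets standing in for the explicit expander-based samplers of the talk): each output bit is the parity of a small subset of input bits, which pushes moderate relative weight toward 1/2.

    import random

    random.seed(0)
    n_in, n_out, t = 64, 256, 7       # each output bit is a t-wise parity
    subsets = [random.sample(range(n_in), t) for _ in range(n_out)]

    def amplify(word):
        return [sum(word[i] for i in S) % 2 for S in subsets]

    word = [0] * n_in
    for i in random.sample(range(n_in), 8):   # relative weight 1/8
        word[i] = 1
    print(sum(amplify(word)) / n_out)         # weight now close to 1/2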

In his second talk, Amnon talked about (1-τ, L) erasure list-decodable codes. He presented a recent work in which he constructed, for the first time, an explicit binary (1-τ, L) erasure list-decodable code having rate τ^{1+γ} (for any constant γ > 0 and τ small enough) and list size poly(log 1/τ), exhibiting an explicit non-linear code that provably beats the best possible linear one. The main ingredient in his construction is a new (and almost optimal) unbalanced two-source extractor.

Quantum Complexity

Complexity issues arising in the context of quantum computation have been an important area of complexity theory for several decades. In this workshop we had one talk on the topic. Sevag Gharibian talked about quantum versions of the classical k-SAT problem, in particular the problem of computing satisfying assignments to k-QSAT instances that have a "matching" or "dimer covering"; this is an NP problem whose decision variant is trivial but whose search complexity remains open. He presented a parameterized algorithm for a non-trivial class of k-QSAT instances, which yields exponential speedups over brute-force methods.
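
A brute-force sketch for tiny instances (illustrative only; for simplicity the projectors here act on adjacent qubit pairs): a k-QSAT instance is satisfiable iff the sum of the lifted constraint projectors has a nontrivial nullspace, which costs 2^n-dimensional linear algebra, exactly the brute force that parameterized algorithms aim to beat.

    import numpy as np

    def lift(P, i, n):
        """Embed a 4x4 projector acting on adjacent qubits (i, i+1) into n qubits."""
        return np.kron(np.kron(np.eye(2**i), P), np.eye(2**(n - i - 2)))

    def qsat_satisfiable(projectors, n):
        """projectors: list of (P, i) with P a 4x4 projector on qubits i, i+1."""
        H = sum(lift(P, i, n) for P, i in projectors)
        return np.linalg.eigvalsh(H)[0] < 1e-9   # H is PSD; a 0-eigenvector
                                                 # is a satisfying state

    # singlet projector onto (|01> - |10>)/sqrt(2) on both edges of a 3-qubit path
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
    P = np.outer(psi, psi)
    print(qsat_satisfiable([(P, 0), (P, 1)], n=3))   # True: e.g. |000> works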

Conclusion

As is evident from the list above, the talks ranged over a broad assortment of subjects with the underlying theme of using algebraic and combinatorial techniques. It was a very fruitful meeting and has hopefully initiated new directions in research. Several participants specifically mentioned that they appreciated the particular focus on a common class of techniques (rather than end results) as a unifying theme of the workshop. We look forward to our next meeting!

Summary text license
  Creative Commons BY 3.0 Unported license
  Markus Bläser, Valentine Kabanets, Jacobo Torán, and Christopher Umans

Classification

  • Data Structures / Algorithms / Complexity

Keywords

  • Computational complexity
  • Algebra
  • (de)randomization
  • Circuits
  • Coding

Documentation

Each Dagstuhl Seminar and Dagstuhl Perspectives Workshop is documented in the series Dagstuhl Reports. The seminar organizers, in cooperation with the collector, prepare a report that includes contributions from the participants' talks together with a summary of the seminar.

 


Publications

Furthermore, a comprehensive peer-reviewed collection of research papers can be published in the series Dagstuhl Follow-Ups.

Dagstuhl's Impact

Please inform us when a publication is published as a result of your seminar. These publications are listed in the category Dagstuhl's Impact and are presented on a special shelf on the ground floor of the library.