##### Dagstuhl Seminar 18391

### Algebraic Methods in Computational Complexity

##### (Sep 23 – Sep 28, 2018)

##### Organizers

- Markus Bläser (Universität des Saarlandes, DE)
- Valentine Kabanets (Simon Fraser University - Burnaby, CA)
- Jacobo Torán (Universität Ulm, DE)
- Christopher Umans (Caltech - Pasadena, US)

##### Contact

- Michael Gerke (for scientific matters)
- Susanne Bach-Bernhard (for administrative matters)

##### Impacts

- Mathematical Muffin Morsels: Nobody Wants a Small Piece - Gasarch, William; Metz, Erik; Prinz, Jacob; Smolyak, Daniel - XVI, 210 pages - (Problem Solving in Mathematics and Beyond; 16). ISBN: 978-981-121-597-1 / 981-121-597-9.
- Problems with a Point: Exploring Math and Computer Science - Gasarch, William I.; Kruskal, Clyde - Singapore: World Scientific, 2019. - xiii, 270 pages. ISBN: 978-981-3279-97-1 / 981-3279-97-4.

Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples.

The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds, and the so-called “chasm at depth 4” suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation).

Another surprising connection is that algebraic techniques invented to prove lower bounds have turned out to be useful for developing efficient algorithms. For example, Williams showed how to use the polynomial method to obtain faster all-pairs shortest-path algorithms.

Furthermore, the program of geometric complexity theory initiated by Mulmuley tries to use methods from algebraic geometry and representation theory to resolve algebraic analogues of the P versus NP question.

These new directions were the focus of this Dagstuhl Seminar and reflect the developments in the field since the previous Seminar 14391.

The seminar aimed to capitalize on recent progress and bring together researchers using a diverse array of algebraic methods in a variety of settings. Researchers in these areas rely on ever more sophisticated and specialized mathematics, and the seminar played an important role in educating a diverse community about the latest techniques, spurring further progress.

The seminar brought together more than 40 researchers covering a wide spectrum of complexity theory. The focus on algebraic methods showed the great importance of such techniques for theoretical computer science. We had 24 talks, most of them lasting about 45 minutes, leaving ample room for discussions. We also had a much-appreciated rump session on Tuesday evening in which Antonina Kolokolova, Bill Gasarch, Lance Fortnow, Chandan Saha, William Hoza, Neeraj Kayal and Arpita Korwar presented some open questions. In the following we describe the major topics of discussion in more detail.

#### Circuit Complexity

Circuit complexity is an area of fundamental importance to complexity theory, and circuits, studied from many different perspectives, were one of the main topics of the seminar. *Eric Allender* gave an overview of the Minimum Circuit Size Problem (MCSP): given the truth table of a Boolean function, what is the size of the smallest circuit computing it? In his talk he mentioned some interesting results showing that certain low complexity classes cannot be reduced to the problem of computing superlinear approximations to circuit size.
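As background, for very small numbers of inputs the minimum circuit size can be found by brute force. The following sketch (standard dynamic programming over truth tables, not from the talk; constants are omitted for brevity) enumerates all functions computable with a given number of AND/OR/NOT gates:

```python
def min_circuit_size(target, n, max_gates=12):
    """Brute-force MCSP sketch: the fewest AND/OR/NOT gates needed to
    compute `target`, a truth table encoded as an int whose bit a is set
    iff the function is 1 on input assignment a (feasible only for tiny n)."""
    full = (1 << (1 << n)) - 1
    best = {}                      # truth table -> minimum gate count
    for i in range(n):             # the inputs x_0..x_{n-1} cost 0 gates
        tt = sum(1 << a for a in range(1 << n) if (a >> i) & 1)
        best[tt] = 0
    by_size = {0: set(best)}
    if target in best:
        return 0
    for s in range(1, max_gates + 1):
        new = set()
        for tt in by_size[s - 1]:  # NOT gate on top of a size-(s-1) circuit
            c = ~tt & full
            if c not in best:
                best[c] = s
                new.add(c)
        for s1 in range(s):        # AND/OR of subcircuits with s-1 gates total
            for a in by_size[s1]:
                for b in by_size[s - 1 - s1]:
                    for c in (a & b, a | b):
                        if c not in best:
                            best[c] = s
                            new.add(c)
        by_size[s] = new
        if target in best:
            return s
    return None
```

The exponential blow-up of this search (over 2^(2^n) functions) is exactly why the complexity of MCSP is so delicate.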

Arithmetic circuits and formulas are a computation model that uses + and × instead of Boolean operations to compute polynomials. *Nutan Limaye* presented a depth hierarchy theorem for this model, showing that there is a polynomial computed by a polynomial-size multilinear formula of depth D+1 such that any depth-D multilinear formula computing it must have exponential size.

*Chandan Saha* considered a further restriction to depth-three circuits C computing a polynomial f = T_1 + T_2 + ... + T_s, where each T_i is a product of d linear forms in n variables. He presented a randomized algorithm that, given black-box access to f, reconstructs non-degenerate homogeneous depth-three circuits in the case n > (3d)^2. The algorithm runs in time polynomial in n, s, and d.
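For concreteness, a homogeneous depth-three circuit in this sense can be represented and evaluated as follows (a minimal sketch with a hypothetical representation, where each linear form is just its coefficient vector):

```python
def eval_sps(terms, x):
    """Evaluate f = T_1 + ... + T_s at point x, where each term T_i is a
    list of d linear forms and each linear form is its list of n
    coefficients (homogeneous, so no constant part)."""
    value = 0
    for term in terms:
        prod = 1
        for form in term:
            prod *= sum(c * xi for c, xi in zip(form, x))
        value += prod
    return value

# f = (x0 + x1) * (x0 - x1) = x0^2 - x1^2: a single term with d = 2
f = [[[1, 1], [1, -1]]]
```

Black-box reconstruction asks for the inverse problem: recover a representation like `terms` using only such evaluations of f.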

*Meena Mahajan* presented depth-2 circuits of polynomial size with linear threshold functions. She surveyed the landscape below these circuits and presented one new result concerning decision lists.

#### Algebraic Complexity

There were also several talks on the complexity of problems over algebraic structures.

*Nitin Saxena* considered the problem of testing whether a set F of polynomials, given as algebraic circuits, is algebraically dependent. He showed that this problem lies in AM ∩ coAM, thus solving an open question from 2007.

Problems related to the minimum code-word problem and to the existence of non-trivial automorphisms moving few vertices in graphs or hypergraphs were presented by *V. Arvind*. He discussed the parameterized complexity of these and related algebraic problems.

*Josh Alman* gave an interesting talk on Matrix Multiplication (MM). He surveyed the two main approaches to MM algorithms, the laser method of Strassen and the group-theoretic approach of Cohn and Umans, and defined a generalization that subsumes both. He then explained how to obtain lower bounds for MM algorithms that use these methods.
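As context for where both approaches start, Strassen's classical recursion multiplies 2×2 blocks with seven multiplications instead of eight; a compact sketch for power-of-two sizes:

```python
def strassen(A, B):
    """Strassen's matrix multiplication for n x n integer matrices with
    n a power of two (the classical algorithm, shown for context)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    add = lambda X, Y: [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]
    sub = lambda X, Y: [[x - y for x, y in zip(r, s)] for r, s in zip(X, Y)]
    quad = lambda X, i, j: [row[j * h:(j + 1) * h] for row in X[i * h:(i + 1) * h]]
    A11, A12, A21, A22 = (quad(A, i, j) for i in (0, 1) for j in (0, 1))
    B11, B12, B21, B22 = (quad(B, i, j) for i in (0, 1) for j in (0, 1))
    # Seven recursive products instead of eight.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom
```

Recursing on this identity gives the O(n^log2(7)) bound; the laser method and the group-theoretic framework both generalize the search for such small identities.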

*Rohit Gurjar* studied the class of matrices A for which the lattice L(A), formed by all integral vectors in the null space of A, has only polynomially many near-shortest vectors. He proved that this is the case when A is totally unimodular (all sub-determinants are 0, +1, or -1). As a consequence he obtained a deterministic PIT algorithm for any polynomial of the form det(sum x_i A_i) with rank-1 matrices A_i.
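For contrast, the standard randomized PIT for such determinant polynomials simply evaluates at a random point (Schwartz-Zippel); the point of results like Gurjar's is to remove this randomness. A sketch of the randomized baseline, with illustrative helper names:

```python
import random
from itertools import permutations
from math import prod

def det(M):
    """Determinant via permutation expansion (fine for tiny matrices)."""
    n = len(M)
    def sign(p):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        return -1 if inv % 2 else 1
    return sum(sign(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def probably_zero(matrices, trials=10, bound=10**6):
    """Schwartz-Zippel test for whether det(sum_i x_i A_i) is identically
    zero: a nonzero evaluation certifies 'no'; all-zero evaluations mean
    'probably yes' (error probability shrinks as `bound` grows)."""
    n = len(matrices[0])
    for _ in range(trials):
        xs = [random.randint(1, bound) for _ in matrices]
        M = [[sum(x * A[i][j] for x, A in zip(xs, matrices))
              for j in range(n)] for i in range(n)]
        if det(M) != 0:
            return False
    return True
```

A deterministic algorithm must instead exhibit an explicit set of evaluation points, which is where the lattice structure of the totally unimodular case is used.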

#### Pseudo-Randomness and Derandomization

Derandomization is an area where there are tight connections between lower bounds and algorithms. Strong enough circuit lower bounds can be used to construct pseudo-random generators that can then be used to deterministically simulate randomized algorithms. A central question in derandomization is whether randomized logspace RL equals deterministic logspace L. To show that RL = L, it suffices to construct explicit pseudorandom generators that fool polynomial-size read-once (oblivious) branching programs (roBPs).
There were two talks related to this question. *Michael Forbes*
presented a method to obtain an explicit PRG with seed-length O(log^3 n)
for polynomial-size roBPs reading their bits in an unknown order.
*William Hoza* gave an explicit hitting set generator for read-once branching programs with known variable order. As a corollary of this construction, it follows that every RL algorithm that uses r
random bits can be simulated by an NL algorithm that uses only O(r/log^c n) nondeterministic bits, where c is an arbitrarily large constant. Another consequence of the result is that any RL algorithm with small success probability epsilon can be simulated deterministically in space
O(log^{3/2} n+log n loglog(1/epsilon)).
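The object being fooled here can be made concrete: a width-w roBP reads its input bits once, in a fixed order, while updating a single state. A minimal evaluator (hypothetical table representation) is:

```python
def eval_robp(layers, start, accept, bits):
    """Evaluate a read-once oblivious branching program: layers[i][s][b]
    is the state reached from state s after reading bit b at step i."""
    state = start
    for layer, b in zip(layers, bits):
        state = layer[state][b]
    return state in accept

# Width-2 program computing the parity of 3 bits: state = running XOR.
parity_bp = [[[0, 1], [1, 0]]] * 3
```

A PRG fools such programs if the acceptance probability on the PRG's outputs is close to the acceptance probability on truly uniform bits.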

A hitting set for a model of algebraic computation is a set of points such that every non-zero polynomial in the model has a non-root in the set; an explicit hitting set yields a deterministic solution to the Polynomial Identity Testing (PIT) problem for that model. *Ramprasad Saptharishi* showed that by barely improving the trivial (s+1)^n-size hitting set, even for n-variate, degree-s, size-s algebraic circuits, we could get an almost complete derandomization of PIT.
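The trivial hitting set referred to is the grid {0, ..., s}^n: any nonzero polynomial of individual degree at most s is nonzero somewhere on it, so black-box PIT reduces to (s+1)^n evaluations. A sketch (the individual-degree caveat is an assumption of this simple version):

```python
from itertools import product

def is_nonzero_on_grid(poly, n, s):
    """Black-box PIT via the trivial hitting set {0,...,s}^n: returns True
    iff `poly` (an n-variate black box of individual degree <= s) is
    nonzero at some grid point -- (s+1)^n evaluations in the worst case."""
    return any(poly(*pt) != 0 for pt in product(range(s + 1), repeat=n))
```

The result above says that shaving even slightly off this exponential grid size would already have dramatic derandomization consequences.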

In his second talk, *William Hoza* discussed the possibility of derandomizing an algorithm using randomness from the input itself. For a language L with a bounded-error randomized algorithm running in space S and time n · poly(S), he gave a randomized algorithm for L with the same time and space bounds that uses only O(S) random bits; the algorithm has low failure probability on all but a negligible fraction of inputs of each length.

*Andrej Bogdanov* considered the problem of extracting true randomness from a set of biased dice (Santha-Vazirani sources). He presented a recent result in which he completely classified all non-trivial randomness sources of this type into three categories: non-extractable, extractable from polynomially many samples, and extractable from logarithmically many samples (in the inverse of the error).
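The simplest instance of such extraction is von Neumann's classical trick for a single biased coin (not the classification from the talk, but the baseline it generalizes):

```python
def von_neumann(bits):
    """Von Neumann extractor: read bits in pairs, output the first bit of
    each unequal pair, discard equal pairs. For i.i.d. flips of a coin
    with any fixed bias, Pr[01] = Pr[10], so the output is unbiased."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]
```

Santha-Vazirani sources allow the bias of each bit to depend adversarially on the past, which defeats this pairing trick; that adversarial correlation is what makes the general classification non-trivial.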

#### Coding Theory

Error-correcting codes and other kinds of codes, particularly those constructed from polynomials, such as Reed-Solomon and Reed-Muller codes, lie at the heart of many significant results in computational complexity. This is an area in which the interplay between different parts of complexity theory, such as the analysis of algebraic structures and derandomization, becomes especially fruitful.

Greatly improving previously known constructions for odd-size alphabets, *Michal Koucky* presented a construction of quasi-Gray codes of dimension n and length 3^n over the ternary alphabet {0,1,2} with worst-case read complexity O(log n) and write complexity 2; the construction generalizes to arbitrary odd-size alphabets. These results were obtained via a novel application of algebraic tools together with the principles of catalytic computation.

*Noga Ron-Zewi* presented a very recent result showing that folded Reed-Solomon codes achieve list-decoding capacity with constant list sizes, independent of the block length. She explained that multiplicity codes exhibit similar behavior, and used this to obtain capacity-achieving locally list-decodable codes with query complexity significantly lower than previous constructions.
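The folding operation itself is simple: take a plain Reed-Solomon codeword (evaluations of the message polynomial at powers of a generator gamma) and bundle consecutive evaluations into larger symbols. A sketch with illustrative parameters:

```python
def folded_rs_encode(coeffs, gamma, p, length, fold):
    """Encode the message polynomial coeffs[0] + coeffs[1]*x + ... over F_p
    by evaluating it at 1, gamma, ..., gamma^(length-1), then grouping each
    run of `fold` consecutive evaluations into one folded-code symbol."""
    def eval_poly(x):
        acc = 0
        for c in reversed(coeffs):   # Horner's rule mod p
            acc = (acc * x + c) % p
        return acc
    evals = [eval_poly(pow(gamma, i, p)) for i in range(length)]
    return [tuple(evals[i:i + fold]) for i in range(0, length, fold)]
```

Folding does not change the rate, but the larger alphabet is what makes list decoding up to capacity possible.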

Binary error-correcting codes with relative distance (1-epsilon)/2 and relative rate epsilon^{2+o(1)} were explained in one of the talks given by *Amnon Ta-Shma*; previous explicit constructions had rate about epsilon^3. The main tool in this construction is *parity samplers*. He explained how to obtain better explicit parity samplers using a variant of the zig-zag product.

In his second talk, Amnon talked about (1-tau, L) erasure list-decodable codes. He presented a recent work in which he constructed, for the first time, an explicit binary (1-tau, L) erasure list-decodable code with rate tau^{1+gamma} (for any constant gamma > 0 and tau small enough) and list size poly(log 1/tau), exhibiting an explicit non-linear code that provably beats the best possible linear one. The main ingredient in his construction is a new (and almost optimal) *unbalanced* two-source extractor.

#### Quantum Complexity

Complexity issues arising in the context of quantum computation have been an important area of complexity theory for several decades. In this workshop we had one talk on the topic. *Sevag Gharibian* talked about quantum versions of the classical k-SAT problem. He considered the problem of computing satisfying assignments to k-QSAT instances that have a "matching" or "dimer covering"; this is an NP problem whose decision variant is trivial, but whose search complexity remains open. He presented a parameterized algorithm for a non-trivial class of k-QSAT instances, yielding exponential speedups over brute-force methods.

#### Conclusion

As is evident from the list above, the talks ranged over a broad assortment of subjects with the underlying theme of using algebraic and combinatorial techniques. It was a very fruitful meeting and has hopefully initiated new directions in research. Several participants specifically mentioned that they appreciated the particular focus on a common class of *techniques* (rather than end results) as a unifying theme of the workshop. We look forward
to our next meeting!

- Farid Ablayev (Kazan State University, RU)
- Eric Allender (Rutgers University - Piscataway, US)
- Josh Alman (MIT - Cambridge, US)
- Vikraman Arvind (Institute of Mathematical Sciences - Chennai, IN)
- Nikhil Balaji (Universität Ulm, DE)
- Markus Bläser (Universität des Saarlandes, DE)
- Andrej Bogdanov (The Chinese University of Hong Kong, HK)
- Sourav Chakraborty (Indian Statistical Institute - Kolkata, IN)
- Stephen A. Fenner (University of South Carolina - Columbia, US)
- Michael A. Forbes (University of Illinois - Urbana-Champaign, US)
- Lance Fortnow (Georgia Institute of Technology - Atlanta, US)
- Anna Gál (University of Texas - Austin, US)
- William Gasarch (University of Maryland - College Park, US)
- Sevag Gharibian (Universität Paderborn, DE)
- Frederic Green (Clark University - Worcester, US)
- Rohit Gurjar (Indian Institute of Technology - Mumbai, IN)
- William Hoza (University of Texas - Austin, US)
- Christian Ikenmeyer (MPI für Informatik - Saarbrücken, DE)
- Valentine Kabanets (Simon Fraser University - Burnaby, CA)
- Neeraj Kayal (Microsoft Research India - Bangalore, IN)
- Pascal Koiran (ENS - Lyon, FR)
- Antonina Kolokolova (Memorial University of Newfoundland - St. John's, CA)
- Arpita Korwar (University Paris-Diderot, FR)
- Michal Koucký (Charles University - Prague, CZ)
- Nutan Limaye (Indian Institute of Technology - Mumbai, IN)
- Zhenjian Lu (Simon Fraser University - Burnaby, CA)
- Vladimir Lysikov (Universität des Saarlandes, DE)
- Meena Mahajan (Institute of Mathematical Sciences - Chennai, IN)
- David A. Mix Barrington (University of Massachusetts - Amherst, US)
- Anurag Pandey (Universität des Saarlandes, DE)
- Natacha Portier (ENS - Lyon, FR)
- Noga Ron-Zewi (University of Haifa, IL)
- Chandan Saha (Indian Institute of Science - Bangalore, IN)
- Ramprasad Saptharishi (TIFR Mumbai, IN)
- Nitin Saxena (Indian Institute of Technology Kanpur, IN)
- Uwe Schöning (Universität Ulm, DE)
- Ronen Shaltiel (University of Haifa, IL)
- Amnon Ta-Shma (Tel Aviv University, IL)
- Thomas Thierauf (Hochschule Aalen, DE)
- Jacobo Torán (Universität Ulm, DE)
- Christopher Umans (Caltech - Pasadena, US)
- Nikolay K. Vereshchagin (NRU Higher School of Economics - Moscow, RU)

##### Related Seminars

- Dagstuhl Seminar 02421: Algebraic Methods in Quantum and Classical Models of Computation (2002-10-13 - 2002-10-18)
- Dagstuhl Seminar 04421: Algebraic Methods in Computational Complexity (2004-10-10 - 2004-10-15)
- Dagstuhl Seminar 07411: Algebraic Methods in Computational Complexity (2007-10-07 - 2007-10-12)
- Dagstuhl Seminar 09421: Algebraic Methods in Computational Complexity (2009-10-11 - 2009-10-16)
- Dagstuhl Seminar 12421: Algebraic and Combinatorial Methods in Computational Complexity (2012-10-14 - 2012-10-19)
- Dagstuhl Seminar 14391: Algebra in Computational Complexity (2014-09-21 - 2014-09-26)
- Dagstuhl Seminar 16411: Algebraic Methods in Computational Complexity (2016-10-09 - 2016-10-14)
- Dagstuhl Seminar 22371: Algebraic and Analytic Methods in Computational Complexity (2022-09-11 - 2022-09-16)

##### Classification

- data structures / algorithms / complexity

##### Keywords

- computational complexity
- algebra
- (de)randomization
- circuits
- coding