https://www.dagstuhl.de/16411

### October 9–14, 2016, Dagstuhl Seminar 16411

# Algebraic Methods in Computational Complexity

## Organizers

Valentine Kabanets (Simon Fraser University – Burnaby, CA)

Thomas Thierauf (Hochschule Aalen, DE)

Jacobo Torán (Universität Ulm, DE)

Christopher Umans (CalTech – Pasadena, US)

## Documents

Dagstuhl Report, Volume 6, Issue 10

## Summary

The seminar brought together more than 40 researchers covering a wide spectrum of complexity theory. The focus on algebraic methods showed the great importance of such techniques for theoretical computer science. We had 25 talks, most of them lasting about 40 minutes, leaving ample room for discussions. In the following we describe the major topics of discussion in more detail.

#### Circuit Complexity

Circuit complexity is an area of fundamental importance to complexity theory and was one of the main topics of the seminar. It remains a big challenge to prove strong upper and lower bounds. Polynomial Identity Testing (PIT) also plays a central role here.

The seminar started with a talk by *Steve Fenner*. In a breakthrough result, he showed how to solve the perfect matching problem in bipartite graphs (almost) efficiently in parallel, by circuits of quasi-polynomial size and O(log^2 n) depth (i.e., in quasi-NC). This solves a problem that had been open for more than 30 years. *Rohit Gurjar* showed how to extend the result even further to *linear matroid intersection*, of which bipartite perfect matching is a special case.

Both of the above results can be read as a singularity test of certain symbolic matrices. We had several talks dealing with determining singularity or computing the rank of a symbolic matrix. *Rafael Oliveira* presented an efficient algorithm for the symbolic singularity problem in the *non-commutative* setting. In the *commutative* setting, the complexity is a major open problem. Many other important problems reduce to it.
*Markus Bläser* presented an *approximation algorithm* (a PTAS) for the rank of a symbolic matrix. Surprisingly, this is achieved with a greedy algorithm. *Kristoffer Hansen* showed a different kind of approximation for low-rank binary matrices.
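To make the connection between matching and symbolic singularity concrete, here is a minimal Python sketch (an illustration of the classical randomized reduction, not of the quasi-NC algorithms from the talks): a bipartite graph has a perfect matching iff its Edmonds matrix, with one variable per edge, is non-singular, and by Schwartz–Zippel a random substitution detects non-singularity with high probability.

```python
import random

def det_mod_p(M, p):
    """Determinant of a square matrix over GF(p), by Gaussian elimination."""
    M = [row[:] for row in M]
    n = len(M)
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % p != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det = det * M[col][col] % p
        inv = pow(M[col][col], p - 2, p)  # Fermat inverse, p prime
        for r in range(col + 1, n):
            factor = M[r][col] * inv % p
            for c in range(col, n):
                M[r][c] = (M[r][c] - factor * M[col][c]) % p
    return det % p

def has_perfect_matching(n, edges, p=(1 << 31) - 1):
    """Randomized singularity test: substitute random field elements for
    the variables of the Edmonds matrix; a non-zero determinant certifies
    a perfect matching, and a zero answer errs with probability <= n/p."""
    A = [[0] * n for _ in range(n)]
    for u, v in edges:          # u: left vertex, v: right vertex
        A[u][v] = random.randrange(1, p)
    return det_mod_p(A, p) != 0

print(has_perfect_matching(2, [(0, 0), (1, 1)]))  # True
print(has_perfect_matching(2, [(0, 0), (0, 1)]))  # False (matrix is always singular)
```

Derandomizing exactly this kind of test, i.e., replacing the random substitution by a deterministically constructed one, is the core difficulty that the quasi-NC results overcome.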

We have seen some great work on *Polynomial Identity Testing* (PIT) and circuit lower bounds recently, in particular on depth-3 and depth-4 circuits and on arithmetic branching programs. This work has brought us very close to statements that are known to imply VP ≠ VNP, the analogue of the P vs. NP question in the arithmetic world. With respect to PIT, an ambitious goal is to come up with a hitting-set construction for a specific model. A hitting set is a set of instances such that every non-zero polynomial in the model has a non-root in the set. Such a construction would solve the PIT problem in the *black-box* model.
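For intuition, the univariate case (far simpler than the circuit models studied at the seminar) already illustrates the idea: a non-zero polynomial of degree at most d has at most d roots, so any d+1 distinct points form a hitting set. A minimal sketch, with hypothetical function names:

```python
def is_zero_blackbox(evaluate, degree_bound):
    """Black-box PIT for univariate polynomials of degree <= degree_bound:
    the points 0..degree_bound form a hitting set, since a non-zero
    polynomial of degree <= d has at most d roots."""
    return all(evaluate(x) == 0 for x in range(degree_bound + 1))

# p(x) = (x - 1)(x - 2) is non-zero of degree 2, so some point in {0,1,2} is a non-root
print(is_zero_blackbox(lambda x: (x - 1) * (x - 2), 2))  # False
print(is_zero_blackbox(lambda x: 0 * x, 2))              # True
```

The hard part, of course, is constructing comparably small hitting sets for restricted *multivariate* circuit classes, where the trivial grid of points is exponentially large.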

PIT is known to be efficiently solvable by *randomized* algorithms, for example when we consider arithmetic circuits. Things change when we consider *noncommutative* circuits: the standard test cannot be applied directly, because the polynomials can have exponential degree and hence doubly exponentially many monomials. *V. Arvind* presented a randomized polynomial identity test for noncommutative arithmetic circuits for the case when the polynomial has only exponentially many monomials.
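The standard randomized test in the commutative setting is random evaluation, justified by the Schwartz–Zippel lemma. A minimal black-box sketch in Python (function names are my own):

```python
import random

def schwartz_zippel_is_zero(poly, num_vars, trials=20, p=(1 << 31) - 1):
    """Randomized PIT (Schwartz-Zippel): a non-zero polynomial of total
    degree d vanishes at a uniformly random point of GF(p)^n with
    probability <= d/p, so a non-zero evaluation certifies poly != 0."""
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(num_vars)]
        if poly(point) % p != 0:
            return False        # witness found: definitely non-zero
    return True                 # declared zero; error probability <= (d/p)^trials

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero
print(schwartz_zippel_is_zero(
    lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2, 2))  # True
# x*y - 1 is not
print(schwartz_zippel_is_zero(lambda v: v[0]*v[1] - 1, 2))  # False (except with negligible probability)
```

The noncommutative obstacle mentioned above is exactly that this lemma needs the degree d to be small relative to the field size, which fails when the degree is exponential.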

One of the most successful methods for proving lower bounds for arithmetic circuits is to consider the dimension of the span of the *partial derivatives* of a polynomial. *Pascal Koiran* considered the complexity of computing this dimension. He showed that the problem is #P-hard; whether it is #P-complete remains open.
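As a small illustration of the measure (restricted to first-order partials only; the measure used in lower bounds ranges over derivatives of all orders), the following sketch computes the dimension for polynomials given explicitly as coefficient dictionaries. All names are mine, and exact rational arithmetic keeps the rank computation honest:

```python
from fractions import Fraction

def partial(poly, i):
    """d/dx_i of a polynomial given as {exponent_tuple: coefficient}."""
    out = {}
    for exps, c in poly.items():
        if exps[i] > 0:
            e = list(exps); e[i] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exps[i]
    return out

def rank(rows):
    """Rank over Q, via Gaussian elimination with exact fractions."""
    rows = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def dim_first_partials(poly, n):
    """Dimension of the span of the first-order partial derivatives."""
    partials = [partial(poly, i) for i in range(n)]
    monomials = sorted({m for p in partials for m in p})
    rows = [[p.get(m, 0) for m in monomials] for p in partials]
    return rank(rows)

# f = (x + y)^2: f_x = f_y = 2x + 2y, so the span has dimension 1
f = {(2, 0): 1, (1, 1): 2, (0, 2): 1}
print(dim_first_partials(f, 2))  # 1
# g = x^2 + y^2: g_x = 2x, g_y = 2y are independent, dimension 2
g = {(2, 0): 1, (0, 2): 1}
print(dim_first_partials(g, 2))  # 2
```

The hardness result concerns polynomials given succinctly (e.g., by circuits), where one cannot afford to list the derivatives explicitly as above.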

Another important notion when proving lower bounds is the *algebraic independence* of arithmetic circuits. In 2015, Kumar and Saraf presented lower bounds and hitting sets for a class of depth-4 circuits that have low algebraic rank. Unfortunately, their technique requires base fields of characteristic zero, or at least exponentially large characteristic.
*Nitin Saxena* closed this gap and showed how to make the approach work over *every* field.

*Michael Forbes* showed that lower bounds for certain algebraic circuits imply lower bounds in proof complexity.

*Or Meir* talked about one of the major open problems in complexity theory: proving super-polynomial lower bounds on the size of formulas.
Karchmer, Raz, and Wigderson suggested an approach to this problem. The *KRW conjecture* states that the formula complexity of two functions f and g roughly adds up when we consider the composed function g ∘ f.
They showed that the conjecture implies super-polynomial formula lower bounds.
In his talk, Or Meir took a step toward proving the conjecture: he proved a special case, namely when f is the parity function. His proof uses techniques from communication complexity.

Valiant introduced the arithmetic analogue of classes P and NP.
Very roughly, the class VP contains all multivariate polynomials that can be computed (non-uniformly) by polynomial-size arithmetic circuits,
and the class VNP contains all multivariate polynomials that have coefficients
computable by VP-circuits. The question whether VP is different from VNP
plays the role of the P-NP question in algebraic complexity theory.
Valiant showed that the permanent is complete for VNP. But for VP, only artificially constructed functions were known to be complete. In her talk, *Meena Mahajan* described several polynomial families complete for VP and for VNP, based on the notion of graph homomorphism polynomials.
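The permanent mentioned here is the determinant's sign-free sibling, and no algorithm essentially faster than exponential is known for it, consistent with its VNP-completeness. A small sketch contrasting the naive n!-term definition with Ryser's O(2^n · n^2) inclusion-exclusion formula:

```python
from itertools import permutations

def permanent_naive(M):
    """per(M): like the determinant, but without the signs -- n! terms."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][sigma[i]]
        total += prod
    return total

def permanent_ryser(M):
    """Ryser's inclusion-exclusion formula: O(2^n * n^2) arithmetic
    operations -- faster than n!, but still exponential."""
    n = len(M)
    total = 0
    for S in range(1, 1 << n):  # non-empty column subsets as bitmasks
        rowsums = 1
        for i in range(n):
            rowsums *= sum(M[i][j] for j in range(n) if S >> j & 1)
        total += (-1) ** (n - bin(S).count("1")) * rowsums
    return total

M = [[1, 2], [3, 4]]
print(permanent_naive(M), permanent_ryser(M))  # 10 10
```

By contrast, the determinant, which is complete for (a class closely related to) VP, is computable in polynomial time, which is one way to see what separating VP from VNP would mean.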

#### Complexity

Since the famous AKS primality test, prime numbers can be recognized efficiently. The *construction* of prime numbers, however, is still a challenging task: the best known deterministic algorithms have only exponential running time. *Rahul Santhanam* presented a randomized subexponential-time algorithm that outputs primes, and only primes, with high probability; moreover, the output is almost always the same prime. Such an algorithm is called *zero-error pseudo-deterministic*.
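A toy illustration of the *notion* of pseudo-determinism (emphatically not Santhanam's algorithm, and without its subexponential running-time guarantee): if the only randomness is inside a primality test, then with high probability every run produces the same output.

```python
import random

def is_probable_prime(n, trials=30):
    """Miller-Rabin test; never rejects a prime, and accepts a composite
    with probability at most 4**-trials."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2; s += 1
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime(n):
    """Pseudo-deterministic in spirit: the randomness is only inside the
    primality test, so with high probability every run returns the same
    number, the smallest prime >= n."""
    while not is_probable_prime(n):
        n += 1
    return n

print(next_prime(10**6))  # 1000003
```

The difficulty in the real problem is different: there, one wants to *find* an n-bit prime faster than scanning exponentially many candidates, while keeping the output canonical.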

Since the famous Isolation Lemma of Mulmuley, Vazirani, and Vazirani, researchers have recognized the power of isolation. For example, the bipartite perfect matching and matroid intersection algorithms mentioned above both rely on isolating a minimum-weight solution. *Nutan Limaye* studied the problem of isolating an s-t-path in a directed graph. She proved that a randomized logspace algorithm that isolates such a path can be used to show NL ⊆ L/poly.
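The Isolation Lemma itself is easy to state and to check empirically: for any fixed family of subsets of an m-element universe, random weights from {1..W} make the minimum-weight set unique with probability at least 1 − m/W. A small demonstration sketch (names are mine):

```python
import random

def min_weight_is_unique(family, weights):
    """Is the minimum-weight set in the family unique?"""
    totals = sorted(sum(weights[x] for x in S) for S in family)
    return len(totals) == 1 or totals[0] < totals[1]

def isolation_rate(family, universe_size, W, trials=1000):
    """Empirical frequency with which random weights from {1..W} isolate a
    unique minimum-weight set; the Isolation Lemma guarantees the true
    probability is >= 1 - universe_size / W."""
    hits = 0
    for _ in range(trials):
        w = [random.randint(1, W) for _ in range(universe_size)]
        if min_weight_is_unique(family, w):
            hits += 1
    return hits / trials

# all 2-element subsets of a 6-element universe: heavily tied under unit weights
family = [frozenset({i, j}) for i in range(6) for j in range(i + 1, 6)]
print(isolation_rate(family, 6, W=100))
```

The printed rate is typically well above the lemma's 1 − 6/100 = 0.94 bound. The point of the lemma is that the guarantee holds for *any* family, even an exponentially large one, such as all s-t-paths of a graph.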

Derandomization is an area where there are tight connections between lower bounds and algorithms. Strong enough circuit lower bounds can be used to construct pseudo-random generators that can then be used to simulate randomized algorithms with only polynomial overhead. The polynomial overhead is fine for algorithms running in polynomial time. However, in case of subexponential randomized algorithms, this overhead makes the resulting deterministic algorithm more or less useless. *Ronen Shaltiel* showed how to overcome this problem by achieving a more modest overhead. He needs, however, stronger lower bounds to begin with. Further talks on pseudo-random generators and randomness extractors were given by *Amnon Ta-Shma* and *William Hoza*.

*Chris Umans* gave an evening talk presenting a recent breakthrough in additive combinatorics, the resolution of the so-called *cap-set conjecture* by Ellenberg and Gijswijt. This result has implications for the Cohn-Umans group-theoretic approach for matrix multiplication, and elsewhere in Complexity.

#### Coding Theory

Error-correcting codes, particularly those constructed from polynomials, such as Reed-Solomon and Reed-Muller codes, lie at the heart of many significant results in Computational Complexity. *Shubhangi Saraf* gave a talk on locally-correctable and locally-testable codes. *Swastik Kopparty* generalized the well-known decoding algorithm for Reed-Solomon codes to higher dimensions. He presented an efficient algorithm to decode Reed-Muller codes when the evaluation points form an arbitrary product set S^m, for some m, provided S is larger than the degree of the polynomials.
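To see why polynomial evaluation makes a good code, here is a minimal Reed-Solomon sketch: encoding is evaluation of a degree-<k message polynomial at n points, and any k intact coordinates recover the message by Lagrange interpolation. This is plain erasure decoding over a small prime field, far short of the error-decoding algorithms discussed in the talks, and the names are mine:

```python
P = 101  # a small prime; all arithmetic is over GF(P)

def rs_encode(msg, points):
    """Evaluate the message polynomial (coefficient list msg) at the points."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P for x in points]

def rs_decode(pairs, k):
    """Recover the k coefficients from any k (point, value) pairs by
    Lagrange interpolation over GF(P)."""
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(pairs):
        basis = [1]   # coefficients of prod_{i != j} (x - x_i)
        denom = 1     # prod_{i != j} (x_j - x_i)
        for i, (xi, _) in enumerate(pairs):
            if i == j:
                continue
            # multiply basis by (x - xi)
            basis = [(b - xi * a) % P for a, b in zip(basis + [0], [0] + basis)]
            denom = denom * (xj - xi) % P
        scale = yj * pow(denom, P - 2, P) % P  # y_j / denom via Fermat inverse
        for i in range(k):
            coeffs[i] = (coeffs[i] + scale * basis[i]) % P
    return coeffs

msg = [42, 7, 13]                      # degree-2 polynomial, k = 3
codeword = rs_encode(msg, range(7))    # n = 7 evaluations
# erase four coordinates; any three intact (point, value) pairs suffice
pairs = [(0, codeword[0]), (2, codeword[2]), (5, codeword[5])]
print(rs_decode(pairs, 3) == msg)      # True
```

Reed-Muller codes replace the single variable by m variables, which is exactly the higher-dimensional setting of the product-set decoding result above.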

#### Conclusion

As is evident from the list above, the talks ranged over a broad assortment of subjects with the underlying theme of using algebraic and combinatorial techniques. It was a very fruitful meeting and has hopefully initiated new directions in research. Several participants specifically mentioned that they appreciated the particular focus on a common class of *techniques* (rather than end results) as a unifying theme of the workshop. We look forward
to our next meeting!

**Summary text license**

Creative Commons BY 3.0 Unported license

Valentine Kabanets, Thomas Thierauf, Jacobo Torán, and Christopher Umans

## Dagstuhl Seminar Series

- 18391: "Algebraic Methods in Computational Complexity" (2018)
- 14391: "Algebra in Computational Complexity" (2014)
- 12421: "Algebraic and Combinatorial Methods in Computational Complexity" (2012)
- 09421: "Algebraic Methods in Computational Complexity" (2009)
- 07411: "Algebraic Methods in Computational Complexity" (2007)
- 04421: "Algebraic Methods in Computational Complexity" (2004)
- 02421: "Algebraic Methods in Quantum and Classical Models of Computation" (2002)

## Classification

- Data Structures / Algorithms / Complexity
- Security / Cryptology

## Keywords

- Computational complexity
- Algebra
- (de-) randomization
- Circuits
- Coding