LIPIcs, Volume 50

31st Conference on Computational Complexity (CCC 2016)




Publication Details

  • Published: 2016-05-19
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-008-8
  • DBLP: db/conf/coco/coco2016


Documents

Complete Volume
LIPIcs, Volume 50, CCC'16, Complete Volume

Authors: Ran Raz



Cite as

31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Proceedings{raz:LIPIcs.CCC.2016,
  title =	{{LIPIcs, Volume 50, CCC'16, Complete Volume}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016},
  URN =		{urn:nbn:de:0030-drops-58590},
  doi =		{10.4230/LIPIcs.CCC.2016},
  annote =	{Keywords: Theory of Computation}
}
Front Matter
Front Matter, Table of Contents, Preface, Awards, Conference Organization, External Reviewers

Authors: Ran Raz



Cite as

31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{raz:LIPIcs.CCC.2016.0,
  author =	{Raz, Ran},
  title =	{{Front Matter, Table of Contents, Preface, Awards, Conference Organization, External Reviewers}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{0:i--0:xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.0},
  URN =		{urn:nbn:de:0030-drops-58227},
  doi =		{10.4230/LIPIcs.CCC.2016.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Awards, Conference Organization, External Reviewers}
}
Average-Case Lower Bounds and Satisfiability Algorithms for Small Threshold Circuits

Authors: Ruiwen Chen, Rahul Santhanam, and Srikanth Srinivasan


Abstract
We show average-case lower bounds for explicit Boolean functions against bounded-depth threshold circuits with a superlinear number of wires. We show that for each integer d > 1, there is epsilon_d > 0 such that Parity has correlation at most 1/n^{Omega(1)} with depth-d threshold circuits which have at most n^{1+epsilon_d} wires, and the Generalized Andreev Function has correlation at most 1/2^{n^{Omega(1)}} with depth-d threshold circuits which have at most n^{1+epsilon_d} wires. Previously, only worst-case lower bounds in this setting were known [Impagliazzo/Paturi/Saks, SIAM J. Comp., 1997]. We use our ideas to make progress on several related questions. We give satisfiability algorithms beating brute-force search for depth-d threshold circuits with a superlinear number of wires. These are the first such algorithms for depth greater than 2. We also show that Parity cannot be computed by polynomial-size AC^0 circuits with n^{o(1)} general threshold gates. Previously, no lower bound for Parity in this setting could handle more than log(n) gates. This result also implies subexponential-time learning algorithms for AC^0 with n^{o(1)} threshold gates under the uniform distribution. In addition, we give almost optimal bounds for the number of gates in a depth-d threshold circuit computing Parity on average, and show average-case lower bounds for threshold formulas of any depth. Our techniques include adaptive random restrictions, anti-concentration and the structural theory of linear threshold functions, and bounded-read Chernoff bounds.
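
Restating the two correlation bounds in display form (LaTeX notation ours, symbols as in the abstract):

  \mathrm{Corr}(\mathrm{Parity}, C) \le n^{-\Omega(1)}, \qquad \mathrm{Corr}(f_{\mathrm{Andreev}}, C) \le 2^{-n^{\Omega(1)}},

for every depth-d threshold circuit C with at most n^{1+\epsilon_d} wires, where f_{Andreev} denotes the Generalized Andreev Function.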

Cite as

Ruiwen Chen, Rahul Santhanam, and Srikanth Srinivasan. Average-Case Lower Bounds and Satisfiability Algorithms for Small Threshold Circuits. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 1:1-1:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{chen_et_al:LIPIcs.CCC.2016.1,
  author =	{Chen, Ruiwen and Santhanam, Rahul and Srinivasan, Srikanth},
  title =	{{Average-Case Lower Bounds and Satisfiability Algorithms for Small Threshold Circuits}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{1:1--1:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.1},
  URN =		{urn:nbn:de:0030-drops-58447},
  doi =		{10.4230/LIPIcs.CCC.2016.1},
  annote =	{Keywords: threshold circuit, satisfiability algorithm, circuit lower bound}
}
Strong ETH Breaks With Merlin and Arthur: Short Non-Interactive Proofs of Batch Evaluation

Authors: Richard Ryan Williams


Abstract
We present an efficient proof system for Multipoint Arithmetic Circuit Evaluation: for every arithmetic circuit C(x_1,...,x_n) of size s and degree d over a field F, and any inputs a_1,...,a_K in F^n,
- the Prover sends the Verifier the values C(a_1), ..., C(a_K) in F and a proof of ~O(K * d) length, and
- the Verifier tosses poly(log(dK|F|/epsilon)) coins and can check the proof in about ~O(K * (n + d) + s) time, with probability of error less than epsilon.
For small degree d, this "Merlin-Arthur" proof system (a.k.a. MA-proof system) runs in nearly-linear time, and has many applications. For example, we obtain MA-proof systems that run in c^n time (for various c < 2) for the Permanent, #Circuit-SAT for all sublinear-depth circuits, counting Hamiltonian cycles, and infeasibility of 0-1 linear programs. In general, the value of any polynomial in Valiant's class VP can be certified faster than "exhaustive summation" over all possible assignments. These results strongly refute a Merlin-Arthur Strong ETH and an Arthur-Merlin Strong ETH posed by Russell Impagliazzo and others. We also give a three-round (AMA) proof system for quantified Boolean formulas running in 2^{2n/3+o(n)} time, nearly-linear-time MA-proof systems for counting orthogonal vectors in a collection and finding Closest Pairs in the Hamming metric, and an MA-proof system running in n^{k/2+O(1)} time for counting k-cliques in graphs. We point to some potential future directions for refuting the Nondeterministic Strong ETH.
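
In display form, the resource trade-off of the proof system reads (LaTeX notation ours, parameters as in the abstract):

  \text{proof length} = \tilde{O}(K \cdot d), \qquad \text{verification time} = \tilde{O}\big(K \cdot (n + d) + s\big), \qquad \text{soundness error} < \epsilon,

with the Verifier tossing \mathrm{poly}(\log(dK|F|/\epsilon)) coins.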

Cite as

Richard Ryan Williams. Strong ETH Breaks With Merlin and Arthur: Short Non-Interactive Proofs of Batch Evaluation. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 2:1-2:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{williams:LIPIcs.CCC.2016.2,
  author =	{Williams, Richard Ryan},
  title =	{{Strong ETH Breaks With Merlin and Arthur: Short Non-Interactive Proofs of Batch Evaluation}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{2:1--2:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.2},
  URN =		{urn:nbn:de:0030-drops-58307},
  doi =		{10.4230/LIPIcs.CCC.2016.2},
  annote =	{Keywords: counting complexity, exponential-time hypothesis, interactive proofs, Merlin-Arthur games}
}
Toward the KRW Composition Conjecture: Cubic Formula Lower Bounds via Communication Complexity

Authors: Irit Dinur and Or Meir


Abstract
One of the major challenges of research in circuit complexity is proving super-polynomial lower bounds for de Morgan formulas. Karchmer, Raz, and Wigderson suggested approaching this problem by proving that formula complexity behaves "as expected" with respect to the composition of functions f * g. They showed that this conjecture, if proved, would imply super-polynomial formula lower bounds. The first step toward proving the KRW conjecture was made by Edmonds et al., who proved an analogue of the conjecture for the composition of "universal relations". In this work, we extend the argument of Edmonds et al. further to f * g where f is an arbitrary function and g is the parity function. While this special case of the KRW conjecture was already proved implicitly in Hastad's work on random restrictions, our proof seems more likely to generalize to other cases of the conjecture. In particular, our proof uses an entirely different approach, based on the communication complexity technique of Karchmer and Wigderson. In addition, our proof gives a new structural result, which roughly says that the naive way of computing f * g is the only optimal way. Along the way, we obtain a new proof of the state-of-the-art formula lower bound of n^{3-o(1)} due to Hastad.

Cite as

Irit Dinur and Or Meir. Toward the KRW Composition Conjecture: Cubic Formula Lower Bounds via Communication Complexity. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 3:1-3:51, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{dinur_et_al:LIPIcs.CCC.2016.3,
  author =	{Dinur, Irit and Meir, Or},
  title =	{{Toward the KRW Composition Conjecture: Cubic Formula Lower Bounds via Communication Complexity}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{3:1--3:51},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.3},
  URN =		{urn:nbn:de:0030-drops-58412},
  doi =		{10.4230/LIPIcs.CCC.2016.3},
  annote =	{Keywords: Formula lower bounds, communication complexity, Karchmer-Wigderson games, KRW composition conjecture}
}
Nearly Optimal Separations Between Communication (or Query) Complexity and Partitions

Authors: Andris Ambainis, Martins Kokainis, and Robin Kothari


Abstract
We show a nearly quadratic separation between deterministic communication complexity and the logarithm of the partition number, which is essentially optimal. This improves upon a recent power 1.5 separation of Göös, Pitassi, and Watson (FOCS 2015). In query complexity, we establish a nearly quadratic separation between deterministic (and even randomized) query complexity and subcube partition complexity, which is also essentially optimal. We also establish a nearly power 1.5 separation between quantum query complexity and subcube partition complexity, the first superlinear separation between the two measures. Lastly, we show a quadratic separation between quantum query complexity and one-sided subcube partition complexity. Our query complexity separations use the recent cheat sheet framework of Aaronson, Ben-David, and Kothari. Our query functions are built up in stages by alternating function composition with the cheat sheet construction. The communication complexity separation follows from "lifting" the query separation to communication complexity.
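
For context on "essentially optimal": every total function F satisfies the classical upper bound D(F) = O(log^2 chi(F)), so a separation of the following form is the best possible up to lower-order terms (restatement ours, in LaTeX notation):

  D(F) \ge \big(\log \chi(F)\big)^{2 - o(1)},

where D denotes deterministic communication complexity and \chi the partition number.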

Cite as

Andris Ambainis, Martins Kokainis, and Robin Kothari. Nearly Optimal Separations Between Communication (or Query) Complexity and Partitions. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 4:1-4:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{ambainis_et_al:LIPIcs.CCC.2016.4,
  author =	{Ambainis, Andris and Kokainis, Martins and Kothari, Robin},
  title =	{{Nearly Optimal Separations Between Communication (or Query) Complexity and Partitions}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{4:1--4:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.4},
  URN =		{urn:nbn:de:0030-drops-58471},
  doi =		{10.4230/LIPIcs.CCC.2016.4},
  annote =	{Keywords: Query Complexity, Communication Complexity, Subcube Partition Complexity, Partition Bound}
}
A Composition Theorem for Conical Juntas

Authors: Mika Göös and T. S. Jayram


Abstract
We describe a general method of proving degree lower bounds for conical juntas (nonnegative combinations of conjunctions) that compute recursively defined boolean functions. Such lower bounds are known to carry over to communication complexity. We give two applications:
- AND-OR trees. We show a near-optimal ~Omega(n^{0.753...}) randomised communication lower bound for the recursive NAND function (a.k.a. AND-OR tree). This answers an open question posed by Beame and Lawry.
- Majority trees. We show an Omega(2.59^k) randomised communication lower bound for the 3-majority tree of height k. This improves over the state-of-the-art already in the context of randomised decision tree complexity.

Cite as

Mika Göös and T. S. Jayram. A Composition Theorem for Conical Juntas. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 5:1-5:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{goos_et_al:LIPIcs.CCC.2016.5,
  author =	{G\"{o}\"{o}s, Mika and Jayram, T. S.},
  title =	{{A Composition Theorem for Conical Juntas}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{5:1--5:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.5},
  URN =		{urn:nbn:de:0030-drops-58497},
  doi =		{10.4230/LIPIcs.CCC.2016.5},
  annote =	{Keywords: Composition theorems, conical juntas}
}
Tight Bounds for Communication-Assisted Agreement Distillation

Authors: Venkatesan Guruswami and Jaikumar Radhakrishnan


Abstract
Suppose Alice holds a uniformly random string X in {0,1}^N and Bob holds a noisy version Y of X where each bit of X is flipped independently with probability epsilon in [0,1/2]. Alice and Bob would like to extract a common random string of min-entropy at least k. In this work, we establish the communication versus success probability trade-off for this problem by giving a protocol and a matching lower bound (under the restriction that the string to be agreed upon is determined by Alice's input X). Specifically, we prove that in order for Alice and Bob to agree on a common string with probability 2^{-gamma*k} (gamma*k >= 1), the optimal communication (up to o(k) terms, and achievable for large N) is precisely (C*(1-gamma) - 2*sqrt{C*(1-C)*gamma}) * k, where C := 4*epsilon*(1-epsilon). In particular, the optimal communication to achieve Omega(1) agreement probability approaches 4*epsilon*(1-epsilon)*k. We also consider the case when Y is the output of the binary erasure channel on X, where each bit of Y equals the corresponding bit of X with probability 1-epsilon and is otherwise erased (that is, replaced by a "?"). In this case, the communication required becomes (epsilon*(1-gamma) - 2*sqrt{epsilon*(1-epsilon)*gamma}) * k. In particular, the optimal communication to achieve Omega(1) agreement probability approaches epsilon*k, and with no communication the optimal agreement probability approaches 2^{-((1-sqrt{1-epsilon})/(1+sqrt{1-epsilon}))*k}. Our protocols are based on covering codes and extend the approach of Bogdanov and Mossel (2011) for the zero-communication case. Our lower bounds rely on hypercontractive inequalities. For the model of bit-flips, our argument extends the approach of Bogdanov and Mossel (2011) by allowing communication; for the erasure model, to the best of our knowledge the needed hypercontractivity statement was not studied before, and it was established (prompted by our application) by Nair and Wang (2015). We also obtain information complexity lower bounds for these tasks, and together with our protocol, they shed light on the recently popular "most informative Boolean function" conjecture of Courtade and Kumar.
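
The optimal-communication bounds display as (restated from the abstract in LaTeX; the BSC/BEC labels for the bit-flip and erasure models are ours):

  \mathrm{BSC}: \big(C(1-\gamma) - 2\sqrt{C(1-C)\gamma}\big) \cdot k, \quad C := 4\epsilon(1-\epsilon); \qquad \mathrm{BEC}: \big(\epsilon(1-\gamma) - 2\sqrt{\epsilon(1-\epsilon)\gamma}\big) \cdot k,

for target agreement probability 2^{-\gamma k} with \gamma k \ge 1.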

Cite as

Venkatesan Guruswami and Jaikumar Radhakrishnan. Tight Bounds for Communication-Assisted Agreement Distillation. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 6:1-6:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.CCC.2016.6,
  author =	{Guruswami, Venkatesan and Radhakrishnan, Jaikumar},
  title =	{{Tight Bounds for Communication-Assisted Agreement Distillation}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{6:1--6:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.6},
  URN =		{urn:nbn:de:0030-drops-58450},
  doi =		{10.4230/LIPIcs.CCC.2016.6},
  annote =	{Keywords: communication complexity, covering codes, hypercontractivity, information theory, lower bounds, pseudorandomness}
}
New Extractors for Interleaved Sources

Authors: Eshan Chattopadhyay and David Zuckerman


Abstract
We study how to extract randomness from a C-interleaved source, that is, a source composed of C independent sources whose bits or symbols are interleaved. We describe a simple approach for constructing such extractors that yields:
(1) For some delta > 0, c > 0, explicit extractors for 2-interleaved sources on {0,1}^{2n} when one source has min-entropy at least (1-delta)*n and the other has min-entropy at least c*log(n). The best previous construction, by Raz and Yehudayoff, worked only when both sources had entropy rate 1-delta.
(2) For some c > 0 and any large enough prime p, explicit extractors for 2-interleaved sources on [p]^{2n} when one source has min-entropy rate at least .51 and the other source has min-entropy rate at least (c*log(n))/n.
We use these to obtain the following applications:
(a) We introduce the class of any-order-small-space sources, generalizing the class of small-space sources studied by Kamp et al. We construct extractors for such sources with min-entropy rate close to 1/2. Using the Raz-Yehudayoff construction would require entropy rate close to 1.
(b) For any large enough prime p, we exhibit an explicit function f:[p]^{2n} -> {0,1} such that the randomized best-partition communication complexity of f with error 1/2-2^{-Omega(n)} is at least .24*n*log(p). Previously this was known only for a tiny constant instead of .24, and only for p=2, by Raz and Yehudayoff.
We also introduce non-malleable extractors in the interleaved model. For any large enough prime p, we give an explicit construction of a weak-seeded non-malleable extractor for sources over [p]^n with min-entropy rate .51. Nothing was known previously, even for almost full min-entropy.

Cite as

Eshan Chattopadhyay and David Zuckerman. New Extractors for Interleaved Sources. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 7:1-7:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{chattopadhyay_et_al:LIPIcs.CCC.2016.7,
  author =	{Chattopadhyay, Eshan and Zuckerman, David},
  title =	{{New Extractors for Interleaved Sources}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{7:1--7:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.7},
  URN =		{urn:nbn:de:0030-drops-58513},
  doi =		{10.4230/LIPIcs.CCC.2016.7},
  annote =	{Keywords: extractor, derandomization, explicit construction}
}
Non-Malleable Extractors - New Tools and Improved Constructions

Authors: Gil Cohen


Abstract
A non-malleable extractor is a seeded extractor with a very strong guarantee: the output of a non-malleable extractor obtained using a typical seed is close to uniform even conditioned on the output obtained using any other seed. The first contribution of this paper consists of two new and improved constructions of non-malleable extractors:
- We construct a non-malleable extractor with seed length O(log(n) * log(log(n))) that works for entropy Omega(log(n)). This improves upon a recent exciting construction by Chattopadhyay, Goyal, and Li (STOC'16) that has seed length O(log^{2}(n)) and requires entropy Omega(log^{2}(n)).
- Secondly, we construct a non-malleable extractor with optimal seed length O(log(n)) for entropy n/log^{O(1)}(n). Prior to this construction, non-malleable extractors with a logarithmic seed length, due to Li (FOCS'12), required entropy 0.49*n. Even non-malleable condensers with seed length O(log(n)), by Li (STOC'12), could only support linear entropy.
We further devise several tools for enhancing a given non-malleable extractor in a black-box manner. One such tool is an algorithm that reduces the entropy requirement of a non-malleable extractor at the expense of a slightly longer seed. A second algorithm increases the output length of a non-malleable extractor from constant to linear in the entropy of the source. We also devise an algorithm that transforms a non-malleable extractor into the so-called t-non-malleable extractor for any desired t. Besides being useful building blocks for our constructions, we consider these modular tools to be of independent interest.
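
For reference, the guarantee sketched above is usually formalized as follows (this is the standard definition, not quoted from the paper): nmExt: {0,1}^n x {0,1}^d -> {0,1}^m is a non-malleable extractor for entropy k with error epsilon if for every source X with min-entropy at least k and every function A: {0,1}^d -> {0,1}^d with no fixed points,

  \big(\mathrm{nmExt}(X, S),\ \mathrm{nmExt}(X, A(S)),\ S\big) \approx_{\epsilon} \big(U_m,\ \mathrm{nmExt}(X, A(S)),\ S\big),

where S is a uniformly random seed, U_m is uniform on {0,1}^m, and \approx_{\epsilon} denotes statistical distance at most epsilon.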

Cite as

Gil Cohen. Non-Malleable Extractors - New Tools and Improved Constructions. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 8:1-8:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{cohen:LIPIcs.CCC.2016.8,
  author =	{Cohen, Gil},
  title =	{{Non-Malleable Extractors - New Tools and Improved Constructions}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{8:1--8:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.8},
  URN =		{urn:nbn:de:0030-drops-58348},
  doi =		{10.4230/LIPIcs.CCC.2016.8},
  annote =	{Keywords: extractors, non-malleable, explicit constructions}
}
Pseudorandomness When the Odds are Against You

Authors: Sergei Artemenko, Russell Impagliazzo, Valentine Kabanets, and Ronen Shaltiel


Abstract
Impagliazzo and Wigderson (STOC 1997) showed that if E = DTIME(2^O(n)) requires size 2^Omega(n) circuits, then every time-T constant-error randomized algorithm can be simulated deterministically in time poly(T). However, such polynomial slowdown is a deal breaker when T = 2^(alpha*n), for a constant alpha > 0, as is the case for some randomized algorithms for NP-complete problems. Paturi and Pudlak (STOC 2010) observed that many such algorithms are obtained from randomized time-T algorithms, for T < 2^o(n), with large one-sided error 1-epsilon, for epsilon = 2^(-alpha*n), that are repeated 1/epsilon times to yield a constant-error randomized algorithm running in time T/epsilon = 2^((alpha+o(1))*n). We show that if E requires size 2^Omega(n) nondeterministic circuits, then there is a poly(n)-time epsilon-HSG (Hitting-Set Generator) H: {0,1}^(O(log(n)) + log(1/epsilon)) -> {0,1}^n, implying that time-T randomized algorithms with one-sided error 1-epsilon can be simulated in deterministic time poly(T)/epsilon. In particular, under this hardness assumption, the fastest known constant-error randomized algorithm for k-SAT (for k > 3) by Paturi et al. (J. ACM 2005) can be made deterministic with essentially the same time bound. This is the first hardness versus randomness tradeoff for algorithms for NP-complete problems. We address the necessity of our assumption by showing that HSGs with very low error imply hardness for nondeterministic circuits with "few" nondeterministic bits. Applebaum et al. (CCC 2015) showed that "black-box techniques" cannot achieve poly(n)-time computable epsilon-PRGs (Pseudo-Random Generators) for epsilon = n^{-omega(1)}, even if we assume hardness against circuits with oracle access to an arbitrary language in the polynomial-time hierarchy. We introduce weaker variants of PRGs with relative error that do follow under the latter hardness assumption. Specifically, we say that a function G: {0,1}^r -> {0,1}^n is an (epsilon,delta)-re-PRG for a circuit C if (1-epsilon)*Pr[C(U_n)=1] - delta < Pr[C(G(U_r))=1] < (1+epsilon)*Pr[C(U_n)=1] + delta. We construct poly(n)-time computable (epsilon,delta)-re-PRGs with arbitrary polynomial stretch, epsilon = n^{-O(1)} and delta = 2^(-n^Omega(1)). We also construct PRGs with relative error that fool non-boolean distinguishers (in the sense introduced by Dubrov and Ishai (STOC 2006)). Our techniques use ideas from Paturi and Pudlak (STOC 2010), Trevisan and Vadhan (FOCS 2000), and Applebaum et al. (CCC 2015). Common themes in our proofs are "composing" a PRG/HSG with a combinatorial object such as dispersers and extractors, and the use of nondeterministic reductions in the spirit of Feige and Lund (Comp. Complexity 1997).
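
The re-PRG condition displays as (restated from the abstract in LaTeX):

  (1-\epsilon) \cdot \Pr[C(U_n)=1] - \delta \;<\; \Pr[C(G(U_r))=1] \;<\; (1+\epsilon) \cdot \Pr[C(U_n)=1] + \delta,

i.e. G fools C up to relative error \epsilon and additive error \delta; the construction achieves \epsilon = n^{-O(1)} and \delta = 2^{-n^{\Omega(1)}}.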

Cite as

Sergei Artemenko, Russell Impagliazzo, Valentine Kabanets, and Ronen Shaltiel. Pseudorandomness When the Odds are Against You. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 9:1-9:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{artemenko_et_al:LIPIcs.CCC.2016.9,
  author =	{Artemenko, Sergei and Impagliazzo, Russell and Kabanets, Valentine and Shaltiel, Ronen},
  title =	{{Pseudorandomness When the Odds are Against You}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{9:1--9:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.9},
  URN =		{urn:nbn:de:0030-drops-58375},
  doi =		{10.4230/LIPIcs.CCC.2016.9},
  annote =	{Keywords: Derandomization, pseudorandom generator, hitting-set generator, relative error}
}
Learning Algorithms from Natural Proofs

Authors: Marco L. Carmosino, Russell Impagliazzo, Valentine Kabanets, and Antonina Kolokolova


Abstract
Based on Hastad's (1986) circuit lower bounds, Linial, Mansour, and Nisan (1993) gave a quasipolytime learning algorithm for AC^0 (constant-depth circuits with AND, OR, and NOT gates), in the PAC model over the uniform distribution. It was an open question to get a learning algorithm (of any kind) for the class of AC^0[p] circuits (constant-depth, with AND, OR, NOT, and MOD_p gates for a prime p). Our main result is a quasipolytime learning algorithm for AC^0[p] in the PAC model over the uniform distribution with membership queries. This algorithm is an application of a general connection we show to hold between natural proofs (in the sense of Razborov and Rudich (1997)) and learning algorithms. We argue that a natural proof of a circuit lower bound against any (sufficiently powerful) circuit class yields a learning algorithm for the same circuit class. As the lower bounds against AC^0[p] by Razborov (1987) and Smolensky (1987) are natural, we obtain our learning algorithm for AC^0[p].

Cite as

Marco L. Carmosino, Russell Impagliazzo, Valentine Kabanets, and Antonina Kolokolova. Learning Algorithms from Natural Proofs. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 10:1-10:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{carmosino_et_al:LIPIcs.CCC.2016.10,
  author =	{Carmosino, Marco L. and Impagliazzo, Russell and Kabanets, Valentine and Kolokolova, Antonina},
  title =	{{Learning Algorithms from Natural Proofs}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{10:1--10:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.10},
  URN =		{urn:nbn:de:0030-drops-58557},
  doi =		{10.4230/LIPIcs.CCC.2016.10},
  annote =	{Keywords: natural proofs, circuit complexity, lower bounds, learning, compression}
}
Decoding Reed-Muller Codes Over Product Sets

Authors: John Y. Kim and Swastik Kopparty


Abstract
We give a polynomial time algorithm to decode multivariate polynomial codes of degree d up to half their minimum distance, when the evaluation points are an arbitrary product set S^m, for every d < |S|. Previously known algorithms could achieve this only if the set S has some very special algebraic structure, or if the degree d is significantly smaller than |S|. We also give a near-linear time algorithm, which is based on tools from list-decoding, to decode these codes from nearly half their minimum distance, provided d < (1-epsilon)|S| for constant epsilon > 0. Our result gives an m-dimensional generalization of the well known decoding algorithms for Reed-Solomon codes, and can be viewed as giving an algorithmic version of the Schwartz-Zippel lemma.

Cite as

John Y. Kim and Swastik Kopparty. Decoding Reed-Muller Codes Over Product Sets. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 11:1-11:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{kim_et_al:LIPIcs.CCC.2016.11,
  author =	{Kim, John Y. and Kopparty, Swastik},
  title =	{{Decoding Reed-Muller Codes Over Product Sets}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{11:1--11:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.11},
  URN =		{urn:nbn:de:0030-drops-58352},
  doi =		{10.4230/LIPIcs.CCC.2016.11},
  annote =	{Keywords: polynomial codes, Reed-Muller codes, coding theory, error-correcting codes}
}
Lower Bounds for Constant Query Affine-Invariant LCCs and LTCs

Authors: Arnab Bhattacharyya and Sivakanth Gopi


Abstract
Affine-invariant codes are codes whose coordinates form a vector space over a finite field and which are invariant under affine transformations of the coordinate space. They form a natural, well-studied class of codes; they include popular codes such as Reed-Muller and Reed-Solomon. A particularly appealing feature of affine-invariant codes is that they seem well-suited to admit local correctors and testers. In this work, we give lower bounds on the length of locally correctable and locally testable affine-invariant codes with constant query complexity. We show that if a code C subset Sigma^{K^n} is an r-query locally correctable code (LCC), where K is a finite field and Sigma is a finite alphabet, then the number of codewords in C is at most exp(O_{K, r, |Sigma|}(n^{r-1})). Also, we show that if C subset Sigma^{K^n} is an r-query locally testable code (LTC), then the number of codewords in C is at most exp(O_{K, r, |Sigma|}(n^{r-2})). The dependence on n in these bounds is tight for constant-query LCCs/LTCs, since Guo, Kopparty and Sudan (ITCS 2013) construct affine-invariant codes via lifting that have the same asymptotic tradeoffs. Note that our result holds for non-linear codes, whereas previously, Ben-Sasson and Sudan (RANDOM 2011) assumed linearity to derive similar results. Our analysis uses higher-order Fourier analysis. In particular, we show that the codewords corresponding to an affine-invariant LCC/LTC must be far from each other with respect to the Gowers norm of an appropriate order. This then allows us to bound the number of codewords, using known decomposition theorems which approximate any bounded function in terms of a finite number of low-degree non-classical polynomials, up to a small error in the Gowers norm.
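
The two codeword bounds display as (restated from the abstract in LaTeX):

  |C| \le \exp\big(O_{K, r, |\Sigma|}(n^{r-1})\big) \text{ for } r\text{-query LCCs}, \qquad |C| \le \exp\big(O_{K, r, |\Sigma|}(n^{r-2})\big) \text{ for } r\text{-query LTCs}.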

Cite as

Arnab Bhattacharyya and Sivakanth Gopi. Lower Bounds for Constant Query Affine-Invariant LCCs and LTCs. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 12:1-12:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bhattacharyya_et_al:LIPIcs.CCC.2016.12,
  author =	{Bhattacharyya, Arnab and Gopi, Sivakanth},
  title =	{{Lower Bounds for Constant Query Affine-Invariant LCCs and LTCs}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{12:1--12:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.12},
  URN =		{urn:nbn:de:0030-drops-58400},
  doi =		{10.4230/LIPIcs.CCC.2016.12},
  annote =	{Keywords: Locally correctable code, Locally testable code, Affine Invariance, Gowers uniformity norm}
}
Degree and Sensitivity: Tails of Two Distributions

Authors: Parikshit Gopalan, Rocco A. Servedio, and Avi Wigderson


Abstract
The sensitivity of a Boolean function f is the maximum, over all inputs x, of the number of sensitive coordinates of x (namely the number of Hamming neighbors of x with different f-value). The well-known sensitivity conjecture of Nisan (see also Nisan and Szegedy) states that every sensitivity-s Boolean function can be computed by a polynomial over the reals of degree s^{O(1)}. The best known upper bounds on degree, however, are exponential rather than polynomial in s. Our main result is an approximate version of the conjecture: every Boolean function with sensitivity s can be eps-approximated (in l_2) by a polynomial whose degree is s * polylog(1/eps). This is the first improvement on the folklore bound of s/eps. We prove this via a new "switching lemma for low-sensitivity functions" which establishes that a random restriction of a low-sensitivity function is very likely to have low decision tree depth. This is analogous to the well-known switching lemma for AC^0 circuits. Our proof analyzes the combinatorial structure of the graph G_f of sensitive edges of a Boolean function f. Understanding the structure of this graph is of independent interest as a means of understanding Boolean functions. We propose several new complexity measures for Boolean functions based on this graph, including tree sensitivity and component dimension, which may be viewed as relaxations of worst-case sensitivity, and we introduce some new techniques, such as proper walks and shifting, to analyze these measures. We use these notions to show that the graph of a function of full degree must be sufficiently complex, and that random restrictions of low-sensitivity functions are unlikely to lead to such complex graphs. We postulate a robust analogue of the sensitivity conjecture: if most inputs to a Boolean function f have low sensitivity, then most of the Fourier mass of f is concentrated on small subsets. We prove a lower bound on tree sensitivity in terms of decision tree depth, and show that a polynomial strengthening of this lower bound implies the robust conjecture. We feel that studying the graph G_f is interesting in its own right, and we hope that some of the notions and techniques we introduce in this work will be of use in its further study.
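
The sensitivity measure and the main approximation result display as (LaTeX notation ours, following the abstract):

  s(f) := \max_{x} \big|\{\, i : f(x) \ne f(x^{\oplus i}) \,\}\big|, \qquad \deg_{\epsilon}(f) \le s(f) \cdot \mathrm{polylog}(1/\epsilon),

where x^{\oplus i} is x with its i-th bit flipped and \deg_{\epsilon}(f) is the least degree of a real polynomial that \epsilon-approximates f in \ell_2.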

Cite as

Parikshit Gopalan, Rocco A. Servedio, and Avi Wigderson. Degree and Sensitivity: Tails of Two Distributions. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 13:1-13:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{gopalan_et_al:LIPIcs.CCC.2016.13,
  author =	{Gopalan, Parikshit and Servedio, Rocco A. and Wigderson, Avi},
  title =	{{Degree and Sensitivity: Tails of Two Distributions}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{13:1--13:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.13},
  URN =		{urn:nbn:de:0030-drops-58488},
  doi =		{10.4230/LIPIcs.CCC.2016.13},
  annote =	{Keywords: Boolean functions, random restrictions, Fourier analysis}
}
New Hardness Results for Graph and Hypergraph Colorings

Authors: Joshua Brakensiek and Venkatesan Guruswami


Abstract
Finding a proper coloring of a t-colorable graph G with t colors is a classic NP-hard problem when t >= 3. In this work, we investigate the approximate coloring problem, in which the objective is to find a proper c-coloring of G where c >= t. We show that for all t >= 3, it is NP-hard to find a c-coloring when c <= 2t-2. In the regime where t is small, this improves, via a unified approach, the previously best known hardness result of c <= max{2t-5, t + 2*floor(t/3) - 1} (Garey and Johnson, 1976; Khanna, Linial, and Safra, 1993; Guruswami and Khanna, 2000). For example, we show that 6-coloring a 4-colorable graph is NP-hard, improving on the NP-hardness of 5-coloring a 4-colorable graph. We also generalize this to related problems on the strong coloring of hypergraphs. A k-uniform hypergraph H is t-strong colorable (where t >= k) if there is a t-coloring of the vertices such that no two vertices in each hyperedge of H have the same color. We show that if t = ceiling(3k/2), then it is NP-hard to find a 2-coloring of the vertices of H such that no hyperedge is monochromatic. We conjecture that a similar hardness holds for t = k+1. We establish the NP-hardness of these problems by reducing from the hardness of the Label Cover problem, via a "dictatorship test" gadget graph. By combinatorially classifying all possible colorings of this graph, we can infer labels to provide to the Label Cover problem. This approach generalizes the "weak polymorphism" framework of (Austrin, Guruswami, Hastad, 2014), though interestingly our results are "PCP-free" in that they do not require any approximation gap in the starting Label Cover instance.

Cite as

Joshua Brakensiek and Venkatesan Guruswami. New Hardness Results for Graph and Hypergraph Colorings. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 14:1-14:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{brakensiek_et_al:LIPIcs.CCC.2016.14,
  author =	{Brakensiek, Joshua and Guruswami, Venkatesan},
  title =	{{New Hardness Results for Graph and Hypergraph Colorings}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{14:1--14:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.14},
  URN =		{urn:nbn:de:0030-drops-58291},
  doi =		{10.4230/LIPIcs.CCC.2016.14},
  annote =	{Keywords: hardness of approximation, graph coloring, hypergraph coloring, polymorphisms, combinatorics}
}
Invariance Principle on the Slice

Authors: Yuval Filmus, Guy Kindler, Elchanan Mossel, and Karl Wimmer


Abstract
We prove a non-linear invariance principle for the slice. As applications, we prove versions of Majority is Stablest, Bourgain's tail theorem, and the Kindler-Safra theorem for the slice. From the latter we deduce a stability version of the t-intersecting Erdos-Ko-Rado theorem.

Cite as

Yuval Filmus, Guy Kindler, Elchanan Mossel, and Karl Wimmer. Invariance Principle on the Slice. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 15:1-15:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{filmus_et_al:LIPIcs.CCC.2016.15,
  author =	{Filmus, Yuval and Kindler, Guy and Mossel, Elchanan and Wimmer, Karl},
  title =	{{Invariance Principle on the Slice}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{15:1--15:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.15},
  URN =		{urn:nbn:de:0030-drops-58236},
  doi =		{10.4230/LIPIcs.CCC.2016.15},
  annote =	{Keywords: analysis of boolean functions, invariance principle, Johnson association scheme, the slice}
}
Harmonicity and Invariance on Slices of the Boolean Cube

Authors: Yuval Filmus and Elchanan Mossel


Abstract
In a recent work with Kindler and Wimmer we proved an invariance principle for the slice for low-influence, low-degree functions. Here we provide an alternative proof for general low-degree functions, with no constraints on the influences. We show that any real-valued function on the slice, whose degree when written as a harmonic multi-linear polynomial is o(sqrt(n)), has approximately the same distribution under the slice and cube measures. Our proof is based on a novel decomposition of random increasing paths in the cube in terms of martingales and reverse martingales. While such decompositions have been used in the past for stationary reversible Markov chains, our decomposition is applied in a non-reversible, non-stationary setup. We also provide simple proofs for some known and some new properties of harmonic functions which are crucial for the proof. Finally, we provide independent simple proofs for the known facts that 1) one cannot distinguish between the slice and the cube based on functions of o(n) coordinates and 2) Boolean symmetric functions on the cube cannot be approximated under the uniform measure by functions whose sum of influences is o(sqrt(n)).

Cite as

Yuval Filmus and Elchanan Mossel. Harmonicity and Invariance on Slices of the Boolean Cube. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 16:1-16:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{filmus_et_al:LIPIcs.CCC.2016.16,
  author =	{Filmus, Yuval and Mossel, Elchanan},
  title =	{{Harmonicity and Invariance on Slices of the Boolean Cube}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{16:1--16:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.16},
  URN =		{urn:nbn:de:0030-drops-58240},
  doi =		{10.4230/LIPIcs.CCC.2016.16},
  annote =	{Keywords: analysis of boolean functions, invariance principle, Johnson association scheme, the slice}
}
On the Sum-of-Squares Degree of Symmetric Quadratic Functions

Authors: Troy Lee, Anupam Prakash, Ronald de Wolf, and Henry Yuen


Abstract
We study how well functions over the boolean hypercube of the form f_k(x)=(|x|-k)(|x|-k-1) can be approximated by sums of squares of low-degree polynomials, obtaining good bounds for the case of approximation in l_{infinity}-norm as well as in l_1-norm. We describe three complexity-theoretic applications: (1) a proof that the recent breakthrough lower bound of Lee, Raghavendra, and Steurer [Lee/Raghavendra/Steurer, STOC 2015] on the positive semidefinite extension complexity of the correlation and TSP polytopes cannot be improved further by showing better sum-of-squares degree lower bounds on l_1-approximation of f_k; (2) a proof that Grigoriev's lower bound on the degree of Positivstellensatz refutations for the knapsack problem is optimal, answering an open question from [Grigoriev, Comp. Compl. 2001]; (3) bounds on the query complexity of quantum algorithms whose expected output approximates such functions.
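
The function family under study displays as (restated from the abstract in LaTeX, with |x| the Hamming weight of x):

  f_k(x) = (|x| - k)(|x| - k - 1), \qquad x \in \{0,1\}^n,

and the quantity bounded is the least d such that a sum of squares of degree-d polynomials approximates f_k in the \ell_\infty or \ell_1 norm.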

Cite as

Troy Lee, Anupam Prakash, Ronald de Wolf, and Henry Yuen. On the Sum-of-Squares Degree of Symmetric Quadratic Functions. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 17:1-17:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{lee_et_al:LIPIcs.CCC.2016.17,
  author =	{Lee, Troy and Prakash, Anupam and de Wolf, Ronald and Yuen, Henry},
  title =	{{On the Sum-of-Squares Degree of Symmetric Quadratic Functions}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{17:1--17:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.17},
  URN =		{urn:nbn:de:0030-drops-58383},
  doi =		{10.4230/LIPIcs.CCC.2016.17},
  annote =	{Keywords: Sum-of-squares degree, approximation theory, Positivstellensatz refutations of knapsack, quantum query complexity in expectation, extension complexity}
}
Limits of Minimum Circuit Size Problem as Oracle

Authors: Shuichi Hirahara and Osamu Watanabe


Abstract
The Minimum Circuit Size Problem (MCSP) is known to be hard for statistical zero knowledge via a BPP-Turing reduction (Allender and Das, 2014), whereas establishing NP-hardness of MCSP via a polynomial-time many-one reduction is difficult (Murray and Williams, 2015) in the sense that it implies ZPP != EXP, which is a major open problem in computational complexity. In this paper, we provide strong evidence that current techniques cannot establish NP-hardness of MCSP, even under polynomial-time Turing reductions or randomized reductions: Specifically, we introduce the notion of oracle-independent reduction to MCSP, which captures all the currently known reductions. We say that a reduction to MCSP is oracle-independent if the reduction can be generalized to a reduction to MCSP^A for any oracle A, where MCSP^A denotes an oracle version of MCSP. We prove that no language outside P is reducible to MCSP via an oracle-independent polynomial-time Turing reduction. We also show that the class of languages reducible to MCSP via an oracle-independent randomized reduction that makes at most one query is contained in AM intersect coAM. Thus, NP-hardness of MCSP cannot be established via such oracle-independent reductions unless the polynomial hierarchy collapses. We also extend the previous results to the case of more general reductions: We prove that establishing NP-hardness of MCSP via a polynomial-time nonadaptive reduction implies ZPP != EXP, and that establishing NP-hardness of approximating circuit complexity via a polynomial-time Turing reduction also implies ZPP != EXP. Along the way, we prove that approximating Levin's Kolmogorov complexity is provably not EXP-hard under polynomial-time Turing reductions, which is of independent interest.

Cite as

Shuichi Hirahara and Osamu Watanabe. Limits of Minimum Circuit Size Problem as Oracle. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 18:1-18:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.CCC.2016.18,
  author =	{Hirahara, Shuichi and Watanabe, Osamu},
  title =	{{Limits of Minimum Circuit Size Problem as Oracle}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{18:1--18:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.18},
  URN =		{urn:nbn:de:0030-drops-58426},
  doi =		{10.4230/LIPIcs.CCC.2016.18},
  annote =	{Keywords: minimum circuit size problem, NP-completeness, randomized reductions, resource-bounded Kolmogorov complexity, Turing reductions}
}
New Non-Uniform Lower Bounds for Uniform Classes

Authors: Lance Fortnow and Rahul Santhanam


Abstract
We strengthen the nondeterministic hierarchy theorem for nondeterministic polynomial time to show that the lower bound holds against sub-linear advice. More formally, we show that for any constants d and d' such that 1 <= d < d', and for any time-constructible bound t = o(n^d), there is a language in NTIME(n^d) which is not in NTIME(t)/n^{1/d'}. The best known earlier separation, due to Fortnow, Santhanam and Trevisan, could only handle o(log(n)) bits of advice in the lower bound, and was not tight with respect to the time bounds. We generalize our hierarchy theorem to work for other syntactic complexity measures between polynomial time and polynomial space, including alternating polynomial time with any fixed number of alternations. We also use our technique to derive an almost-everywhere hierarchy theorem for nondeterministic classes which use a sub-linear amount of nondeterminism, i.e., the lower bound holds on all but finitely many input lengths rather than just on infinitely many. As one application of our main result, we derive a new lower bound for NP against NP-uniform nondeterministic circuits of size O(n^k) for any fixed k. This result is a significant strengthening of a result of Kannan, which states that not all of NP can be solved with P-uniform circuits of size O(n^k) for any fixed k. As another application, we show strong non-uniform lower bounds for the complexity class RE of languages decidable in randomized linear exponential time with one-sided error.
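
The main hierarchy statement displays as (restated from the abstract in LaTeX):

  \mathrm{NTIME}(n^d) \not\subseteq \mathrm{NTIME}(t)/n^{1/d'} \qquad \text{for all } 1 \le d < d' \text{ and time-constructible } t = o(n^d).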

Cite as

Lance Fortnow and Rahul Santhanam. New Non-Uniform Lower Bounds for Uniform Classes. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 19:1-19:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{fortnow_et_al:LIPIcs.CCC.2016.19,
  author =	{Fortnow, Lance and Santhanam, Rahul},
  title =	{{New Non-Uniform Lower Bounds for Uniform Classes}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{19:1--19:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.19},
  URN =		{urn:nbn:de:0030-drops-58503},
  doi =		{10.4230/LIPIcs.CCC.2016.19},
  annote =	{Keywords: Computational complexity, nondeterminism, nonuniform complexity}
}
New Characterizations in Turnstile Streams with Applications

Authors: Yuqing Ai, Wei Hu, Yi Li, and David P. Woodruff


Abstract
Recently, [Li, Nguyen, Woodruff, STOC 2014] showed that any 1-pass constant-probability streaming algorithm for computing a relation f on a vector x in {-m, -(m-1), ..., m}^n presented in the turnstile data stream model can be implemented by maintaining a linear sketch Ax mod q, where A is an r x n integer matrix and q = (q_1, ..., q_r) is a vector of positive integers. The space complexity of maintaining Ax mod q, not including the random bits used for sampling A and q, matches the space of the optimal algorithm. We give multiple strengthenings of this reduction, together with new applications. In particular, we show how to remove the following shortcomings of their reduction:
1. The Box Constraint. Their reduction applies only to algorithms that must be correct even if |x|_infinity = max_{i in [n]} |x_i| is allowed to be much larger than m at intermediate points in the stream, provided that x is in {-m, -(m-1), ..., m}^n at the end of the stream. We give a condition under which the optimal algorithm is a linear sketch even if it works only when promised that x is in {-m, -(m-1), ..., m}^n at all points in the stream. Using this, we show the first super-constant Omega(log m) bits lower bound for the problem of maintaining a counter up to an additive epsilon*m error in a turnstile stream, where epsilon is any constant in (0, 1/2). Previous lower bounds are based on communication complexity and are only for relative error approximation; interestingly, we do not know how to prove our result using communication complexity. More generally, we show the first super-constant Omega(log m) lower bound for additive approximation of l_p-norms; this bound is tight for p in [1, 2].
2. Negative Coordinates. Their reduction allows x_i to be negative while processing the stream. We show an equivalence between 1-pass algorithms and linear sketches Ax mod q in dynamic graph streams, or more generally, the strict turnstile model, in which for all i in [n], x_i is nonnegative at all points in the stream. Combined with [Assadi, Khanna, Li, Yaroslavtsev, SODA 2016], this resolves the 1-pass space complexity of approximating the maximum matching in a dynamic graph stream, answering a question in that work.
3. 1-Pass Restriction. Their reduction only applies to 1-pass data stream algorithms in the turnstile model, while there exist algorithms for heavy hitters and for low-rank approximation which provably do better with multiple passes. We extend the reduction to algorithms which make any number of passes, showing the optimal algorithm is to choose a new linear sketch at the beginning of each pass, based on the output of previous passes.
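
As a toy illustration of the central object here - maintaining a linear sketch Ax mod q under turnstile updates - the following minimal Python sketch shows the mechanics. It is illustrative only: A and q are drawn arbitrarily below, whereas the results above concern how an optimal algorithm samples them.

import random

class LinearSketch:
    # Maintains y with y[j] = (A x)_j mod q[j] for the current stream vector x,
    # without ever storing x itself.
    def __init__(self, n, r, max_modulus=2**16, seed=0):
        rng = random.Random(seed)
        self.q = [rng.randrange(2, max_modulus) for _ in range(r)]  # positive moduli (arbitrary here)
        self.A = [[rng.randrange(self.q[j]) for _ in range(n)] for j in range(r)]  # r x n integer matrix
        self.y = [0] * r

    def update(self, i, delta):
        # Turnstile update x_i += delta (delta may be negative); by linearity
        # it folds directly into the sketch, coordinate by coordinate.
        for j in range(len(self.y)):
            self.y[j] = (self.y[j] + self.A[j][i] * delta) % self.q[j]

# Usage: feed a stream of (coordinate, increment) updates.
sketch = LinearSketch(n=10, r=3)
for i, delta in [(0, 5), (3, -2), (0, -1), (7, 4)]:
    sketch.update(i, delta)
print(sketch.y)  # the sketch of the final vector x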

Cite as

Yuqing Ai, Wei Hu, Yi Li, and David P. Woodruff. New Characterizations in Turnstile Streams with Applications. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 20:1-20:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{ai_et_al:LIPIcs.CCC.2016.20,
  author =	{Ai, Yuqing and Hu, Wei and Li, Yi and Woodruff, David P.},
  title =	{{New Characterizations in Turnstile Streams with Applications}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{20:1--20:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.20},
  URN =		{urn:nbn:de:0030-drops-58337},
  doi =		{10.4230/LIPIcs.CCC.2016.20},
  annote =	{Keywords: communication complexity, data streams, dynamic graph streams, norm estimation}
}
Document
Invited Talk
Evolution and Computation (Invited Talk)

Authors: Nisheeth K. Vishnoi


Abstract
Over the last two centuries there have been tremendous scientific and mathematical advances in our understanding of evolution, life and its mysteries. Recently, the relatively new and powerful tool of computation has joined forces with these disciplines to develop this understanding further: the underlying tenet is that several natural processes, including evolution itself, can be viewed as computing or optimizing something - evolution is computation. Furthermore, as in computation, efficiency is an important consideration in evolution. As many of these evolutionary processes are described using the language of dynamical systems, this entails understanding how quickly such systems can attain their equilibria. This endeavor not only has the potential to give us fundamental insights into life, it holds the promise that we will unveil new computational models and techniques. In this talk we will see some vignettes of this interplay between evolution and computation.

Cite as

Nisheeth K. Vishnoi. Evolution and Computation (Invited Talk). In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, p. 21:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{vishnoi:LIPIcs.CCC.2016.21,
  author =	{Vishnoi, Nisheeth K.},
  title =	{{Evolution and Computation}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{21:1--21:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.21},
  URN =		{urn:nbn:de:0030-drops-58576},
  doi =		{10.4230/LIPIcs.CCC.2016.21},
  annote =	{Keywords: Evolution, Dynamical Systems, Algorithms, Complexity}
}
Document
Tight SoS-Degree Bounds for Approximate Nash Equilibria

Authors: Aram Harrow, Anand V. Natarajan, and Xiaodi Wu


Abstract
Nash equilibria always exist, but are widely conjectured to require time to find that is exponential in the number of strategies, even for two-player games. By contrast, a simple quasi-polynomial time algorithm, due to Lipton, Markakis and Mehta (LMM), can find approximate Nash equilibria, in which no player can improve their utility by more than epsilon by changing their strategy. The LMM algorithm can also be used to find an approximate Nash equilibrium with near-maximal total welfare. Matching hardness results for this optimization problem were found assuming the hardness of the planted-clique problem (by Hazan and Krauthgamer) and assuming the Exponential Time Hypothesis (by Braverman, Ko and Weinstein). In this paper we consider the application of the sum-of-squares (SoS) algorithm from convex optimization to the problem of optimizing over Nash equilibria. We show the first unconditional lower bounds on the number of levels of SoS needed to achieve a constant factor approximation to this problem. While it may seem that Nash equilibria do not naturally lend themselves to convex optimization, we also describe a simple LP (linear programming) hierarchy that can find an approximate Nash equilibrium in time comparable to that of the LMM algorithm, although neither algorithm is obviously a generalization of the other. This LP can be viewed as arising from the SoS algorithm at log(n) levels - matching our lower bounds. The lower bounds involve a modification of the Braverman-Ko-Weinstein embedding of CSPs into strategic games and techniques from sum-of-squares proof systems. The upper bound (i.e. analysis of the LP) uses information-theory techniques that have been recently applied to other linear- and semidefinite-programming hierarchies.
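
To make the approximation notion concrete, here is a minimal Python checker (our illustration, not the LMM algorithm itself) for the epsilon-Nash condition: no player gains more than epsilon by a unilateral deviation, and for mixed strategies it suffices to check deviations to pure strategies.

import numpy as np

def is_eps_nash(R, C, x, y, eps):
    """R, C: payoff matrices for the row/column player; x, y: mixed strategies."""
    row_val = x @ R @ y                # row player's expected payoff
    col_val = x @ C @ y                # column player's expected payoff
    best_row_dev = (R @ y).max()       # best pure deviation for the row player
    best_col_dev = (x @ C).max()       # best pure deviation for the column player
    return best_row_dev - row_val <= eps and best_col_dev - col_val <= eps

# Matching pennies: the uniform strategies are an exact (hence epsilon-) equilibrium.
R = np.array([[1.0, -1.0], [-1.0, 1.0]])
C = -R
u = np.array([0.5, 0.5])
print(is_eps_nash(R, C, u, u, eps=0.1))   # True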

Cite as

Aram Harrow, Anand V. Natarajan, and Xiaodi Wu. Tight SoS-Degree Bounds for Approximate Nash Equilibria. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 22:1-22:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{harrow_et_al:LIPIcs.CCC.2016.22,
  author =	{Harrow, Aram and Natarajan, Anand V. and Wu, Xiaodi},
  title =	{{Tight SoS-Degree Bounds for Approximate Nash Equilibria}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{22:1--22:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.22},
  URN =		{urn:nbn:de:0030-drops-58565},
  doi =		{10.4230/LIPIcs.CCC.2016.22},
  annote =	{Keywords: Approximate Nash Equilibrium, Sum of Squares, LP, SDP}
}
Document
Understanding PPA-Completeness

Authors: Xiaotie Deng, Jack R. Edmonds, Zhe Feng, Zhengyang Liu, Qi Qi, and Zeying Xu


Abstract
We consider the problem of finding a fully colored base triangle on the 2-dimensional Möbius band under the standard boundary condition, proving it to be PPA-complete. The proof is based on a construction for the DPZP problem, that of finding a zero point under a discrete version of a continuity condition. It further derives PPA-completeness for versions on the Möbius band of other related discrete fixed point type problems, and a special version of the Tucker problem, finding an edge such that if the value of one end vertex is x, the other is -x, given a special anti-symmetry boundary condition. More generally, this applies to other non-orientable spaces, including the projective plane and the Klein bottle. However, since those models have a closed boundary, we rely on a version of PPA stated as follows: given one fixed point, find another fixed point. This formulation also keeps the presentation simple for an extension to a high-dimensional discrete fixed point problem on a non-orientable (nearly) hyper-grid with constant side length.

Cite as

Xiaotie Deng, Jack R. Edmonds, Zhe Feng, Zhengyang Liu, Qi Qi, and Zeying Xu. Understanding PPA-Completeness. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 23:1-23:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{deng_et_al:LIPIcs.CCC.2016.23,
  author =	{Deng, Xiaotie and Edmonds, Jack R. and Feng, Zhe and Liu, Zhengyang and Qi, Qi and Xu, Zeying},
  title =	{{Understanding PPA-Completeness}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{23:1--23:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.23},
  URN =		{urn:nbn:de:0030-drops-58310},
  doi =		{10.4230/LIPIcs.CCC.2016.23},
  annote =	{Keywords: Fixed Point Computation, PPA-Completeness}
}
Document
Polynomial Bounds for Decoupling, with Applications

Authors: Ryan O'Donnell and Yu Zhao


Abstract
Let f(x) = f(x_1, ..., x_n) = sum_{|S|<=k} a_S prod_{i in S} x_i be an n-variate real multilinear polynomial of degree at most k, where S subseteq [n] = {1, 2, ..., n}. For its one-block decoupled version, vf(y,z) = sum_{|S|<=k} a_S sum_{i in S} y_i prod_{j in S\{i}} z_j, we show tail-bound comparisons of the form Pr[|vf(y,z)| > C_k * t] <= D_k * Pr[|f(x)| > t]. Our constants C_k, D_k are significantly better than those known for "full decoupling". For example, when x, y, z are independent Gaussians we obtain C_k = D_k = O(k); when x, y, z are +/-1 random variables we obtain C_k = O(k^2), D_k = k^{O(k)}. By contrast, for full decoupling only C_k = D_k = k^{O(k)} is known in these settings. We describe consequences of these results for query complexity (related to conjectures of Aaronson and Ambainis) and for analysis of Boolean functions (including an optimal sharpening of the DFKO Inequality).
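
The definitions are easy to make executable. The following Python sketch (ours) evaluates a multilinear f from its coefficients a_S, and its one-block decoupled version vf, exactly as in the formulas above.

from math import prod

def evaluate(coeffs, x):
    """f(x) = sum_S a_S prod_{i in S} x_i; coeffs maps frozenset S -> a_S."""
    return sum(a * prod(x[i] for i in S) for S, a in coeffs.items())

def evaluate_decoupled(coeffs, y, z):
    """vf(y,z) = sum_S a_S sum_{i in S} y_i prod_{j in S\{i}} z_j."""
    return sum(
        a * sum(y[i] * prod(z[j] for j in S if j != i) for i in S)
        for S, a in coeffs.items()
    )

# f = 3*x0*x1 + 2*x2, so vf(y,z) = 3*(y0*z1 + y1*z0) + 2*y2.
coeffs = {frozenset({0, 1}): 3.0, frozenset({2}): 2.0}
print(evaluate(coeffs, [1, 1, 1]))                        # 5.0
print(evaluate_decoupled(coeffs, [1, 0, 1], [0, 1, 0]))   # 3*1*1 + 2*1 = 5.0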

Cite as

Ryan O'Donnell and Yu Zhao. Polynomial Bounds for Decoupling, with Applications. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 24:1-24:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{odonnell_et_al:LIPIcs.CCC.2016.24,
  author =	{O'Donnell, Ryan and Zhao, Yu},
  title =	{{Polynomial Bounds for Decoupling, with Applications}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{24:1--24:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.24},
  URN =		{urn:nbn:de:0030-drops-58520},
  doi =		{10.4230/LIPIcs.CCC.2016.24},
  annote =	{Keywords: Decoupling, Query Complexity, Fourier Analysis, Boolean Functions}
}
Document
Polynomials, Quantum Query Complexity, and Grothendieck's Inequality

Authors: Scott Aaronson, Andris Ambainis, Janis Iraids, Martins Kokainis, and Juris Smotrovs


Abstract
We show an equivalence between 1-query quantum algorithms and representations by degree-2 polynomials. Namely, a partial Boolean function f is computable by a 1-query quantum algorithm with error bounded by epsilon<1/2 iff f can be approximated by a degree-2 polynomial with error bounded by epsilon'<1/2. This result holds for two different notions of approximation by a polynomial: the standard definition of Nisan and Szegedy and the approximation by block-multilinear polynomials recently introduced by Aaronson and Ambainis [Aaronson/Ambainis, STOC 2015]. The proof uses Grothendieck's inequality to relate two matrix norms, with one norm corresponding to polynomial approximations and the other norm corresponding to quantum algorithms. We also show two results for polynomials of higher degree. First, there is a total Boolean function which requires ~Omega(n) quantum queries but can be represented by a block-multilinear polynomial of degree ~O(sqrt(n)). Thus, in the general case (for an arbitrary number of queries), block-multilinear polynomials are not equivalent to quantum algorithms. Second, for any constant degree k, the two notions of approximation by a polynomial (the standard and the block-multilinear) are equivalent. As a consequence, we solve an open problem from [Aaronson/Ambainis, STOC 2015], showing that one can estimate the value of any bounded degree-k polynomial p:{0,1}^n -> [-1,1] with O(n^{1-1/(2k)}) queries.

Cite as

Scott Aaronson, Andris Ambainis, Janis Iraids, Martins Kokainis, and Juris Smotrovs. Polynomials, Quantum Query Complexity, and Grothendieck's Inequality. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 25:1-25:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{aaronson_et_al:LIPIcs.CCC.2016.25,
  author =	{Aaronson, Scott and Ambainis, Andris and Iraids, Janis and Kokainis, Martins and Smotrovs, Juris},
  title =	{{Polynomials, Quantum Query Complexity, and Grothendieck's Inequality}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{25:1--25:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.25},
  URN =		{urn:nbn:de:0030-drops-58394},
  doi =		{10.4230/LIPIcs.CCC.2016.25},
  annote =	{Keywords: quantum algorithms, Boolean functions, approximation by polynomials, Grothendieck's inequality}
}
Document
Sculpting Quantum Speedups

Authors: Scott Aaronson and Shalev Ben-David


Abstract
Given a problem which is intractable for both quantum and classical algorithms, can we find a sub-problem for which quantum algorithms provide an exponential advantage? We refer to this problem as the "sculpting problem." In this work, we give a full characterization of sculptable functions in the query complexity setting. We show that a total function f can be restricted to a promise P such that Q(f|_P)=O(polylog(N)) and R(f|_P)=N^{Omega(1)}, if and only if f has a large number of inputs with large certificate complexity. The proof uses some interesting techniques: for one direction, we introduce new relationships between randomized and quantum query complexity in various settings, and for the other direction, we use a recent result from communication complexity due to Klartag and Regev. We also characterize sculpting for other query complexity measures, such as R(f) vs. R_0(f) and R_0(f) vs. D(f). Along the way, we prove some new relationships for quantum query complexity: for example, a nearly quadratic relationship between Q(f) and D(f) whenever the promise of f is small. This contrasts with the recent super-quadratic query complexity separations, showing that the maximum gap between classical and quantum query complexities is indeed quadratic in various settings - just not for total functions! Lastly, we investigate sculpting in the Turing machine model. We show that if there is any BPP-bi-immune language in BQP, then every language outside BPP can be restricted to a promise which places it in PromiseBQP but not in PromiseBPP. Under a weaker assumption, that some problem in BQP is hard on average for P/poly, we show that every paddable language outside BPP is sculptable in this way.

Cite as

Scott Aaronson and Shalev Ben-David. Sculpting Quantum Speedups. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 26:1-26:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{aaronson_et_al:LIPIcs.CCC.2016.26,
  author =	{Aaronson, Scott and Ben-David, Shalev},
  title =	{{Sculpting Quantum Speedups}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{26:1--26:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.26},
  URN =		{urn:nbn:de:0030-drops-58538},
  doi =		{10.4230/LIPIcs.CCC.2016.26},
  annote =	{Keywords: Quantum Computing, Query Complexity, Decision Tree Complexity, Structural Complexity}
}
Document
A Linear Time Algorithm for Quantum 2-SAT

Authors: Niel de Beaudrap and Sevag Gharibian


Abstract
The Boolean constraint satisfaction problem 3-SAT is arguably the canonical NP-complete problem. In contrast, 2-SAT can not only be decided in polynomial time, but in fact in deterministic linear time. In 2006, Bravyi proposed a physically motivated generalization of k-SAT to the quantum setting, defining the problem "quantum k-SAT". He showed that quantum 2-SAT is also solvable in polynomial time on a classical computer, in particular in deterministic time O(n^4), assuming unit-cost arithmetic over a field extension of the rational numbers, where n is the number of variables. In this paper, we present an algorithm for quantum 2-SAT which runs in linear time, i.e. deterministic time O(n+m) for n and m the number of variables and clauses, respectively. Our approach exploits the transfer matrix techniques of Laumann et al. [QIC, 2010] used in the study of phase transitions for random quantum 2-SAT, and bears similarities with both the linear time 2-SAT algorithms of Even, Itai, and Shamir (based on backtracking) [SICOMP, 1976] and Aspvall, Plass, and Tarjan (based on strongly connected components) [IPL, 1979].
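
For contrast with the quantum algorithm, here is a compact Python sketch (ours) of the classical linear-time method of Aspvall, Plass, and Tarjan cited above, not the paper's algorithm: build the implication graph of the 2-CNF and compare the strongly connected components of each literal and its negation.

from collections import defaultdict

def solve_2sat(n, clauses):
    """n variables 1..n; each clause is a pair of nonzero ints, negative = negated.
    Returns a satisfying assignment as a dict, or None if unsatisfiable."""
    N = 2 * n
    def node(lit):                       # literal -> node: x_i -> 2(i-1), not-x_i -> 2(i-1)+1
        return 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)

    adj, radj = defaultdict(list), defaultdict(list)
    for a, b in clauses:                 # (a OR b) yields implications -a => b and -b => a
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            adj[u].append(v)
            radj[v].append(u)

    order, seen = [], [False] * N        # pass 1: DFS finish order on the implication graph
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                order.append(v)
                stack.pop()
            elif not seen[w]:
                seen[w] = True
                stack.append((w, iter(adj[w])))

    comp, c = [-1] * N, 0                # pass 2: label SCCs on the reversed graph,
    for s in reversed(order):            # in reverse finish order (topological order)
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    assign = {}
    for i in range(n):
        if comp[2 * i] == comp[2 * i + 1]:
            return None                  # x_i equivalent to its negation: unsatisfiable
        assign[i + 1] = comp[2 * i] > comp[2 * i + 1]   # true iff x_i is topologically later
    return assign

print(solve_2sat(2, [(1, 2), (-1, 2)]))  # x2 is forced to True; x1 may be either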

Cite as

Niel de Beaudrap and Sevag Gharibian. A Linear Time Algorithm for Quantum 2-SAT. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 27:1-27:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{debeaudrap_et_al:LIPIcs.CCC.2016.27,
  author =	{de Beaudrap, Niel and Gharibian, Sevag},
  title =	{{A Linear Time Algorithm for Quantum 2-SAT}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{27:1--27:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.27},
  URN =		{urn:nbn:de:0030-drops-58363},
  doi =		{10.4230/LIPIcs.CCC.2016.27},
  annote =	{Keywords: quantum 2-SAT, transfer matrix, strongly connected components, limited backtracking, local Hamiltonian}
}
Document
Complexity Classification of Two-Qubit Commuting Hamiltonians

Authors: Adam Bouland, Laura Mancinska, and Xue Zhang


Abstract
We classify two-qubit commuting Hamiltonians in terms of their computational complexity. Suppose one has a two-qubit commuting Hamiltonian H which one can apply to any pair of qubits, starting in a computational basis state. We prove a dichotomy theorem: either this model is efficiently classically simulable or it allows one to sample from probability distributions which cannot be sampled from classically unless the polynomial hierarchy collapses. Furthermore, the only simulable Hamiltonians are those which fail to generate entanglement. This shows that generic two-qubit commuting Hamiltonians can be used to perform computational tasks which are intractable for classical computers under plausible assumptions. Our proof makes use of new postselection gadgets and Lie theory.

Cite as

Adam Bouland, Laura Mancinska, and Xue Zhang. Complexity Classification of Two-Qubit Commuting Hamiltonians. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 28:1-28:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bouland_et_al:LIPIcs.CCC.2016.28,
  author =	{Bouland, Adam and Mancinska, Laura and Zhang, Xue},
  title =	{{Complexity Classification of Two-Qubit Commuting Hamiltonians}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{28:1--28:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.28},
  URN =		{urn:nbn:de:0030-drops-58469},
  doi =		{10.4230/LIPIcs.CCC.2016.28},
  annote =	{Keywords: Quantum Computing, Sampling Problems, Commuting Hamiltonians, IQP, Gate Classification Theorems}
}
Document
Identity Testing for Constant-Width, and Commutative, Read-Once Oblivious ABPs

Authors: Rohit Gurjar, Arpita Korwar, and Nitin Saxena


Abstract
We give improved hitting-sets for two special cases of Read-once Oblivious Arithmetic Branching Programs (ROABP). First is the case of an ROABP with known variable order. The best hitting-set known for this case had cost (nw)^{O(log(n))}, where n is the number of variables and w is the width of the ROABP. Even for a constant-width ROABP, nothing better than a quasi-polynomial bound was known. We improve the hitting-set complexity for the known-order case to n^{O(log(w))}. In particular, this gives the first polynomial time hitting-set for constant-width ROABP (known-order). However, our hitting-set works only over those fields whose characteristic is zero or large enough. To construct the hitting-set, we use the rank of the partial derivative matrix. Unlike previous approaches whose starting point is a monomial map, we use a polynomial map directly. The second case we consider is that of commutative ROABP. The best known hitting-set for this case had cost d^{O(log(w))}(nw)^{O(log(log(w)))}, where d is the individual degree. We improve this hitting-set complexity to (ndw)^{O(log(log(w)))}. We get this by achieving rank concentration more efficiently.
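
For readers outside PIT, the way a hitting set is used is simple. The Python sketch below (ours, with a stand-in set of points rather than the paper's construction) shows the generic blackbox test: declare the circuit zero iff it vanishes on every point of the hitting set.

# A hitting set H for a circuit class has the property that every nonzero
# polynomial computed in the class is nonzero at some point of H, so
# identity testing reduces to |H| blackbox evaluations.

def blackbox_pit(poly, hitting_set):
    """poly: blackbox returning the polynomial's value at a point.
    Declares 'zero' only if poly vanishes on the whole hitting set."""
    return all(poly(point) == 0 for point in hitting_set)

# Hypothetical stand-in points; the content of the paper is constructing a
# small H that works for ALL constant-width ROABPs simultaneously.
H = [(0, 0), (1, 2), (3, 1), (2, 5)]
print(blackbox_pit(lambda p: p[0] * p[1] - p[1], H))   # x*y - y is nonzero: False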

Cite as

Rohit Gurjar, Arpita Korwar, and Nitin Saxena. Identity Testing for Constant-Width, and Commutative, Read-Once Oblivious ABPs. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 29:1-29:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{gurjar_et_al:LIPIcs.CCC.2016.29,
  author =	{Gurjar, Rohit and Korwar, Arpita and Saxena, Nitin},
  title =	{{Identity Testing for Constant-Width, and Commutative, Read-Once Oblivious ABPs}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{29:1--29:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.29},
  URN =		{urn:nbn:de:0030-drops-58438},
  doi =		{10.4230/LIPIcs.CCC.2016.29},
  annote =	{Keywords: PIT, hitting-set, constant-width ROABPs, commutative ROABPs}
}
Document
Identity Testing and Lower Bounds for Read-k Oblivious Algebraic Branching Programs

Authors: Matthew Anderson, Michael A. Forbes, Ramprasad Saptharishi, Amir Shpilka, and Ben Lee Volk


Abstract
Read-k oblivious algebraic branching programs are a natural generalization of the well-studied model of read-once oblivious algebraic branching programs (ROABPs). In this work, we give an exponential lower bound of exp(n/k^{O(k)}) on the width of any read-k oblivious ABP computing some explicit multilinear polynomial f that is computed by a polynomial size depth-3 circuit. We also study the polynomial identity testing (PIT) problem for this model and obtain a white-box subexponential-time PIT algorithm. The algorithm runs in time 2^{~O(n^{1-1/2^{k-1}})} and needs white-box access only to the order in which the variables appear in the ABP.
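
A small Python sketch (ours, a toy instance) of the underlying model: an oblivious ABP computes an entry of a product of width-w matrices, each depending on a single variable, and "read-k" means each variable is read by at most k of the layers.

import numpy as np

def eval_abp(layers, x):
    """layers: list of (i, M) where M maps the value of variable x_i to a
    w x w matrix; the ABP's output is the (0,0) entry of the product."""
    w = layers[0][1](0).shape[0]
    acc = np.eye(w)
    for i, M in layers:
        acc = acc @ M(x[i])
    return acc[0, 0]

# A width-2 read-once oblivious ABP computing x0*x1 + 1 (hypothetical example).
L = [(0, lambda t: np.array([[t, 1.0], [0.0, 1.0]])),
     (1, lambda t: np.array([[t, 0.0], [1.0, 1.0]]))]
print(eval_abp(L, [2.0, 3.0]))   # 2*3 + 1 = 7.0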

Cite as

Matthew Anderson, Michael A. Forbes, Ramprasad Saptharishi, Amir Shpilka, and Ben Lee Volk. Identity Testing and Lower Bounds for Read-k Oblivious Algebraic Branching Programs. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 30:1-30:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{anderson_et_al:LIPIcs.CCC.2016.30,
  author =	{Anderson, Matthew and Forbes, Michael A. and Saptharishi, Ramprasad and Shpilka, Amir and Volk, Ben Lee},
  title =	{{Identity Testing and Lower Bounds for Read-k Oblivious Algebraic Branching Programs}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{30:1--30:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.30},
  URN =		{urn:nbn:de:0030-drops-58255},
  doi =		{10.4230/LIPIcs.CCC.2016.30},
  annote =	{Keywords: Algebraic Complexity, Lower Bounds, Derandomization, Polynomial Identity Testing}
}
Document
Reconstruction of Real Depth-3 Circuits with Top Fan-In 2

Authors: Gaurav Sinha


Abstract
Reconstruction of arithmetic circuits has been heavily studied in the past few years and has connections to proving lower bounds and deterministic identity testing. In this paper we present a polynomial time randomized algorithm for reconstructing SigmaPiSigma(2) circuits over F (char(F)=0), i.e. depth-3 circuits with fan-in 2 at the top addition gate and having coefficients from a field of characteristic 0. The algorithm needs only a blackbox query access to the polynomial f in F[x_1,..., x_n] of degree d, computable by a SigmaPiSigma(2) circuit C. In addition, we assume that the "simple rank" of this polynomial (essential number of variables after removing the gcd of the two multiplication gates) is bigger than a fixed constant. Our algorithm runs in time poly(n,d) and returns an equivalent SigmaPiSigma(2) circuit (with high probability). The problem of reconstructing SigmaPiSigma(2) circuits over finite fields was first proposed by Shpilka [Shpilka, STOC 2007]. The generalization to SigmaPiSigma(k) circuits, k = O(1) (over finite fields) was addressed by Karnin and Shpilka in [Karnin/Shpilka, CCC 2009]. The techniques in these previous works involve iterating over all objects of certain kinds over the ambient field, and thus the running time depends on the size of the field F. Their reconstruction algorithm uses lower bounds on the lengths of Linear Locally Decodable Codes with 2 queries. In our setting, such ideas immediately pose a problem and we need new ideas to handle the case of the characteristic 0 field F. Our main techniques are based on the use of Quantitative Sylvester Gallai Theorems from the work of Barak et al. [Barak/Dvir/Wigderson/Yehudayoff, STOC 2011] to find a small collection of "nice" subspaces to project onto. The heart of our paper lies in subtle applications of the Quantitative Sylvester Gallai theorems to prove why projections w.r.t. the "nice" subspaces can be "glued". We also use Brill's Equations from [Gelfand/Kapranov/Zelevinsky, 1994] to construct a small set of candidate linear forms (containing linear forms from both gates). Another important technique which comes very handy is the polynomial time randomized algorithm for factoring multivariate polynomials given by Kaltofen [Kaltofen/Trager, J. Symb. Comp. 1990].

Cite as

Gaurav Sinha. Reconstruction of Real Depth-3 Circuits with Top Fan-In 2. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 31:1-31:53, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{sinha:LIPIcs.CCC.2016.31,
  author =	{Sinha, Gaurav},
  title =	{{Reconstruction of Real Depth-3 Circuits with Top Fan-In 2}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{31:1--31:53},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.31},
  URN =		{urn:nbn:de:0030-drops-58545},
  doi =		{10.4230/LIPIcs.CCC.2016.31},
  annote =	{Keywords: Reconstruction, SigmaPiSigma(2), Sylvester-Gallai, Brill's Equations}
}
Document
Proof Complexity Lower Bounds from Algebraic Circuit Complexity

Authors: Michael A. Forbes, Amir Shpilka, Iddo Tzameret, and Avi Wigderson


Abstract
We give upper and lower bounds on the power of subsystems of the Ideal Proof System (IPS), the algebraic proof system recently proposed by Grochow and Pitassi, where the circuits comprising the proof come from various restricted algebraic circuit classes. This mimics an established research direction in the boolean setting for subsystems of Extended Frege proofs whose lines are circuits from restricted boolean circuit classes. Essentially all of the subsystems considered in this paper can simulate the well-studied Nullstellensatz proof system, and prior to this work there were no known lower bounds when measuring proof size by the algebraic complexity of the polynomials (except with respect to degree, or to sparsity). Our main contributions are two general methods of converting certain algebraic lower bounds into proof complexity ones. Both require arithmetic lower bounds stronger than the common ones: they should hold not for a specific polynomial but for a whole family defined by it. These may be likened to some of the methods by which Boolean circuit lower bounds are turned into related proof-complexity ones, especially the "feasible interpolation" technique. We establish algebraic lower bounds of these forms for several explicit polynomials, against a variety of classes, and infer the relevant proof complexity bounds. These yield separations between IPS subsystems, which we complement by simulations to create a partial structure theory for IPS systems. Our first method is a functional lower bound, a notion of Grigoriev and Razborov: a function f' from n-bit strings to a field such that any polynomial f agreeing with f' on the boolean cube requires large algebraic circuit complexity. We develop functional lower bounds for a variety of circuit classes (sparse polynomials, depth-3 powering formulas, read-once algebraic branching programs and multilinear formulas) where f'(x) equals 1/p(x) for a constant-degree polynomial p depending on the relevant circuit class. We believe these lower bounds are of independent interest in algebraic complexity, and show that they also imply lower bounds for the size of the corresponding IPS refutations for proving that the relevant polynomial p is non-zero over the boolean cube. In particular, we show super-polynomial lower bounds for refuting variants of the subset-sum axioms in these IPS subsystems. Our second method is to give lower bounds for multiples, that is, to give explicit polynomials all of whose (non-zero) multiples require large algebraic circuit complexity. By extending known techniques, we give lower bounds for multiples for various restricted circuit classes such as sparse polynomials, sums of powers of low-degree polynomials, and roABPs. These results are of independent interest, as we argue that lower bounds for multiples are the correct notion for instantiating the algebraic hardness versus randomness paradigm of Kabanets and Impagliazzo. Further, we show how such lower bounds for multiples extend to lower bounds for refutations in the corresponding IPS subsystem.
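
For a sense of what such refutations look like, here is a toy Nullstellensatz certificate (our worked example, not from the paper) for an unsatisfiable subset-sum instance over the boolean cube, verified with sympy. IPS measures the circuit size of such certificate polynomials.

from sympy import symbols, expand, Rational

# The subset-sum axiom x1 + x2 - 3 = 0 has no {0,1} solution, certified by
# polynomials g, a1, a2 with
#   g*(x1 + x2 - 3) + a1*(x1^2 - x1) + a2*(x2^2 - x2) = 1.
x1, x2 = symbols('x1 x2')
f = x1 + x2 - 3
g = Rational(-1, 3) - Rational(1, 6) * x1 - Rational(1, 6) * x2 - Rational(1, 3) * x1 * x2

# Multiply out g*f and eliminate x_i^2 via the boolean axioms x_i^2 = x_i;
# whatever is subtracted in that step is the a_i*(x_i^2 - x_i) part.
r = expand(g * f)
r = r.subs(x1**2, x1).subs(x2**2, x2)
print(expand(r))   # 1, so the system is unsatisfiable over {0,1}^2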

Cite as

Michael A. Forbes, Amir Shpilka, Iddo Tzameret, and Avi Wigderson. Proof Complexity Lower Bounds from Algebraic Circuit Complexity. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 32:1-32:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{forbes_et_al:LIPIcs.CCC.2016.32,
  author =	{Forbes, Michael A. and Shpilka, Amir and Tzameret, Iddo and Wigderson, Avi},
  title =	{{Proof Complexity Lower Bounds from Algebraic Circuit Complexity}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{32:1--32:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.32},
  URN =		{urn:nbn:de:0030-drops-58321},
  doi =		{10.4230/LIPIcs.CCC.2016.32},
  annote =	{Keywords: Proof Complexity, Algebraic Complexity, Nullstellensatz, Subset-Sum}
}
Document
Functional Lower Bounds for Arithmetic Circuits and Connections to Boolean Circuit Complexity

Authors: Michael A. Forbes, Mrinal Kumar, and Ramprasad Saptharishi


Abstract
We say that a circuit C over a field F functionally computes a polynomial P in F[x_1, x_2, ..., x_n] if for every x in {0,1}^n we have that C(x) = P(x). This is in contrast to syntactically computing P, when C = P as formal polynomials. In this paper, we study the question of proving lower bounds for homogeneous depth-3 and depth-4 arithmetic circuits for functional computation. We prove the following results: 1. Exponential lower bounds for homogeneous depth-3 arithmetic circuits for a polynomial in VNP. 2. Exponential lower bounds for homogeneous depth-4 arithmetic circuits with bounded individual degree for a polynomial in VNP. Our main motivation for this line of research comes from our observation that strong enough functional lower bounds for even very special depth-4 arithmetic circuits for the Permanent imply a separation between #P and ACC^0. Thus, improving the second result to get rid of the bounded individual degree condition could lead to substantial progress in boolean circuit complexity. Besides, it is known from a recent result of Kumar and Saptharishi [Kumar/Saptharishi, ECCC 2015] that over constant sized finite fields, strong enough average-case functional lower bounds for homogeneous depth-4 circuits imply superpolynomial lower bounds for homogeneous depth-5 circuits. Our proofs are based on a family of new complexity measures called shifted evaluation dimension, and might be of independent interest.
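
The distinction is easy to see in code. This tiny Python sketch (ours) checks functional agreement on the boolean cube, where for instance x^2 and x agree functionally but differ syntactically.

from itertools import product

def functionally_equal(C, P, n):
    """True iff C and P agree on every point of {0,1}^n."""
    return all(C(x) == P(x) for x in product((0, 1), repeat=n))

C = lambda x: x[0] ** 2          # syntactically the polynomial x0^2 ...
P = lambda x: x[0]               # ... functionally equal to x0 on the cube
print(functionally_equal(C, P, 1))   # True, although C != P as formal polynomials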

Cite as

Michael A. Forbes, Mrinal Kumar, and Ramprasad Saptharishi. Functional Lower Bounds for Arithmetic Circuits and Connections to Boolean Circuit Complexity. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 33:1-33:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{forbes_et_al:LIPIcs.CCC.2016.33,
  author =	{Forbes, Michael A. and Kumar, Mrinal and Saptharishi, Ramprasad},
  title =	{{Functional Lower Bounds for Arithmetic Circuits and Connections to Boolean Circuit Complexity}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{33:1--33:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.33},
  URN =		{urn:nbn:de:0030-drops-58266},
  doi =		{10.4230/LIPIcs.CCC.2016.33},
  annote =	{Keywords: boolean circuits, arithmetic circuits, lower bounds, functional computation}
}
Document
Arithmetic Circuits with Locally Low Algebraic Rank

Authors: Mrinal Kumar and Shubhangi Saraf


Abstract
In recent years there has been a flurry of activity proving lower bounds for homogeneous depth-4 arithmetic circuits, which has brought us very close to statements that are known to imply VP != VNP. It is a big question to go beyond homogeneity, and in this paper we make progress towards this by considering depth-4 circuits of low algebraic rank, which are a natural extension of homogeneous depth-4 arithmetic circuits. A depth-4 circuit is a representation of an N-variate, degree n polynomial P as P = sum_{i=1}^T Q_{i1} * Q_{i2} * ... * Q_{it} where the Q_{ij} are given by their monomial expansion. Homogeneity adds the constraint that for every i in [T], sum_{j} degree(Q_{ij}) = n. We study an extension where, for every i in [T], the algebraic rank of the set of polynomials {Q_{i1}, Q_{i2}, ..., Q_{it}} is at most some parameter k; we call these circuits of locally low algebraic rank. Already for k=n, these circuits are a strong generalization of the class of homogeneous depth-4 circuits, where in particular t<=n (and hence k<=n). We study lower bounds and polynomial identity tests for such circuits and prove the following results. 1. Lower bounds: We give an explicit family of polynomials {P_n} of degree n in N = n^{O(1)} variables in VNP, such that any circuit of locally low algebraic rank (with k <= n) computing P_n has size at least exp(Omega(sqrt(n)*log(N))). This strengthens and unifies two lines of work: it generalizes the recent exponential lower bounds for homogeneous depth-4 circuits [KLSS14, KS-full] as well as the Jacobian based lower bounds of Agrawal et al. which worked for such circuits in the restricted setting where T * k <= n. 2. Hitting sets: Consider the subclass of such circuits with bottom fan-in at most d. We show that if d and k are at most poly(log(N)), then there is an explicit hitting set for this subclass of size quasipolynomial in N and the size of the circuit. This strengthens a result of Forbes which showed such quasipolynomial sized hitting sets in the setting where d and t are at most poly(log(N)). A key technical ingredient of the proofs is a result which states that over any field of characteristic zero (or sufficiently large characteristic), up to a translation, every polynomial in a set of algebraically dependent polynomials can be written as a function of the polynomials in the transcendence basis. We believe this may be of independent interest. We combine this with shifted partial derivative based methods to obtain our final results.
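
Over characteristic zero, the algebraic rank appearing in the definition above can be computed by the classical Jacobian criterion: it equals the rank of the Jacobian matrix over the rational function field. A minimal sympy sketch (ours):

from sympy import symbols, Matrix

x, y = symbols('x y')
Q = [x + y, x * y, x**2 + y**2]          # Q3 = Q1^2 - 2*Q2, so the set is dependent
J = Matrix([[q.diff(v) for v in (x, y)] for q in Q])
print(J.rank())                           # generic rank 2 = algebraic rank of the set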

Cite as

Mrinal Kumar and Shubhangi Saraf. Arithmetic Circuits with Locally Low Algebraic Rank. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 34:1-34:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.CCC.2016.34,
  author =	{Kumar, Mrinal and Saraf, Shubhangi},
  title =	{{Arithmetic Circuits with Locally Low Algebraic Rank}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{34:1--34:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.34},
  URN =		{urn:nbn:de:0030-drops-58288},
  doi =		{10.4230/LIPIcs.CCC.2016.34},
  annote =	{Keywords: algebraic independence, arithmetic circuits, lower bounds}
}
Document
Sums of Products of Polynomials in Few Variables: Lower Bounds and Polynomial Identity Testing

Authors: Mrinal Kumar and Shubhangi Saraf


Abstract
We study the complexity of representing polynomials as a sum of products of polynomials in few variables. More precisely, we study representations of the form P = sum_{i=1}^T prod_{j=1}^d Q_{ij} such that each Q_{ij} is an arbitrary polynomial that depends on at most s variables. We prove the following results. 1. Over fields of characteristic zero, for every constant mu such that 0<=mu<=1, we give an explicit family of polynomials {P_{N}}, where P_{N} is of degree n in N = n^{O(1)} variables, such that any representation of the above type for P_{N} with s = N^{mu} requires Td >= n^{Omega(sqrt(n))}. This strengthens a recent result of Kayal and Saha [Kayal/Saha, ECCC 2014] which showed similar lower bounds for the model of sums of products of linear forms in few variables. It is known that any asymptotic improvement in the exponent of the lower bounds (even for s=sqrt(n)) would separate VP and VNP [Kayal/Saha, ECCC 2014]. 2. We obtain a deterministic subexponential time blackbox polynomial identity testing (PIT) algorithm for circuits computed by the above model when T and the individual degree of each variable in P are at most log^{O(1)}(N) and s<=N^{mu} for any constant mu<1/2. We get quasipolynomial running time when s<log^{O(1)}(N). The PIT algorithm is obtained by combining our lower bounds with the hardness-randomness tradeoffs developed in [Dvir/Shpilka/Yehudayoff, SIAM J. Comp. 2009; Kabanets/Impagliazzo, Comp. Compl. 2004]. To the best of our knowledge, this is the first nontrivial PIT algorithm for this model (even for the case s=2), and the first nontrivial PIT algorithm obtained from lower bounds for small depth circuits.

Cite as

Mrinal Kumar and Shubhangi Saraf. Sums of Products of Polynomials in Few Variables: Lower Bounds and Polynomial Identity Testing. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 35:1-35:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.CCC.2016.35,
  author =	{Kumar, Mrinal and Saraf, Shubhangi},
  title =	{{Sums of Products of Polynomials in Few Variables: Lower Bounds and Polynomial Identity Testing}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{35:1--35:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.35},
  URN =		{urn:nbn:de:0030-drops-58270},
  doi =		{10.4230/LIPIcs.CCC.2016.35},
  annote =	{Keywords: arithmetic circuits, lower bounds, polynomial identity testing, hardness vs randomness}
}
