LIPIcs, Volume 79

32nd Computational Complexity Conference (CCC 2017)



Event

CCC 2017, July 6-9, 2017, Riga, Latvia

Editor

Ryan O'Donnell

Publication Details

  • Published: 2017-08-01
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-040-8
  • DBLP: db/conf/coco/coco2017

Document
Complete Volume
LIPIcs, Volume 79, CCC'17, Complete Volume

Authors: Ryan O'Donnell


Abstract
LIPIcs, Volume 79, CCC'17, Complete Volume

Cite as

32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@Proceedings{odonnell:LIPIcs.CCC.2017,
  title =	{{LIPIcs, Volume 79, CCC'17, Complete Volume}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017},
  URN =		{urn:nbn:de:0030-drops-76639},
  doi =		{10.4230/LIPIcs.CCC.2017},
  annote =	{Keywords: Computation by Abstract Device}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Awards, Conference Organization, External Reviewers

Authors: Ryan O'Donnell


Abstract
Front Matter, Table of Contents, Preface, Awards, Conference Organization, External Reviewers, List of Authors

Cite as

32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 0:i-0:xiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{odonnell:LIPIcs.CCC.2017.0,
  author =	{O'Donnell, Ryan},
  title =	{{Front Matter, Table of Contents, Preface, Awards, Conference Organization, External Reviewers}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{0:i--0:xiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.0},
  URN =		{urn:nbn:de:0030-drops-75115},
  doi =		{10.4230/LIPIcs.CCC.2017.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Awards, Conference Organization, External Reviewers, List of Authors}
}
Document
Random Resolution Refutations

Authors: Pavel Pudlák and Neil Thapen


Abstract
We study the random resolution refutation system defined in [Buss et al. 2014]. This attempts to capture the notion of a resolution refutation that may make mistakes but is correct most of the time. By proving the equivalence of several different definitions, we show that this concept is robust. On the other hand, if P does not equal NP, then random resolution cannot be polynomially simulated by any proof system in which correctness of proofs is checkable in polynomial time. We prove several upper and lower bounds on the width and size of random resolution refutations of explicit and random unsatisfiable CNF formulas. Our main result is a separation between polylogarithmic width random resolution and quasipolynomial size resolution, which solves the problem stated in [Buss et al. 2014]. We also prove exponential size lower bounds on random resolution refutations of the pigeonhole principle CNFs, and of a family of CNFs which have polynomial size refutations in constant depth Frege.
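For readers less familiar with the underlying proof system, here is a minimal Python sketch (ours, purely for background, not code from the paper) of the single inference rule of resolution: from clauses C or x and D or not-x one derives C or D, and a refutation ends with the empty clause. The paper's random resolution refutations relax this, roughly allowing refutations that are only correct for most assignments.

# Minimal sketch of the resolution rule; background illustration only.
# A clause is a frozenset of non-zero ints: v is a variable, -v its negation.

def resolve(c1, c2, v):
    # From (C or v) and (D or not v), derive (C or D).
    assert v in c1 and -v in c2
    return (c1 - {v}) | (c2 - {-v})

# Refuting the unsatisfiable CNF {x1}, {not x1 or x2}, {not x2}:
c1, c2, c3 = frozenset({1}), frozenset({-1, 2}), frozenset({-2})
step = resolve(c1, c2, 1)      # derives {x2}
print(resolve(step, c3, 2))    # frozenset(): the empty clause, refutation complete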

Cite as

Pavel Pudlák and Neil Thapen. Random Resolution Refutations. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 1:1-1:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{pudlak_et_al:LIPIcs.CCC.2017.1,
  author =	{Pudl\'{a}k, Pavel and Thapen, Neil},
  title =	{{Random Resolution Refutations}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{1:1--1:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.1},
  URN =		{urn:nbn:de:0030-drops-75235},
  doi =		{10.4230/LIPIcs.CCC.2017.1},
  annote =	{Keywords: proof complexity, random, resolution}
}
Document
Graph Colouring is Hard for Algorithms Based on Hilbert's Nullstellensatz and Gröbner Bases

Authors: Massimo Lauria and Jakob Nordström


Abstract
We consider the graph k-colouring problem encoded as a set of polynomial equations in the standard way. We prove that there are bounded-degree graphs that do not have legal k-colourings but for which the polynomial calculus proof system defined in [Clegg et al. '96, Alekhnovich et al. '02] requires linear degree, and hence exponential size, to establish this fact. This implies a linear degree lower bound for any algorithms based on Gröbner bases solving graph k-colouring using this encoding. The same bound also applies to the algorithm studied in a sequence of papers [De Loera et al. '08, '09, '11, '15] based on Hilbert's Nullstellensatz proofs for a slightly different encoding, thus resolving an open problem mentioned, e.g., in [De Loera et al. '09] and [Li et al. '16]. We obtain our results by combining the polynomial calculus degree lower bound for functional pigeonhole principle (FPHP) formulas over bounded-degree bipartite graphs in [Miksa and Nordström '15] with a reduction from FPHP to k-colouring derivable by polynomial calculus in constant degree.
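For concreteness, one standard polynomial encoding of k-colouring (colours as k-th roots of unity) can be generated as in the following sympy sketch; this is our illustration of the kind of encoding meant, not code or notation from the paper.

# Graph k-colouring as polynomial equations (roots-of-unity encoding); sketch only.
from sympy import symbols, expand

def colouring_equations(vertices, edges, k):
    x = {v: symbols(f'x_{v}') for v in vertices}
    eqs = [x[v]**k - 1 for v in vertices]          # each colour is a k-th root of unity
    for (u, v) in edges:                           # vanishes iff x_u != x_v on those roots
        eqs.append(expand(sum(x[u]**i * x[v]**(k - 1 - i) for i in range(k))))
    return eqs

# The triangle is 3-colourable, so this system has a common solution:
print(colouring_equations([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 3))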

Cite as

Massimo Lauria and Jakob Nordström. Graph Colouring is Hard for Algorithms Based on Hilbert's Nullstellensatz and Gröbner Bases. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 2:1-2:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{lauria_et_al:LIPIcs.CCC.2017.2,
  author =	{Lauria, Massimo and Nordstr\"{o}m, Jakob},
  title =	{{Graph Colouring is Hard for Algorithms Based on Hilbert's Nullstellensatz and Gr\"{o}bner Bases}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{2:1--2:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.2},
  URN =		{urn:nbn:de:0030-drops-75410},
  doi =		{10.4230/LIPIcs.CCC.2017.2},
  annote =	{Keywords: proof complexity, Nullstellensatz, Gr\"{o}bner basis, polynomial calculus, cutting planes, colouring}
}
Document
Representations of Monotone Boolean Functions by Linear Programs

Authors: Mateus de Oliveira Oliveira and Pavel Pudlák


Abstract
We introduce the notion of monotone linear-programming circuits (MLP circuits), a model of computation for partial Boolean functions. Using this model, we prove the following results. 1. MLP circuits are superpolynomially stronger than monotone Boolean circuits. 2. MLP circuits are exponentially stronger than monotone span programs. 3. MLP circuits can be used to provide monotone feasibility interpolation theorems for Lovasz-Schrijver proof systems, and for mixed Lovasz-Schrijver proof systems. 4. The Lovasz-Schrijver proof system cannot be polynomially simulated by the cutting planes proof system. This is the first result showing a separation between these two proof systems. Finally, we discuss connections between the problem of proving lower bounds on the size of MLPs and the problem of proving lower bounds on extended formulations of polytopes.

Cite as

Mateus de Oliveira Oliveira and Pavel Pudlák. Representations of Monotone Boolean Functions by Linear Programs. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{deoliveiraoliveira_et_al:LIPIcs.CCC.2017.3,
  author =	{de Oliveira Oliveira, Mateus and Pudl\'{a}k, Pavel},
  title =	{{Representations of Monotone Boolean Functions by Linear Programs}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.3},
  URN =		{urn:nbn:de:0030-drops-75200},
  doi =		{10.4230/LIPIcs.CCC.2017.3},
  annote =	{Keywords: Monotone Linear Programming Circuits, Lov\'{a}sz-Schrijver Proof System, Cutting-Planes Proof System, Feasible Interpolation, Lower Bounds}
}
Document
A Note on Amortized Branching Program Complexity

Authors: Aaron Potechin


Abstract
In this paper, we show that while almost all functions require exponential size branching programs to compute, for all functions f there is a branching program computing a doubly exponential number of copies of f which has linear size per copy of f. This result disproves a conjecture about non-uniform catalytic computation, rules out a certain type of bottleneck argument for proving non-monotone space lower bounds, and can be thought of as a constructive analogue of Razborov's result that submodular complexity measures have maximum value O(n).

Cite as

Aaron Potechin. A Note on Amortized Branching Program Complexity. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 4:1-4:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{potechin:LIPIcs.CCC.2017.4,
  author =	{Potechin, Aaron},
  title =	{{A Note on Amortized Branching Program Complexity}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{4:1--4:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.4},
  URN =		{urn:nbn:de:0030-drops-75448},
  doi =		{10.4230/LIPIcs.CCC.2017.4},
  annote =	{Keywords: branching programs, space complexity, amortization}
}
Document
Derandomizing Isolation in Space-Bounded Settings

Authors: Dieter van Melkebeek and Gautam Prakriya


Abstract
We study the possibility of deterministic and randomness-efficient isolation in space-bounded models of computation: Can one efficiently reduce instances of computational problems to equivalent instances that have at most one solution? We present results for the NL-complete problem of reachability on digraphs, and for the LogCFL-complete problem of certifying acceptance on shallow semi-unbounded circuits. A common approach employs small weight assignments that make the solution of minimum weight unique. The Isolation Lemma and other known procedures use Omega(n) random bits to generate weights of individual bitlength O(log(n)). We develop a derandomized version for both settings that uses O(log(n)^{3/2}) random bits and produces weights of bitlength O(log(n)^{3/2}) in logarithmic space. The construction allows us to show that every language in NL can be accepted by a nondeterministic machine that runs in polynomial time and O(log(n)^{3/2}) space, and has at most one accepting computation path on every input. Similarly, every language in LogCFL can be accepted by a nondeterministic machine equipped with a stack that does not count towards the space bound, that runs in polynomial time and O(log(n)^{3/2}) space, and has at most one accepting computation path on every input. We also show that the existence of somewhat more restricted isolations for reachability on digraphs implies that NL can be decided in logspace with polynomial advice. A similar result holds for certifying acceptance on shallow semi-unbounded circuits and LogCFL.
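For background, the Isolation Lemma mentioned here can be checked empirically: independent uniform weights in {1, ..., 2n} over a universe of size n make the minimum-weight set in any fixed family unique with probability at least 1/2, while using Omega(n) random numbers of bitlength O(log n). The Python sketch below (ours, not the paper's derandomized construction) just estimates that probability.

# Empirical sketch of the Isolation Lemma; the paper's contribution is a
# derandomized weight assignment using far fewer random bits.
import random

def min_weight_unique_rate(family, n, trials=2000):
    unique = 0
    for _ in range(trials):
        w = [random.randint(1, 2 * n) for _ in range(n)]   # Omega(n) random bits
        weights = [sum(w[i] for i in s) for s in family]
        unique += weights.count(min(weights)) == 1
    return unique / trials

n = 10
family = {frozenset(random.sample(range(n), 4)) for _ in range(50)}  # arbitrary "solutions"
print(min_weight_unique_rate(family, n))   # typically well above the 1/2 guarantee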

Cite as

Dieter van Melkebeek and Gautam Prakriya. Derandomizing Isolation in Space-Bounded Settings. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 5:1-5:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{vanmelkebeek_et_al:LIPIcs.CCC.2017.5,
  author =	{van Melkebeek, Dieter and Prakriya, Gautam},
  title =	{{Derandomizing Isolation in Space-Bounded Settings}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{5:1--5:32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.5},
  URN =		{urn:nbn:de:0030-drops-75297},
  doi =		{10.4230/LIPIcs.CCC.2017.5},
  annote =	{Keywords: Isolation lemma, derandomization, unambiguous nondeterminism, graph reachability, circuit certification}
}
Document
The Computational Complexity of Integer Programming with Alternations

Authors: Danny Nguyen and Igor Pak


Abstract
We prove that integer programming with three alternating quantifiers is NP-complete, even for a fixed number of variables. This complements earlier results by Lenstra and Kannan, which together say that integer programming with at most two alternating quantifiers can be done in polynomial time for a fixed number of variables. As a byproduct of the proof, we show that for two polytopes P, Q in R^4, counting the projection of integer points in Q\P is #P-complete. This contrasts with the 2003 result by Barvinok and Woods, which allows counting in polynomial time the projection of integer points in P and Q separately.

Cite as

Danny Nguyen and Igor Pak. The Computational Complexity of Integer Programming with Alternations. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 6:1-6:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{nguyen_et_al:LIPIcs.CCC.2017.6,
  author =	{Nguyen, Danny and Pak, Igor},
  title =	{{The Computational Complexity of Integer Programming with Alternations}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{6:1--6:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.6},
  URN =		{urn:nbn:de:0030-drops-75151},
  doi =		{10.4230/LIPIcs.CCC.2017.6},
  annote =	{Keywords: Integer Programming, Alternations, Projection of Integer Points}
}
Document
On the Average-Case Complexity of MCSP and Its Variants

Authors: Shuichi Hirahara and Rahul Santhanam


Abstract
We prove various results on the complexity of MCSP (Minimum Circuit Size Problem) and the related MKTP (Minimum Kolmogorov Time-Bounded Complexity Problem): * We observe that under standard cryptographic assumptions, MCSP has a pseudorandom self-reduction. This is a new notion we define by relaxing the notion of a random self-reduction to allow queries to be pseudorandom rather than uniformly random. As a consequence we derive a weak form of a worst-case to average-case reduction for (a promise version of) MCSP. Our result also distinguishes MCSP from natural NP-complete problems, which are not known to have worst-case to average-case reductions. Indeed, it is known that strong forms of worst-case to average-case reductions for NP-complete problems collapse the Polynomial Hierarchy. * We prove the first non-trivial formula size lower bounds for MCSP by showing that MCSP requires nearly quadratic-size De Morgan formulas. * We show average-case superpolynomial size lower bounds for MKTP against AC0[p] for any prime p. * We show the hardness of MKTP on average under assumptions that have been used in much recent work, such as Feige's assumptions, Alekhnovich's assumption and the Planted Clique conjecture. In addition, MCSP is hard under Alekhnovich's assumption. Using a version of Feige's assumption against co-nondeterministic algorithms that has been conjectured recently, we provide evidence for the first time that MKTP is not in coNP. Our results suggest that it might be worthwhile to focus on the average-case hardness of MKTP and MCSP when approaching the question of whether these problems are NP-hard.

Cite as

Shuichi Hirahara and Rahul Santhanam. On the Average-Case Complexity of MCSP and Its Variants. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 7:1-7:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.CCC.2017.7,
  author =	{Hirahara, Shuichi and Santhanam, Rahul},
  title =	{{On the Average-Case Complexity of MCSP and Its Variants}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{7:1--7:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.7},
  URN =		{urn:nbn:de:0030-drops-75406},
  doi =		{10.4230/LIPIcs.CCC.2017.7},
  annote =	{Keywords: minimum circuit size problem, average-case complexity, circuit lower bounds, time-bounded Kolmogorov complexity, hardness}
}
Document
Easiness Amplification and Uniform Circuit Lower Bounds

Authors: Cody D. Murray and R. Ryan Williams


Abstract
We present new consequences of the assumption that time-bounded algorithms can be "compressed" with non-uniform circuits. Our main contribution is an "easiness amplification" lemma for circuits. One instantiation of the lemma says: if n^{1+e}-time, tilde{O}(n)-space computations have n^{1+o(1)} size (non-uniform) circuits for some e > 0, then every problem solvable in polynomial time and tilde{O}(n) space has n^{1+o(1)} size (non-uniform) circuits as well. This amplification has several consequences: * An easy problem without small LOGSPACE-uniform circuits. For all e > 0, we give a natural decision problem, General Circuit n^e-Composition, that is solvable in about n^{1+e} time, but we prove that polynomial-time and logarithmic-space preprocessing cannot produce n^{1+o(1)}-size circuits for the problem. This shows that there are problems solvable in n^{1+e} time which do not have LOGSPACE-uniform circuits of n^{1+o(1)} size, the first result of its kind. We show that our lower bound is non-relativizing, by exhibiting an oracle relative to which the result is false. * Problems without low-depth LOGSPACE-uniform circuits. For all e > 0, 1 < d < 2, and e < d, we give another natural circuit composition problem, computable in tilde{O}(n^{1+e}) time or in O((log n)^d) space (though not necessarily simultaneously), that we prove does not have SPACE[(log n)^e]-uniform circuits of tilde{O}(n) size and O((log n)^e) depth. We also show SAT does not have circuits of tilde{O}(n) size and log^{2-o(1)}(n) depth that can be constructed in log^{2-o(1)}(n) space. * A strong circuit complexity amplification. For every e > 0, we give a natural circuit composition problem and show that if it has tilde{O}(n)-size circuits (uniform or not), then every problem solvable in 2^{O(n)} time and 2^{O(sqrt{n log n})} space (simultaneously) has 2^{O(sqrt{n log n})}-size circuits (uniform or not). We also show the same consequence holds assuming SAT has tilde{O}(n)-size circuits. As a corollary, if n^{1.1} time computations (or O(n) nondeterministic time computations) have tilde{O}(n)-size circuits, then all problems in exponential time and subexponential space (such as quantified Boolean formulas) have significantly subexponential-size circuits. This is a new connection between the relative circuit complexities of easy and hard problems.

Cite as

Cody D. Murray and R. Ryan Williams. Easiness Amplification and Uniform Circuit Lower Bounds. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 8:1-8:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{murray_et_al:LIPIcs.CCC.2017.8,
  author =	{Murray, Cody D. and Williams, R. Ryan},
  title =	{{Easiness Amplification and Uniform Circuit Lower Bounds}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{8:1--8:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.8},
  URN =		{urn:nbn:de:0030-drops-75421},
  doi =		{10.4230/LIPIcs.CCC.2017.8},
  annote =	{Keywords: uniform circuit complexity, time complexity, space complexity, non-relativizing, amplification}
}
Document
PPSZ for General k-SAT - Making Hertli's Analysis Simpler and 3-SAT Faster

Authors: Dominik Scheder and John P. Steinberger


Abstract
The currently fastest known algorithm for k-SAT is PPSZ, named after its inventors Paturi, Pudlak, Saks, and Zane. Analyzing its running time is much easier for input formulas with a unique satisfying assignment. In this paper, we achieve three goals. First, we simplify Hertli's analysis for input formulas with multiple satisfying assignments. Second, we show a "translation result": if you improve PPSZ for k-CNF formulas with a unique satisfying assignment, you will immediately get a (weaker) improvement for general k-CNF formulas. Combining this with a result by Hertli from 2014, in which he gives an algorithm for Unique-3-SAT slightly beating PPSZ, we obtain an algorithm beating PPSZ for general 3-SAT, thus obtaining the best worst-case bounds known so far for 3-SAT.
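As background, here is a toy Python sketch (ours) of the PPSZ skeleton in its simpler PPZ-style form: process the variables in random order, set a variable by inference when a unit clause forces it, and guess randomly otherwise. Full PPSZ instead checks whether the value is implied by any small subset of clauses; this sketch is only the shared algorithmic shape, not the paper's analysis.

# Toy PPZ-style sketch of the PPSZ skeleton; background only, not the paper's code.
import random

def true_literals(a):                    # literals made true by partial assignment a
    return {v if val else -v for v, val in a.items()}

def ppz_round(clauses, n):
    a = {}
    for v in random.sample(range(1, n + 1), n):          # random variable order
        live = [frozenset(l for l in c if -l not in true_literals(a))
                for c in clauses if not c & true_literals(a)]
        units = {next(iter(c)) for c in live if len(c) == 1}
        if v in units:
            a[v] = True                                  # forced by a unit clause
        elif -v in units:
            a[v] = False
        else:
            a[v] = random.random() < 0.5                 # random guess
    return a if all(c & true_literals(a) for c in clauses) else None

clauses = [frozenset(c) for c in [(1, 2), (-1, 3), (-2, -3)]]
for _ in range(100):                                     # repeat rounds until success
    if (a := ppz_round(clauses, 3)):
        print(a)
        break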

Cite as

Dominik Scheder and John P. Steinberger. PPSZ for General k-SAT - Making Hertli's Analysis Simpler and 3-SAT Faster. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 9:1-9:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{scheder_et_al:LIPIcs.CCC.2017.9,
  author =	{Scheder, Dominik and Steinberger, John P.},
  title =	{{PPSZ for General k-SAT - Making Hertli's Analysis Simpler and 3-SAT Faster}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{9:1--9:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.9},
  URN =		{urn:nbn:de:0030-drops-75355},
  doi =		{10.4230/LIPIcs.CCC.2017.9},
  annote =	{Keywords: Boolean satisfiability, exponential algorithms, randomized algorithms}
}
Document
Noise Stability Is Computable and Approximately Low-Dimensional

Authors: Anindya De, Elchanan Mossel, and Joe Neeman


Abstract
Questions of noise stability play an important role in hardness of approximation in computer science as well as in the theory of voting. In many applications, the goal is to find an optimizer of noise stability among all possible partitions of R^n, n >= 1, into k parts with given Gaussian measures mu_1, ..., mu_k. We call a partition epsilon-optimal if its noise stability is optimal up to an additive epsilon. In this paper, we give an explicit, computable function n(epsilon) such that an epsilon-optimal partition exists in R^{n(epsilon)}. This result has implications for the computability of certain problems in non-interactive simulation, which are addressed in a subsequent work.
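For intuition, the quantity being optimized can be estimated by simple Monte Carlo: draw rho-correlated Gaussian vectors and check how often they land in the same part. The sketch below is ours, with a halfspace partition as the running example; for that case Sheppard's formula gives the exact value 1 - arccos(rho)/pi.

# Monte Carlo estimate of Gaussian noise stability of a partition of R^n; sketch only.
import numpy as np

def noise_stability(part, n, rho, samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((samples, n))
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal((samples, n))  # rho-correlated copy
    return np.mean(part(x) == part(y))

halfspace = lambda pts: (pts[:, 0] > 0).astype(int)   # k = 2 parts, split by a halfspace
print(noise_stability(halfspace, n=5, rho=0.9))       # ~0.856 = 1 - arccos(0.9)/pi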

Cite as

Anindya De, Elchanan Mossel, and Joe Neeman. Noise Stability Is Computable and Approximately Low-Dimensional. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 10:1-10:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{de_et_al:LIPIcs.CCC.2017.10,
  author =	{De, Anindya and Mossel, Elchanan and Neeman, Joe},
  title =	{{Noise Stability Is Computable and Approximately Low-Dimensional}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{10:1--10:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.10},
  URN =		{urn:nbn:de:0030-drops-75390},
  doi =		{10.4230/LIPIcs.CCC.2017.10},
  annote =	{Keywords: Gaussian noise stability, Plurality is stablest, Ornstein-Uhlenbeck operator}
}
Document
From Weak to Strong LP Gaps for All CSPs

Authors: Mrinalkanti Ghosh and Madhur Tulsiani


Abstract
We study the approximability of constraint satisfaction problems (CSPs) by linear programming (LP) relaxations. We show that for every CSP, the approximation obtained by a basic LP relaxation is no weaker than the approximation obtained using relaxations given by Omega(log(n)/log(log(n))) levels of the Sherali-Adams hierarchy on instances of size n. It was proved by Chan et al. [FOCS 2013] (and recently strengthened by Kothari et al. [STOC 2017]) that for CSPs, any polynomial size LP extended formulation is no stronger than relaxations obtained by super-constant levels of the Sherali-Adams hierarchy. Combining this with our result also implies that any polynomial size LP extended formulation is no stronger than simply the basic LP, which can be thought of as the base level of the Sherali-Adams hierarchy. This essentially gives a dichotomy result for approximation of CSPs by polynomial size LP extended formulations. Using our techniques, we also simplify and strengthen the result by Khot et al. [STOC 2014] on (strong) approximation resistance for LPs. They provided a necessary and sufficient condition under which Omega(loglog n) levels of the Sherali-Adams hierarchy cannot achieve an approximation better than a random assignment. We simplify their proof and strengthen the bound to Omega(log(n)/log(log(n))) levels.
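For reference, the "basic LP" above is the standard relaxation over local distributions. In one common notation (ours, not necessarily the paper's), for an instance with constraint set \mathcal{C} over domain [q] it reads:

\max \; \frac{1}{|\mathcal{C}|} \sum_{C \in \mathcal{C}} \sum_{\alpha} \mu_C(\alpha)\, C(\alpha)
\quad \text{s.t.} \quad \sum_{\alpha :\, \alpha(v) = a} \mu_C(\alpha) = \mu_v(a) \quad \forall\, C,\ v \in \mathrm{vars}(C),\ a \in [q],

where each \mu_C is a probability distribution over local assignments to \mathrm{vars}(C) and each \mu_v is a distribution over [q]. Level-t Sherali-Adams strengthens this by requiring mutually consistent local distributions on every set of at most t variables, which is why the basic LP can be viewed as the hierarchy's base level.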

Cite as

Mrinalkanti Ghosh and Madhur Tulsiani. From Weak to Strong LP Gaps for All CSPs. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 11:1-11:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{ghosh_et_al:LIPIcs.CCC.2017.11,
  author =	{Ghosh, Mrinalkanti and Tulsiani, Madhur},
  title =	{{From Weak to Strong LP Gaps for All CSPs}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{11:1--11:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.11},
  URN =		{urn:nbn:de:0030-drops-75370},
  doi =		{10.4230/LIPIcs.CCC.2017.11},
  annote =	{Keywords: Constraint Satisfaction Problem, Convex Programming, Linear Programming Hierarchy, Integrality Gap}
}
Document
Query-to-Communication Lifting for P^NP

Authors: Mika Göös, Pritish Kamath, Toniann Pitassi, and Thomas Watson


Abstract
We prove that the P^NP-type query complexity (alternatively, decision list width) of any boolean function f is quadratically related to the P^NP-type communication complexity of a lifted version of f. As an application, we show that a certain "product" lower bound method of Impagliazzo and Williams (CCC 2010) fails to capture P^NP communication complexity up to polynomial factors, which answers a question of Papakonstantinou, Scheder, and Song (CCC 2014).

Cite as

Mika Göös, Pritish Kamath, Toniann Pitassi, and Thomas Watson. Query-to-Communication Lifting for P^NP. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 12:1-12:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{goos_et_al:LIPIcs.CCC.2017.12,
  author =	{G\"{o}\"{o}s, Mika and Kamath, Pritish and Pitassi, Toniann and Watson, Thomas},
  title =	{{Query-to-Communication Lifting for P^NP}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{12:1--12:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.12},
  URN =		{urn:nbn:de:0030-drops-75388},
  doi =		{10.4230/LIPIcs.CCC.2017.12},
  annote =	{Keywords: Communication Complexity, Query Complexity, Lifting Theorem, P^NP}
}
Document
Improved Bounds for Quantified Derandomization of Constant-Depth Circuits and Polynomials

Authors: Roei Tell


Abstract
This work studies the question of quantified derandomization, which was introduced by Goldreich and Wigderson (STOC 2014). The generic quantified derandomization problem is the following: For a circuit class cal{C} and a parameter B=B(n), given a circuit C in cal{C} with n input bits, decide whether C rejects all of its inputs, or accepts all but B(n) of its inputs. In the current work we consider three settings for this question. In each setting, we bring closer the parameter setting for which we can unconditionally construct relatively fast quantified derandomization algorithms, and the "threshold" values (for the parameters) for which any quantified derandomization algorithm implies a similar algorithm for standard derandomization. For constant-depth circuits, we construct an algorithm for quantified derandomization that works for a parameter B(n) that is only slightly smaller than a "threshold" parameter, and is significantly faster than the best currently-known algorithms for standard derandomization. On the way to this result we establish a new derandomization of the switching lemma, which significantly improves on previous results when the width of the formula is small. For constant-depth circuits with parity gates, we lower a "threshold" of Goldreich and Wigderson from depth five to depth four, and construct algorithms for quantified derandomization of a remaining type of layered depth-3 circuit that they left as an open problem. We also consider the question of constructing hitting-set generators for multivariate polynomials over large fields that vanish rarely, and prove two lower bounds on the seed length of such generators. Several of our proofs rely on an interesting technique, which we call the randomized tests technique. Intuitively, a standard technique to deterministically find a "good" object is to construct a simple deterministic test that decides the set of good objects, and then "fool" that test using a pseudorandom generator. We show that a similar approach works also if the simple deterministic test is replaced with a distribution over simple tests, and demonstrate the benefits in using a distribution instead of a single test.
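For orientation, the reason this promise problem is trivial with randomness (and hence a natural derandomization target) fits in a few lines: when B(n) is far below 2^n, a handful of random samples separates the two cases. The Python sketch below is ours; C stands in for any membership oracle for the circuit.

# The trivial randomized algorithm behind quantified derandomization; sketch only.
# Promise: C rejects all inputs, or accepts all but B(n) << 2**n of them.
import random

def quantified_derand(C, n, trials=40):
    for _ in range(trials):
        if C([random.randint(0, 1) for _ in range(n)]):
            return "accepts all but B(n) inputs"
    # In the accepting case each sample rejects with probability <= B(n)/2**n,
    # so this answer is wrong with probability <= (B(n)/2**n)**trials.
    return "rejects all inputs"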

Cite as

Roei Tell. Improved Bounds for Quantified Derandomization of Constant-Depth Circuits and Polynomials. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 13:1-13:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{tell:LIPIcs.CCC.2017.13,
  author =	{Tell, Roei},
  title =	{{Improved Bounds for Quantified Derandomization of Constant-Depth Circuits and Polynomials}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{13:1--13:48},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.13},
  URN =		{urn:nbn:de:0030-drops-75349},
  doi =		{10.4230/LIPIcs.CCC.2017.13},
  annote =	{Keywords: Computational complexity, derandomization, quantified derandomization, hitting-set generator, constant-depth circuits}
}
Document
Bounded Independence Plus Noise Fools Products

Authors: Elad Haramaty, Chin Ho Lee, and Emanuele Viola


Abstract
Let D be a b-wise independent distribution over {0,1}^m. Let E be the "noise" distribution over {0,1}^m where the bits are independent and each bit is 1 with probability eta/2. We study which tests f: {0,1}^m -> [-1,1] are epsilon-fooled by D+E, i.e., |E[f(D+E)] - E[f(U)]| <= epsilon where U is the uniform distribution. We show that D+E epsilon-fools product tests f: ({0,1}^n)^k -> [-1,1] given by the product of k bounded functions on disjoint n-bit inputs with error epsilon = k(1-eta)^{Omega(b^2/m)}, where m = nk and b >= n. This bound is tight when b = Omega(m) and eta >= (log k)/m. For b >= m^{2/3} log m and any constant eta the distribution D+E also 0.1-fools log-space algorithms. We develop two applications of this type of result. First, we prove communication lower bounds for decoding noisy codewords of length m split among k parties. For Reed-Solomon codes of dimension m/k where k = O(1), communication Omega(eta m) - O(log m) is required to decode one message symbol from a codeword with eta m errors, and communication O(eta m log m) suffices. Second, we obtain pseudorandom generators. We can epsilon-fool product tests f: ({0,1}^n)^k -> [-1,1] under any permutation of the bits with seed lengths 2n + O~(k^2 log(1/epsilon)) and O(n) + O~(sqrt{nk log 1/epsilon}). Previous generators have seed lengths >= nk/2 or >= n sqrt{n k}. For the special case where the k bounded functions have range {0,1} the previous generators have seed length >= (n+log k)log(1/epsilon).
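To make the objects concrete, the following brute-force Python sketch (ours; tiny parameters, and only b = 2 via the classic pairwise-independent construction of all parities of a short seed) computes |E[f(D+E)] - E[f(U)]| exactly for one product test. It only illustrates the quantities in the statement, not the theorem's parameter regime.

# D: pairwise-independent bits (all parities <r,s>, s != 0, of a uniform seed r).
# E: each bit 1 with probability eta/2. f: a product test with n = 1, k = m.
from itertools import product

l, eta = 3, 0.2
m = 2**l - 1
S = [s for s in product((0, 1), repeat=l) if any(s)]

def sample_D(r):
    return tuple(sum(ri * si for ri, si in zip(r, s)) % 2 for s in S)

def f(x):                                    # product of the bounded functions (-1)**x_i
    return (-1) ** sum(x)

noise = [((eta / 2) ** sum(e) * (1 - eta / 2) ** (m - sum(e)), e)
         for e in product((0, 1), repeat=m)]
D = [sample_D(r) for r in product((0, 1), repeat=l)]
DE = [(p / len(D), tuple((xi + ei) % 2 for xi, ei in zip(x, e)))
      for x in D for p, e in noise]
U = [(1 / 2**m, u) for u in product((0, 1), repeat=m)]
expect = lambda dist: sum(p * f(x) for p, x in dist)
print(abs(expect(DE) - expect(U)))           # (1 - eta)**m here: noise damps D's bias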

Cite as

Elad Haramaty, Chin Ho Lee, and Emanuele Viola. Bounded Independence Plus Noise Fools Products. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 14:1-14:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{haramaty_et_al:LIPIcs.CCC.2017.14,
  author =	{Haramaty, Elad and Lee, Chin Ho and Viola, Emanuele},
  title =	{{Bounded Independence Plus Noise Fools Products}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{14:1--14:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.14},
  URN =		{urn:nbn:de:0030-drops-75188},
  doi =		{10.4230/LIPIcs.CCC.2017.14},
  annote =	{Keywords: Bounded independence, Noise, Product tests, Error-correcting codes, Pseudorandomness}
}
Document
Tight Bounds on the Fourier Spectrum of AC0

Authors: Avishay Tal


Abstract
We show that AC^0 circuits on n variables with depth d and size m have at most 2^{-Omega(k/log^{d-1} m)} of their Fourier mass at level k or above. Our proof builds on a previous result by Hastad (SICOMP, 2014) who proved this bound for the special case k=n. Our result improves the seminal result of Linial, Mansour and Nisan (JACM, 1993) and is tight up to the constants hidden in the Omega notation. As an application, we improve Braverman's celebrated result (JACM, 2010). Braverman showed that any r(m,d,epsilon)-wise independent distribution epsilon-fools AC^0 circuits of size m and depth d, for r(m,d,epsilon) = O(log(m/epsilon))^{2d^2+7d+3}. Our improved bounds on the Fourier tails of AC^0 circuits allow us to improve this estimate to r(m,d,epsilon) = O(log(m/epsilon))^{3d+3}. In contrast, an example by Mansour (appearing in Luby and Velickovic's paper - Algorithmica, 1996) shows that there is a log^{d-1}(m) log(1/epsilon)-wise independent distribution that does not epsilon-fool AC^0 circuits of size m and depth d. Hence, our result is tight up to the factor 3 in the exponent.
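As a concrete handle on the quantity in the theorem, the total Fourier mass of f at levels k and above can be computed by brute force for small n; the sketch below (ours) does this directly from the definition hat{f}(S) = E_x[f(x)(-1)^{<S,x>}] for f with range {-1,1}.

# Brute-force Fourier mass of a Boolean function at level >= k; sketch only.
from itertools import product

def fourier_mass_at_least(f, n, k):
    pts = list(product((0, 1), repeat=n))
    mass = 0.0
    for s in pts:                        # s is the indicator vector of a set S
        if sum(s) >= k:
            coef = sum(f(x) * (-1) ** sum(si * xi for si, xi in zip(s, x))
                       for x in pts) / 2**n
            mass += coef ** 2            # Parseval: all levels together sum to 1
    return mass

and3 = lambda x: -1 if all(x) else 1     # AND of 3 bits as a +-1 function
print(fourier_mass_at_least(and3, 3, 2)) # 0.25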

Cite as

Avishay Tal. Tight Bounds on the Fourier Spectrum of AC0. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 15:1-15:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{tal:LIPIcs.CCC.2017.15,
  author =	{Tal, Avishay},
  title =	{{Tight Bounds on the Fourier Spectrum of AC0}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{15:1--15:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.15},
  URN =		{urn:nbn:de:0030-drops-75258},
  doi =		{10.4230/LIPIcs.CCC.2017.15},
  annote =	{Keywords: bounded depth circuits, Fourier analysis, k-wise independence, Boolean circuits, switching lemma}
}
Document
Trading Information Complexity for Error

Authors: Yuval Dagan, Yuval Filmus, Hamed Hatami, and Yaqiao Li


Abstract
We consider the standard two-party communication model. The central problem studied in this article is how much one can save in information complexity by allowing a certain error. * For arbitrary functions, we obtain lower bounds and upper bounds indicating a gain that is of order Omega(h(epsilon)) and O(h(sqrt{epsilon})). Here h denotes the binary entropy function. * We analyze the case of the two-bit AND function in detail to show that for this function the gain is Theta(h(epsilon)). This answers a question of Braverman et al. [Braverman, STOC 2013]. * We obtain sharp bounds for the set disjointness function of order n. For the case of the distributional error, we introduce a new protocol that achieves a gain of Theta(sqrt{h(epsilon)}) provided that n is sufficiently large. We apply these results to answer another question of Braverman et al. regarding the randomized communication complexity of the set disjointness function. * Answering a question of Braverman [Braverman, STOC 2012], we apply our analysis of the set disjointness function to establish a gap between the two different notions of the prior-free information cost. In light of [Braverman, STOC 2012], this implies that amortized randomized communication complexity is not necessarily equal to the amortized distributional communication complexity with respect to the hardest distribution. As a consequence, we show that the epsilon-error randomized communication complexity of the set disjointness function of order n is n[C_{DISJ} - Theta(h(epsilon))] + o(n), where C_{DISJ} ~ 0.4827 is the constant found by Braverman et al. [Braverman, STOC 2012].
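For reference, h above is the binary entropy function

h(\epsilon) = \epsilon \log_2 \tfrac{1}{\epsilon} + (1-\epsilon) \log_2 \tfrac{1}{1-\epsilon},

so h(epsilon) behaves like epsilon log_2(1/epsilon) as epsilon -> 0, and the cited gains exceed any savings linear in epsilon.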

Cite as

Yuval Dagan, Yuval Filmus, Hamed Hatami, and Yaqiao Li. Trading Information Complexity for Error. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 16:1-16:59, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{dagan_et_al:LIPIcs.CCC.2017.16,
  author =	{Dagan, Yuval and Filmus, Yuval and Hatami, Hamed and Li, Yaqiao},
  title =	{{Trading Information Complexity for Error}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{16:1--16:59},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.16},
  URN =		{urn:nbn:de:0030-drops-75179},
  doi =		{10.4230/LIPIcs.CCC.2017.16},
  annote =	{Keywords: communication complexity, information complexity}
}
Document
Stochasticity in Algorithmic Statistics for Polynomial Time

Authors: Alexey Milovanov and Nikolay Vereshchagin


Abstract
A fundamental notion in Algorithmic Statistics is that of a stochastic object, i.e., an object having a simple plausible explanation. Informally, a probability distribution is a plausible explanation for x if it looks likely that x was drawn at random with respect to that distribution. In this paper, we suggest three definitions of a plausible statistical hypothesis for Algorithmic Statistics with polynomial time bounds, which are called acceptability, plausibility and optimality. Roughly speaking, a probability distribution m is called an acceptable explanation for x if x possesses all properties decidable by short programs in a short time and shared by almost all objects (with respect to m). Plausibility is a similar notion; however, this time we require x to possess all properties T decidable even by long programs in a short time and shared by almost all objects. To compensate for the increase in program length, we strengthen the notion of 'almost all': the longer the program recognizing the property is, the more objects must share the property. Finally, a probability distribution m is called an optimal explanation for x if m(x) is large. Almost all our results hold under some plausible complexity theoretic assumptions. Our main result states that for acceptability and plausibility there are infinitely many non-stochastic objects, i.e. objects that do not have simple plausible (acceptable) explanations. Using the same techniques, we show that the distinguishing complexity of a string x can be super-logarithmically less than the conditional complexity of x with condition r for almost all r (for polynomial time bounded programs). Finally, we study relationships between the introduced notions.

Cite as

Alexey Milovanov and Nikolay Vereshchagin. Stochasticity in Algorithmic Statistics for Polynomial Time. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 17:1-17:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{milovanov_et_al:LIPIcs.CCC.2017.17,
  author =	{Milovanov, Alexey and Vereshchagin, Nikolay},
  title =	{{Stochasticity in Algorithmic Statistics for Polynomial Time}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{17:1--17:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.17},
  URN =		{urn:nbn:de:0030-drops-75222},
  doi =		{10.4230/LIPIcs.CCC.2017.17},
  annote =	{Keywords: Algorithmic Statistics, Kolmogorov complexity, elusive set, distinguishing complexity}
}
Document
Conspiracies Between Learning Algorithms, Circuit Lower Bounds, and Pseudorandomness

Authors: Igor C. Carboni Oliveira and Rahul Santhanam


Abstract
We prove several results giving new and stronger connections between learning theory, circuit complexity and pseudorandomness. Let C be any typical class of Boolean circuits, and C[s(n)] denote n-variable C-circuits of size <= s(n). We show: * Learning Speedups: If C[s(n)] admits a randomized weak learning algorithm under the uniform distribution with membership queries that runs in time 2^n/n^{omega(1)}, then for every k >= 1 and epsilon > 0 the class C[n^k] can be learned to high accuracy in time O(2^{n^epsilon}). There is epsilon > 0 such that C[2^{n^{epsilon}}] can be learned in time 2^n/n^{omega(1)} if and only if C[poly(n)] can be learned in time 2^{(log(n))^{O(1)}}. * Equivalences between Learning Models: We use learning speedups to obtain equivalences between various randomized learning and compression models, including sub-exponential time learning with membership queries, sub-exponential time learning with membership and equivalence queries, probabilistic function compression and probabilistic average-case function compression. * A Dichotomy between Learnability and Pseudorandomness: In the non-uniform setting, there is non-trivial learning for C[poly(n)] if and only if there are no exponentially secure pseudorandom functions computable in C[poly(n)]. * Lower Bounds from Nontrivial Learning: If for each k >= 1, (depth-d)-C[n^k] admits a randomized weak learning algorithm with membership queries under the uniform distribution that runs in time 2^n/n^{omega(1)}, then for each k >= 1, BPE is not contained in (depth-d)-C[n^k]. If for some epsilon > 0 there are P-natural proofs useful against C[2^{n^{epsilon}}], then ZPEXP is not contained in C[poly(n)]. * Karp-Lipton Theorems for Probabilistic Classes: If there is a k > 0 such that BPE is contained in i.o.Circuit[n^k], then BPEXP is contained in i.o.EXP/O(log(n)). If ZPEXP is contained in i.o.Circuit[2^{n/3}], then ZPEXP is contained in i.o.ESUBEXP. * Hardness Results for MCSP: All functions in non-uniform NC^1 reduce to the Minimum Circuit Size Problem via truth-table reductions computable by TC^0 circuits. In particular, if MCSP is in TC^0 then NC^1 = TC^0.

Cite as

Igor C. Carboni Oliveira and Rahul Santhanam. Conspiracies Between Learning Algorithms, Circuit Lower Bounds, and Pseudorandomness. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 18:1-18:49, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{oliveira_et_al:LIPIcs.CCC.2017.18,
  author =	{Oliveira, Igor C. Carboni and Santhanam, Rahul},
  title =	{{Conspiracies Between Learning Algorithms, Circuit Lower Bounds, and Pseudorandomness}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{18:1--18:49},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.18},
  URN =		{urn:nbn:de:0030-drops-75327},
  doi =		{10.4230/LIPIcs.CCC.2017.18},
  annote =	{Keywords: boolean circuits, learning theory, pseudorandomness}
}
Document
A Quadratic Lower Bound for Homogeneous Algebraic Branching Programs

Authors: Mrinal Kumar


Abstract
An algebraic branching program (ABP) is a directed acyclic graph with a start vertex s and an end vertex t, in which each edge has a weight that is an affine form in variables x_1, x_2, ..., x_n over an underlying field. An ABP computes a polynomial in a natural way, as the sum of weights of all paths from s to t, where the weight of a path is the product of the weights of the edges in the path. An ABP is said to be homogeneous if the polynomial computed at every vertex is homogeneous. In this paper, we show that any homogeneous algebraic branching program which computes the polynomial x_1^n + x_2^n + ... + x_n^n has at least Omega(n^2) vertices (and edges). To the best of our knowledge, this is the first non-trivial super-linear lower bound on the number of vertices for a general homogeneous ABP, and it slightly improves the known lower bound of Omega(n log n) on the number of edges in a general (possibly non-homogeneous) ABP, which follows from the classical results of Strassen (1973) and Baur--Strassen (1983). On the way, we also get an alternate and unified proof of an Omega(n log n) lower bound on the size of a homogeneous arithmetic circuit (follows from [Strassen, 1973] and [Baur-Strassen, 1983]), and an n/2 lower bound (n over the reals) on the determinantal complexity of an explicit polynomial [Mignon-Ressayre, 2004], [Cai, Chen, Li, 2010], [Yabe, 2015]. These are currently the best lower bounds known for these problems for any explicit polynomial, and were originally proved nearly two decades apart using seemingly different proof techniques.
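For concreteness, the sum-over-paths semantics described above fits in a few lines; the following sympy sketch (ours) evaluates a toy ABP whose vertices are numbered in topological order, computing x1^2 + x2^2, the n = 2 case of the hard polynomial from the abstract.

# Evaluate an ABP: sum over s-t paths of the product of affine edge weights.
from sympy import symbols, expand

def abp_polynomial(edges, s, t):
    # edges: {(u, v): affine form}, with u < v (vertices in topological order).
    poly = {s: 1}
    for (u, v) in sorted(edges):               # safe order since u < v on every edge
        poly[v] = poly.get(v, 0) + poly.get(u, 0) * edges[(u, v)]
    return expand(poly.get(t, 0))

x1, x2 = symbols('x1 x2')
edges = {(0, 1): x1, (0, 2): x2, (1, 3): x1, (2, 3): x2}  # a homogeneous width-2 ABP
print(abp_polynomial(edges, 0, 3))                         # x1**2 + x2**2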

Cite as

Mrinal Kumar. A Quadratic Lower Bound for Homogeneous Algebraic Branching Programs. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 19:1-19:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{kumar:LIPIcs.CCC.2017.19,
  author =	{Kumar, Mrinal},
  title =	{{A Quadratic Lower Bound for Homogeneous Algebraic Branching Programs}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{19:1--19:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.19},
  URN =		{urn:nbn:de:0030-drops-75134},
  doi =		{10.4230/LIPIcs.CCC.2017.19},
  annote =	{Keywords: algebraic branching programs, arithmetic circuits, determinantal complexity, lower bounds}
}
Document
On Algebraic Branching Programs of Small Width

Authors: Karl Bringmann, Christian Ikenmeyer, and Jeroen Zuiddam


Abstract
In 1979 Valiant showed that the complexity class VP_e of families with polynomially bounded formula size is contained in the class VP_s of families that have algebraic branching programs (ABPs) of polynomially bounded size. Motivated by the problem of separating these classes we study the topological closure VP_e-bar, i.e. the class of polynomials that can be approximated arbitrarily closely by polynomials in VP_e. We describe VP_e-bar with a strikingly simple complete polynomial (in characteristic different from 2) whose recursive definition is similar to the Fibonacci numbers. Further understanding this polynomial seems to be a promising route to new formula lower bounds. Our methods are rooted in the study of ABPs of small constant width. In 1992 Ben-Or and Cleve showed that formula size is polynomially equivalent to width-3 ABP size. We extend their result (in characteristic different from 2) by showing that approximate formula size is polynomially equivalent to approximate width-2 ABP size. This is surprising because in 2011 Allender and Wang gave explicit polynomials that cannot be computed by width-2 ABPs at all! The details of our construction lead to the aforementioned characterization of VP_e-bar. As a natural continuation of this work we prove that the class VNP can be described as the class of families that admit a hypercube summation of polynomially bounded dimension over a product of polynomially many affine linear forms. This gives the first separations of algebraic complexity classes from their nondeterministic analogs.

Cite as

Karl Bringmann, Christian Ikenmeyer, and Jeroen Zuiddam. On Algebraic Branching Programs of Small Width. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 20:1-20:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.CCC.2017.20,
  author =	{Bringmann, Karl and Ikenmeyer, Christian and Zuiddam, Jeroen},
  title =	{{On Algebraic Branching Programs of Small Width}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{20:1--20:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.20},
  URN =		{urn:nbn:de:0030-drops-75217},
  doi =		{10.4230/LIPIcs.CCC.2017.20},
  annote =	{Keywords: algebraic branching programs, algebraic complexity theory, border complexity, formula size, iterated matrix multiplication}
}
Document
Reconstruction of Full Rank Algebraic Branching Programs

Authors: Neeraj Kayal, Vineet Nair, Chandan Saha, and Sébastien Tavenas


Abstract
An algebraic branching program (ABP) A can be modelled as a product expression X_1 X_2 ... X_d, where X_1 and X_d are 1 x w and w x 1 matrices respectively, and every other X_k is a w x w matrix; the entries of these matrices are linear forms in m variables over a field F (which we assume to be either Q or a field of characteristic poly(m)). The polynomial computed by A is the entry of the 1 x 1 matrix obtained from the product X_1 X_2 ... X_d. We say A is a full rank ABP if the w^2(d-2) + 2w linear forms occurring in the matrices X_1, X_2, ... , X_d are F-linearly independent. Our main result is a randomized reconstruction algorithm for full rank ABPs: Given blackbox access to an m-variate polynomial f of degree at most m, the algorithm outputs a full rank ABP computing f if such an ABP exists, or outputs 'no full rank ABP exists' (with high probability). The running time of the algorithm is polynomial in m and b, where b is the bit length of the coefficients of f. The algorithm works even if X_k is a w_{k-1} x w_k matrix (with w_0 = w_d = 1), and v = (w_1, ..., w_{d-1}) is unknown. The result is obtained by designing a randomized polynomial time equivalence test for the family of iterated matrix multiplication polynomial IMM_{v,d}, the (1,1)-th entry of a product of d rectangular symbolic matrices whose dimensions are according to v in N^{d-1}. At its core, the algorithm exploits a connection between the irreducible invariant subspaces of the Lie algebra of the group of symmetries of a polynomial f that is equivalent to IMM_{v,d} and the 'layer spaces' of a full rank ABP computing f. This connection also helps determine the group of symmetries of IMM_{v,d} and show that IMM_{v,d} is characterized by its group of symmetries.

Cite as

Neeraj Kayal, Vineet Nair, Chandan Saha, and Sébastien Tavenas. Reconstruction of Full Rank Algebraic Branching Programs. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 21:1-21:61, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{kayal_et_al:LIPIcs.CCC.2017.21,
  author =	{Kayal, Neeraj and Nair, Vineet and Saha, Chandan and Tavenas, S\'{e}bastien},
  title =	{{Reconstruction of Full Rank Algebraic Branching Programs}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{21:1--21:61},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.21},
  URN =		{urn:nbn:de:0030-drops-75318},
  doi =		{10.4230/LIPIcs.CCC.2017.21},
  annote =	{Keywords: Circuit reconstruction, algebraic branching programs, equivalence test, iterated matrix multiplication, Lie algebra}
}
Document
Complexity-Theoretic Foundations of Quantum Supremacy Experiments

Authors: Scott Aaronson and Lijie Chen


Abstract
In the near future, there will likely be special-purpose quantum computers with 40-50 high-quality qubits. This paper lays general theoretical foundations for how to use such devices to demonstrate "quantum supremacy": that is, a clear quantum speedup for some task, motivated by the goal of overturning the Extended Church-Turing Thesis as confidently as possible. First, we study the hardness of sampling the output distribution of a random quantum circuit, along the lines of a recent proposal by the Quantum AI group at Google. We show that there's a natural average-case hardness assumption, which has nothing to do with sampling, yet implies that no polynomial-time classical algorithm can pass a statistical test that the quantum sampling procedure's outputs do pass. Compared to previous work - for example, on BosonSampling and IQP - the central advantage is that we can now talk directly about the observed outputs, rather than about the distribution being sampled. Second, in an attempt to refute our hardness assumption, we give a new algorithm, inspired by Savitch's Theorem, for simulating a general quantum circuit with n qubits and m gates in polynomial space and m^O(n) time. We then discuss why this and other known algorithms fail to refute our assumption. Third, resolving an open problem of Aaronson and Arkhipov, we show that any strong quantum supremacy theorem - of the form "if approximate quantum sampling is classically easy, then the polynomial hierarchy collapses" - must be non-relativizing. This sharply contrasts with the situation for exact sampling. Fourth, refuting a conjecture by Aaronson and Ambainis, we show that the Fourier Sampling problem achieves a constant versus linear separation between quantum and randomized query complexities. Fifth, in search of a "happy medium" between black-box and non-black-box arguments, we study quantum supremacy relative to oracles in P/poly. Previous work implies that, if one-way functions exist, then quantum supremacy is possible relative to such oracles. We show, conversely, that some computational assumption is needed: if SampBPP=SampBQP and NP is in BPP, then quantum supremacy is impossible relative to oracles with small circuits.
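
A hedged sketch (ours, not the paper's exact algorithm) of the divide-and-conquer idea behind a Savitch-style simulation: a single amplitude <y|U_m ... U_1|x> is computed by splitting the gate sequence in half and summing over the intermediate basis state, so the space used is polynomial in n and log m while the running time grows exponentially. The helpers gate_entry and amplitude are hypothetical names:

import numpy as np

def gate_entry(U, qubits, y, x, n):
    # Entry <y|G|x> of gate U acting on `qubits` (identity elsewhere),
    # computed without materialising the full 2^n x 2^n matrix.
    for q in range(n):
        if q not in qubits and ((x >> q) & 1) != ((y >> q) & 1):
            return 0.0
    row = sum(((y >> q) & 1) << i for i, q in enumerate(qubits))
    col = sum(((x >> q) & 1) << i for i, q in enumerate(qubits))
    return U[row, col]

def amplitude(gates, x, y, n):
    # <y|U_m ... U_1|x>: recursion of depth log m, summing over 2^n midpoints.
    if len(gates) == 1:
        U, qs = gates[0]
        return gate_entry(U, qs, y, x, n)
    half = len(gates) // 2
    return sum(amplitude(gates[half:], z, y, n) * amplitude(gates[:half], x, z, n)
               for z in range(1 << n))

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(amplitude([(H, [0]), (H, [0])], 0, 0, n=2))  # ~1.0, since HH = I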

Cite as

Scott Aaronson and Lijie Chen. Complexity-Theoretic Foundations of Quantum Supremacy Experiments. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 22:1-22:67, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{aaronson_et_al:LIPIcs.CCC.2017.22,
  author =	{Aaronson, Scott and Chen, Lijie},
  title =	{{Complexity-Theoretic Foundations of Quantum Supremacy Experiments}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{22:1--22:67},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.22},
  URN =		{urn:nbn:de:0030-drops-75274},
  doi =		{10.4230/LIPIcs.CCC.2017.22},
  annote =	{Keywords: computational complexity, quantum computing, quantum supremacy}
}
Document
Augmented Index and Quantum Streaming Algorithms for DYCK(2)

Authors: Ashwin Nayak and Dave Touchette


Abstract
We show how two recently developed quantum information theoretic tools can be applied to obtain lower bounds on quantum information complexity. We also develop new tools with potential for broader applicability, and use them to establish a lower bound on the quantum information complexity for the Augmented Index function on an easy distribution. This approach allows us to handle superpositions rather than distributions over inputs, the main technical challenge faced previously. By providing a quantum generalization of the argument of Jain and Nayak [IEEE TIT'14], we leverage this to obtain a lower bound on the space complexity of multi-pass, unidirectional quantum streaming algorithms for the DYCK(2) language.
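
For context (our addition): DYCK(2) is the language of balanced strings over two bracket types, and the obvious classical one-pass recognizer keeps a stack, which is precisely the kind of space usage the lower bound speaks to:

def is_dyck2(stream):
    # One-pass recognizer for DYCK(2); the stack can grow linearly in the
    # stream length, which is what streaming lower bounds are about.
    stack, pairs = [], {')': '(', ']': '['}
    for c in stream:
        if c in '([':
            stack.append(c)
        elif not stack or stack.pop() != pairs[c]:
            return False
    return not stack

print(is_dyck2('([])[]'))  # True
print(is_dyck2('([)]'))    # False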

Cite as

Ashwin Nayak and Dave Touchette. Augmented Index and Quantum Streaming Algorithms for DYCK(2). In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 23:1-23:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{nayak_et_al:LIPIcs.CCC.2017.23,
  author =	{Nayak, Ashwin and Touchette, Dave},
  title =	{{Augmented Index and Quantum Streaming Algorithms for DYCK(2)}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{23:1--23:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.23},
  URN =		{urn:nbn:de:0030-drops-75437},
  doi =		{10.4230/LIPIcs.CCC.2017.23},
  annote =	{Keywords: Quantum streaming algorithms, Space complexity, Quantum communication complexity, Quantum information cost, DYCK(2), Augmented Index}
}
Document
Separating Quantum Communication and Approximate Rank

Authors: Anurag Anshu, Shalev Ben-David, Ankit Garg, Rahul Jain, Robin Kothari, and Troy Lee


Abstract
One of the best lower bound methods for the quantum communication complexity of a function H (with or without shared entanglement) is the logarithm of the approximate rank of the communication matrix of H. This measure is essentially equivalent to the approximate gamma-2 norm and generalized discrepancy, and subsumes several other lower bounds. All known lower bounds on quantum communication complexity in the general unbounded-round model can be shown via the logarithm of approximate rank, and it was an open problem to give any separation at all between quantum communication complexity and the logarithm of the approximate rank. In this work we provide the first such separation: We exhibit a total function H with quantum communication complexity almost quadratically larger than the logarithm of its approximate rank. We construct H using the communication lookup function framework of Anshu et al. (FOCS 2016) based on the cheat sheet framework of Aaronson et al. (STOC 2016). From a starting function F, this framework defines a new function H=F_G. Our main technical result is a lower bound on the quantum communication complexity of F_G in terms of the discrepancy of F, which we do via quantum information theoretic arguments. We show the upper bound on the approximate rank of F_G by relating it to the Boolean circuit size of the starting function F.
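
A toy illustration of the measure in play (ours; it computes the exact rank, a crude proxy for the approximate rank the paper is about): the communication matrix of inner product mod 2 and its log-rank.

import numpy as np

n = 3
# +/-1 communication matrix of inner product mod 2 on n-bit inputs.
M = np.array([[(-1) ** bin(r & c).count('1') for c in range(2 ** n)]
              for r in range(2 ** n)])
rank = np.linalg.matrix_rank(M)
print(rank, np.log2(rank))  # full rank 2^n, so the log-rank is n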

Cite as

Anurag Anshu, Shalev Ben-David, Ankit Garg, Rahul Jain, Robin Kothari, and Troy Lee. Separating Quantum Communication and Approximate Rank. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 24:1-24:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{anshu_et_al:LIPIcs.CCC.2017.24,
  author =	{Anshu, Anurag and Ben-David, Shalev and Garg, Ankit and Jain, Rahul and Kothari, Robin and Lee, Troy},
  title =	{{Separating Quantum Communication and Approximate Rank}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{24:1--24:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.24},
  URN =		{urn:nbn:de:0030-drops-75303},
  doi =		{10.4230/LIPIcs.CCC.2017.24},
  annote =	{Keywords: Communication Complexity, Quantum Computing, Lower Bounds, logrank, Quantum Information}
}
Document
Optimal Quantum Sample Complexity of Learning Algorithms

Authors: Srinivasan Arunachalam and Ronald de Wolf


Abstract
In learning theory, the VC dimension of a concept class C is the most common way to measure its "richness". A fundamental result says that the number of examples needed to learn an unknown target concept c in C under an unknown distribution D is tightly determined by the VC dimension d of the concept class C. Specifically, in the PAC model, Theta(d/eps + log(1/delta)/eps) examples are necessary and sufficient for a learner to output, with probability 1-delta, a hypothesis h that is eps-close to the target concept c (measured under D). In the related agnostic model, where the samples need not come from a c in C, we know that Theta(d/eps^2 + log(1/delta)/eps^2) examples are necessary and sufficient to output a hypothesis h in C whose error is at most eps worse than the error of the best concept in C. Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson, who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atici and Servedio, improved by Zhang, showed that in the PAC setting (where the learner has to succeed for every distribution), quantum examples cannot be much more powerful: the required number of quantum examples is Omega(d^{1-eta}/eps + d + log(1/delta)/eps) for arbitrarily small constant eta>0. Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two proof approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds and yields the same bounds for quantum sample complexity up to a log(d/eps) factor. We then give a second approach that avoids the log-factor loss, based on analyzing the behavior of the "Pretty Good Measurement" on the quantum state identification problems that correspond to learning. This shows classical and quantum sample complexity are equal up to constant factors for every concept class C.
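
A worked numeric instance (ours) of the two classical bounds quoted above, with the Theta constants dropped:

import math

def pac_samples(d, eps, delta):
    return d / eps + math.log(1 / delta) / eps            # realizable PAC bound

def agnostic_samples(d, eps, delta):
    return d / eps ** 2 + math.log(1 / delta) / eps ** 2  # agnostic bound

print(round(pac_samples(d=10, eps=0.1, delta=0.05)))      # ~130 examples
print(round(agnostic_samples(d=10, eps=0.1, delta=0.05))) # ~1300 examples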

Cite as

Srinivasan Arunachalam and Ronald de Wolf. Optimal Quantum Sample Complexity of Learning Algorithms. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 25:1-25:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{arunachalam_et_al:LIPIcs.CCC.2017.25,
  author =	{Arunachalam, Srinivasan and de Wolf, Ronald},
  title =	{{Optimal Quantum Sample Complexity of Learning Algorithms}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{25:1--25:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.25},
  URN =		{urn:nbn:de:0030-drops-75241},
  doi =		{10.4230/LIPIcs.CCC.2017.25},
  annote =	{Keywords: Quantum computing, PAC learning, agnostic learning, VC dimension}
}
Document
Settling the Query Complexity of Non-Adaptive Junta Testing

Authors: Xi Chen, Rocco A. Servedio, Li-Yang Tan, Erik Waingarten, and Jinyu Xie


Abstract
We prove that any non-adaptive algorithm that tests whether an unknown Boolean function f is a k-junta or epsilon-far from every k-junta must make ~Omega(k^{3/2}/epsilon) many queries for a wide range of parameters k and epsilon. Our result dramatically improves previous lower bounds from [BGSMdW13, STW15], and is essentially optimal given Blais's non-adaptive junta tester from [Blais08], which makes ~O(k^{3/2})/epsilon queries. Combined with the adaptive tester of [Blais09], which makes O(k log k + k/epsilon) queries, our result shows that adaptivity enables polynomial savings in query complexity for junta testing.
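
A much-simplified sketch (ours, not the actual testers cited above) of the block-influence idea that non-adaptive junta testers build on: partition the coordinates into blocks, probe pairs of inputs that differ only inside one block, and count the blocks that ever flip f. A k-junta can flip at most k blocks. All names and the block count are illustrative:

import random

def influential_blocks(f, n, k, trials=200):
    blocks = [[] for _ in range(2 * k * k)]   # illustrative number of blocks
    for i in range(n):
        random.choice(blocks).append(i)
    hit = set()
    for _ in range(trials):
        b = random.randrange(len(blocks))
        x = [random.randrange(2) for _ in range(n)]
        y = list(x)
        for i in blocks[b]:
            y[i] = random.randrange(2)        # re-randomise inside block b only
        if f(x) != f(y):
            hit.add(b)
    return len(hit)                           # a real tester accepts iff <= k

f = lambda x: x[0] ^ x[3]                     # a 2-junta on 8 variables
print(influential_blocks(f, n=8, k=2))        # at most 2 here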

Cite as

Xi Chen, Rocco A. Servedio, Li-Yang Tan, Erik Waingarten, and Jinyu Xie. Settling the Query Complexity of Non-Adaptive Junta Testing. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 26:1-26:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.CCC.2017.26,
  author =	{Chen, Xi and Servedio, Rocco A. and Tan, Li-Yang and Waingarten, Erik and Xie, Jinyu},
  title =	{{Settling the Query Complexity of Non-Adaptive Junta Testing}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{26:1--26:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.26},
  URN =		{urn:nbn:de:0030-drops-75283},
  doi =		{10.4230/LIPIcs.CCC.2017.26},
  annote =	{Keywords: property testing, juntas, query complexity}
}
Document
An Adaptivity Hierarchy Theorem for Property Testing

Authors: Clément L. Canonne and Tom Gur


Abstract
Adaptivity is known to play a crucial role in property testing. In particular, there exist properties for which there is an exponential gap between the power of adaptive testing algorithms, wherein each query may be determined by the answers received to prior queries, and their non-adaptive counterparts, in which all queries are independent of answers obtained from previous queries. In this work, we investigate the role of adaptivity in property testing at a finer level. We first quantify the degree of adaptivity of a testing algorithm by considering the number of "rounds of adaptivity" it uses. More accurately, we say that a tester is k-(round) adaptive if it makes queries in k+1 rounds, where the queries in the i'th round may depend on the answers obtained in the previous i-1 rounds. Then, we ask the following question: Does the power of testing algorithms smoothly grow with the number of rounds of adaptivity? We provide a positive answer to the foregoing question by proving an adaptivity hierarchy theorem for property testing. Specifically, our main result shows that for every n in N and 0 <= k <= n^{0.99} there exists a property Pi_{n,k} of functions for which (1) there exists a k-adaptive tester for Pi_{n,k} with query complexity tilde O(k), yet (2) any (k-1)-adaptive tester for Pi_{n,k} must make Omega(n) queries. In addition, we show that such a qualitative adaptivity hierarchy can be witnessed for testing natural properties of graphs.
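
Schematically (our illustration, not tied to the construction of Pi_{n,k}): a k-(round) adaptive tester issues its queries in k+1 batches, each of which may depend on all answers received so far.

def run_tester(oracle, plan, k):
    # plan(r, history) returns the round-r queries given all prior (q, answer) pairs.
    history = []
    for r in range(k + 1):
        for q in plan(r, history):
            history.append((q, oracle(q)))
    return history

f = lambda i: i % 2
# A 1-adaptive plan: round 0 asks 0 and 1; round 1 depends on the first answer.
plan = lambda r, h: [0, 1] if r == 0 else [h[0][1] + 2]
print(run_tester(f, plan, k=1))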

Cite as

Clément L. Canonne and Tom Gur. An Adaptivity Hierarchy Theorem for Property Testing. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 27:1-27:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{canonne_et_al:LIPIcs.CCC.2017.27,
  author =	{Canonne, Cl\'{e}ment L. and Gur, Tom},
  title =	{{An Adaptivity Hierarchy Theorem for Property Testing}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{27:1--27:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.27},
  URN =		{urn:nbn:de:0030-drops-75164},
  doi =		{10.4230/LIPIcs.CCC.2017.27},
  annote =	{Keywords: Property Testing, Coding Theory, Hierarchy Theorems}
}
Document
Distribution Testing Lower Bounds via Reductions from Communication Complexity

Authors: Eric Blais, Clément L. Canonne, and Tom Gur


Abstract
We present a new methodology for proving distribution testing lower bounds, establishing a connection between distribution testing and the simultaneous message passing (SMP) communication model. Extending the framework of Blais, Brody, and Matulef (Computational Complexity, 2012), we show a simple way to reduce (private-coin) SMP problems to distribution testing problems. This method allows us to prove new distribution testing lower bounds, as well as to provide simple proofs of known lower bounds. Our main result is concerned with testing identity to a specific distribution p, given as a parameter. In a recent and influential work, Valiant and Valiant (FOCS, 2014) showed that the sample complexity of the aforementioned problem is closely related to the 2/3-quasinorm of p. We obtain alternative bounds on the complexity of this problem in terms of an arguably more intuitive measure and using simpler proofs. More specifically, we prove that the sample complexity is essentially determined by a fundamental operator in the theory of interpolation of Banach spaces, known as Peetre's K-functional. We show that this quantity is closely related to the size of the effective support of p (loosely speaking, the number of supported elements that constitute the vast majority of the mass of p). This result, in turn, stems from an unexpected connection to functional analysis and refined concentration of measure inequalities, which arise naturally in our reduction.
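
The "effective support" notion can be made concrete in a few lines (our illustration): the least number of heaviest elements of p that together carry a 1-eps fraction of its mass.

def effective_support(p, eps):
    # Greedily take the heaviest masses until 1 - eps of the total is covered.
    total, count = 0.0, 0
    for mass in sorted(p, reverse=True):
        if total >= 1 - eps:
            break
        total += mass
        count += 1
    return count

p = [0.4, 0.3, 0.2, 0.05, 0.03, 0.02]
print(effective_support(p, eps=0.1))  # 3: the top three carry 0.9 of the mass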

Cite as

Eric Blais, Clément L. Canonne, and Tom Gur. Distribution Testing Lower Bounds via Reductions from Communication Complexity. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 28:1-28:40, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{blais_et_al:LIPIcs.CCC.2017.28,
  author =	{Blais, Eric and Canonne, Cl\'{e}ment L. and Gur, Tom},
  title =	{{Distribution Testing Lower Bounds via Reductions from Communication Complexity}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{28:1--28:40},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.28},
  URN =		{urn:nbn:de:0030-drops-75366},
  doi =		{10.4230/LIPIcs.CCC.2017.28},
  annote =	{Keywords: Distribution testing, communication complexity, lower bounds, simultaneous message passing, functional analysis}
}
Document
Exponentially Small Soundness for the Direct Product Z-Test

Authors: Irit Dinur and Inbal Livni Navon


Abstract
Given a function f:[N]^k->[M]^k, the Z-test is a three-query test for checking if a function f is a direct product, namely if there are functions g_1,...,g_k:[N]->[M] such that f(x_1,...,x_k)=(g_1(x_1),...,g_k(x_k)) for every input x in [N]^k. This test was introduced by Impagliazzo et al. (SICOMP 2012), who showed that if the test passes with probability epsilon > exp(-sqrt k) then f is Omega(epsilon)-close to a direct product function in some precise sense. It remained an open question whether the soundness of this test can be pushed all the way down to exp(-k) (which would be optimal). This is our main result: we show that whenever f passes the Z-test with probability epsilon > exp(-k), there must be a global reason for this: namely, f must be close to a product function on some Omega(epsilon) fraction of its domain. Towards proving our result, we analyze the related (two-query) V-test, and prove a "restricted global structure" theorem for it. Such theorems were also proven in previous works on direct product testing in the small soundness regime. The most recent such work, by Dinur and Steurer (CCC 2014), analyzed the V-test in the exponentially small soundness regime. We strengthen the conclusion of that theorem by moving from an "in expectation" statement to a stronger "concentration of measure" type of statement, which we prove using hypercontractivity. This stronger statement allows us to proceed to analyze the Z-test. We analyze two variants of direct product tests: one for functions on ordered tuples, as above, and another for functions on sets of size k. The work of Impagliazzo et al. was actually focused only on functions of the latter type, i.e. on sets. We prove exponentially small soundness for the Z-test for both variants. Although the two appear very similar, the analysis for tuples is trickier and requires some additional ideas.
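
A hedged sketch (ours) of the two-query V-test discussed above: query f on two tuples that agree on a random set A of coordinates, and accept iff the two answers agree on A. A genuine direct product passes every time; the theorem is about what low but non-negligible acceptance probability implies.

import random

def v_test(f, N, k, agree_fraction=0.5):
    A = random.sample(range(k), int(agree_fraction * k))
    x = [random.randrange(N) for _ in range(k)]
    y = [x[i] if i in A else random.randrange(N) for i in range(k)]
    fx, fy = f(tuple(x)), f(tuple(y))
    return all(fx[i] == fy[i] for i in A)

g = lambda a: (a * 7) % 5                        # a single coordinate function
f = lambda xs: tuple(g(a) for a in xs)           # an honest direct product
print(all(v_test(f, N=10, k=8) for _ in range(100)))  # True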

Cite as

Irit Dinur and Inbal Livni Navon. Exponentially Small Soundness for the Direct Product Z-Test. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 29:1-29:50, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{dinur_et_al:LIPIcs.CCC.2017.29,
  author =	{Dinur, Irit and Livni Navon, Inbal},
  title =	{{Exponentially Small Soundness for the Direct Product Z-Test}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{29:1--29:50},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.29},
  URN =		{urn:nbn:de:0030-drops-75336},
  doi =		{10.4230/LIPIcs.CCC.2017.29},
  annote =	{Keywords: Direct Product Testing, Property Testing, Agreement}
}
Document
On the Polynomial Parity Argument Complexity of the Combinatorial Nullstellensatz

Authors: Aleksandrs Belovs, Gábor Ivanyos, Youming Qiao, Miklos Santha, and Siyi Yang


Abstract
The complexity class PPA consists of NP-search problems which are reducible to the parity principle in undirected graphs. It contains a wide variety of interesting problems from graph theory, combinatorics, algebra and number theory, but only a few of these are known to be complete in the class. Before this work, the known complete problems were all discretizations or combinatorial analogues of topological fixed point theorems. Here we prove the PPA-completeness of two problems of radically different style. They are PPA-Circuit CNSS and PPA-Circuit Chevalley, related respectively to the Combinatorial Nullstellensatz and to the Chevalley-Warning Theorem over the two-element field GF(2). The inputs of these problems contain PPA-circuits, which are arithmetic circuits with special symmetry properties that ensure that the polynomials they compute always have an even number of zeros. In the proof of the result we relate the multilinear degree of the polynomials to the parity of the maximal parse subcircuits that compute monomials with maximal multilinear degree, and we show that the maximal parse subcircuits of a PPA-circuit can be paired in polynomial time.
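
A toy numerical check (ours) of the Chevalley-Warning phenomenon over GF(2) that PPA-Circuit Chevalley is built around: a polynomial in n variables of degree less than n has a number of zeros divisible by 2.

from itertools import product

def count_zeros_gf2(poly, n):
    # Brute-force count of zeros of poly over GF(2)^n.
    return sum(1 for v in product((0, 1), repeat=n) if poly(v) % 2 == 0)

p = lambda v: v[0] * v[1] + v[2]   # degree 2 < n = 3 variables
print(count_zeros_gf2(p, 3))       # 4, an even number, as the theorem predicts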

Cite as

Aleksandrs Belovs, Gábor Ivanyos, Youming Qiao, Miklos Santha, and Siyi Yang. On the Polynomial Parity Argument Complexity of the Combinatorial Nullstellensatz. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 30:1-30:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{belovs_et_al:LIPIcs.CCC.2017.30,
  author =	{Belovs, Aleksandrs and Ivanyos, G\'{a}bor and Qiao, Youming and Santha, Miklos and Yang, Siyi},
  title =	{{On the Polynomial Parity Argument Complexity of the Combinatorial Nullstellensatz}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{30:1--30:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.30},
  URN =		{urn:nbn:de:0030-drops-75260},
  doi =		{10.4230/LIPIcs.CCC.2017.30},
  annote =	{Keywords: Chevalley-Warning Theorem, Combinatorial Nullstellensatz, Polynomial Parity Argument, arithmetic circuit}
}
Document
An Exponential Lower Bound for Homogeneous Depth-5 Circuits over Finite Fields

Authors: Mrinal Kumar and Ramprasad Saptharishi


Abstract
In this paper, we show exponential lower bounds for the class of homogeneous depth-5 circuits over all small finite fields. More formally, we show that there is an explicit family {P_d} of polynomials in VNP, where P_d is of degree d in n = d^{O(1)} variables, such that over all finite fields GF(q), any homogeneous depth-5 circuit which computes P_d must have size at least exp(Omega_q(sqrt{d})). To the best of our knowledge, this is the first super-polynomial lower bound for this class for any non-binary field. Our proof builds on the ideas developed on the way to proving lower bounds for homogeneous depth-4 circuits [Gupta et al., Fournier et al., Kayal et al., Kumar-Saraf] and for non-homogeneous depth-3 circuits over finite fields [Grigoriev-Karpinski, Grigoriev-Razborov]. Our key insight is to look at the space of shifted partial derivatives of a polynomial as a space of functions from GF(q)^n to GF(q), as opposed to looking at them as a space of formal polynomials, and to build on a tighter analysis of the lower bound of Kumar and Saraf [Kumar-Saraf].
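
For intuition (our toy sketch over Q; the paper's point is precisely to work over GF(q) and to treat these objects as functions rather than formal polynomials): the shifted-partials measure is the rank of a coefficient matrix built from products of monomial shifts and partial derivatives.

import sympy as sp
from itertools import combinations_with_replacement

x = sp.symbols('x0:4')
f = x[0] * x[1] + x[2] * x[3]

derivs = [sp.diff(f, v) for v in x]                      # first-order partials
shifts = [sp.Mul(*c) for c in combinations_with_replacement(x, 1)]
space = [sp.Poly(sp.expand(s * d), *x) for s in shifts for d in derivs]

monomials = sorted({m for p in space for m in p.monoms()})
rows = [[p.as_dict().get(m, 0) for m in monomials] for p in space]
print(sp.Matrix(rows).rank())  # dimension of the span of shifted partials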

Cite as

Mrinal Kumar and Ramprasad Saptharishi. An Exponential Lower Bound for Homogeneous Depth-5 Circuits over Finite Fields. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 31:1-31:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.CCC.2017.31,
  author =	{Kumar, Mrinal and Saptharishi, Ramprasad},
  title =	{{An Exponential Lower Bound for Homogeneous Depth-5 Circuits over Finite Fields}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{31:1--31:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.31},
  URN =		{urn:nbn:de:0030-drops-75142},
  doi =		{10.4230/LIPIcs.CCC.2017.31},
  annote =	{Keywords: arithmetic circuits, lower bounds, separations, depth reduction}
}
Document
Complete Derandomization of Identity Testing and Reconstruction of Read-Once Formulas

Authors: Daniel Minahan and Ilya Volkovich


Abstract
In this paper we study the identity testing problem for arithmetic read-once formulas (ROF) and some related models. A read-once formula is a formula (a circuit whose underlying graph is a tree) in which the operations are {+,x} and every input variable labels at most one leaf. We obtain the first polynomial-time deterministic identity testing algorithm that operates in the black-box setting for read-once formulas, as well as for some other related models. As an application, we obtain the first polynomial-time deterministic reconstruction algorithm for such formulas. Our results are obtained by improving and extending the analysis of the algorithm of [Shpilka-Volkovich, 2015].
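
For contrast (our addition): the classical randomized black-box identity test that results like this one derandomize. By the Schwartz-Zippel lemma, a nonzero m-variate polynomial of degree d evaluated at a uniformly random point of {0,...,S-1}^m is zero with probability at most d/S.

import random

def randomized_pit(blackbox, m, field_size, trials=20):
    # Each trial errs on a nonzero input with probability <= deg / field_size.
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(m)]
        if blackbox(point) % field_size != 0:
            return 'nonzero'
    return 'probably zero'

# (x0 + x1)*x2 - x0*x2 - x1*x2 is identically zero:
p = lambda v: (v[0] + v[1]) * v[2] - v[0] * v[2] - v[1] * v[2]
print(randomized_pit(p, m=3, field_size=101))  # 'probably zero'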

Cite as

Daniel Minahan and Ilya Volkovich. Complete Derandomization of Identity Testing and Reconstruction of Read-Once Formulas. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 32:1-32:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{minahan_et_al:LIPIcs.CCC.2017.32,
  author =	{Minahan, Daniel and Volkovich, Ilya},
  title =	{{Complete Derandomization of Identity Testing and Reconstruction of Read-Once Formulas}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{32:1--32:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.32},
  URN =		{urn:nbn:de:0030-drops-75128},
  doi =		{10.4230/LIPIcs.CCC.2017.32},
  annote =	{Keywords: Derandomization, Read-Once Formulas, Identity Testing, Arithmetic Circuits, Reconstruction}
}
Document
Greedy Strikes Again: A Deterministic PTAS for Commutative Rank of Matrix Spaces

Authors: Markus Blaeser, Gorav Jindal, and Anurag Pandey


Abstract
We consider the problem of commutative rank computation of a given matrix space. A matrix space is a (linear) subspace of the (linear) space of n x n matrices over a given field. The problem is fundamental, as it generalizes several computational problems from algebra and combinatorics. For instance, checking if the commutative rank of the space is n subsumes problems such as testing perfect matching in graphs and identity testing of algebraic branching programs. An efficient deterministic computation of the commutative rank is a major open problem, although there is a simple and efficient randomized algorithm for it. Recently, there has been a series of results on computing the non-commutative rank of matrix spaces in deterministic polynomial time. Since the non-commutative rank of any matrix space is at most twice the commutative rank, one immediately gets a deterministic 1/2-approximation algorithm for the computation of the commutative rank. This leads to a natural question of whether this approximation ratio can be improved. In this paper, we answer this question affirmatively. We present a deterministic polynomial-time approximation scheme (PTAS) for computing the commutative rank of a given matrix space B. More specifically, given a matrix space B and a rational number e > 0, we give an algorithm that runs in time O(n^(4 + 3/e)) and computes a matrix A in B such that the rank of A is at least (1-e) times the commutative rank of B. The algorithm is the natural greedy one: it repeatedly takes the first set of k matrices that increases the rank of the matrix constructed so far, stopping once no further improvement is found; the set size k depends on e.
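
The "simple and efficient randomized algorithm" mentioned in the abstract fits in a few lines (our sketch): over a large enough field, a single random linear combination of the generators attains the commutative rank with high probability.

import numpy as np

rng = np.random.default_rng(0)

def randomized_commutative_rank(basis):
    # Rank of one random combination; correct with high probability, since
    # attaining the maximal rank is a generic property of the coefficients.
    coeffs = rng.integers(1, 10 ** 6, size=len(basis))
    return np.linalg.matrix_rank(sum(c * B for c, B in zip(coeffs, basis)))

B1 = np.array([[1, 0], [0, 0]])
B2 = np.array([[0, 1], [0, 0]])
print(randomized_commutative_rank([B1, B2]))  # 1: the whole span has rank 1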

Cite as

Markus Blaeser, Gorav Jindal, and Anurag Pandey. Greedy Strikes Again: A Deterministic PTAS for Commutative Rank of Matrix Spaces. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 33:1-33:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{blaeser_et_al:LIPIcs.CCC.2017.33,
  author =	{Blaeser, Markus and Jindal, Gorav and Pandey, Anurag},
  title =	{{Greedy Strikes Again: A Deterministic PTAS for Commutative Rank of Matrix Spaces}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{33:1--33:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.33},
  URN =		{urn:nbn:de:0030-drops-75194},
  doi =		{10.4230/LIPIcs.CCC.2017.33},
  annote =	{Keywords: Commutative Rank, Matrix Spaces, PTAS, Wong sequences}
}
