LIPIcs, Volume 102

33rd Computational Complexity Conference (CCC 2018)




Publication Details

  • Published at: 2018-06-04
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-069-9
  • DBLP: db/conf/coco/coco2018


Documents

Document
Complete Volume
LIPIcs, Volume 102, CCC'18, Complete Volume

Authors: Rocco A. Servedio


Abstract
LIPIcs, Volume 102, CCC'18, Complete Volume

Cite as

33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@Proceedings{servedio:LIPIcs.CCC.2018,
  title =	{{LIPIcs, Volume 102, CCC'18, Complete Volume}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018},
  URN =		{urn:nbn:de:0030-drops-89338},
  doi =		{10.4230/LIPIcs.CCC.2018},
  annote =	{Keywords: Theory of computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Rocco A. Servedio


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 0:i-0:xi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{servedio:LIPIcs.CCC.2018.0,
  author =	{Servedio, Rocco A.},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{0:i--0:xi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.0},
  URN =		{urn:nbn:de:0030-drops-88609},
  doi =		{10.4230/LIPIcs.CCC.2018.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Pseudorandom Generators from Polarizing Random Walks

Authors: Eshan Chattopadhyay, Pooya Hatami, Kaave Hosseini, and Shachar Lovett


Abstract
We propose a new framework for constructing pseudorandom generators for n-variate Boolean functions. It is based on two new notions. First, we introduce fractional pseudorandom generators, which are pseudorandom distributions taking values in [-1,1]^n. Next, we use a fractional pseudorandom generator to take the steps of a random walk in [-1,1]^n that converges to {-1,1}^n. We prove that this random walk converges fast (in time logarithmic in n) due to polarization. As an application, we construct pseudorandom generators for Boolean functions with bounded Fourier tails. We use this to obtain a pseudorandom generator for functions with sensitivity s, whose seed length is polynomial in s. Other examples include functions computed by branching programs of various sorts or by bounded depth circuits.
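
The polarization dynamic is easy to simulate. The toy sketch below (our own simplification, not necessarily the paper's exact combining map) updates each coordinate as x <- x + (1 - x^2) * y with fresh bounded steps y: the map keeps x in [-1,1], is a martingale for mean-zero y, and its step size vanishes at the endpoints, so coordinates rapidly stick near {-1,+1} and can then be rounded to Boolean values.

import random

# Toy polarizing walk (illustrative sketch; the paper's combiner may
# differ in details). Each coordinate steps by (1 - x^2) * y for a fresh
# random y in {-1/2, +1/2}, which never leaves [-1, 1].
def step(x, delta=0.5):
    y = delta if random.random() < 0.5 else -delta
    return x + (1 - x * x) * y

n, steps = 1024, 40
xs = [0.0] * n
for _ in range(steps):
    xs = [step(x) for x in xs]
print(sum(abs(x) > 0.99 for x in xs) / n, "fraction of coordinates polarized")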

Cite as

Eshan Chattopadhyay, Pooya Hatami, Kaave Hosseini, and Shachar Lovett. Pseudorandom Generators from Polarizing Random Walks. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 1:1-1:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chattopadhyay_et_al:LIPIcs.CCC.2018.1,
  author =	{Chattopadhyay, Eshan and Hatami, Pooya and Hosseini, Kaave and Lovett, Shachar},
  title =	{{Pseudorandom Generators from Polarizing Random Walks}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{1:1--1:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.1},
  URN =		{urn:nbn:de:0030-drops-88880},
  doi =		{10.4230/LIPIcs.CCC.2018.1},
  annote =	{Keywords: AC0, branching program, polarization, pseudorandom generators, random walks, Sensitivity}
}
Document
A PRG for Boolean PTF of Degree 2 with Seed Length Subpolynomial in epsilon and Logarithmic in n

Authors: Daniel Kane and Sankeerth Rao


Abstract
We construct and analyze a pseudorandom generator for degree 2 Boolean polynomial threshold functions. Random constructions achieve the optimal seed length of O(log n + log 1/epsilon); however, the best known explicit construction, due to [Ilias Diakonikolas, 2010], uses a seed length of O(log n * epsilon^{-8}). In this work we give an explicit construction that uses a seed length of O(log n + (1/epsilon)^{o(1)}). Note that this improves the seed length substantially: the dependence on the error epsilon is additive and grows only subpolynomially, as opposed to the previously known multiplicative polynomial dependence. Our generator uses dimensionality reduction on a Nisan-Wigderson based pseudorandom generator given by Kabanets and Lu [Kabanets and Lu, 2018].

Cite as

Daniel Kane and Sankeerth Rao. A PRG for Boolean PTF of Degree 2 with Seed Length Subpolynomial in epsilon and Logarithmic in n. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 2:1-2:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{kane_et_al:LIPIcs.CCC.2018.2,
  author =	{Kane, Daniel and Rao, Sankeerth},
  title =	{{A PRG for Boolean PTF of Degree 2 with Seed Length Subpolynomial in epsilon and Logarithmic in n}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{2:1--2:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.2},
  URN =		{urn:nbn:de:0030-drops-88861},
  doi =		{10.4230/LIPIcs.CCC.2018.2},
  annote =	{Keywords: Pseudorandomness, Polynomial Threshold Functions}
}
Document
A New Approach for Constructing Low-Error, Two-Source Extractors

Authors: Avraham Ben-Aroya, Eshan Chattopadhyay, Dean Doron, Xin Li, and Amnon Ta-Shma


Abstract
Our main contribution in this paper is a new reduction from explicit two-source extractors for polynomially-small entropy rate and negligible error to explicit t-non-malleable extractors with seed-length that has a good dependence on t. Our reduction is based on the Chattopadhyay and Zuckerman framework (STOC 2016), and, surprisingly, we dispense with the use of resilient functions, which appeared to be a major ingredient there and in follow-up works. The use of resilient functions posed a fundamental barrier towards achieving negligible error, and our new reduction circumvents this bottleneck. The parameters we require from t-non-malleable extractors for our reduction to work hold in a non-explicit construction, but currently it is not known how to explicitly construct such extractors. As a result we do not give an unconditional construction of an explicit low-error two-source extractor. Nonetheless, we believe our work gives a viable approach for solving the important problem of low-error two-source extractors. Furthermore, our work highlights an existing barrier in constructing low-error two-source extractors, and draws attention to the dependence of the seed-length of the non-malleable extractor on the parameter t. We hope this work will lead to further developments in explicit constructions of both non-malleable and two-source extractors.

Cite as

Avraham Ben-Aroya, Eshan Chattopadhyay, Dean Doron, Xin Li, and Amnon Ta-Shma. A New Approach for Constructing Low-Error, Two-Source Extractors. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 3:1-3:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{benaroya_et_al:LIPIcs.CCC.2018.3,
  author =	{Ben-Aroya, Avraham and Chattopadhyay, Eshan and Doron, Dean and Li, Xin and Ta-Shma, Amnon},
  title =	{{A New Approach for Constructing Low-Error, Two-Source Extractors}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{3:1--3:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.3},
  URN =		{urn:nbn:de:0030-drops-88877},
  doi =		{10.4230/LIPIcs.CCC.2018.3},
  annote =	{Keywords: Two-Source Extractors, Non-Malleable Extractors, Pseudorandomness, Explicit Constructions}
}
Document
Lossless Dimension Expanders via Linearized Polynomials and Subspace Designs

Authors: Venkatesan Guruswami, Nicolas Resch, and Chaoping Xing


Abstract
For a vector space F^n over a field F, an (eta,beta)-dimension expander of degree d is a collection of d linear maps Gamma_j : F^n -> F^n such that for every subspace U of F^n of dimension at most eta n, the image of U under all the maps, sum_{j=1}^d Gamma_j(U), has dimension at least beta dim(U). Over a finite field, a random collection of d = O(1) maps Gamma_j offers excellent "lossless" expansion with high probability: beta ~~ d for eta >= Omega(1/d). When it comes to a family of explicit constructions (for growing n), however, achieving even a modest expansion factor beta = 1+epsilon with constant degree is a non-trivial goal. We present an explicit construction of dimension expanders over finite fields based on linearized polynomials and subspace designs, drawing inspiration from recent progress on list-decoding in the rank-metric. Our approach yields the following: - Lossless expansion over large fields; more precisely beta >= (1-epsilon)d and eta >= (1-epsilon)/d with d = O_epsilon(1), when |F| >= Omega(n). - Expansion optimal up to constant factors over fields of arbitrarily small polynomial size; more precisely beta >= Omega(delta d) and eta >= Omega(1/(delta d)) with d=O_delta(1), when |F| >= n^{delta}. Previously, an approach reducing to monotone expanders (a form of vertex expansion that is highly non-trivial to establish) gave (Omega(1),1+Omega(1))-dimension expanders of constant degree over all fields. An approach based on "rank condensing via subspace designs" led to dimension expanders with beta ≳ sqrt{d} over large fields. Ours is the first construction to achieve lossless dimension expansion, or even expansion proportional to the degree.
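
To make the definition concrete, here is a small numerical check over F_2 (our illustration only: it samples random maps, which the abstract notes already expand well with high probability; the paper's contribution is matching this explicitly). It computes dim(sum_j Gamma_j(U)) for a random low-dimensional U.

import numpy as np

# Sanity check of the (eta, beta)-expansion property on a toy instance.
def rank_f2(M):
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]  # move a pivot row into place
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]           # eliminate column c elsewhere
        r += 1
    return r

rng = np.random.default_rng(1)
n, d = 12, 3
Gammas = [rng.integers(0, 2, size=(n, n)) for _ in range(d)]
U = rng.integers(0, 2, size=(n, 2))               # spans a small subspace
image = np.hstack([(G @ U) % 2 for G in Gammas])  # spans sum_j Gamma_j(U)
print("dim U =", rank_f2(U), "| dim of image =", rank_f2(image))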

Cite as

Venkatesan Guruswami, Nicolas Resch, and Chaoping Xing. Lossless Dimension Expanders via Linearized Polynomials and Subspace Designs. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 4:1-4:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.CCC.2018.4,
  author =	{Guruswami, Venkatesan and Resch, Nicolas and Xing, Chaoping},
  title =	{{Lossless Dimension Expanders via Linearized Polynomials and Subspace Designs}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{4:1--4:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.4},
  URN =		{urn:nbn:de:0030-drops-88859},
  doi =		{10.4230/LIPIcs.CCC.2018.4},
  annote =	{Keywords: Algebraic constructions, coding theory, linear algebra, list-decoding, polynomial method, pseudorandomness}
}
Document
NP-hardness of Minimum Circuit Size Problem for OR-AND-MOD Circuits

Authors: Shuichi Hirahara, Igor C. Oliveira, and Rahul Santhanam


Abstract
The Minimum Circuit Size Problem (MCSP) asks for the size of the smallest Boolean circuit that computes a given truth table. It is a prominent problem in NP that is believed to be hard, but for which no proof of NP-hardness has been found. A significant number of works have demonstrated the central role of this problem and its variations in diverse areas such as cryptography, derandomization, proof complexity, learning theory, and circuit lower bounds. The NP-hardness of computing the minimum number of terms in a DNF formula consistent with a given truth table was proved by W. Masek [William J. Masek, 1979] in 1979. In this work, we make the first progress in showing NP-hardness for more expressive classes of circuits, and establish an analogous result for the MCSP problem for depth-3 circuits of the form OR-AND-MOD_2. Our techniques extend to an NP-hardness result for MOD_m gates at the bottom layer under inputs from (Z / m Z)^n.
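
The DNF version of the problem that Masek proved NP-hard is simple to state in code. The brute-force solver below (an illustrative sketch of ours, exponential in the number of variables) finds the minimum number of terms in a DNF consistent with a given truth table.

from itertools import combinations, product

# A term is a spec over n variables: 0 = variable absent, 1 = positive
# literal (needs x_i = 1), 2 = negated literal (needs x_i = 0).
def term_accepts(spec, x):
    return all(s == 0 or xi == (s == 1) for s, xi in zip(spec, x))

def min_dnf_size(truth_table, n):
    points = list(product([0, 1], repeat=n))
    ones = [x for x, v in zip(points, truth_table) if v]
    zeros = [x for x, v in zip(points, truth_table) if not v]
    # Implicants: terms that accept no zero of the function.
    terms = [t for t in product(range(3), repeat=n)
             if not any(term_accepts(t, x) for x in zeros)]
    for k in range(len(ones) + 1):
        for cover in combinations(terms, k):
            if all(any(term_accepts(t, x) for t in cover) for x in ones):
                return k

# XOR on two variables needs two terms: (x1 AND NOT x2) OR (NOT x1 AND x2).
print(min_dnf_size([0, 1, 1, 0], n=2))  # 2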

Cite as

Shuichi Hirahara, Igor C. Oliveira, and Rahul Santhanam. NP-hardness of Minimum Circuit Size Problem for OR-AND-MOD Circuits. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 5:1-5:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.CCC.2018.5,
  author =	{Hirahara, Shuichi and Oliveira, Igor C. and Santhanam, Rahul},
  title =	{{NP-hardness of Minimum Circuit Size Problem for OR-AND-MOD Circuits}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{5:1--5:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.5},
  URN =		{urn:nbn:de:0030-drops-88831},
  doi =		{10.4230/LIPIcs.CCC.2018.5},
  annote =	{Keywords: NP-hardness, Minimum Circuit Size Problem, depth-3 circuits}
}
Document
Limits on Representing Boolean Functions by Linear Combinations of Simple Functions: Thresholds, ReLUs, and Low-Degree Polynomials

Authors: Richard Ryan Williams


Abstract
We consider the problem of representing Boolean functions exactly by "sparse" linear combinations (over R) of functions from some "simple" class C. In particular, given C we are interested in finding low-complexity functions lacking sparse representations. When C forms a basis for the space of Boolean functions (e.g., the set of PARITY functions or the set of conjunctions) this sort of problem has a well-understood answer; the problem becomes interesting when C is "overcomplete" and the set of functions is not linearly independent. We focus on the cases where C is the set of linear threshold functions, the set of rectified linear units (ReLUs), and the set of low-degree polynomials over a finite field, all of which are well-studied in different contexts. We provide generic tools for proving lower bounds on representations of this kind. Applying these, we give several new lower bounds for "semi-explicit" Boolean functions. Let alpha(n) be an unbounded function such that n^{alpha(n)} is time constructible (e.g. alpha(n) = log^*(n)). We show: - Functions in NTIME[n^{alpha(n)}] that require super-polynomially many linear threshold functions to represent (depth-two neural networks with sign activation function, a special case of depth-two threshold circuit lower bounds). - Functions in NTIME[n^{alpha(n)}] that require super-polynomially many ReLU gates to represent (depth-two neural networks with ReLU activation function). - Functions in NTIME[n^{alpha(n)}] that require super-polynomially many O(1)-degree F_p-polynomials to represent exactly, for every prime p (related to problems regarding Higher-Order "Uncertainty Principles"). We also obtain a function in E^{NP} requiring 2^{Omega(n)} linear combinations. - Functions in NTIME[n^{poly(log n)}] that require super-polynomially many ACC ∘ THR circuits to represent exactly (further generalizing the recent lower bounds of Murray and the author). We also obtain "fixed-polynomial" lower bounds for functions in NP, for the first three representation classes. All our lower bounds are obtained via algorithms for analyzing linear combinations of simple functions in the above scenarios, in ways which substantially beat exhaustive search.
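
As a tiny concrete instance of the representations in question (our example, not the paper's): XOR on two bits has an exact representation as a linear combination of just two linear threshold functions.

from itertools import product

# Over inputs in {0,1}^2: XOR(x1, x2) = [x1 + x2 >= 1] - [x1 + x2 >= 2].
def ltf(x, theta):
    return 1 if sum(x) >= theta else 0

for x in product([0, 1], repeat=2):
    assert ltf(x, 1) - ltf(x, 2) == x[0] ^ x[1]
print("XOR is a linear combination of 2 LTFs (verified on all inputs)")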

Cite as

Richard Ryan Williams. Limits on Representing Boolean Functions by Linear Combinations of Simple Functions: Thresholds, ReLUs, and Low-Degree Polynomials. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 6:1-6:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{williams:LIPIcs.CCC.2018.6,
  author =	{Williams, Richard Ryan},
  title =	{{Limits on Representing Boolean Functions by Linear Combinations of Simple Functions: Thresholds, ReLUs, and Low-Degree Polynomials}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{6:1--6:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.6},
  URN =		{urn:nbn:de:0030-drops-88846},
  doi =		{10.4230/LIPIcs.CCC.2018.6},
  annote =	{Keywords: linear threshold functions, lower bounds, neural networks, low-degree polynomials}
}
Document
The Power of Natural Properties as Oracles

Authors: Russell Impagliazzo, Valentine Kabanets, and Ilya Volkovich


Abstract
We study the power of randomized complexity classes that are given oracle access to a natural property of Razborov and Rudich (JCSS, 1997) or its special case, the Minimal Circuit Size Problem (MCSP). We show that in a number of complexity-theoretic results that use the SAT oracle, one can use the MCSP oracle instead. For example, we show that ZPEXP^{MCSP} !subseteq P/poly, which should be contrasted with the previously known circuit lower bound ZPEXP^{NP} !subseteq P/poly. We also show that, assuming the existence of Indistinguishability Obfuscators (IO), SAT and MCSP are equivalent in the sense that one has a ZPP algorithm if and only if the other one does. We interpret our results as providing some evidence that MCSP may be NP-hard under randomized polynomial-time reductions.

Cite as

Russell Impagliazzo, Valentine Kabanets, and Ilya Volkovich. The Power of Natural Properties as Oracles. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 7:1-7:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{impagliazzo_et_al:LIPIcs.CCC.2018.7,
  author =	{Impagliazzo, Russell and Kabanets, Valentine and Volkovich, Ilya},
  title =	{{The Power of Natural Properties as Oracles}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{7:1--7:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.7},
  URN =		{urn:nbn:de:0030-drops-88824},
  doi =		{10.4230/LIPIcs.CCC.2018.7},
  annote =	{Keywords: natural properties, Minimal Circuit Size Problem (MCSP), circuit lower bounds, hardness of MCSP, learning algorithms, obfuscation, Indistinguishability Obfuscators (IO)}
}
Document
Linear Sketching over F_2

Authors: Sampath Kannan, Elchanan Mossel, Swagato Sanyal, and Grigory Yaroslavtsev


Abstract
We initiate a systematic study of linear sketching over F_2. For a given Boolean function treated as f : F_2^n -> F_2, a randomized F_2-sketch is a distribution M over d x n matrices with entries in F_2 such that Mx suffices for computing f(x) with high probability. Such sketches for d << n can be used to design small-space distributed and streaming algorithms. Motivated by these applications we study a connection between F_2-sketching and a two-player one-way communication game for the corresponding XOR-function. We conjecture that F_2-sketching is optimal for this communication game. Our results confirm this conjecture for multiple important classes of functions: 1) low-degree F_2-polynomials, 2) functions with sparse Fourier spectrum, 3) most symmetric functions, 4) recursive majority function. These results rely on a new structural theorem that shows that F_2-sketching is optimal (up to constant factors) for uniformly distributed inputs. Furthermore, we show that (non-uniform) streaming algorithms that have to process random updates over F_2 can be constructed as F_2-sketches for the uniform distribution. In contrast with the previous work of Li, Nguyen and Woodruff (STOC'14), who show an analogous result for linear sketches over integers in the adversarial setting, our result does not require the stream length to be triply exponential in n and holds for streams of length O(n) constructed through uniformly random updates.
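
A minimal example of the model (our illustration): any parity function admits a one-row F_2-sketch, so a single bit suffices, far below n.

import numpy as np

# For a parity f(x) = <a, x> mod 2, the 1 x n matrix M = a gives a
# one-bit sketch Mx from which f(x) is read off exactly.
rng = np.random.default_rng(0)
n = 64
a = rng.integers(0, 2, size=n)   # support vector of the parity f
M = a.reshape(1, n)              # the d x n sketching matrix, d = 1
x = rng.integers(0, 2, size=n)   # input arriving, e.g., as a stream
sketch = (M @ x) % 2             # the sketch Mx over F_2
assert sketch[0] == np.dot(a, x) % 2
print("f(x) =", int(sketch[0]))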

Cite as

Sampath Kannan, Elchanan Mossel, Swagato Sanyal, and Grigory Yaroslavtsev. Linear Sketching over F_2. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 8:1-8:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{kannan_et_al:LIPIcs.CCC.2018.8,
  author =	{Kannan, Sampath and Mossel, Elchanan and Sanyal, Swagato and Yaroslavtsev, Grigory},
  title =	{{Linear Sketching over F\_2}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{8:1--8:37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.8},
  URN =		{urn:nbn:de:0030-drops-88819},
  doi =		{10.4230/LIPIcs.CCC.2018.8},
  annote =	{Keywords: Linear sketch, Streaming algorithms, XOR-functions, Communication complexity}
}
Document
Communication Complexity with Small Advantage

Authors: Thomas Watson


Abstract
We study problems in randomized communication complexity when the protocol is only required to attain some small advantage over purely random guessing, i.e., it produces the correct output with probability at least epsilon greater than one over the codomain size of the function. Previously, Braverman and Moitra (STOC 2013) showed that the set-intersection function requires Theta(epsilon n) communication to achieve advantage epsilon. Building on this, we prove the same bound for two variants of set-intersection: (1) the classic "tribes" function obtained by composing with And (provided 1/epsilon is at most the width of the And), and (2) the variant where the sets are uniquely intersecting and the goal is to determine partial information about (say, certain bits of the index of) the intersecting coordinate.

Cite as

Thomas Watson. Communication Complexity with Small Advantage. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 9:1-9:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{watson:LIPIcs.CCC.2018.9,
  author =	{Watson, Thomas},
  title =	{{Communication Complexity with Small Advantage}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{9:1--9:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.9},
  URN =		{urn:nbn:de:0030-drops-88805},
  doi =		{10.4230/LIPIcs.CCC.2018.9},
  annote =	{Keywords: Communication, complexity, small, advantage}
}
Document
Algebraic Dependencies and PSPACE Algorithms in Approximative Complexity

Authors: Zeyu Guo, Nitin Saxena, and Amit Sinhababu


Abstract
Testing whether a set f of polynomials has an algebraic dependence is a basic problem with several applications. The polynomials are given as algebraic circuits. The algebraic independence testing question is wide open over finite fields (Dvir, Gabizon, Wigderson, FOCS'07). Previously, the best complexity known was NP^{#P} (Mittmann, Saxena, Scheiblechner, Trans.AMS'14). In this work we put the problem in AM ∩ coAM. In particular, dependence testing is unlikely to be NP-hard and joins the league of problems of "intermediate" complexity, e.g., graph isomorphism and integer factoring. Our proof method is algebro-geometric: estimating the size of the image/preimage of the polynomial map f over the finite field. A gap in this size is utilized in the AM protocols. Next, we study the open question of testing whether every annihilator of f has zero constant term (Kayal, CCC'09). We give a geometric characterization using the Zariski closure of the image of f, introducing a new problem called approximate polynomials satisfiability (APS). We show that APS is NP-hard and, using projective algebraic-geometry ideas, we put APS in PSPACE (the prior best was EXPSPACE via Gröbner basis computation). As an unexpected application of this to approximative complexity theory we get: over any field, hitting-sets for overline{VP} can be verified in PSPACE. This solves an open problem posed in (Mulmuley, FOCS'12, J.AMS 2017), greatly mitigating the GCT Chasm (exponentially in terms of space complexity).
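
For intuition, the classical Jacobian criterion decides algebraic independence over characteristic zero; the hard case the paper addresses is exactly the small-finite-field setting, where this criterion fails. A quick sympy check, with example polynomials of our own choosing:

import sympy as sp

# Jacobian criterion (characteristic zero): f_1, ..., f_k are
# algebraically independent iff their Jacobian matrix has rank k.
# Here (x+y)^2 - 2*(xy) - (x^2+y^2) = 0 is an annihilator, so the
# first triple is dependent.
x, y, z = sp.symbols('x y z')
dependent = [x + y, x * y, x**2 + y**2]
independent = [x, y, z]

def jacobian_rank(fs):
    return sp.Matrix([[sp.diff(f, v) for v in (x, y, z)] for f in fs]).rank()

print(jacobian_rank(dependent))    # 2 < 3: algebraically dependent
print(jacobian_rank(independent))  # 3: algebraically independent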

Cite as

Zeyu Guo, Nitin Saxena, and Amit Sinhababu. Algebraic Dependencies and PSPACE Algorithms in Approximative Complexity. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 10:1-10:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{guo_et_al:LIPIcs.CCC.2018.10,
  author =	{Guo, Zeyu and Saxena, Nitin and Sinhababu, Amit},
  title =	{{Algebraic Dependencies and PSPACE Algorithms in Approximative Complexity}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{10:1--10:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.10},
  URN =		{urn:nbn:de:0030-drops-88786},
  doi =		{10.4230/LIPIcs.CCC.2018.10},
  annote =	{Keywords: algebraic dependence, Jacobian, Arthur-Merlin, approximate polynomial, satisfiability, hitting-set, border VP, finite field, PSPACE, EXPSPACE, GCT Chasm}
}
Document
Unbalancing Sets and an Almost Quadratic Lower Bound for Syntactically Multilinear Arithmetic Circuits

Authors: Noga Alon, Mrinal Kumar, and Ben Lee Volk


Abstract
We prove a lower bound of Omega(n^2/log^2 n) on the size of any syntactically multilinear arithmetic circuit computing some explicit multilinear polynomial f(x_1, ..., x_n). Our approach expands and improves upon a result of Raz, Shpilka and Yehudayoff ([Ran Raz et al., 2008]), who proved a lower bound of Omega(n^{4/3}/log^2 n) for the same polynomial. Our improvement follows from an asymptotically optimal lower bound for a generalized version of Galvin's problem in extremal set theory.

Cite as

Noga Alon, Mrinal Kumar, and Ben Lee Volk. Unbalancing Sets and an Almost Quadratic Lower Bound for Syntactically Multilinear Arithmetic Circuits. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 11:1-11:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{alon_et_al:LIPIcs.CCC.2018.11,
  author =	{Alon, Noga and Kumar, Mrinal and Volk, Ben Lee},
  title =	{{Unbalancing Sets and an Almost Quadratic Lower Bound for Syntactically Multilinear Arithmetic Circuits}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{11:1--11:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.11},
  URN =		{urn:nbn:de:0030-drops-88799},
  doi =		{10.4230/LIPIcs.CCC.2018.11},
  annote =	{Keywords: Algebraic Complexity, Multilinear Circuits, Circuit Lower Bounds}
}
Document
Hardness Amplification for Non-Commutative Arithmetic Circuits

Authors: Marco L. Carmosino, Russell Impagliazzo, Shachar Lovett, and Ivan Mihajlin


Abstract
We show that proving mildly super-linear lower bounds on non-commutative arithmetic circuits implies exponential lower bounds on non-commutative circuits. That is, non-commutative circuit complexity is a threshold phenomenon: an apparently weak lower bound actually suffices to show the strongest lower bounds we could desire. This is part of a recent line of inquiry into why arithmetic circuit complexity, despite being a heavily restricted version of Boolean complexity, still cannot prove super-linear lower bounds on general devices. One can view our work as positive news (it suffices to prove weak lower bounds to get strong ones) or negative news (it is as hard to prove weak lower bounds as it is to prove strong ones). We leave it to the reader to determine their own level of optimism.

Cite as

Marco L. Carmosino, Russell Impagliazzo, Shachar Lovett, and Ivan Mihajlin. Hardness Amplification for Non-Commutative Arithmetic Circuits. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 12:1-12:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{carmosino_et_al:LIPIcs.CCC.2018.12,
  author =	{Carmosino, Marco L. and Impagliazzo, Russell and Lovett, Shachar and Mihajlin, Ivan},
  title =	{{Hardness Amplification for Non-Commutative Arithmetic Circuits}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{12:1--12:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.12},
  URN =		{urn:nbn:de:0030-drops-88772},
  doi =		{10.4230/LIPIcs.CCC.2018.12},
  annote =	{Keywords: arithmetic circuits, hardness amplification, circuit lower bounds, non-commutative computation}
}
Document
Hardness vs Randomness for Bounded Depth Arithmetic Circuits

Authors: Chi-Ning Chou, Mrinal Kumar, and Noam Solomon


Abstract
In this paper, we study the question of hardness-randomness tradeoffs for bounded depth arithmetic circuits. We show that if there is a family of explicit polynomials {f_n}, where f_n is of degree O(log^2 n/log^2 log n) in n variables such that f_n cannot be computed by depth Delta arithmetic circuits of size poly(n), then there is a deterministic sub-exponential time algorithm for polynomial identity testing of arithmetic circuits of depth Delta-5. This is incomparable to a beautiful result of Dvir et al. [SICOMP, 2009], where they showed that super-polynomial lower bounds for depth Delta circuits for any explicit family of polynomials (of potentially high degree) imply sub-exponential time deterministic PIT for depth Delta-5 circuits of bounded individual degree. Thus, we remove the "bounded individual degree" condition in the work of Dvir et al. at the cost of strengthening the hardness assumption to hold for polynomials of low degree. The key technical ingredient of our proof is the following property of roots of polynomials computable by a bounded depth arithmetic circuit: if f(x_1, x_2, ..., x_n) and P(x_1, x_2, ..., x_n, y) are polynomials of degree d and r respectively, such that P can be computed by a circuit of size s and depth Delta and P(x_1, x_2, ..., x_n, f) equiv 0, then f can be computed by a circuit of size poly(n, s, r, d^{O(sqrt{d})}) and depth Delta + 3. In comparison, Dvir et al. showed that f can be computed by a circuit of depth Delta + 3 and size poly(n, s, r, d^{t}), where t is the degree of P in y. Thus, the size upper bound in the work of Dvir et al. is non-trivial when t is small but d could be large, whereas our size upper bound is non-trivial when d is small, but t could be large.

Cite as

Chi-Ning Chou, Mrinal Kumar, and Noam Solomon. Hardness vs Randomness for Bounded Depth Arithmetic Circuits. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 13:1-13:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chou_et_al:LIPIcs.CCC.2018.13,
  author =	{Chou, Chi-Ning and Kumar, Mrinal and Solomon, Noam},
  title =	{{Hardness vs Randomness for Bounded Depth Arithmetic Circuits}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{13:1--13:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.13},
  URN =		{urn:nbn:de:0030-drops-88765},
  doi =		{10.4230/LIPIcs.CCC.2018.13},
  annote =	{Keywords: Algebraic Complexity, Polynomial Factorization Circuit Lower Bounds, Polynomial Identity Testing}
}
Document
On The Hardness of Approximate and Exact (Bichromatic) Maximum Inner Product

Authors: Lijie Chen


Abstract
In this paper we study the (Bichromatic) Maximum Inner Product Problem (Max-IP), in which we are given sets A and B of vectors, and the goal is to find a in A and b in B maximizing the inner product a * b. Max-IP is very basic and serves as the base problem in the recent breakthrough of [Abboud et al., FOCS 2017] on hardness of approximation for polynomial-time problems. It is also used (implicitly) in the argument for hardness of exact l_2-Furthest Pair (and other important problems in computational geometry) in poly-log-log dimensions in [Williams, SODA 2018]. We have three main results regarding this problem. - Characterization of Multiplicative Approximation. First, we study the best multiplicative approximation ratio for Boolean Max-IP in sub-quadratic time. We show that, for Max-IP with two sets of n vectors from {0,1}^{d}, there is an n^{2 - Omega(1)} time (d/log n)^{Omega(1)}-multiplicative-approximating algorithm, and we show this is conditionally optimal, as such a (d/log n)^{o(1)}-approximating algorithm would refute SETH. A similar characterization is also achieved for additive approximation for Max-IP. - 2^{O(log^* n)}-dimensional Hardness for Exact Max-IP Over The Integers. Second, we revisit the hardness of solving Max-IP exactly for vectors with integer entries. We show that, under SETH, for Max-IP with sets of n vectors from Z^{d} for some d = 2^{O(log^* n)}, every exact algorithm requires n^{2 - o(1)} time. With the reduction from [Williams, SODA 2018], it follows that l_2-Furthest Pair and Bichromatic l_2-Closest Pair in 2^{O(log^* n)} dimensions require n^{2 - o(1)} time. - Connection with NP * UPP Communication Protocols. Last, we establish a connection between conditional lower bounds for exact Max-IP with integer entries and NP * UPP communication protocols for Set-Disjointness, parallel to the connection between conditional lower bounds for approximating Max-IP and MA communication protocols for Set-Disjointness. The lower bound in our first result is a direct corollary of the new MA protocol for Set-Disjointness introduced in [Rubinstein, STOC 2018], and our algorithms utilize the polynomial method and simple random sampling. Our second result follows from a new dimensionality self-reduction from the Orthogonal Vectors problem for n vectors from {0,1}^{d} to n vectors from Z^{l} where l = 2^{O(log^* d)}, dramatically improving the previous reduction in [Williams, SODA 2018]. The key technical ingredient is a recursive application of the Chinese Remainder Theorem. As a side product, we obtain an MA communication protocol for Set-Disjointness with complexity O(sqrt{n log n log log n}), slightly improving the O(sqrt{n} log n) bound [Aaronson and Wigderson, TOCT 2009], and approaching the Omega(sqrt{n}) lower bound [Klauck, CCC 2003]. Moreover, we show that (under SETH) one can apply the O(sqrt{n}) BQP communication protocol for Set-Disjointness to prove near-optimal hardness for approximation to Max-IP with vectors in {-1,1}^d. This answers a question from [Abboud et al., FOCS 2017] in the affirmative.
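
For reference, the problem itself fits in a few lines; the code below (our illustration) is the trivial quadratic baseline that the paper's conditional lower bounds show is essentially optimal for exact computation.

# Bichromatic Maximum Inner Product: given sets A and B of vectors,
# find a in A and b in B maximizing <a, b>. Brute force is quadratic
# in |A| = |B| = n.
def max_ip(A, B):
    return max(sum(ai * bi for ai, bi in zip(a, b)) for a in A for b in B)

A = [(0, 1, 1), (1, 0, 1)]
B = [(1, 1, 0), (0, 1, 1)]
print(max_ip(A, B))  # 2, attained by a = (0, 1, 1), b = (0, 1, 1)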

Cite as

Lijie Chen. On The Hardness of Approximate and Exact (Bichromatic) Maximum Inner Product. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 14:1-14:45, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chen:LIPIcs.CCC.2018.14,
  author =	{Chen, Lijie},
  title =	{{On The Hardness of Approximate and Exact (Bichromatic) Maximum Inner Product}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{14:1--14:45},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.14},
  URN =		{urn:nbn:de:0030-drops-88752},
  doi =		{10.4230/LIPIcs.CCC.2018.14},
  annote =	{Keywords: Maximum Inner Product, SETH, Hardness of Approximation in P, Fined-Grained Complexity, Hopcroft's Problem, Chinese Remainder Theorem}
}
Document
Hardness of Function Composition for Semantic Read once Branching Programs

Authors: Jeff Edmonds, Venkatesh Medabalimi, and Toniann Pitassi


Abstract
In this work, we study time/space trade-offs for function composition. We prove asymptotically optimal lower bounds for function composition in the setting of nondeterministic read once branching programs, for the syntactic model as well as the stronger semantic model of read-once nondeterministic computation. We prove that such branching programs for solving the tree evaluation problem over an alphabet of size k require size roughly k^{Omega(h)}, i.e., space Omega(h log k). Our lower bound nearly matches the natural upper bound, which follows from the best strategy for black-white pebbling the underlying tree. While previous super-polynomial lower bounds have been proven for read-once nondeterministic branching programs (for both the syntactic as well as the semantic models), we give the first lower bounds for iterated function composition, and in these models our lower bounds are near optimal.
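
The tree evaluation problem is easy to state in code (an illustrative sketch with a toy instance of our own): leaves carry values in {0, ..., k-1}, internal nodes carry functions [k] x [k] -> [k], and the output is the root's value. Recursive evaluation stores one value per level, i.e. space O(h log k), matching the pebbling upper bound.

def tree_eval(node):
    # A node is either {'leaf': value} or
    # {'f': table, 'left': node, 'right': node}.
    if 'leaf' in node:
        return node['leaf']
    return node['f'][(tree_eval(node['left']), tree_eval(node['right']))]

k = 3
add_mod_k = {(a, b): (a + b) % k for a in range(k) for b in range(k)}
root = {'f': add_mod_k,
        'left': {'leaf': 2},
        'right': {'f': add_mod_k,
                  'left': {'leaf': 1},
                  'right': {'leaf': 2}}}
print(tree_eval(root))  # (2 + ((1 + 2) % 3)) % 3 = 2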

Cite as

Jeff Edmonds, Venkatesh Medabalimi, and Toniann Pitassi. Hardness of Function Composition for Semantic Read once Branching Programs. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 15:1-15:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{edmonds_et_al:LIPIcs.CCC.2018.15,
  author =	{Edmonds, Jeff and Medabalimi, Venkatesh and Pitassi, Toniann},
  title =	{{Hardness of Function Composition for Semantic Read once Branching Programs}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{15:1--15:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.15},
  URN =		{urn:nbn:de:0030-drops-88747},
  doi =		{10.4230/LIPIcs.CCC.2018.15},
  annote =	{Keywords: Branching Programs, Function Composition, Time-Space Tradeoffs, Semantic Read Once, Tree Evaluation Problem}
}
Document
Reordering Rule Makes OBDD Proof Systems Stronger

Authors: Sam Buss, Dmitry Itsykson, Alexander Knop, and Dmitry Sokolov


Abstract
Atserias, Kolaitis, and Vardi showed that the proof system of Ordered Binary Decision Diagrams with conjunction and weakening, OBDD(^, weakening), simulates CP^* (Cutting Planes with unary coefficients). We show that OBDD(^, weakening) can give exponentially shorter proofs than dag-like cutting planes. This is proved by showing that the Clique-Coloring tautologies have polynomial size proofs in the OBDD(^, weakening) system. The reordering rule allows changing the variable order for OBDDs. We show that OBDD(^, weakening, reordering) is strictly stronger than OBDD(^, weakening). This is proved using the Clique-Coloring tautologies, and by transforming tautologies using coded permutations and orification. We also give CNF formulas which have polynomial size OBDD(^) proofs but require superpolynomial (in fact, quasipolynomial size) resolution proofs, and thus we partially resolve an open question proposed by Groote and Zantema. Applying dag-like and tree-like lifting techniques to these results, we completely analyze which of the systems among CP^*, OBDD(^), OBDD(^, reordering), OBDD(^, weakening) and OBDD(^, weakening, reordering) polynomially simulate each other. For dag-like proof systems, some of our separations are quasipolynomial and some are exponential; for tree-like systems, all of our separations are exponential.

Cite as

Sam Buss, Dmitry Itsykson, Alexander Knop, and Dmitry Sokolov. Reordering Rule Makes OBDD Proof Systems Stronger. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 16:1-16:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{buss_et_al:LIPIcs.CCC.2018.16,
  author =	{Buss, Sam and Itsykson, Dmitry and Knop, Alexander and Sokolov, Dmitry},
  title =	{{Reordering Rule Makes OBDD Proof Systems Stronger}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{16:1--16:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.16},
  URN =		{urn:nbn:de:0030-drops-88720},
  doi =		{10.4230/LIPIcs.CCC.2018.16},
  annote =	{Keywords: Proof complexity, OBDD, Tseitin formulas, the Clique-Coloring principle, lifting theorems}
}
Document
Testing Linearity against Non-Signaling Strategies

Authors: Alessandro Chiesa, Peter Manohar, and Igor Shinkar


Abstract
Non-signaling strategies are collections of distributions with certain non-local correlations. They have been studied in Physics as a strict generalization of quantum strategies to understand the power and limitations of Nature's apparent non-locality. Recently, they have received attention in Theoretical Computer Science due to connections to Complexity and Cryptography. We initiate the study of Property Testing against non-signaling strategies, focusing first on the classical problem of linearity testing (Blum, Luby, and Rubinfeld; JCSS 1993). We prove that any non-signaling strategy that passes the linearity test with high probability must be close to a quasi-distribution over linear functions. Quasi-distributions generalize the notion of probability distributions over global objects (such as functions) by allowing negative probabilities, while at the same time requiring that "local views" follow standard distributions (with non-negative probabilities). Quasi-distributions arise naturally in the study of Quantum Mechanics as a tool to describe various non-local phenomena. Our analysis of the linearity test relies on Fourier analytic techniques applied to quasi-distributions. Along the way, we also establish general equivalences between non-signaling strategies and quasi-distributions, which we believe will provide a useful perspective on the study of Property Testing against non-signaling strategies beyond linearity testing.
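
The single-prover test being generalized is the classical BLR linearity test: pick x, y uniformly from F_2^n and accept iff f(x) + f(y) = f(x + y). A quick simulation, with example functions of our own choosing:

import random

n = 16

def blr_accept_rate(f, trials=2000):
    # Empirical acceptance probability of the BLR test on f.
    ok = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        y = [random.randint(0, 1) for _ in range(n)]
        xy = [(a + b) % 2 for a, b in zip(x, y)]
        ok += (f(x) ^ f(y)) == f(xy)
    return ok / trials

linear = lambda x: x[0] ^ x[3]               # a genuinely linear function
majority = lambda x: int(sum(x) > n // 2)    # far from every linear function
print(blr_accept_rate(linear))    # 1.0
print(blr_accept_rate(majority))  # noticeably below 1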

Cite as

Alessandro Chiesa, Peter Manohar, and Igor Shinkar. Testing Linearity against Non-Signaling Strategies. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 17:1-17:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chiesa_et_al:LIPIcs.CCC.2018.17,
  author =	{Chiesa, Alessandro and Manohar, Peter and Shinkar, Igor},
  title =	{{Testing Linearity against Non-Signaling Strategies}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{17:1--17:37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.17},
  URN =		{urn:nbn:de:0030-drops-88731},
  doi =		{10.4230/LIPIcs.CCC.2018.17},
  annote =	{Keywords: property testing, linearity testing, non-signaling strategies, quasi-distributions}
}
Document
Earthmover Resilience and Testing in Ordered Structures

Authors: Omri Ben-Eliezer and Eldar Fischer


Abstract
One of the main challenges in property testing is to characterize those properties that are testable with a constant number of queries. For unordered structures such as graphs and hypergraphs this task has been mostly settled. However, for ordered structures such as strings, images, and ordered graphs, the characterization problem seems very difficult in general. In this paper, we identify a wide class of properties of ordered structures - the earthmover resilient (ER) properties - and show that the "good behavior" of such properties allows us to obtain general testability results that are similar to (and more general than) those of unordered graphs. A property P is ER if, roughly speaking, slight changes in the order of the elements in an object satisfying P cannot make this object far from P. The class of ER properties includes, e.g., all unordered graph properties, many natural visual properties of images, such as convexity, and all hereditary properties of ordered graphs and images. A special case of our results implies, building on a recent result of Alon and the authors, that the distance of a given image or ordered graph from any hereditary property can be estimated (with good probability) up to a constant additive error, using a constant number of queries.

Cite as

Omri Ben-Eliezer and Eldar Fischer. Earthmover Resilience and Testing in Ordered Structures. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 18:1-18:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{beneliezer_et_al:LIPIcs.CCC.2018.18,
  author =	{Ben-Eliezer, Omri and Fischer, Eldar},
  title =	{{Earthmover Resilience and Testing in Ordered Structures}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{18:1--18:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.18},
  URN =		{urn:nbn:de:0030-drops-88718},
  doi =		{10.4230/LIPIcs.CCC.2018.18},
  annote =	{Keywords: characterizations of testability, distance estimation, earthmover resilient, ordered structures, property testing}
}
Document
New Hardness Results for the Permanent Using Linear Optics

Authors: Daniel Grier and Luke Schaeffer


Abstract
In 2011, Aaronson gave a striking proof, based on quantum linear optics, that the problem of computing the permanent of a matrix is #P-hard. Aaronson's proof led naturally to hardness of approximation results for the permanent, and it was arguably simpler than Valiant's seminal proof of the same fact in 1979. Nevertheless, it did not show #P-hardness of the permanent for any class of matrices which was not previously known. In this paper, we present a collection of new results about matrix permanents that are derived primarily via these linear optical techniques. First, we show that the problem of computing the permanent of a real orthogonal matrix is #P-hard. Much like Aaronson's original proof, this implies that even a multiplicative approximation remains #P-hard to compute. The hardness result even translates to permanents of orthogonal matrices over the finite field F_{p^4} for p != 2, 3. Interestingly, this characterization is tight: in fields of characteristic 2, the permanent coincides with the determinant; in fields of characteristic 3, one can efficiently compute the permanent of an orthogonal matrix by a nontrivial result of Kogan. Finally, we use more elementary arguments to prove #P-hardness for the permanent of a positive semidefinite matrix. This result shows that certain probabilities of boson sampling experiments with thermal states are hard to compute exactly, despite the fact that they can be efficiently sampled by a classical computer.
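
For reference, the permanent can be computed exactly by Ryser's inclusion-exclusion formula in O(2^n * n^2) time, essentially the best known exact method, which is consistent with the #P-hardness results here (illustrative implementation of ours):

from itertools import product

# Ryser: per(A) = sum over nonempty column subsets S of
# (-1)^{n-|S|} * prod_i sum_{j in S} A[i][j].
def permanent(A):
    n = len(A)
    total = 0
    for subset in product([0, 1], repeat=n):   # indicator vector of S
        if not any(subset):
            continue
        row_prod = 1
        for row in A:
            row_prod *= sum(v for v, s in zip(row, subset) if s)
        total += (-1) ** (n - sum(subset)) * row_prod
    return total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10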

Cite as

Daniel Grier and Luke Schaeffer. New Hardness Results for the Permanent Using Linear Optics. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 19:1-19:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{grier_et_al:LIPIcs.CCC.2018.19,
  author =	{Grier, Daniel and Schaeffer, Luke},
  title =	{{New Hardness Results for the Permanent Using Linear Optics}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{19:1--19:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.19},
  URN =		{urn:nbn:de:0030-drops-88702},
  doi =		{10.4230/LIPIcs.CCC.2018.19},
  annote =	{Keywords: Permanent, Linear optics, #P-hardness, Orthogonal matrices}
}
Document
Retracted: Two-Player Entangled Games are NP-Hard

Authors: Anand Natarajan and Thomas Vidick


Abstract
The article, published on June 4th, 2018 in the CCC 2018 proceedings, has been retracted by agreement between the authors, the editor(s), and the publisher Schloss Dagstuhl / LIPIcs. The retraction has been agreed due to an error in the proof of the main result. This error is carried over from an error in the referenced paper “Three-player entangled XOR games are NP-hard to approximate” by Thomas Vidick (SICOMP ’16). That paper was used in an essential way to obtain the present result, and the error cannot be addressed through an erratum. See Retraction Notice on the last page of the PDF. We show that it is NP-hard to approximate, to within an additive constant, the maximum success probability of players sharing quantum entanglement in a two-player game with classical questions of logarithmic length and classical answers of constant length. As a corollary, the inclusion NEXP subseteq MIP^*, first shown by Ito and Vidick (FOCS'12) with three provers, holds with two provers only. The proof is based on a simpler, improved analysis of the low-degree test of Raz and Safra (STOC'97) against two entangled provers.

Cite as

Anand Natarajan and Thomas Vidick. Retracted: Two-Player Entangled Games are NP-Hard. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 20:1-20:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{natarajan_et_al:LIPIcs.CCC.2018.20,
  author =	{Natarajan, Anand and Vidick, Thomas},
  title =	{{Retracted: Two-Player Entangled Games are NP-Hard}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{20:1--20:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.20},
  URN =		{urn:nbn:de:0030-drops-88696},
  doi =		{10.4230/LIPIcs.CCC.2018.20},
  annote =	{Keywords: low-degree testing, entangled nonlocal games, multi-prover interactive proof systems}
}
Document
Complexity Classification of Conjugated Clifford Circuits

Authors: Adam Bouland, Joseph F. Fitzsimons, and Dax Enshan Koh


Abstract
Clifford circuits - i.e. circuits composed of only CNOT, Hadamard, and pi/4 phase gates - play a central role in the study of quantum computation. However, their computational power is limited: a well-known result of Gottesman and Knill states that Clifford circuits are efficiently classically simulable. We show that in contrast, "conjugated Clifford circuits" (CCCs) - where one additionally conjugates every qubit by the same one-qubit gate U - can perform hard sampling tasks. In particular, we fully classify the computational power of CCCs by showing that essentially any non-Clifford conjugating unitary U can give rise to sampling tasks which cannot be efficiently classically simulated to constant multiplicative error, unless the polynomial hierarchy collapses. Furthermore, by standard techniques, this hardness result can be extended to allow for the more realistic model of constant additive error, under a plausible complexity-theoretic conjecture. This work can be seen as progress towards classifying the computational power of all restricted quantum gate sets.
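
A minimal numerical sketch of the model (our illustration): a conjugated Clifford circuit is U^{tensor n} C U^{dagger tensor n} for a Clifford circuit C. Here C is a single CNOT and U = T = diag(1, e^{i pi/4}), one non-Clifford choice covered by the paper's classification.

import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
T = np.diag([1, np.exp(1j * np.pi / 4)])
U = np.kron(T, T)                      # the same one-qubit gate on both qubits
gate = U @ CNOT @ U.conj().T           # the conjugated Clifford gate
assert np.allclose(gate @ gate.conj().T, np.eye(4))  # still unitary
print(np.round(gate, 3))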

Cite as

Adam Bouland, Joseph F. Fitzsimons, and Dax Enshan Koh. Complexity Classification of Conjugated Clifford Circuits. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 21:1-21:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{bouland_et_al:LIPIcs.CCC.2018.21,
  author =	{Bouland, Adam and Fitzsimons, Joseph F. and Koh, Dax Enshan},
  title =	{{Complexity Classification of Conjugated Clifford Circuits}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{21:1--21:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.21},
  URN =		{urn:nbn:de:0030-drops-88677},
  doi =		{10.4230/LIPIcs.CCC.2018.21},
  annote =	{Keywords: gate set classification, quantum advantage, sampling problems, polynomial hierarchy}
}
Document
Efficient Batch Verification for UP

Authors: Omer Reingold, Guy N. Rothblum, and Ron D. Rothblum


Abstract
Consider a setting in which a prover wants to convince a verifier of the correctness of k NP statements. For example, the prover wants to convince the verifier that k given integers N_1,...,N_k are all RSA moduli (i.e., products of equal-length primes). Clearly, this problem can be solved by simply having the prover send the k NP witnesses, but this involves a lot of communication. Can interaction help? In particular, is it possible to construct interactive proofs for this task whose communication grows sub-linearly with k? Our main result is such an interactive proof for verifying the correctness of any k UP statements (i.e., NP statements that have a unique witness). The proof-system uses only a constant number of rounds and the communication complexity is k^delta * poly(m), where delta>0 is an arbitrarily small constant, m is the length of a single witness, and the poly term refers to a fixed polynomial that only depends on the language and not on delta. The (honest) prover strategy can be implemented in polynomial time given access to the k (unique) witnesses. Our proof leverages "interactive witness verification" (IWV), a new type of proof-system that may be of independent interest. An IWV is a proof-system in which the verifier needs to verify the correctness of an NP statement using: (i) a sublinear number of queries to an alleged NP witness, and (ii) a short interaction with a powerful but untrusted prover. In contrast to the setting of PCPs and Interactive PCPs, here the verifier only has access to the raw NP witness, rather than some encoding thereof.
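To make the savings concrete, the communication of the trivial protocol versus the paper's batch verifier, for k UP statements with witnesses of length m (our informal display of the bounds quoted above):

\[
  \underbrace{k \cdot m}_{\text{send all } k \text{ witnesses}}
  \quad\longrightarrow\quad
  \underbrace{k^{\delta} \cdot \mathrm{poly}(m)}_{\text{constant-round interactive proof}},
  \qquad \text{for any constant } \delta > 0.
\]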

Cite as

Omer Reingold, Guy N. Rothblum, and Ron D. Rothblum. Efficient Batch Verification for UP. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 22:1-22:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{reingold_et_al:LIPIcs.CCC.2018.22,
  author =	{Reingold, Omer and Rothblum, Guy N. and Rothblum, Ron D.},
  title =	{{Efficient Batch Verification for UP}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{22:1--22:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.22},
  URN =		{urn:nbn:de:0030-drops-88681},
  doi =		{10.4230/LIPIcs.CCC.2018.22},
  annote =	{Keywords: Interactive Proof, Batch Verification, Unique Solution}
}
Document
A Tight Lower Bound for Entropy Flattening

Authors: Yi-Hsiu Chen, Mika Göös, Salil P. Vadhan, and Jiapeng Zhang


Abstract
We study entropy flattening: Given a circuit C_X implicitly describing an n-bit source X (namely, X is the output of C_X on a uniform random input), construct another circuit C_Y describing a source Y such that (1) source Y is nearly flat (uniform on its support), and (2) the Shannon entropy of Y is monotonically related to that of X. The standard solution is to have C_Y evaluate C_X a total of Theta(n^2) times on independent inputs and concatenate the results (correctness follows from the asymptotic equipartition property). In this paper, we show that this is optimal among black-box constructions: Any circuit C_Y for entropy flattening that repeatedly queries C_X as an oracle requires Omega(n^2) queries. Entropy flattening is a component used in the constructions of pseudorandom generators and other cryptographic primitives from one-way functions [Johan Håstad et al., 1999; John Rompel, 1990; Thomas Holenstein, 2006; Iftach Haitner et al., 2006; Iftach Haitner et al., 2009; Iftach Haitner et al., 2013; Iftach Haitner et al., 2010; Salil P. Vadhan and Colin Jia Zheng, 2012]. It is also used in reductions between problems complete for statistical zero-knowledge [Tatsuaki Okamoto, 2000; Amit Sahai and Salil P. Vadhan, 1997; Oded Goldreich et al., 1999; Vadhan, 1999]. The Theta(n^2) query complexity is often the main efficiency bottleneck. Our lower bound can be viewed as a step towards proving that the current best construction of pseudorandom generators from arbitrary one-way functions by Vadhan and Zheng (STOC 2012) has optimal efficiency.
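A schematic of the standard construction whose black-box optimality the paper establishes, with the asymptotic-equipartition step stated informally (the notation X^{(i)} for i.i.d. copies is ours):

\[
  Y = \bigl(X^{(1)}, \dots, X^{(t)}\bigr), \qquad t = \Theta(n^2), \qquad H(Y) = t \cdot H(X),
\]
\[
  \text{and by the AEP,}\quad -\tfrac{1}{t} \log \Pr[Y = y] \approx H(X) \quad \text{for typical } y,
\]

so Y is nearly flat on its typical support while its entropy determines that of X; the paper shows that Omega(n^2) oracle queries to C_X are necessary for any such black-box flattener.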

Cite as

Yi-Hsiu Chen, Mika Göös, Salil P. Vadhan, and Jiapeng Zhang. A Tight Lower Bound for Entropy Flattening. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 23:1-23:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.CCC.2018.23,
  author =	{Chen, Yi-Hsiu and G\"{o}\"{o}s, Mika and Vadhan, Salil P. and Zhang, Jiapeng},
  title =	{{A Tight Lower Bound for Entropy Flattening}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{23:1--23:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.23},
  URN =		{urn:nbn:de:0030-drops-88669},
  doi =		{10.4230/LIPIcs.CCC.2018.23},
  annote =	{Keywords: Entropy, One-way function}
}
Document
Worst-Case to Average Case Reductions for the Distance to a Code

Authors: Eli Ben-Sasson, Swastik Kopparty, and Shubhangi Saraf


Abstract
Algebraic proof systems reduce computational problems to problems about estimating the distance of a sequence of functions vec{u}=(u_1,..., u_k), given as oracles, from a linear error correcting code V. The soundness of such systems relies on methods that act "locally" on vec{u} and map it to a single function u^* that is, roughly, as far from V as are u_1,..., u_k. Motivated by these applications to efficient proof systems, we study a natural worst-case to average-case reduction of distance for linear spaces, and show several general cases in which the following statement holds: If some member of a linear space U=span(u_1,...,u_k) is delta-far from (all elements of) V in relative Hamming distance, then nearly all elements of U are (1-epsilon)delta-far from V; the value of epsilon depends only on the distance of the code V and approaches 0 as that distance approaches 1. Our results improve on the previous state of the art, which showed that nearly all elements of U are delta/2-far from V [Rothblum, Vadhan and Wigderson, STOC 2013]. When V is a Reed-Solomon (RS) code, as is often the case for algebraic proof systems, we show how to boost distance via a new "local" transformation that may be useful elsewhere. Relying on the affine-invariance of V, we map a vector u to a random linear combination of affine transformations of u, and show this process amplifies distance from V. Assuming V is an RS code with sufficiently large distance, this amplification process converts a function u that is somewhat far from V to one that is (1-epsilon)-far from V; as above, epsilon depends only on the distance of V and approaches 0 as the distance of V approaches 1. We give two concrete applications of these techniques. First, we revisit the axis-parallel low-degree test for bivariate polynomials of [Polishchuk-Spielman, STOC 1994] and prove a "list-decoding" type result for it, when the degree of one axis is extremely small. This result is similar to the recent list-decoding-regime result of [Chiesa, Manohar and Shinkar, RANDOM 2017] but is proved using different techniques, and allows the degree in one axis to be arbitrarily large. Second, we improve the soundness analysis of the recent RS proximity testing protocol of [Ben-Sasson et al., ICALP 2018] and extend it to the "list-decoding" regime, bringing it closer to the Johnson bound.
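The main reduction, stated symbolically (Delta denotes relative Hamming distance; "nearly all" is quantified as in the paper):

\[
  \exists\, u \in U \ \text{with}\ \Delta(u, V) \ge \delta
  \quad\Longrightarrow\quad
  \Delta(u', V) \ge (1 - \varepsilon)\,\delta \ \text{for nearly all } u' \in U,
\]

where U = span(u_1,...,u_k) and epsilon depends only on the distance of V, approaching 0 as that distance approaches 1; the previously known guarantee in place of (1-epsilon)delta was delta/2.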

Cite as

Eli Ben-Sasson, Swastik Kopparty, and Shubhangi Saraf. Worst-Case to Average Case Reductions for the Distance to a Code. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 24:1-24:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{bensasson_et_al:LIPIcs.CCC.2018.24,
  author =	{Ben-Sasson, Eli and Kopparty, Swastik and Saraf, Shubhangi},
  title =	{{Worst-Case to Average Case Reductions for the Distance to a Code}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{24:1--24:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.24},
  URN =		{urn:nbn:de:0030-drops-88654},
  doi =		{10.4230/LIPIcs.CCC.2018.24},
  annote =	{Keywords: Proximity testing, Reed-Solomon codes, algebraic coding complexity}
}
Document
On the Complexity of the Cayley Semigroup Membership Problem

Authors: Lukas Fleischer


Abstract
We investigate the complexity of deciding, given a multiplication table representing a semigroup S, a subset X of S and an element t of S, whether t can be expressed as a product of elements of X. It is well known that this problem is {NL}-complete and that the more general Cayley groupoid membership problem, where the multiplication table is not required to be associative, is {P}-complete. For groups, the problem can be solved in deterministic log-space, which raised the question of determining the exact complexity of this variant. Barrington, Kadau, Lange, and McKenzie showed that for Abelian groups and for certain solvable groups, the problem is contained in the complexity class {FOLL} and they concluded that these variants are not hard for any complexity class containing {Parity}. The more general case of arbitrary groups remained open. In this work, we show that for both groups and commutative semigroups, the problem is solvable in {qAC}^0 (quasi-polynomial size circuits of constant depth with unbounded fan-in) and conclude that these variants are also not hard for any class containing {Parity}. Moreover, we prove that {NL}-completeness already holds for the classes of 0-simple semigroups and nilpotent semigroups. Together with our results on groups and commutative semigroups, this proves the existence of a natural class of finite semigroups which generates a variety of finite semigroups with {NL}-complete Cayley semigroup membership, while the Cayley semigroup membership problem for the class itself is not {NL}-hard. We also discuss applications of our technique to {FOLL}.
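For orientation, here is a straightforward polynomial-time closure computation for the membership problem itself; it illustrates the problem statement (and the easy sequential baseline), not the circuit constructions of the paper. The function name and the table encoding are ours.

from collections import deque

def cayley_membership(table, X, t):
    """table[a][b] = a*b; elements are 0..len(table)-1; X is a set of generators."""
    reachable = set(X)              # all products of length 1
    queue = deque(X)
    while queue:
        a = queue.popleft()
        for x in X:                 # extend a known product by one generator;
            p = table[a][x]         # associativity makes right-extension enough
            if p not in reachable:
                reachable.add(p)
                queue.append(p)
    return t in reachable

# Example: the semigroup ({0,1}, AND) with X = {1}; 0 is not a product of 1s.
AND_table = [[0, 0], [0, 1]]
print(cayley_membership(AND_table, {1}, 0))  # False
print(cayley_membership(AND_table, {1}, 1))  # True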

Cite as

Lukas Fleischer. On the Complexity of the Cayley Semigroup Membership Problem. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 25:1-25:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{fleischer:LIPIcs.CCC.2018.25,
  author =	{Fleischer, Lukas},
  title =	{{On the Complexity of the Cayley Semigroup Membership Problem}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{25:1--25:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.25},
  URN =		{urn:nbn:de:0030-drops-88649},
  doi =		{10.4230/LIPIcs.CCC.2018.25},
  annote =	{Keywords: subsemigroup, multiplication table, generators, completeness, quasi-polynomial-size circuits, FOLL}
}
Document
Small Normalized Boolean Circuits for Semi-disjoint Bilinear Forms Require Logarithmic Conjunction-depth

Authors: Andrzej Lingas


Abstract
We consider normalized Boolean circuits that use binary operations of disjunction and conjunction, and unary negation, with the restriction that negation can be only applied to input variables. We derive a lower-bound trade-off between the size of normalized Boolean circuits computing Boolean semi-disjoint bilinear forms and their conjunction-depth (i.e., the maximum number of and-gates on a directed path to an output gate). In particular, we show that any normalized Boolean circuit of at most epsilon log n conjunction-depth computing the n-dimensional Boolean vector convolution has Omega(n^{2-4 epsilon}) and-gates. Analogously, any normalized Boolean circuit of at most epsilon log n conjunction-depth computing the n x n Boolean matrix product has Omega(n^{3-4 epsilon}) and-gates. We complete our lower-bound trade-offs with upper-bound trade-offs of similar form yielded by the known fast algebraic algorithms.
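The two trade-offs side by side, for normalized circuits of conjunction-depth at most epsilon log n (typeset from the bounds quoted above):

\[
  \text{vector convolution: } \Omega\bigl(n^{2 - 4\varepsilon}\bigr) \ \text{and-gates},
  \qquad
  \text{matrix product: } \Omega\bigl(n^{3 - 4\varepsilon}\bigr) \ \text{and-gates}.
\]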

Cite as

Andrzej Lingas. Small Normalized Boolean Circuits for Semi-disjoint Bilinear Forms Require Logarithmic Conjunction-depth. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 26:1-26:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{lingas:LIPIcs.CCC.2018.26,
  author =	{Lingas, Andrzej},
  title =	{{Small Normalized Boolean Circuits for Semi-disjoint Bilinear Forms Require Logarithmic Conjunction-depth}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{26:1--26:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.26},
  URN =		{urn:nbn:de:0030-drops-88630},
  doi =		{10.4230/LIPIcs.CCC.2018.26},
  annote =	{Keywords: Boolean circuits, semi-disjoint bilinear form, Boolean vector convolution, Boolean matrix product}
}
Document
Lower Bounds on Non-Adaptive Data Structures Maintaining Sets of Numbers, from Sunflowers

Authors: Sivaramakrishnan Natarajan Ramamoorthy and Anup Rao


Abstract
We prove new cell-probe lower bounds for dynamic data structures that maintain a subset of {1,2,...,n}, and compute various statistics of the set. The data structure is said to handle insertions non-adaptively if the locations of memory accessed depend only on the element being inserted, and not on the contents of the memory. For any such data structure that can compute the median of the set, we prove that: t_{med} >= Omega(n^{1/(t_{ins}+1)}/(w^2 * t_{ins}^2)), where t_{ins} is the number of memory locations accessed during insertions, t_{med} is the number of memory locations accessed to compute the median, and w is the number of bits stored in each memory location. When the data structure is able to perform deletions non-adaptively and compute the minimum non-adaptively, we prove t_{min} + t_{del} >= Omega(log n /(log w + log log n)), where t_{min} is the number of locations accessed to compute the minimum, and t_{del} is the number of locations accessed to perform deletions. For the predecessor search problem, where the data structure is required to compute the predecessor of any element in the set, we prove that if computing the predecessors can be done non-adaptively, then either t_{pred} >= Omega(log n/(log log n + log w)), or t_{ins} >= Omega(n^{1/(2(t_{pred}+1))}), where t_{pred} is the number of locations accessed to compute predecessors. These bounds are nearly matched by Binary Search Trees in some range of parameters. Our results follow from using the Sunflower Lemma of Erdős and Rado [Paul Erdős and Richard Rado, 1960] together with several kinds of encoding arguments.
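The three lower bounds, typeset for readability (parameters exactly as defined above):

\[
  t_{\mathrm{med}} \;\ge\; \Omega\!\left(\frac{n^{1/(t_{\mathrm{ins}}+1)}}{w^2\, t_{\mathrm{ins}}^2}\right),
  \qquad
  t_{\mathrm{min}} + t_{\mathrm{del}} \;\ge\; \Omega\!\left(\frac{\log n}{\log w + \log\log n}\right),
\]
\[
  \text{and either}\quad
  t_{\mathrm{pred}} \;\ge\; \Omega\!\left(\frac{\log n}{\log\log n + \log w}\right)
  \quad\text{or}\quad
  t_{\mathrm{ins}} \;\ge\; \Omega\!\left(n^{1/(2(t_{\mathrm{pred}}+1))}\right).
\]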

Cite as

Sivaramakrishnan Natarajan Ramamoorthy and Anup Rao. Lower Bounds on Non-Adaptive Data Structures Maintaining Sets of Numbers, from Sunflowers. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 27:1-27:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{natarajanramamoorthy_et_al:LIPIcs.CCC.2018.27,
  author =	{Natarajan Ramamoorthy, Sivaramakrishnan and Rao, Anup},
  title =	{{Lower Bounds on Non-Adaptive Data Structures Maintaining Sets of Numbers, from Sunflowers}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{27:1--27:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.27},
  URN =		{urn:nbn:de:0030-drops-88625},
  doi =		{10.4230/LIPIcs.CCC.2018.27},
  annote =	{Keywords: Non-adaptive data structures, Sunflower lemma}
}
Document
Dimension Reduction for Polynomials over Gaussian Space and Applications

Authors: Badih Ghazi, Pritish Kamath, and Prasad Raghavendra


Abstract
We introduce a new technique for reducing the dimension of the ambient space of low-degree polynomials in the Gaussian space while preserving their relative correlation structure. As an application, we obtain an explicit upper bound on the dimension of an epsilon-optimal noise-stable Gaussian partition. In fact, we address the more general problem of upper bounding the number of samples needed to epsilon-approximate any joint distribution that can be non-interactively simulated from a correlated Gaussian source. Our results significantly improve (from Ackermann-like to "merely" exponential) the upper bounds recently proved on the above problems by De, Mossel & Neeman [CCC 2017, SODA 2018 resp.] and imply decidability of the larger alphabet case of the gap non-interactive simulation problem posed by Ghazi, Kamath & Sudan [FOCS 2016]. Our technique of dimension reduction for low-degree polynomials is simple, can be seen as a generalization of the Johnson-Lindenstrauss lemma, and could be of independent interest.
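For context, a minimal numpy sketch of the classical Johnson-Lindenstrauss random projection that, per the abstract, the paper's technique generalizes to low-degree Gaussian polynomials. The parameters n, d, eps, the constant 8/eps^2, and the Gaussian projection matrix are the textbook choices, not constructions from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 100, 10_000, 0.5
points = rng.standard_normal((n, d))          # n points in dimension d

k = int(np.ceil(8 * np.log(n) / eps**2))      # JL target dimension
G = rng.standard_normal((k, d)) / np.sqrt(k)  # scaled Gaussian projection
projected = points @ G.T

# Check pairwise-distance distortion on a few random pairs.
for _ in range(5):
    i, j = rng.choice(n, size=2, replace=False)
    orig = np.linalg.norm(points[i] - points[j])
    proj = np.linalg.norm(projected[i] - projected[j])
    print(round(proj / orig, 3))  # should lie within roughly 1 +/- eps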

Cite as

Badih Ghazi, Pritish Kamath, and Prasad Raghavendra. Dimension Reduction for Polynomials over Gaussian Space and Applications. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 28:1-28:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{ghazi_et_al:LIPIcs.CCC.2018.28,
  author =	{Ghazi, Badih and Kamath, Pritish and Raghavendra, Prasad},
  title =	{{Dimension Reduction for Polynomials over Gaussian Space and Applications}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{28:1--28:37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.28},
  URN =		{urn:nbn:de:0030-drops-88616},
  doi =		{10.4230/LIPIcs.CCC.2018.28},
  annote =	{Keywords: Dimension reduction, Low-degree Polynomials, Noise Stability, Non-Interactive Simulation}
}
