LIPIcs, Volume 107

45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)




Event

ICALP 2018, July 9-13, 2018, Prague, Czech Republic

Editors

Ioannis Chatzigiannakis
Christos Kaklamanis
Dániel Marx
Donald Sannella

Publication Details

  • Published on: 2018-07-04
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-076-7
  • DBLP: db/conf/icalp/icalp2018

Documents

Document
Complete Volume
LIPIcs, Volume 107, ICALP'18, Complete Volume

Authors: Ioannis Chatzigiannakis, Christos Kaklamanis, Dániel Marx, and Donald Sannella


Abstract
LIPIcs, Volume 107, ICALP'18, Complete Volume

Cite as

45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@Proceedings{chatzigiannakis_et_al:LIPIcs.ICALP.2018,
  title =	{{LIPIcs, Volume 107, ICALP'18, Complete Volume}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018},
  URN =		{urn:nbn:de:0030-drops-92803},
  doi =		{10.4230/LIPIcs.ICALP.2018},
  annote =	{Keywords: Theory of computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Ioannis Chatzigiannakis, Christos Kaklamanis, Dániel Marx, and Donald Sannella


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 0:i-0:xlviii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chatzigiannakis_et_al:LIPIcs.ICALP.2018.0,
  author =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{0:i--0:xlviii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.0},
  URN =		{urn:nbn:de:0030-drops-90049},
  doi =		{10.4230/LIPIcs.ICALP.2018.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Paper
Consistent Distributed Memory Services: Resilience and Efficiency (Invited Paper)

Authors: Theophanis Hadjistasi and Alexander A. Schwarzmann


Abstract
Reading, 'Riting, and 'Rithmetic, the three R's underlying much of human intellectual activity, not surprisingly, also stand as a venerable foundation of modern computing technology. Indeed, both the Turing machine and von Neumann machine models operate by reading, writing, and computing, and all practical uniprocessor implementations are based on performing activities structured in terms of the three R's. With the advance of networking technology, communication became an additional major systemic activity. However, at a high level of abstraction, it is apparently still more natural to think in terms of reading, writing, and computing. While it is hard to imagine distributed systems - such as those implementing the World-Wide Web - without communication, we often imagine browser-based applications that operate by retrieving (i.e., reading) data, performing computation, and storing (i.e., writing) the results. In this article, we deal with the storage of shared readable and writable data in distributed systems that are subject to perturbations in the underlying distributed platforms composed of computers and networks that interconnect them. The perturbations may include permanent failures (or crashes) of individual computers, transient failures, and delays in the communication medium. The focus of this paper is on the implementations of distributed atomic memory services. Atomicity is a venerable notion of consistency, introduced in 1979 by Lamport [Lamport, 1979]. To this day atomicity remains the most natural type of consistency because it provides an illusion of equivalence with the serial object type that software designers expect. We define the overall setting, models of computation, definition of atomic consistency, and measures of efficiency. We then present algorithms for single-writer settings in the static models. Then we move to presenting algorithms for multi-writer settings. For both static settings we discuss design issues, correctness, efficiency, and trade-offs. Lastly we survey the implementation issues in dynamic settings, where the universe of participants may completely change over time. Here the expectation is that solutions are found by integrating static algorithms with a reconfiguration framework so that during periods of relative stability one benefits from the efficiency of static algorithms, and where during the more turbulent times performance degrades gracefully when reconfigurations are needed. We describe the most important approaches and provide examples.
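
Algorithms for such services are commonly built on majority (or more general) quorums. Purely for intuition, here is a minimal, sequential Python sketch of a single-writer/multi-reader register over in-memory replicas; the names (Replica, SWMRRegister) are invented for this illustration, "quorums" are simulated as random majorities, and a real implementation must of course handle asynchrony, concurrency, and message loss, which this toy ignores.

import random

class Replica:
    """One server storing the freshest (tag, value) pair it has seen."""
    def __init__(self):
        self.tag, self.value = 0, None

    def update(self, tag, value):
        if tag > self.tag:
            self.tag, self.value = tag, value

    def query(self):
        return self.tag, self.value


def some_majority(replicas):
    """A random majority, standing in for whichever quorum replies first."""
    return random.sample(replicas, len(replicas) // 2 + 1)


class SWMRRegister:
    """Toy single-writer/multi-reader register over majority quorums."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.write_tag = 0

    def write(self, value):
        self.write_tag += 1
        for r in some_majority(self.replicas):      # propagate to a write quorum
            r.update(self.write_tag, value)

    def read(self):
        replies = [r.query() for r in some_majority(self.replicas)]
        tag, value = max(replies, key=lambda tv: tv[0])   # freshest pair seen
        for r in some_majority(self.replicas):      # write-back phase before returning
            r.update(tag, value)
        return value


if __name__ == "__main__":
    reg = SWMRRegister([Replica() for _ in range(5)])
    reg.write("a")
    reg.write("b")
    print(reg.read())   # "b": any read quorum intersects the last write quorum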

Cite as

Theophanis Hadjistasi and Alexander A. Schwarzmann. Consistent Distributed Memory Services: Resilience and Efficiency (Invited Paper). In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 1:1-1:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{hadjistasi_et_al:LIPIcs.ICALP.2018.1,
  author =	{Hadjistasi, Theophanis and Schwarzmann, Alexander A.},
  title =	{{Consistent Distributed Memory Services: Resilience and Efficiency}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{1:1--1:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.1},
  URN =		{urn:nbn:de:0030-drops-90050},
  doi =		{10.4230/LIPIcs.ICALP.2018.1},
  annote =	{Keywords: Atomicity, shared-memory, read/write objects, fault-tolerance, latency}
}
Document
Invited Paper
Sparsity - an Algorithmic Perspective (Invited Paper)

Authors: Jaroslav Nesetril


Abstract
It is a well-known experience that for sparse structures one can find fast algorithms for some problems which otherwise seem complex. The recently developed theory of sparse classes of graphs (and structures) formalizes this. In particular, the dichotomy Nowhere Dense vs. Somewhere Dense presents a very robust tool to study and design algorithms and algorithmic metatheorems. This dichotomy can be characterized in many different ways, leading to broad applications. We survey some of the recent highlights. This is joint work with Patrice Ossona de Mendez (EHESS Paris and Charles University Prague).

Cite as

Jaroslav Nesetril. Sparsity - an Algorithmic Perspective (Invited Paper). In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{nesetril:LIPIcs.ICALP.2018.2,
  author =	{Nesetril, Jaroslav},
  title =	{{Sparsity - an Algorithmic Perspective}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.2},
  URN =		{urn:nbn:de:0030-drops-90062},
  doi =		{10.4230/LIPIcs.ICALP.2018.2},
  annote =	{Keywords: sparse structures, algorithm design}
}
Document
Invited Paper
Probability Theory from a Programming Perspective (Invited Paper)

Authors: Sam Staton


Abstract
A leading idea is to apply techniques from verification and programming theory to machine learning and statistics, to deal with things like compositionality and various notions of correctness and complexity. Probabilistic programming is an example of this. Moreover, this approach leads to new foundational methods in probability theory. This is particularly true in the "non-parametric" aspects, for example in higher-order functions and infinite random graph models.

Cite as

Sam Staton. Probability Theory from a Programming Perspective (Invited Paper). In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{staton:LIPIcs.ICALP.2018.3,
  author =	{Staton, Sam},
  title =	{{Probability Theory from a Programming Perspective}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.3},
  URN =		{urn:nbn:de:0030-drops-90073},
  doi =		{10.4230/LIPIcs.ICALP.2018.3},
  annote =	{Keywords: correctness, complexity, statistics}
}
Document
Invited Paper
Lower Bounds by Algorithm Design: A Progress Report (Invited Paper)

Authors: Richard Ryan Williams


Abstract
In 2010, the author proposed a program for proving lower bounds in circuit complexity, via faster algorithms for circuit satisfiability and related problems. This talk will give an overview of how the program works, report on the successes of this program so far, and outline open frontiers that have yet to be resolved.

Cite as

Richard Ryan Williams. Lower Bounds by Algorithm Design: A Progress Report (Invited Paper). In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, p. 4:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{williams:LIPIcs.ICALP.2018.4,
  author =	{Williams, Richard Ryan},
  title =	{{Lower Bounds by Algorithm Design: A Progress Report}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{4:1--4:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.4},
  URN =		{urn:nbn:de:0030-drops-90088},
  doi =		{10.4230/LIPIcs.ICALP.2018.4},
  annote =	{Keywords: circuit complexity, satisfiability, derandomization}
}
Document
Power of d Choices with Simple Tabulation

Authors: Anders Aamand, Mathias Bæk Tejs Knudsen, and Mikkel Thorup


Abstract
We consider the classic d-choice paradigm of Azar et al. [STOC'94] in which m balls are put into n bins sequentially as follows: For each ball we are given a choice of d bins chosen according to d hash functions and the ball is placed in the least loaded of these bins, breaking ties arbitrarily. The interest is in the number of balls in the fullest bin after all balls have been placed. In this paper we suppose that the d hash functions are simple tabulation hash functions which are easy to implement and can be evaluated in constant time. Generalising a result by Dahlgaard et al. [SODA'16] we show that for an arbitrary constant d >= 2 the expected maximum load is at most (lg lg n)/(lg d) + O(1). We further show that by using a simple tie-breaking algorithm introduced by Vöcking [J.ACM'03] the expected maximum load is reduced to (lg lg n)/(d lg phi_d) + O(1) where phi_d is the rate of growth of the d-ary Fibonacci numbers. Both of these expected bounds match those known from the fully random setting. The analysis by Dahlgaard et al. relies on a proof by Patrascu and Thorup [J.ACM'11] concerning the use of simple tabulation for cuckoo hashing. We require a generalisation to d>2 hash functions, but the original proof is an 8-page tour de force of ad-hoc arguments that do not appear to generalise. Our main technical contribution is a shorter, simpler and more accessible proof of the result by Patrascu and Thorup, where the relevant parts generalise nicely to the analysis of d choices.
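
As a concrete illustration of the setup (not of the paper's analysis), the following Python sketch implements one simple tabulation hash function per choice (a 32-bit key split into four 8-bit characters, XORing random table entries) and places each ball in the least loaded of its d hashed bins; the function names and parameters are chosen for this sketch only.

import random

def make_simple_tabulation(num_bins, chars=4, bits=8, seed=None):
    """One simple-tabulation hash function h: 32-bit key -> bin index."""
    rng = random.Random(seed)
    table = [[rng.getrandbits(32) for _ in range(1 << bits)] for _ in range(chars)]
    mask = (1 << bits) - 1

    def h(key):
        out = 0
        for i in range(chars):                 # XOR one table entry per character
            out ^= table[i][(key >> (i * bits)) & mask]
        return out % num_bins
    return h

def d_choice_max_load(m, n, d=2, seed=0):
    """Throw m balls into n bins; each ball goes to the least loaded of d hashed bins."""
    hashes = [make_simple_tabulation(n, seed=seed + i) for i in range(d)]
    load = [0] * n
    for ball in range(m):
        candidates = [h(ball) for h in hashes]
        target = min(candidates, key=lambda b: load[b])   # break ties arbitrarily
        load[target] += 1
    return max(load)

if __name__ == "__main__":
    print(d_choice_max_load(m=100_000, n=100_000, d=2))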

Cite as

Anders Aamand, Mathias Bæk Tejs Knudsen, and Mikkel Thorup. Power of d Choices with Simple Tabulation. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{aamand_et_al:LIPIcs.ICALP.2018.5,
  author =	{Aamand, Anders and B{\ae}k Tejs Knudsen, Mathias and Thorup, Mikkel},
  title =	{{Power of d Choices with Simple Tabulation}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.5},
  URN =		{urn:nbn:de:0030-drops-90096},
  doi =		{10.4230/LIPIcs.ICALP.2018.5},
  annote =	{Keywords: Hashing, Load Balancing, Balls and Bins, Simple Tabulation}
}
Document
One-Way Trail Orientations

Authors: Anders Aamand, Niklas Hjuler, Jacob Holm, and Eva Rotenberg


Abstract
Given a graph, does there exist an orientation of the edges such that the resulting directed graph is strongly connected? Robbins' theorem [Robbins, Am. Math. Monthly, 1939] asserts that such an orientation exists if and only if the graph is 2-edge connected. A natural extension of this problem is the following: Suppose that the edges of the graph are partitioned into trails. Can the trails be oriented consistently such that the resulting directed graph is strongly connected? We show that 2-edge connectivity is again a sufficient condition and we provide a linear time algorithm for finding such an orientation. The generalised Robbins' theorem [Boesch, Am. Math. Monthly, 1980] for mixed multigraphs asserts that the undirected edges of a mixed multigraph can be oriented to make the resulting directed graph strongly connected exactly when the mixed graph is strongly connected and the underlying graph is bridgeless. We consider the natural extension where the undirected edges of a mixed multigraph are partitioned into trails. It turns out that in this case the condition of the generalised Robbins' theorem is not sufficient. However, we show that as long as each cut either contains at least 2 undirected edges or directed edges in both directions, there exists an orientation of the trails such that the resulting directed graph is strongly connected. Moreover, if the condition is satisfied, we may start by orienting an arbitrary trail in an arbitrary direction. Using this result one obtains a very simple polynomial time algorithm for finding a strong trail orientation if it exists, both in the undirected and the mixed setting.
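
For background, the classical (unpartitioned) Robbins' theorem has a well-known constructive proof: run a DFS, orient tree edges from parent to child and all remaining edges from the later-discovered endpoint to the earlier one; on a 2-edge-connected graph the result is strongly connected. The Python sketch below illustrates that classical construction and checks the result; it is not the trail-orientation algorithm of this paper.

from collections import defaultdict, deque

def robbins_orientation(edges):
    """Orient the edges of a 2-edge-connected graph so the result is strongly
    connected: tree edges point away from the DFS root, the remaining (back)
    edges point from the later-discovered endpoint to the earlier one."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    root = edges[0][0]
    disc, parent = {root: 0}, {root: None}
    stack = [(root, iter(adj[root]))]
    while stack:                                  # proper DFS via an iterator stack
        u, it = stack[-1]
        for v in it:
            if v not in disc:
                disc[v], parent[v] = len(disc), u
                stack.append((v, iter(adj[v])))
                break
        else:
            stack.pop()
    arcs = []
    for u, v in edges:
        if parent.get(v) == u:
            arcs.append((u, v))                   # tree edge: parent -> child
        elif parent.get(u) == v:
            arcs.append((v, u))
        else:
            arcs.append((u, v) if disc[u] > disc[v] else (v, u))  # back edge
    return arcs

def strongly_connected(nodes, arcs):
    """BFS reachability in the digraph and in its reverse."""
    def full_reach(adj):
        start = next(iter(nodes))
        seen, q = {start}, deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        return seen == set(nodes)
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in arcs:
        fwd[u].append(v)
        rev[v].append(u)
    return full_reach(fwd) and full_reach(rev)

if __name__ == "__main__":
    g = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # a 2-edge-connected graph
    arcs = robbins_orientation(g)
    print(arcs, strongly_connected({0, 1, 2, 3}, arcs))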

Cite as

Anders Aamand, Niklas Hjuler, Jacob Holm, and Eva Rotenberg. One-Way Trail Orientations. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{aamand_et_al:LIPIcs.ICALP.2018.6,
  author =	{Aamand, Anders and Hjuler, Niklas and Holm, Jacob and Rotenberg, Eva},
  title =	{{One-Way Trail Orientations}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{6:1--6:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.6},
  URN =		{urn:nbn:de:0030-drops-90109},
  doi =		{10.4230/LIPIcs.ICALP.2018.6},
  annote =	{Keywords: Graph algorithms, Robbins' theorem, Graph orientation}
}
Document
Dynamic Matching: Reducing Integral Algorithms to Approximately-Maximal Fractional Algorithms

Authors: Moab Arar, Shiri Chechik, Sarel Cohen, Cliff Stein, and David Wajc


Abstract
We present a simple randomized reduction from fully-dynamic integral matching algorithms to fully-dynamic "approximately-maximal" fractional matching algorithms. Applying this reduction to the recent fractional matching algorithm of Bhattacharya, Henzinger, and Nanongkai (SODA 2017), we obtain a novel result for the integral problem. Specifically, our main result is a randomized fully-dynamic (2+epsilon)-approximate integral matching algorithm with small polylog worst-case update time. For the (2+epsilon)-approximation regime only a fractional fully-dynamic (2+epsilon)-matching algorithm with worst-case polylog update time was previously known, due to Bhattacharya et al. (SODA 2017). Our algorithm is the first algorithm that maintains approximate matchings with worst-case update time better than polynomial, for any constant approximation ratio. As a consequence, we also obtain the first constant-approximate worst-case polylogarithmic update time maximum weight matching algorithm.

Cite as

Moab Arar, Shiri Chechik, Sarel Cohen, Cliff Stein, and David Wajc. Dynamic Matching: Reducing Integral Algorithms to Approximately-Maximal Fractional Algorithms. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 7:1-7:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{arar_et_al:LIPIcs.ICALP.2018.7,
  author =	{Arar, Moab and Chechik, Shiri and Cohen, Sarel and Stein, Cliff and Wajc, David},
  title =	{{Dynamic Matching: Reducing Integral Algorithms to Approximately-Maximal Fractional Algorithms}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{7:1--7:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.7},
  URN =		{urn:nbn:de:0030-drops-90112},
  doi =		{10.4230/LIPIcs.ICALP.2018.7},
  annote =	{Keywords: Dynamic, Worst-case, Maximum Matching, Maximum Weight Matching}
}
Document
Tighter Connections Between Formula-SAT and Shaving Logs

Authors: Amir Abboud and Karl Bringmann


Abstract
A noticeable fraction of Algorithms papers in the last few decades improve the running time of well-known algorithms for fundamental problems by logarithmic factors. For example, the O(n^2) dynamic programming solution to the Longest Common Subsequence problem (LCS) was improved to O(n^2/log^2 n) in several ways and using a variety of ingenious tricks. This line of research, also known as the art of shaving log factors, lacks a tool for proving negative results. Specifically, how can we show that it is unlikely that LCS can be solved in time O(n^2/log^3 n)? Perhaps the only approach for such results was suggested in a recent paper of Abboud, Hansen, Vassilevska W. and Williams (STOC'16). The authors blame the hardness of shaving logs on the hardness of solving satisfiability on boolean formulas (Formula-SAT) faster than exhaustive search. They show that an O(n^2/log^{1000} n) algorithm for LCS would imply a major advance in circuit lower bounds. Whether this approach can lead to tighter barriers was unclear. In this paper, we push this approach to its limit and, in particular, prove that a well-known barrier from complexity theory stands in the way for shaving five additional log factors for fundamental combinatorial problems. For LCS, regular expression pattern matching, as well as the Fréchet distance problem from Computational Geometry, we show that an O(n^2/log^{7+epsilon} n) runtime would imply new Formula-SAT algorithms. Our main result is a reduction from SAT on formulas of size s over n variables to LCS on sequences of length N=2^{n/2} * s^{1+o(1)}. Our reduction is essentially as efficient as possible, and it greatly improves the previously known reduction for LCS with N=2^{n/2} * s^c, for some c >= 100.
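
For reference, the baseline this line of work starts from is the textbook O(n^2) dynamic program for LCS, shown below as a plain Python sketch (with none of the log-shaving tricks discussed above).

def lcs_length(a, b):
    """Classic O(len(a) * len(b)) dynamic program for the Longest Common Subsequence."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

if __name__ == "__main__":
    print(lcs_length("ABCBDAB", "BDCABA"))   # 4, e.g. "BCBA"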

Cite as

Amir Abboud and Karl Bringmann. Tighter Connections Between Formula-SAT and Shaving Logs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ICALP.2018.8,
  author =	{Abboud, Amir and Bringmann, Karl},
  title =	{{Tighter Connections Between Formula-SAT and Shaving Logs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.8},
  URN =		{urn:nbn:de:0030-drops-90129},
  doi =		{10.4230/LIPIcs.ICALP.2018.8},
  annote =	{Keywords: Fine-Grained Complexity, Hardness in P, Formula-SAT, Longest Common Subsequence, Frechet Distance}
}
Document
New Approximation Algorithms for (1,2)-TSP

Authors: Anna Adamaszek, Matthias Mnich, and Katarzyna Paluch


Abstract
We give faster and simpler approximation algorithms for the (1,2)-TSP problem, a well-studied variant of the traveling salesperson problem where all distances between cities are either 1 or 2. Our main results are two approximation algorithms for (1,2)-TSP, one with approximation factor 8/7 and run time O(n^3) and the other having an approximation guarantee of 7/6 and run time O(n^{2.5}). The 8/7-approximation matches the best known approximation factor for (1,2)-TSP, due to Berman and Karpinski (SODA 2006), but considerably improves the previous best run time of O(n^9). Thus, ours is the first improvement for the (1,2)-TSP problem in more than 10 years. The algorithm is based on combining three copies of a minimum-cost cycle cover of the input graph together with a relaxed version of a minimum weight matching, which allows using "half-edges". The resulting multigraph is then edge-colored with four colors so that each color class yields a collection of vertex-disjoint paths. The paths from one color class can then be extended to an 8/7-approximate traveling salesperson tour. Our algorithm, and in particular its analysis, is simpler than the previously best 8/7-approximation. The 7/6-approximation algorithm is similar and even simpler, and has the advantage of not using Hartvigsen's complicated algorithm for computing a minimum-cost triangle-free cycle cover.

Cite as

Anna Adamaszek, Matthias Mnich, and Katarzyna Paluch. New Approximation Algorithms for (1,2)-TSP. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{adamaszek_et_al:LIPIcs.ICALP.2018.9,
  author =	{Adamaszek, Anna and Mnich, Matthias and Paluch, Katarzyna},
  title =	{{New Approximation Algorithms for (1,2)-TSP}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.9},
  URN =		{urn:nbn:de:0030-drops-90133},
  doi =		{10.4230/LIPIcs.ICALP.2018.9},
  annote =	{Keywords: Approximation algorithms, traveling salesperson problem, cycle cover}
}
Document
Union of Hypercubes and 3D Minkowski Sums with Random Sizes

Authors: Pankaj K. Agarwal, Haim Kaplan, and Micha Sharir


Abstract
Let T={triangle_1,...,triangle_n} be a set of n pairwise-disjoint triangles in R^3, and let B be a convex polytope in R^3 with a constant number of faces. For each i, let C_i = triangle_i oplus r_i B denote the Minkowski sum of triangle_i with a copy of B scaled by r_i>0. We show that if the scaling factors r_1, ..., r_n are chosen randomly then the expected complexity of the union of C_1, ..., C_n is O(n^{2+epsilon}), for any epsilon > 0; the constant of proportionality depends on epsilon and the complexity of B. The worst-case bound can be Theta(n^3). We also consider a special case of this problem in which T is a set of points in R^3 and B is a unit cube in R^3, i.e., each C_i is a cube of side-length 2r_i. We show that if the scaling factors are chosen randomly then the expected complexity of the union of the cubes is O(n log^2 n), and it improves to O(n log n) if the scaling factors are chosen randomly from a "well-behaved" probability density function (pdf). We also extend the latter results to higher dimensions. For any fixed odd value of d, we show that the expected complexity of the union of the hypercubes is O(n^floor[d/2] log n) and the bound improves to O(n^floor[d/2]) if the scaling factors are chosen from a "well-behaved" pdf. The worst-case bounds are Theta(n^2) in R^3, and Theta(n^{ceil[d/2]}) in higher dimensions.

Cite as

Pankaj K. Agarwal, Haim Kaplan, and Micha Sharir. Union of Hypercubes and 3D Minkowski Sums with Random Sizes. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 10:1-10:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.ICALP.2018.10,
  author =	{Agarwal, Pankaj K. and Kaplan, Haim and Sharir, Micha},
  title =	{{Union of Hypercubes and 3D Minkowski Sums with Random Sizes}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{10:1--10:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.10},
  URN =		{urn:nbn:de:0030-drops-90147},
  doi =		{10.4230/LIPIcs.ICALP.2018.10},
  annote =	{Keywords: Computational geometry, Minkowski sums, Axis-parallel cubes, Union of geometric objects, Objects with random sizes}
}
Document
Noise-Tolerant Testing of High Entanglement of Formation

Authors: Rotem Arnon-Friedman and Henry Yuen


Abstract
In this work we construct tests that allow a classical user to certify high dimensional entanglement in uncharacterized and possibly noisy quantum devices. We present a family of non-local games {G_n} that for all n certify states with entanglement of formation Omega(n). These tests can be derived from any bipartite non-local game with a classical-quantum gap. Furthermore, our tests are noise-tolerant in the sense that fault tolerant technologies are not needed to play the games; entanglement distributed over noisy channels can pass with high probability, making our tests relevant for realistic experimental settings. This is in contrast to, e.g., results on self-testing of high dimensional entanglement, which are only relevant when the noise rate goes to zero with the system's size n. As a corollary of our result, we supply a lower-bound on the entanglement cost of any state achieving a quantum advantage in a bipartite non-local game. Our proof techniques heavily rely on ideas from the work on classical and quantum parallel repetition theorems.

Cite as

Rotem Arnon-Friedman and Henry Yuen. Noise-Tolerant Testing of High Entanglement of Formation. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 11:1-11:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{arnonfriedman_et_al:LIPIcs.ICALP.2018.11,
  author =	{Arnon-Friedman, Rotem and Yuen, Henry},
  title =	{{Noise-Tolerant Testing of High Entanglement of Formation}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{11:1--11:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.11},
  URN =		{urn:nbn:de:0030-drops-90157},
  doi =		{10.4230/LIPIcs.ICALP.2018.11},
  annote =	{Keywords: device independence, quantum games, entanglement testing, noise tolerance}
}
Document
A Complete Dichotomy for Complex-Valued Holant^c

Authors: Miriam Backens


Abstract
Holant problems are a family of counting problems on graphs, parametrised by sets of complex-valued functions of Boolean inputs. Holant^c denotes a subfamily of those problems, where any function set considered must contain the two unary functions pinning inputs to values 0 or 1. The complexity classification of Holant problems usually takes the form of dichotomy theorems, showing that for any set of functions in the family, the problem is either #P-hard or it can be solved in polynomial time. Previous such results include a dichotomy for real-valued Holant^c and one for Holant^c with complex symmetric functions, i.e. functions which only depend on the Hamming weight of the input. Here, we derive a dichotomy theorem for Holant^c with complex-valued, not necessarily symmetric functions. The tractable cases are the complex-valued generalisations of the tractable cases of the real-valued Holant^c dichotomy. The proof uses results from quantum information theory, particularly about entanglement. This full dichotomy for Holant^c answers a question that has been open for almost a decade.

Cite as

Miriam Backens. A Complete Dichotomy for Complex-Valued Holant^c. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 12:1-12:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{backens:LIPIcs.ICALP.2018.12,
  author =	{Backens, Miriam},
  title =	{{A Complete Dichotomy for Complex-Valued Holant^c}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{12:1--12:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.12},
  URN =		{urn:nbn:de:0030-drops-90168},
  doi =		{10.4230/LIPIcs.ICALP.2018.12},
  annote =	{Keywords: computational complexity, counting complexity, Holant problems, dichotomy, entanglement}
}
Document
Tight Bounds on Online Checkpointing Algorithms

Authors: Achiya Bar-On, Itai Dinur, Orr Dunkelman, Rani Hod, Nathan Keller, Eyal Ronen, and Adi Shamir


Abstract
The problem of online checkpointing is a classical problem with numerous applications which has been studied in various forms for almost 50 years. In the simplest version of this problem, a user has to maintain k memorized checkpoints during a long computation, where the only allowed operation is to move one of the checkpoints from its old time to the current time, and his goal is to keep the checkpoints as evenly spread out as possible at all times. At ICALP'13 Bringmann et al. studied this problem as a special case of an online/offline optimization problem in which the deviation from uniformity is measured by the natural discrepancy metric of the worst case ratio between real and ideal segment lengths. They showed this discrepancy is smaller than 1.59-o(1) for all k, and smaller than ln 4 - o(1) ≈ 1.39 for the sparse subset of k's which are powers of 2. In addition, they obtained upper bounds on the achievable discrepancy for some small values of k. In this paper we solve the main problems left open in the ICALP'13 paper by proving that ln 4 is a tight upper and lower bound on the asymptotic discrepancy for all large k, and by providing tight upper and lower bounds (in the form of provably optimal checkpointing algorithms, some of which are in fact better than those of Bringmann et al.) for all the small values of k <= 10.
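
To make the quality measure concrete under one plausible reading: with k checkpoints strictly inside (0, T) together with the endpoints 0 and T, the ideal segment length is T/(k+1), and the discrepancy at time T is the largest ratio of an actual segment length to that ideal. The small Python sketch below only evaluates this measure for a given placement; it is illustrative and implements none of the paper's algorithms.

def discrepancy(checkpoints, T):
    """Worst ratio of an actual gap to the ideal gap T/(k+1), for checkpoints in (0, T).
    One plausible reading of the 'real vs ideal segment length' measure; illustrative only."""
    pts = [0.0] + sorted(checkpoints) + [float(T)]
    ideal = T / (len(checkpoints) + 1)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    return max(gaps) / ideal

if __name__ == "__main__":
    # k = 3 checkpoints kept while the computation has reached time T = 100
    print(discrepancy([20, 45, 80], T=100))   # largest gap 35 vs ideal 25 -> 1.4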

Cite as

Achiya Bar-On, Itai Dinur, Orr Dunkelman, Rani Hod, Nathan Keller, Eyal Ronen, and Adi Shamir. Tight Bounds on Online Checkpointing Algorithms. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 13:1-13:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{baron_et_al:LIPIcs.ICALP.2018.13,
  author =	{Bar-On, Achiya and Dinur, Itai and Dunkelman, Orr and Hod, Rani and Keller, Nathan and Ronen, Eyal and Shamir, Adi},
  title =	{{Tight Bounds on Online Checkpointing Algorithms}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{13:1--13:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.13},
  URN =		{urn:nbn:de:0030-drops-90179},
  doi =		{10.4230/LIPIcs.ICALP.2018.13},
  annote =	{Keywords: checkpoint, checkpointing algorithm, online algorithm, uniform distribution, discrepancy}
}
Document
Fast Reed-Solomon Interactive Oracle Proofs of Proximity

Authors: Eli Ben-Sasson, Iddo Bentov, Yinon Horesh, and Michael Riabzev


Abstract
The family of Reed-Solomon (RS) codes plays a prominent role in the construction of quasilinear probabilistically checkable proofs (PCPs) and interactive oracle proofs (IOPs) with perfect zero knowledge and polylogarithmic verifiers. The large concrete computational complexity required to prove membership in RS codes is one of the biggest obstacles to deploying such PCP/IOP systems in practice. To advance on this problem we present a new interactive oracle proof of proximity (IOPP) for RS codes; we call it the Fast RS IOPP (FRI) because (i) it resembles the ubiquitous Fast Fourier Transform (FFT) and (ii) the arithmetic complexity of its prover is strictly linear and that of the verifier is strictly logarithmic (in comparison, FFT arithmetic complexity is quasi-linear but not strictly linear). Prior RS IOPPs and PCPs of proximity (PCPPs) required super-linear proving time even for polynomially large query complexity. For codes of block-length N, the arithmetic complexity of the (interactive) FRI prover is less than 6 * N, while the (interactive) FRI verifier has arithmetic complexity <= 21 * log N, query complexity 2 * log N and constant soundness - words that are delta-far from the code are rejected with probability min{delta * (1-o(1)),delta_0} where delta_0 is a positive constant that depends mainly on the code rate. The particular combination of query complexity and soundness obtained by FRI is better than that of the quasilinear PCPP of [Ben-Sasson and Sudan, SICOMP 2008], even with the tighter soundness analysis of [Ben-Sasson et al., STOC 2013; ECCC 2016]; consequently, FRI is likely to facilitate better concretely efficient zero knowledge proof and argument systems. Previous concretely efficient PCPPs and IOPPs suffered a constant multiplicative factor loss in soundness with each round of "proof composition" and thus used at most O(log log N) rounds. We show that when delta is smaller than the unique decoding radius of the code, FRI suffers only a negligible additive loss in soundness. This observation allows us to increase the number of "proof composition" rounds to Theta(log N) and thereby reduce prover and verifier running time for fixed soundness.
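
A core ingredient behind FRI's FFT-like structure is repeatedly splitting a polynomial into its even- and odd-indexed coefficient halves and folding them with a verifier challenge, which roughly halves the degree per round. The toy Python sketch below only demonstrates that algebraic split-and-fold identity over a small prime field; it is not the FRI protocol (no commitments, queries, or soundness analysis), and all names and parameters are invented for the illustration.

P = 97  # small prime field, for illustration only

def poly_eval(coeffs, x):
    """Evaluate sum(coeffs[i] * x^i) mod P (coefficients in increasing degree)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def fold(coeffs, alpha):
    """Split f(X) = f_e(X^2) + X * f_o(X^2) and return g(Y) = f_e(Y) + alpha * f_o(Y),
    a polynomial of roughly half the degree."""
    f_even, f_odd = coeffs[0::2], coeffs[1::2]
    size = max(len(f_even), len(f_odd))
    f_even += [0] * (size - len(f_even))
    f_odd += [0] * (size - len(f_odd))
    return [(e + alpha * o) % P for e, o in zip(f_even, f_odd)]

if __name__ == "__main__":
    f = [3, 1, 4, 1, 5, 9, 2, 6]          # degree-7 polynomial over F_97
    alpha, x = 11, 5                      # verifier challenge and a sample point
    f_even, f_odd = f[0::2], f[1::2]
    # the split identity the folding is built on: f(x) = f_e(x^2) + x * f_o(x^2)
    assert poly_eval(f, x) == (poly_eval(f_even, x * x % P) + x * poly_eval(f_odd, x * x % P)) % P
    g = fold(f, alpha)                    # one folding round halves the coefficient count
    print(len(f), len(g), poly_eval(g, x * x % P))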

Cite as

Eli Ben-Sasson, Iddo Bentov, Yinon Horesh, and Michael Riabzev. Fast Reed-Solomon Interactive Oracle Proofs of Proximity. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 14:1-14:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bensasson_et_al:LIPIcs.ICALP.2018.14,
  author =	{Ben-Sasson, Eli and Bentov, Iddo and Horesh, Yinon and Riabzev, Michael},
  title =	{{Fast Reed-Solomon Interactive Oracle Proofs of Proximity}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{14:1--14:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.14},
  URN =		{urn:nbn:de:0030-drops-90183},
  doi =		{10.4230/LIPIcs.ICALP.2018.14},
  annote =	{Keywords: Interactive proofs, low degree testing, Reed Solomon codes, proximity testing}
}
Document
NP-Hardness of Coloring 2-Colorable Hypergraph with Poly-Logarithmically Many Colors

Authors: Amey Bhangale


Abstract
We give very short and simple proofs of the following statements: Given a 2-colorable 4-uniform hypergraph on n vertices, 1) It is NP-hard to color it with log^delta n colors for some delta > 0. 2) It is quasi-NP-hard to color it with O(log^{1-o(1)} n) colors. In terms of NP-hardness, it improves the result of Guruswami, Håstad and Sudan [SIAM Journal on Computing, 2002], combined with Moshkovitz-Raz [Journal of the ACM, 2010], by an 'exponential' factor. The second result improves the result of Saket [Conference on Computational Complexity (CCC), 2014] which shows quasi-NP-hardness of coloring a 2-colorable 4-uniform hypergraph with O(log^gamma n) colors for a sufficiently small constant 1 >> gamma > 0. Our result is the first to show the NP-hardness of coloring a c-colorable k-uniform hypergraph with poly-logarithmically many colors, for any constants c >= 2 and k >= 3.

Cite as

Amey Bhangale. NP-Hardness of Coloring 2-Colorable Hypergraph with Poly-Logarithmically Many Colors. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 15:1-15:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bhangale:LIPIcs.ICALP.2018.15,
  author =	{Bhangale, Amey},
  title =	{{NP-Hardness of Coloring 2-Colorable Hypergraph with Poly-Logarithmically Many Colors}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{15:1--15:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.15},
  URN =		{urn:nbn:de:0030-drops-90190},
  doi =		{10.4230/LIPIcs.ICALP.2018.15},
  annote =	{Keywords: Hypergraph coloring, Inapproximability, Schrijver graph}
}
Document
Sublinear Algorithms for MAXCUT and Correlation Clustering

Authors: Aditya Bhaskara, Samira Daruki, and Suresh Venkatasubramanian


Abstract
We study sublinear algorithms for two fundamental graph problems, MAXCUT and correlation clustering. Our focus is on constructing core-sets as well as developing streaming algorithms for these problems. Constant space algorithms are known for dense graphs for these problems, while Omega(n) lower bounds exist (in the streaming setting) for sparse graphs. Our goal in this paper is to bridge the gap between these extremes. Our first result is to construct core-sets of size O~(n^{1-delta}) for both problems, on graphs with average degree n^{delta} (for any delta > 0). This turns out to be optimal, under the exponential time hypothesis (ETH). Our core-set analysis is based on studying random-induced sub-problems of optimization problems. To the best of our knowledge, all the known results in our parameter range rely crucially on near-regularity assumptions. We avoid these by using a biased sampling approach, which we analyze using recent results on concentration of quadratic functions. We then show that our construction yields a 2-pass streaming (1+epsilon)-approximation for both problems; the algorithm uses O~(n^{1-delta}) space, for graphs of average degree n^delta.
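
For intuition about why small samples suffice in the dense regime, the Python sketch below shows a "sample a few vertices, solve MAXCUT on the induced subgraph, rescale" estimator in the spirit of the known constant-space results for dense graphs; it is not the biased-sampling core-set construction of this paper, and the sample size needed for a given accuracy is deliberately left unspecified.

import random
from itertools import product

def cut_value(edges, side):
    """Number of edges crossing the cut described by side: vertex -> 0/1."""
    return sum(1 for u, v in edges if side.get(u) != side.get(v))

def maxcut_bruteforce(vertices, edges):
    """Exact MAXCUT by enumerating all 2^n two-colourings (tiny vertex sets only)."""
    vertices = list(vertices)
    best = 0
    for bits in product([0, 1], repeat=len(vertices)):
        best = max(best, cut_value(edges, dict(zip(vertices, bits))))
    return best

def sampled_maxcut_estimate(vertices, edges, sample_size, seed=0):
    """Estimate the MAXCUT value of a dense graph from a small induced subgraph,
    then rescale to the full graph (illustrative sketch only)."""
    rng = random.Random(seed)
    s = set(rng.sample(list(vertices), sample_size))
    sub_edges = [(u, v) for u, v in edges if u in s and v in s]
    sub_opt = maxcut_bruteforce(s, sub_edges)
    n = len(vertices)
    # rescale from the ~sample_size^2 sampled pairs to the ~n^2 pairs of the graph
    return sub_opt * (n * (n - 1)) / (sample_size * (sample_size - 1))

if __name__ == "__main__":
    n = 60
    rng = random.Random(1)
    edges = [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < 0.5]
    print(sampled_maxcut_estimate(range(n), edges, sample_size=12))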

Cite as

Aditya Bhaskara, Samira Daruki, and Suresh Venkatasubramanian. Sublinear Algorithms for MAXCUT and Correlation Clustering. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 16:1-16:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bhaskara_et_al:LIPIcs.ICALP.2018.16,
  author =	{Bhaskara, Aditya and Daruki, Samira and Venkatasubramanian, Suresh},
  title =	{{Sublinear Algorithms for MAXCUT and Correlation Clustering}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{16:1--16:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.16},
  URN =		{urn:nbn:de:0030-drops-90203},
  doi =		{10.4230/LIPIcs.ICALP.2018.16},
  annote =	{Keywords: Sublinear algorithms, Streaming algorithms, Core-sets, Maximum cut, Correlation clustering}
}
Document
Parameterized Intractability of Even Set and Shortest Vector Problem from Gap-ETH

Authors: Arnab Bhattacharyya, Suprovat Ghoshal, Karthik C. S., and Pasin Manurangsi


Abstract
The k-Even Set problem is a parameterized variant of the Minimum Distance Problem of linear codes over F_2, which can be stated as follows: given a generator matrix A and an integer k, determine whether the code generated by A has distance at most k. Here, k is the parameter of the problem. The question of whether k-Even Set is fixed parameter tractable (FPT) has been repeatedly raised in literature and has earned its place in Downey and Fellows' book (2013) as one of the "most infamous" open problems in the field of Parameterized Complexity. In this work, we show that k-Even Set does not admit FPT algorithms under the (randomized) Gap Exponential Time Hypothesis (Gap-ETH) [Dinur'16, Manurangsi-Raghavendra'16]. In fact, our result rules out not only exact FPT algorithms, but also any constant factor FPT approximation algorithms for the problem. Furthermore, our result holds even under the following weaker assumption, which is also known as the Parameterized Inapproximability Hypothesis (PIH) [Lokshtanov et al.'17]: no (randomized) FPT algorithm can distinguish a satisfiable 2CSP instance from one which is only 0.99-satisfiable (where the parameter is the number of variables). We also consider the parameterized k-Shortest Vector Problem (SVP), in which we are given a lattice whose basis vectors are integral and an integer k, and the goal is to determine whether the norm of the shortest vector (in the l_p norm for some fixed p) is at most k. Similar to k-Even Set, this problem is also a long-standing open problem in the field of Parameterized Complexity. We show that, for any p > 1, k-SVP is hard to approximate (in FPT time) to some constant factor, assuming PIH. Furthermore, for the case of p = 2, the inapproximability factor can be amplified to any constant.
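
To fix the definitions, the following brute-force Python check answers the k-Even Set / Minimum Distance question for a tiny code over F_2 by enumerating all 2^k - 1 nonzero messages and taking the minimum Hamming weight of the resulting codewords; it is exponential in the dimension and serves only to illustrate the problem statement.

from itertools import product

def min_distance(G):
    """Minimum distance of the binary linear code generated by the rows of G
    (brute force over all nonzero messages; illustration only)."""
    best = None
    for msg in product([0, 1], repeat=len(G)):
        if not any(msg):
            continue
        codeword = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        weight = sum(codeword)
        best = weight if best is None else min(best, weight)
    return best

def k_even_set(G, k):
    """Decide the parameterized question: does the code have distance at most k?"""
    return min_distance(G) <= k

if __name__ == "__main__":
    G = [[1, 0, 0, 1, 1, 0],     # a small 3 x 6 generator matrix over F_2
         [0, 1, 0, 1, 0, 1],
         [0, 0, 1, 0, 1, 1]]
    print(min_distance(G), k_even_set(G, 2))   # distance 3, so the answer for k=2 is False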

Cite as

Arnab Bhattacharyya, Suprovat Ghoshal, Karthik C. S., and Pasin Manurangsi. Parameterized Intractability of Even Set and Shortest Vector Problem from Gap-ETH. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 17:1-17:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bhattacharyya_et_al:LIPIcs.ICALP.2018.17,
  author =	{Bhattacharyya, Arnab and Ghoshal, Suprovat and C. S., Karthik and Manurangsi, Pasin},
  title =	{{Parameterized Intractability of Even Set and Shortest Vector Problem from Gap-ETH}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{17:1--17:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.17},
  URN =		{urn:nbn:de:0030-drops-90214},
  doi =		{10.4230/LIPIcs.ICALP.2018.17},
  annote =	{Keywords: Parameterized Complexity, Inapproximability, Even Set, Minimum Distance Problem, Shortest Vector Problem, Gap-ETH}
}
Document
Rollercoasters and Caterpillars

Authors: Therese Biedl, Ahmad Biniaz, Robert Cummings, Anna Lubiw, Florin Manea, Dirk Nowotka, and Jeffrey Shallit


Abstract
A rollercoaster is a sequence of real numbers for which every maximal contiguous subsequence - increasing or decreasing - has length at least three. By translating this sequence to a set of points in the plane, a rollercoaster can be defined as an x-monotone polygonal path for which every maximal sub-path, with positive- or negative-slope edges, has at least three vertices. Given a sequence of distinct real numbers, the rollercoaster problem asks for a maximum-length (not necessarily contiguous) subsequence that is a rollercoaster. It was conjectured that every sequence of n distinct real numbers contains a rollercoaster of length at least ceil[n/2] for n>7, while the best known lower bound is Omega(n/log n). In this paper we prove this conjecture. Our proof is constructive and implies a linear-time algorithm for computing a rollercoaster of this length. Extending the O(n log n)-time algorithm for computing a longest increasing subsequence, we show how to compute a maximum-length rollercoaster within the same time bound. A maximum-length rollercoaster in a permutation of {1,...,n} can be computed in O(n log log n) time. The search for rollercoasters was motivated by orthogeodesic point-set embedding of caterpillars. A caterpillar is a tree such that deleting the leaves gives a path, called the spine. A top-view caterpillar is one of maximum degree 4 such that the two leaves adjacent to each vertex lie on opposite sides of the spine. As an application of our result on rollercoasters, we are able to find a planar drawing of every n-vertex top-view caterpillar on every set of 25/3(n+4) points in the plane, such that each edge is an orthogonal path with one bend. This improves the previous best known upper bound on the number of required points, which is O(n log n). We also show that such a drawing can be obtained in linear time, when the points are given in sorted order.
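
For concreteness, the following Python helper simply checks the defining property of a rollercoaster (every maximal contiguous increasing or decreasing run has at least three elements); sequences with fewer than three elements are rejected by convention in this sketch, and it is only the definition, not the subsequence algorithms of the paper.

def is_rollercoaster(seq):
    """True iff every maximal contiguous increasing or decreasing run of the
    sequence of distinct numbers has at least three elements."""
    if len(seq) < 3:
        return False                      # too short for any length-3 run (convention here)
    directions = [1 if b > a else -1 for a, b in zip(seq, seq[1:])]
    run = 2                               # elements in the current monotone run
    for prev, cur in zip(directions, directions[1:]):
        if cur == prev:
            run += 1
        else:
            if run < 3:
                return False              # a maximal run ended with only 2 elements
            run = 2                       # the turning point starts the next run
    return run >= 3

if __name__ == "__main__":
    print(is_rollercoaster([1, 4, 7, 3, 2, 5, 8, 9]))   # True: runs 1,4,7 / 7,3,2 / 2,5,8,9
    print(is_rollercoaster([1, 4, 3, 7, 2]))            # False: the run 1,4 has length 2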

Cite as

Therese Biedl, Ahmad Biniaz, Robert Cummings, Anna Lubiw, Florin Manea, Dirk Nowotka, and Jeffrey Shallit. Rollercoasters and Caterpillars. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 18:1-18:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{biedl_et_al:LIPIcs.ICALP.2018.18,
  author =	{Biedl, Therese and Biniaz, Ahmad and Cummings, Robert and Lubiw, Anna and Manea, Florin and Nowotka, Dirk and Shallit, Jeffrey},
  title =	{{Rollercoasters and Caterpillars}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{18:1--18:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.18},
  URN =		{urn:nbn:de:0030-drops-90224},
  doi =		{10.4230/LIPIcs.ICALP.2018.18},
  annote =	{Keywords: sequences, alternating runs, patterns in permutations, caterpillars}
}
Document
New algorithms for Steiner tree reoptimization

Authors: Davide Bilò


Abstract
Reoptimization is a setting in which we are given a (near-)optimal solution of a problem instance and a local modification that slightly changes the instance. The main goal is that of finding a (near-)optimal solution of the modified instance. We investigate one of the most studied scenarios in reoptimization, known as Steiner tree reoptimization. Steiner tree reoptimization is a collection of strongly NP-hard optimization problems that are defined on top of the classical Steiner tree problem and for which several constant-factor approximation algorithms have been designed in the last decade. In this paper we improve upon all these results by developing a novel technique that allows us to design polynomial-time approximation schemes. Remarkably, prior to this paper, no approximation algorithm better than recomputing a solution from scratch was known for the elusive scenario in which the cost of a single edge decreases. Our results are best possible since none of the problems addressed in this paper admits a fully polynomial-time approximation scheme, unless P=NP.

Cite as

Davide Bilò. New algorithms for Steiner tree reoptimization. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 19:1-19:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bilo:LIPIcs.ICALP.2018.19,
  author =	{Bil\`{o}, Davide},
  title =	{{New algorithms for Steiner tree reoptimization}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{19:1--19:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.19},
  URN =		{urn:nbn:de:0030-drops-90234},
  doi =		{10.4230/LIPIcs.ICALP.2018.19},
  annote =	{Keywords: Steiner tree problem, reoptimization, approximation algorithms}
}
Document
Efficient Shortest Paths in Scale-Free Networks with Underlying Hyperbolic Geometry

Authors: Thomas Bläsius, Cedric Freiberger, Tobias Friedrich, Maximilian Katzmann, Felix Montenegro-Retana, and Marianne Thieffry


Abstract
A common way to accelerate shortest path algorithms on graphs is the use of a bidirectional search, which simultaneously explores the graph from the start and the destination. It has been observed recently that this strategy performs particularly well on scale-free real-world networks. Such networks typically have a heterogeneous degree distribution (e.g., a power-law distribution) and high clustering (i.e., vertices with a common neighbor are likely to be connected themselves). These two properties can be obtained by assuming an underlying hyperbolic geometry. To explain the observed behavior of the bidirectional search, we analyze its running time on hyperbolic random graphs and prove that it is O~(n^{2 - 1/alpha} + n^{1/(2 alpha)} + delta_{max}) with high probability, where alpha in (0.5, 1) controls the power-law exponent of the degree distribution, and delta_{max} is the maximum degree. This bound is sublinear, improving the obvious worst-case linear bound. Although our analysis depends on the underlying geometry, the algorithm itself is oblivious to it.
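
As a reminder of the routine being analysed, here is a compact Python bidirectional BFS for unweighted graphs: it repeatedly expands the smaller frontier and stops once the two searches meet, returning the shortest-path length. The paper's contribution is the probabilistic analysis on hyperbolic random graphs, not this standard routine.

def bidirectional_bfs(adj, s, t):
    """Length of a shortest s-t path in an unweighted graph (adjacency dict), or None."""
    if s == t:
        return 0
    dist_s, dist_t = {s: 0}, {t: 0}
    frontier_s, frontier_t = {s}, {t}
    while frontier_s and frontier_t:
        if len(frontier_s) > len(frontier_t):          # always expand the cheaper side
            frontier_s, frontier_t = frontier_t, frontier_s
            dist_s, dist_t = dist_t, dist_s
        nxt, best = set(), None
        for u in frontier_s:
            for v in adj[u]:
                if v in dist_t:                        # the two searches meet
                    cand = dist_s[u] + 1 + dist_t[v]
                    best = cand if best is None else min(best, cand)
                if v not in dist_s:
                    dist_s[v] = dist_s[u] + 1
                    nxt.add(v)
        if best is not None:                           # finish the level, then stop
            return best
        frontier_s = nxt
    return None

if __name__ == "__main__":
    graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
    print(bidirectional_bfs(graph, 0, 4))   # 3, e.g. the path 0-1-3-4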

Cite as

Thomas Bläsius, Cedric Freiberger, Tobias Friedrich, Maximilian Katzmann, Felix Montenegro-Retana, and Marianne Thieffry. Efficient Shortest Paths in Scale-Free Networks with Underlying Hyperbolic Geometry. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 20:1-20:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{blasius_et_al:LIPIcs.ICALP.2018.20,
  author =	{Bl\"{a}sius, Thomas and Freiberger, Cedric and Friedrich, Tobias and Katzmann, Maximilian and Montenegro-Retana, Felix and Thieffry, Marianne},
  title =	{{Efficient Shortest Paths in Scale-Free Networks with Underlying Hyperbolic Geometry}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{20:1--20:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.20},
  URN =		{urn:nbn:de:0030-drops-90246},
  doi =		{10.4230/LIPIcs.ICALP.2018.20},
  annote =	{Keywords: random graphs, hyperbolic geometry, scale-free networks, bidirectional shortest path}
}
Document
Approximate Convex Hull of Data Streams

Authors: Avrim Blum, Vladimir Braverman, Ananya Kumar, Harry Lang, and Lin F. Yang


Abstract
Given a finite set of points P subseteq R^d, we would like to find a small subset S subseteq P such that the convex hull of S approximately contains P. More formally, every point in P is within distance epsilon from the convex hull of S. Such a subset S is called an epsilon-hull. Computing an epsilon-hull is an important problem in computational geometry, machine learning, and approximation algorithms. In many applications, the set P is too large to fit in memory. We consider the streaming model where the algorithm receives the points of P sequentially and strives to use a minimal amount of memory. Existing streaming algorithms for computing an epsilon-hull require O(epsilon^{(1-d)/2}) space, which is optimal for a worst-case input. However, this ignores the structure of the data. The minimal size of an epsilon-hull of P, which we denote by OPT, can be much smaller. A natural question is whether a streaming algorithm can compute an epsilon-hull using only O(OPT) space. We begin with lower bounds that show, under a reasonable streaming model, that it is not possible to have a single-pass streaming algorithm that computes an epsilon-hull with O(OPT) space. We instead propose three relaxations of the problem for which we can compute epsilon-hulls using space near-linear in the optimal size. Our first algorithm, for points in R^2 that arrive in random order, uses O(log n * OPT) space. Our second algorithm, for points in R^2, makes O(log(epsilon^{-1})) passes before outputting the epsilon-hull and requires O(OPT) space. Our third algorithm, for points in R^d for any fixed dimension d, outputs, with high probability, an epsilon-hull for all but a delta-fraction of directions and requires O(OPT * log OPT) space.
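
The following toy Python sketch (purely illustrative; it is not one of the paper's algorithms, and the point sets are made up) makes the epsilon-hull notion concrete for points in R^2 via support functions: P lies within distance epsilon of the convex hull of S exactly when, in every unit direction, the support of S falls short of the support of P by at most epsilon. Sampling directions also mirrors the "all but a delta-fraction of directions" guarantee of the third algorithm.

import math, random

def support(points, u):
    # Support function h(u): the largest projection of any point onto u.
    return max(p[0] * u[0] + p[1] * u[1] for p in points)

def directional_eps_hull_fraction(P, S, eps, n_dirs=1000, seed=0):
    # Fraction of sampled unit directions in which S is an eps-hull of P.
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_dirs):
        theta = rng.uniform(0, 2 * math.pi)
        u = (math.cos(theta), math.sin(theta))
        if support(P, u) - support(S, u) <= eps:
            ok += 1
    return ok / n_dirs

# 200 points on the unit circle; keeping every 10th one is already a 0.05-hull.
P = [(math.cos(2 * math.pi * i / 200), math.sin(2 * math.pi * i / 200)) for i in range(200)]
S = P[::10]
print(directional_eps_hull_fraction(P, S, eps=0.05))   # 1.0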

Cite as

Avrim Blum, Vladimir Braverman, Ananya Kumar, Harry Lang, and Lin F. Yang. Approximate Convex Hull of Data Streams. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 21:1-21:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{blum_et_al:LIPIcs.ICALP.2018.21,
  author =	{Blum, Avrim and Braverman, Vladimir and Kumar, Ananya and Lang, Harry and Yang, Lin F.},
  title =	{{Approximate Convex Hull of Data Streams}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{21:1--21:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.21},
  URN =		{urn:nbn:de:0030-drops-90254},
  doi =		{10.4230/LIPIcs.ICALP.2018.21},
  annote =	{Keywords: Convex Hulls, Streaming Algorithms, Epsilon Kernels, Sparse Coding}
}
Document
Small Bias Requires Large Formulas

Authors: Andrej Bogdanov


Abstract
A small-biased function is a randomized function whose distribution of truth-tables is small-biased. We demonstrate that known explicit lower bounds on (1) the size of general Boolean formulas, (2) the size of De Morgan formulas, and (3) correlation against small De Morgan formulas apply to small-biased functions. As a consequence, any strongly explicit small-biased generator is subject to the best-known explicit formula lower bounds in all these models. On the other hand, we give a construction of a small-biased function that is tight with respect to lower bound (1) for the relevant range of parameters. We interpret this construction as a natural proofs-type barrier against substantially stronger lower bounds for general formulas.

Cite as

Andrej Bogdanov. Small Bias Requires Large Formulas. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 22:1-22:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{bogdanov:LIPIcs.ICALP.2018.22,
  author =	{Bogdanov, Andrej},
  title =	{{Small Bias Requires Large Formulas}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{22:1--22:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.22},
  URN =		{urn:nbn:de:0030-drops-90264},
  doi =		{10.4230/LIPIcs.ICALP.2018.22},
  annote =	{Keywords: formula lower bounds, natural proofs, pseudorandomness}
}
Document
Geodesic Obstacle Representation of Graphs

Authors: Prosenjit Bose, Paz Carmi, Vida Dujmovic, Saeed Mehrabi, Fabrizio Montecchiani, Pat Morin, and Luis Fernando Schultz Xavier da Silveira


Abstract
An obstacle representation of a graph is a mapping of the vertices onto points in the plane and a set of connected regions of the plane (called obstacles) such that the straight-line segment connecting the points corresponding to two vertices does not intersect any obstacles if and only if the vertices are adjacent in the graph. The obstacle representation and its plane variant (in which the resulting representation is a plane straight-line embedding of the graph) have been extensively studied with the main objective of minimizing the number of obstacles. Recently, Biedl and Mehrabi [Therese C. Biedl and Saeed Mehrabi, 2017] studied non-blocking grid obstacle representations of graphs in which the vertices of the graph are mapped onto points in the plane while the straight-line segments representing the adjacency between the vertices are replaced by the L_1 (Manhattan) shortest paths in the plane that avoid obstacles. In this paper, we introduce the notion of geodesic obstacle representations of graphs with the main goal of providing a generalized model, which comes naturally when viewing line segments as shortest paths in the Euclidean plane. To this end, we extend the definition of obstacle representation by allowing some obstacle-avoiding shortest path between the corresponding points in the underlying metric space whenever the vertices are adjacent in the graph. We consider both general and plane variants of geodesic obstacle representations (in a similar sense to obstacle representations) under any polyhedral distance function in R^d as well as shortest path distances in graphs. Our results generalize and unify the notions of obstacle representations, plane obstacle representations and grid obstacle representations, leading to a number of questions on such representations.

Cite as

Prosenjit Bose, Paz Carmi, Vida Dujmovic, Saeed Mehrabi, Fabrizio Montecchiani, Pat Morin, and Luis Fernando Schultz Xavier da Silveira. Geodesic Obstacle Representation of Graphs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 23:1-23:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{bose_et_al:LIPIcs.ICALP.2018.23,
  author =	{Bose, Prosenjit and Carmi, Paz and Dujmovic, Vida and Mehrabi, Saeed and Montecchiani, Fabrizio and Morin, Pat and Silveira, Luis Fernando Schultz Xavier da},
  title =	{{Geodesic Obstacle Representation of Graphs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{23:1--23:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.23},
  URN =		{urn:nbn:de:0030-drops-90274},
  doi =		{10.4230/LIPIcs.ICALP.2018.23},
  annote =	{Keywords: Obstacle representation, Grid obstacle representation, Geodesic obstacle representation}
}
Document
The Bottleneck Complexity of Secure Multiparty Computation

Authors: Elette Boyle, Abhishek Jain, Manoj Prabhakaran, and Ching-Hua Yu


Abstract
In this work, we initiate the study of bottleneck complexity as a new communication efficiency measure for secure multiparty computation (MPC). Roughly, the bottleneck complexity of an MPC protocol is defined as the maximum communication complexity required by any party within the protocol execution. We observe that even without security, bottleneck communication complexity is an interesting measure of communication complexity for (distributed) functions and propose it as a fundamental area to explore. While achieving O(n) bottleneck complexity (where n is the number of parties) is straightforward, we show that: (1) achieving sublinear bottleneck complexity is not always possible, even when no security is required. (2) On the other hand, several useful classes of functions do have o(n) bottleneck complexity, when no security is required. Our main positive result is a compiler that transforms any (possibly insecure) efficient protocol with a fixed communication pattern for computing any functionality into a secure MPC protocol while preserving the bottleneck complexity of the underlying protocol (up to security parameter overhead). Given our compiler, an efficient protocol for any function f with sublinear bottleneck complexity can be transformed into an MPC protocol for f with the same bottleneck complexity. Along the way, we build cryptographic primitives - incremental fully-homomorphic encryption, succinct non-interactive arguments of knowledge with an ID-based simulation-extractability property, and verifiable protocol execution - that may be of independent interest.
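
A small illustrative computation of the measure (a simplification in which a party's communication is the total number of bits it sends and receives; the transcript format is made up): in a star-shaped protocol the total communication is spread over many parties, but the bottleneck complexity is concentrated at the centre.

def bottleneck_complexity(messages):
    # messages: list of (sender, receiver, n_bits); the bottleneck complexity
    # is the largest number of bits handled by any single party.
    load = {}
    for sender, receiver, bits in messages:
        load[sender] = load.get(sender, 0) + bits
        load[receiver] = load.get(receiver, 0) + bits
    return max(load.values())

# Parties 1..4 each send 128 bits to party 0 and receive 128 bits back.
transcript = [(i, 0, 128) for i in range(1, 5)] + [(0, i, 128) for i in range(1, 5)]
print(bottleneck_complexity(transcript))   # 1024: the centre handles everything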

Cite as

Elette Boyle, Abhishek Jain, Manoj Prabhakaran, and Ching-Hua Yu. The Bottleneck Complexity of Secure Multiparty Computation. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{boyle_et_al:LIPIcs.ICALP.2018.24,
  author =	{Boyle, Elette and Jain, Abhishek and Prabhakaran, Manoj and Yu, Ching-Hua},
  title =	{{The Bottleneck Complexity of Secure Multiparty Computation}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.24},
  URN =		{urn:nbn:de:0030-drops-90288},
  doi =		{10.4230/LIPIcs.ICALP.2018.24},
  annote =	{Keywords: distributed protocols, secure computation, communication complexity}
}
Document
Revisiting Frequency Moment Estimation in Random Order Streams

Authors: Vladimir Braverman, Emanuele Viola, David P. Woodruff, and Lin F. Yang


Abstract
We revisit one of the classic problems in the data stream literature, namely, that of estimating the frequency moments F_p for 0 < p < 2 of an underlying n-dimensional vector presented as a sequence of additive updates in a stream. It is well-known that using p-stable distributions one can approximate any of these moments up to a multiplicative (1+epsilon)-factor using O(epsilon^{-2} log n) bits of space, and this space bound is optimal up to a constant factor in the turnstile streaming model. We show that surprisingly, if one instead considers the popular random-order model of insertion-only streams, in which the updates to the underlying vector arrive in a random order, then one can beat this space bound and achieve O~(epsilon^{-2} + log n) bits of space, where the O~ hides poly(log(1/epsilon) + log log n) factors. If epsilon^{-2} ~~ log n, this represents a roughly quadratic improvement in the space achievable in turnstile streams. Our algorithm is in fact deterministic, and we show our space bound is optimal up to poly(log(1/epsilon) + log log n) factors for deterministic algorithms in the random order model. We also obtain a similar improvement in space for p = 2 whenever F_2 >~ log n * F_1.
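
For contrast with the random-order result above, the following Python sketch shows the classical p-stable baseline for p=1 in the turnstile model (a toy illustration, not the paper's algorithm; parameters and names are made up): maintain the inner products of the update vector with i.i.d. standard Cauchy rows and return the median of their absolute values, which concentrates around ||x||_1.

import math, random

def cauchy_sketch(updates, n, k, seed=0):
    # y[i] = <row_i, x> for k independent rows of standard Cauchy entries,
    # maintained under additive (turnstile) updates x[j] += delta.
    rng = random.Random(seed)
    rows = [[math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]
            for _ in range(k)]
    y = [0.0] * k
    for j, delta in updates:
        for i in range(k):
            y[i] += rows[i][j] * delta
    return y

def estimate_f1(y):
    # The median of |Cauchy| is 1, so median(|y_i|) estimates ||x||_1.
    s = sorted(abs(v) for v in y)
    return s[len(s) // 2]

stream = [(0, 3), (2, -1), (0, 2), (5, 4)]              # x = (5,0,-1,0,0,4,0,0)
print(estimate_f1(cauchy_sketch(stream, n=8, k=201)))   # close to ||x||_1 = 10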

Cite as

Vladimir Braverman, Emanuele Viola, David P. Woodruff, and Lin F. Yang. Revisiting Frequency Moment Estimation in Random Order Streams. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{braverman_et_al:LIPIcs.ICALP.2018.25,
  author =	{Braverman, Vladimir and Viola, Emanuele and Woodruff, David P. and Yang, Lin F.},
  title =	{{Revisiting Frequency Moment Estimation in Random Order Streams}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{25:1--25:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.25},
  URN =		{urn:nbn:de:0030-drops-90294},
  doi =		{10.4230/LIPIcs.ICALP.2018.25},
  annote =	{Keywords: Data Stream, Frequency Moments, Random Order, Space Complexity, Insertion Only Stream}
}
Document
Proportional Approval Voting, Harmonic k-median, and Negative Association

Authors: Jaroslaw Byrka, Piotr Skowron, and Krzysztof Sornat


Abstract
We study a generic framework that provides a unified view on two important classes of problems: (i) extensions of the k-median problem where clients are interested in having multiple facilities in their vicinity (e.g., because, with some small probability, the closest facility might be malfunctioning and so might not be available for use), and (ii) finding winners according to some appealing multiwinner election rules, i.e., election systems aimed at choosing representative bodies, such as parliaments, based on the preferences of a population of voters over individual candidates. Each problem in our framework is associated with a vector of weights: we show that the approximability of the problem depends on structural properties of these vectors. We specifically focus on the harmonic sequence of weights, since it results in particularly appealing properties of the considered problem. In particular, the objective function, interpreted in a multiwinner election setup, corresponds to the well-known Proportional Approval Voting (PAV) rule. Our main result is that, due to the specific (harmonic) structure of weights, the problem allows a constant-factor approximation. This is surprising since the problem can be interpreted as a variant of the k-median problem where we do not assume that the connection costs satisfy the triangle inequality. To the best of our knowledge this is the first constant-factor approximation algorithm for a variant of k-median that does not require this assumption. The algorithm we propose is based on dependent rounding [Srinivasan, FOCS'01] applied to the solution of a natural LP-relaxation of the problem. The rounding process is well known to produce distributions over integral solutions satisfying Negative Correlation (NC), which is usually sufficient for the analysis of approximation guarantees offered by rounding procedures. In our analysis, however, we need to use the fact that the carefully implemented rounding process satisfies a stronger property, called Negative Association (NA), which allows us to apply standard concentration bounds for conditional random variables.
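
To make the harmonic-weight objective concrete, here is a toy Python sketch (brute force on a made-up instance, purely illustrative) that scores a committee under Proportional Approval Voting and enumerates all committees of a given size; the NP-hardness of doing this at scale is exactly why the paper studies approximation.

from itertools import combinations

def pav_score(committee, approvals):
    # Each voter contributes 1 + 1/2 + ... + 1/k, where k is the number of
    # committee members it approves (the harmonic weight vector).
    total = 0.0
    for approved in approvals:
        k = len(approved & committee)
        total += sum(1.0 / i for i in range(1, k + 1))
    return total

def pav_winner(candidates, approvals, size):
    return max(combinations(candidates, size),
               key=lambda c: pav_score(frozenset(c), approvals))

voters = [{"a", "b"}, {"a", "b"}, {"a", "c"}, {"c"}]
print(pav_winner("abc", voters, size=2))   # ('a', 'c'), with PAV score 4.5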

Cite as

Jaroslaw Byrka, Piotr Skowron, and Krzysztof Sornat. Proportional Approval Voting, Harmonic k-median, and Negative Association. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 26:1-26:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{byrka_et_al:LIPIcs.ICALP.2018.26,
  author =	{Byrka, Jaroslaw and Skowron, Piotr and Sornat, Krzysztof},
  title =	{{Proportional Approval Voting, Harmonic k-median, and Negative Association}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{26:1--26:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.26},
  URN =		{urn:nbn:de:0030-drops-90307},
  doi =		{10.4230/LIPIcs.ICALP.2018.26},
  annote =	{Keywords: approximation algorithms, computational social choice, k-median, dependent rounding, negative association}
}
Document
Fine-Grained Derandomization: From Problem-Centric to Resource-Centric Complexity

Authors: Marco L. Carmosino, Russell Impagliazzo, and Manuel Sabin


Abstract
We show that popular hardness conjectures about problems from the field of fine-grained complexity theory imply structural results for resource-based complexity classes. Namely, we show that if either k-Orthogonal Vectors or k-CLIQUE requires n^{epsilon k} time, for some constant epsilon>1/2, to count (note that these conjectures are significantly weaker than the usual ones made on these problems) on randomized machines for all but finitely many input lengths, then we have the following derandomizations: - BPP can be decided in polynomial time using only n^alpha random bits on average over any efficient input distribution, for any constant alpha>0 - BPP can be decided in polynomial time with no randomness on average over the uniform distribution This answers an open question of Ball et al. (STOC '17) in the positive of whether derandomization can be achieved from conjectures from fine-grained complexity theory. More strongly, these derandomizations improve over all previous ones achieved from worst-case uniform assumptions by succeeding on all but finitely many input lengths. Previously, derandomizations from worst-case uniform assumptions were only know to succeed on infinitely many input lengths. It is specifically the structure and moderate hardness of the k-Orthogonal Vectors and k-CLIQUE problems that makes removing this restriction possible. Via this uniform derandomization, we connect the problem-centric and resource-centric views of complexity theory by showing that exact hardness assumptions about specific problems like k-CLIQUE imply quantitative and qualitative relationships between randomized and deterministic time. This can be either viewed as a barrier to proving some of the main conjectures of fine-grained complexity theory lest we achieve a major breakthrough in unconditional derandomization or, optimistically, as route to attain such derandomizations by working on very concrete and weak conjectures about specific problems.

Cite as

Marco L. Carmosino, Russell Impagliazzo, and Manuel Sabin. Fine-Grained Derandomization: From Problem-Centric to Resource-Centric Complexity. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 27:1-27:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{carmosino_et_al:LIPIcs.ICALP.2018.27,
  author =	{Carmosino, Marco L. and Impagliazzo, Russell and Sabin, Manuel},
  title =	{{Fine-Grained Derandomization: From Problem-Centric to Resource-Centric Complexity}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{27:1--27:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.27},
  URN =		{urn:nbn:de:0030-drops-90316},
  doi =		{10.4230/LIPIcs.ICALP.2018.27},
  annote =	{Keywords: Derandomization, Hardness vs Randomness, Fine-Grained Complexity, Average-Case Complexity, k-Orthogonal Vectors, k-CLIQUE}
}
Document
Ranking with Fairness Constraints

Authors: L. Elisa Celis, Damian Straszak, and Nisheeth K. Vishnoi


Abstract
Ranking algorithms are deployed widely to order a set of items in applications such as search engines, news feeds, and recommendation systems. Recent studies, however, have shown that, left unchecked, the output of ranking algorithms can result in decreased diversity in the type of content presented, promote stereotypes, and polarize opinions. In order to address such issues, we study the following variant of the traditional ranking problem when, in addition, there are fairness or diversity constraints. Given a collection of items along with 1) the value of placing an item in a particular position in the ranking, 2) the collection of sensitive attributes (such as gender, race, political opinion) of each item and 3) a collection of fairness constraints that, for each k, bound the number of items with each attribute that are allowed to appear in the top k positions of the ranking, the goal is to output a ranking that maximizes the value with respect to the original rank quality metric while respecting the constraints. This problem encapsulates various well-studied problems related to bipartite and hypergraph matching as special cases and turns out to be hard to approximate even with simple constraints. Our main technical contributions are fast exact and approximation algorithms along with complementary hardness results that, together, come close to settling the approximability of this constrained ranking maximization problem. Unlike prior work on the approximability of constrained matching problems, our algorithm runs in linear time, even when the number of constraints is (polynomially) large, its approximation ratio does not depend on the number of constraints, and it produces solutions with small constraint violations. Our results rely on insights about the constrained matching problem when the objective function satisfies certain properties that appear in common ranking metrics such as discounted cumulative gain (DCG), Spearman's rho or Bradley-Terry, along with the nested structure of fairness constraints.
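
A small illustrative sketch of the two ingredients (hypothetical data, not the paper's algorithm): a position-based value function such as discounted cumulative gain, and prefix constraints that cap how many items with each sensitive attribute may occupy the top k positions.

import math

def dcg(ranking, relevance):
    # Discounted cumulative gain: the value of an item decays with its position.
    return sum(relevance[item] / math.log2(pos + 2)
               for pos, item in enumerate(ranking))

def satisfies_constraints(ranking, attribute, caps):
    # caps[k][a]: at most that many items with attribute a in the top k slots.
    for k, bounds in caps.items():
        counts = {}
        for item in ranking[:k]:
            counts[attribute[item]] = counts.get(attribute[item], 0) + 1
        if any(counts.get(a, 0) > limit for a, limit in bounds.items()):
            return False
    return True

relevance = {"x1": 3.0, "x2": 2.5, "x3": 2.0, "x4": 1.0}
attribute = {"x1": "A", "x2": "A", "x3": "B", "x4": "B"}
caps = {2: {"A": 1, "B": 1}}   # the top 2 may contain at most one item per group
print(dcg(["x1", "x2", "x3", "x4"], relevance))                           # highest DCG
print(satisfies_constraints(["x1", "x2", "x3", "x4"], attribute, caps))   # False
print(satisfies_constraints(["x1", "x3", "x2", "x4"], attribute, caps))   # True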

Cite as

L. Elisa Celis, Damian Straszak, and Nisheeth K. Vishnoi. Ranking with Fairness Constraints. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 28:1-28:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{celis_et_al:LIPIcs.ICALP.2018.28,
  author =	{Celis, L. Elisa and Straszak, Damian and Vishnoi, Nisheeth K.},
  title =	{{Ranking with Fairness Constraints}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{28:1--28:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.28},
  URN =		{urn:nbn:de:0030-drops-90329},
  doi =		{10.4230/LIPIcs.ICALP.2018.28},
  annote =	{Keywords: Ranking, Fairness, Optimization, Matching, Approximation Algorithms}
}
Document
Interpolating between k-Median and k-Center: Approximation Algorithms for Ordered k-Median

Authors: Deeparnab Chakrabarty and Chaitanya Swamy


Abstract
We consider a generalization of k-median and k-center, called the ordered k-median problem. In this problem, we are given a metric space (D,{c_{ij}}) with n=|D| points, and a non-increasing weight vector w in R_+^n, and the goal is to open k centers and assign each point j in D to a center so as to minimize w_1 *(largest assignment cost)+w_2 *(second-largest assignment cost)+...+w_n *(n-th largest assignment cost). We give an (18+epsilon)-approximation algorithm for this problem. Our algorithms utilize Lagrangian relaxation and the primal-dual schema, combined with an enumeration procedure of Aouad and Segev. For the special case of {0,1}-weights, which models the problem of minimizing the l largest assignment costs and is interesting in and of itself, we provide a novel reduction to the (standard) k-median problem, showing that LP-relative guarantees for k-median translate to guarantees for the ordered k-median problem; this yields a nice and clean (8.5+epsilon)-approximation algorithm for {0,1}-weights.
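
The objective interpolates between k-center and k-median, as the toy Python sketch below shows (illustrative only; the metric is made up): sort the assignment costs in non-increasing order and take their inner product with the non-increasing weight vector.

dist = [   # a toy 4-point metric (symmetric, zero diagonal)
    [0, 2, 5, 9],
    [2, 0, 4, 8],
    [5, 4, 0, 3],
    [9, 8, 3, 0],
]

def ordered_cost(centers, weights):
    # Assignment cost of a point = distance to its closest open center; the
    # ordered objective weights these costs after sorting them non-increasingly.
    costs = sorted((min(dist[j][c] for c in centers) for j in range(len(dist))),
                   reverse=True)
    return sum(w * c for w, c in zip(weights, costs))

print(ordered_cost({1, 2}, (1, 0, 0, 0)))     # 3   (w = e_1 gives k-center)
print(ordered_cost({1, 2}, (1, 1, 1, 1)))     # 5   (all-ones w gives k-median)
print(ordered_cost({1, 2}, (1, 0.5, 0, 0)))   # 4.0 (an intermediate objective)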

Cite as

Deeparnab Chakrabarty and Chaitanya Swamy. Interpolating between k-Median and k-Center: Approximation Algorithms for Ordered k-Median. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 29:1-29:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chakrabarty_et_al:LIPIcs.ICALP.2018.29,
  author =	{Chakrabarty, Deeparnab and Swamy, Chaitanya},
  title =	{{Interpolating between k-Median and k-Center: Approximation Algorithms for Ordered k-Median}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{29:1--29:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.29},
  URN =		{urn:nbn:de:0030-drops-90335},
  doi =		{10.4230/LIPIcs.ICALP.2018.29},
  annote =	{Keywords: Approximation algorithms, Clustering, Facility location, Primal-dual method}
}
Document
Generalized Center Problems with Outliers

Authors: Deeparnab Chakrabarty and Maryam Negahbani


Abstract
We study the F-center problem with outliers: given a metric space (X,d), a general down-closed family F of subsets of X, and a parameter m, we need to locate a subset S in F of centers such that the maximum distance among the closest m points in X to S is minimized. Our main result is a dichotomy theorem. Colloquially, we prove that there is an efficient 3-approximation for the F-center problem with outliers if and only if we can efficiently optimize a poly-bounded linear function over F subject to a partition constraint. One concrete upshot of our result is a polynomial time 3-approximation for the knapsack center problem with outliers for which no (true) approximation algorithm was known.

Cite as

Deeparnab Chakrabarty and Maryam Negahbani. Generalized Center Problems with Outliers. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 30:1-30:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chakrabarty_et_al:LIPIcs.ICALP.2018.30,
  author =	{Chakrabarty, Deeparnab and Negahbani, Maryam},
  title =	{{Generalized Center Problems with Outliers}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{30:1--30:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.30},
  URN =		{urn:nbn:de:0030-drops-90345},
  doi =		{10.4230/LIPIcs.ICALP.2018.30},
  annote =	{Keywords: Approximation Algorithms, Clustering, k-Center Problem}
}
Document
Orthogonal Point Location and Rectangle Stabbing Queries in 3-d

Authors: Timothy M. Chan, Yakov Nekrich, Saladi Rahul, and Konstantinos Tsakalidis


Abstract
In this work, we present a collection of new results on two fundamental problems in geometric data structures: orthogonal point location and rectangle stabbing. - Orthogonal point location. We give the first linear-space data structure that supports 3-d point location queries on n disjoint axis-aligned boxes with optimal O(log n) query time in the (arithmetic) pointer machine model. This improves the previous O(log^{3/2} n) bound of Rahul [SODA 2015]. We similarly obtain the first linear-space data structure in the I/O model with optimal query cost, and also the first linear-space data structure in the word RAM model with sub-logarithmic query time. - Rectangle stabbing. We give the first linear-space data structure that supports 3-d 4-sided and 5-sided rectangle stabbing queries in optimal O(log_w n + k) time in the word RAM model. We similarly obtain the first optimal data structure for the closely related problem of 2-d top-k rectangle stabbing in the word RAM model, and also improved results for 3-d 6-sided rectangle stabbing. For point location, our solution is simpler than previous methods, and is based on an interesting variant of the van Emde Boas recursion, applied in a round-robin fashion over the dimensions, combined with bit-packing techniques. For rectangle stabbing, our solution is a variant of Alstrup, Brodal, and Rauhe's grid-based recursive technique (FOCS 2000), combined with a number of new ideas.

Cite as

Timothy M. Chan, Yakov Nekrich, Saladi Rahul, and Konstantinos Tsakalidis. Orthogonal Point Location and Rectangle Stabbing Queries in 3-d. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 31:1-31:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chan_et_al:LIPIcs.ICALP.2018.31,
  author =	{Chan, Timothy M. and Nekrich, Yakov and Rahul, Saladi and Tsakalidis, Konstantinos},
  title =	{{Orthogonal Point Location and Rectangle Stabbing Queries in 3-d}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{31:1--31:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.31},
  URN =		{urn:nbn:de:0030-drops-90352},
  doi =		{10.4230/LIPIcs.ICALP.2018.31},
  annote =	{Keywords: geometric data structures, orthogonal point location, rectangle stabbing, pointer machines, I/O model, word RAM model}
}
Document
Spanning Tree Congestion and Computation of Generalized Györi-Lovász Partition

Authors: L. Sunil Chandran, Yun Kuen Cheung, and Davis Issac


Abstract
We study a natural problem in graph sparsification, the Spanning Tree Congestion (STC) problem. Informally, it seeks a spanning tree with no tree-edge routing too many of the original edges. For any general connected graph with n vertices and m edges, we show that its STC is at most O(sqrt{mn}), which is asymptotically optimal since we also demonstrate graphs with STC at least Omega(sqrt{mn}). We present a polynomial-time algorithm which computes a spanning tree with congestion O(sqrt{mn}* log n). We also present another algorithm for computing a spanning tree with congestion O(sqrt{mn}); this algorithm runs in sub-exponential time when m = omega(n log^2 n). For achieving the above results, an important intermediate result is a generalized Györi-Lovász theorem. Chen et al. [Jiangzhuo Chen et al., 2007] gave a non-constructive proof. We give the first elementary and constructive proof, via a local search algorithm with running time O^*(4^n). We discuss some consequences of the theorem concerning graph partitioning, which might be of independent interest. We also show that for any graph which satisfies certain expanding properties, its STC is at most O(n), and a corresponding spanning tree can be computed in polynomial time. We then use this to show that a random graph has STC Theta(n) with high probability.
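
For a concrete reading of the objective, the Python sketch below (illustrative, not one of the paper's algorithms; the instance is made up) computes the congestion of a given spanning tree via the cut characterization: deleting a tree edge splits the tree into two sides, and that edge's congestion is the number of graph edges crossing the resulting cut.

import itertools

def spanning_tree_congestion(edges, tree_edges):
    # Each graph edge is routed along the unique tree path between its
    # endpoints; the STC of the tree is the maximum load on any tree edge.
    def side_of(removed):
        adj = {}
        for a, b in tree_edges:
            if {a, b} != set(removed):
                adj.setdefault(a, []).append(b)
                adj.setdefault(b, []).append(a)
        seen, stack = {removed[0]}, [removed[0]]
        while stack:
            x = stack.pop()
            for y in adj.get(x, []):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen
    worst = 0
    for e in tree_edges:
        side = side_of(e)
        crossing = sum((u in side) != (v in side) for u, v in edges)
        worst = max(worst, crossing)
    return worst

edges = list(itertools.combinations(range(4), 2))   # the complete graph K_4
star = [(0, 1), (0, 2), (0, 3)]                     # a star spanning tree
print(spanning_tree_congestion(edges, star))        # 3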

Cite as

L. Sunil Chandran, Yun Kuen Cheung, and Davis Issac. Spanning Tree Congestion and Computation of Generalized Györi-Lovász Partition. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 32:1-32:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chandran_et_al:LIPIcs.ICALP.2018.32,
  author =	{Chandran, L. Sunil and Cheung, Yun Kuen and Issac, Davis},
  title =	{{Spanning Tree Congestion and Computation of Generalized Gy\"{o}ri-Lov\'{a}sz Partition}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{32:1--32:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.32},
  URN =		{urn:nbn:de:0030-drops-90361},
  doi =		{10.4230/LIPIcs.ICALP.2018.32},
  annote =	{Keywords: Spanning Tree Congestion, Graph Sparsification, Graph Partitioning, Min-Max Graph Partitioning, k-Vertex-Connected Graphs, Gy\"{o}ri-Lov\'{a}sz Theorem}
}
Document
Fully Dynamic Almost-Maximal Matching: Breaking the Polynomial Worst-Case Time Barrier

Authors: Moses Charikar and Shay Solomon


Abstract
The state-of-the-art algorithm for maintaining an approximate maximum matching in fully dynamic graphs has a polynomial worst-case update time, even for poor approximation guarantees. Bhattacharya, Henzinger and Nanongkai showed how to maintain a constant approximation to the minimum vertex cover, and thus also a constant-factor estimate of the maximum matching size, with polylogarithmic worst-case update time. Later (in the SODA'17 proceedings) they improved the approximation to 2+epsilon. Nevertheless, the fundamental problem of maintaining an approximate matching with sub-polynomial worst-case time bounds remained open. We present a randomized algorithm for maintaining an almost-maximal matching in fully dynamic graphs with polylogarithmic worst-case update time. Such a matching provides (2+epsilon)-approximations for both maximum matching and minimum vertex cover, for any epsilon > 0. The worst-case update time of our algorithm, O(poly(log n,epsilon^{-1})), holds deterministically, while the almost-maximality guarantee holds with high probability. Our result was obtained independently of the (2+epsilon)-approximation result of Bhattacharya et al., thus settling the aforementioned problem on dynamic matchings and providing essentially the best possible approximation guarantee for dynamic vertex cover (assuming the unique games conjecture). To prove this result, we exploit a connection between the standard oblivious adversarial model, which can be viewed as inherently "online", and an "offline" model where some (limited) information on the future can be revealed efficiently upon demand. Our randomized algorithm is derived from a deterministic algorithm in this offline model. This approach gives an elegant way to analyze randomized dynamic algorithms, and is of independent interest.

Cite as

Moses Charikar and Shay Solomon. Fully Dynamic Almost-Maximal Matching: Breaking the Polynomial Worst-Case Time Barrier. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 33:1-33:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{charikar_et_al:LIPIcs.ICALP.2018.33,
  author =	{Charikar, Moses and Solomon, Shay},
  title =	{{Fully Dynamic Almost-Maximal Matching: Breaking the Polynomial Worst-Case Time Barrier}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{33:1--33:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.33},
  URN =		{urn:nbn:de:0030-drops-90370},
  doi =		{10.4230/LIPIcs.ICALP.2018.33},
  annote =	{Keywords: dynamic graph algorithms, maximum matching, worst-case bounds}
}
Document
On Estimating Edit Distance: Alignment, Dimension Reduction, and Embeddings

Authors: Moses Charikar, Ofir Geri, Michael P. Kim, and William Kuszmaul


Abstract
Edit distance is a fundamental measure of distance between strings and has been widely studied in computer science. While the problem of estimating edit distance has been studied extensively, the equally important question of actually producing an alignment (i.e., the sequence of edits) has received far less attention. Somewhat surprisingly, we show that any algorithm to estimate edit distance can be used in a black-box fashion to produce an approximate alignment of strings, with modest loss in approximation factor and small loss in run time. Plugging in the result of Andoni, Krauthgamer, and Onak, we obtain an alignment that is a (log n)^{O(1/epsilon^2)} approximation in time O~(n^{1 + epsilon}). Closely related to the study of approximation algorithms is the study of metric embeddings for edit distance. We show that min-hash techniques can be useful in designing edit distance embeddings through three results: (1) An embedding from Ulam distance (edit distance over permutations) to Hamming space that matches the best known distortion of O(log n) and also implicitly encodes a sequence of edits between the strings; (2) In the case where the edit distance between the input strings is known to have an upper bound K, we show that embeddings of edit distance into Hamming space with distortion f(n) can be modified in a black-box fashion to give distortion O(f(poly(K))) for a class of periodic-free strings; (3) A randomized dimension-reduction map with contraction c and asymptotically optimal expected distortion O(c), improving on the previous O~(c^{1 + 2 / log log log n}) distortion result of Batu, Ergun, and Sahinalp.
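
As a baseline for the alignment question discussed above, here is the textbook quadratic dynamic program (illustrative; the paper's point is to approximate this object much faster) that returns the exact edit distance together with one optimal sequence of edits.

def edit_alignment(a, b):
    # Standard O(nm) edit-distance table followed by a traceback that
    # recovers one optimal alignment as a list of edits.
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,                          # delete a[i-1]
                           dp[i][j - 1] + 1,                          # insert b[j-1]
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1])) # match/substitute
    i, j, edits = n, m, []
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]):
            if a[i - 1] != b[j - 1]:
                edits.append(("substitute", i - 1, b[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            edits.append(("delete", i - 1))
            i -= 1
        else:
            edits.append(("insert", i, b[j - 1]))
            j -= 1
    return dp[n][m], list(reversed(edits))

print(edit_alignment("kitten", "sitting"))   # distance 3 plus the three edits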

Cite as

Moses Charikar, Ofir Geri, Michael P. Kim, and William Kuszmaul. On Estimating Edit Distance: Alignment, Dimension Reduction, and Embeddings. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 34:1-34:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{charikar_et_al:LIPIcs.ICALP.2018.34,
  author =	{Charikar, Moses and Geri, Ofir and Kim, Michael P. and Kuszmaul, William},
  title =	{{On Estimating Edit Distance: Alignment, Dimension Reduction, and Embeddings}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{34:1--34:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.34},
  URN =		{urn:nbn:de:0030-drops-90383},
  doi =		{10.4230/LIPIcs.ICALP.2018.34},
  annote =	{Keywords: edit distance, alignment, approximation algorithms, embedding, dimension reduction}
}
Document
How Hard Is It to Satisfy (Almost) All Roommates?

Authors: Jiehua Chen, Danny Hermelin, Manuel Sorge, and Harel Yedidsion


Abstract
The classic Stable Roommates problem (the non-bipartite generalization of the well-known Stable Marriage problem) asks whether there is a stable matching for a given set of agents, i.e. a partitioning of the agents into disjoint pairs such that no two agents induce a blocking pair. Herein, each agent has a preference list denoting who it prefers to have as a partner, and two agents are blocking if they prefer to be with each other rather than with their assigned partners. Since stable matchings may not be unique, we study an NP-hard optimization variant of Stable Roommates, called Egal Stable Roommates, which seeks a stable matching with minimum egalitarian cost gamma, i.e. one minimizing the sum of the agents' dissatisfactions. The dissatisfaction of an agent is the number of agents that this agent prefers over its partner if it is matched; otherwise it is the length of its preference list. We also study almost stable matchings via Min-Block-Pair Stable Roommates, which seeks a matching with a minimum number beta of blocking pairs. Our main result is that Egal Stable Roommates parameterized by gamma is fixed-parameter tractable, while Min-Block-Pair Stable Roommates parameterized by beta is W[1]-hard, even if the length of each preference list is at most five.
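
A small Python sketch (toy instance, not from the paper) of the two quantities being optimized: the blocking pairs of a matching and its egalitarian cost, on the classic three-agent preference cycle padded with a fourth agent.

def blocking_pairs(prefs, match):
    # A pair blocks if both agents prefer each other to their current partners.
    def rank(x, y):
        return prefs[x].index(y)
    def prefers(x, y):
        partner = match.get(x)
        return partner is None or rank(x, y) < rank(x, partner)
    agents = list(prefs)
    return [(x, y) for i, x in enumerate(agents) for y in agents[i + 1:]
            if y in prefs[x] and x in prefs[y]
            and match.get(x) != y and prefers(x, y) and prefers(y, x)]

def egalitarian_cost(prefs, match):
    # Dissatisfaction = rank of the partner, or the list length if unmatched.
    return sum(prefs[x].index(match[x]) if x in match else len(prefs[x])
               for x in prefs)

prefs = {1: [2, 3, 4], 2: [3, 1, 4], 3: [1, 2, 4], 4: [1, 2, 3]}
match = {1: 2, 2: 1, 3: 4, 4: 3}
print(blocking_pairs(prefs, match))    # [(2, 3)]: agents 2 and 3 prefer each other
print(egalitarian_cost(prefs, match))  # 0 + 1 + 2 + 2 = 5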

Cite as

Jiehua Chen, Danny Hermelin, Manuel Sorge, and Harel Yedidsion. How Hard Is It to Satisfy (Almost) All Roommates?. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 35:1-35:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ICALP.2018.35,
  author =	{Chen, Jiehua and Hermelin, Danny and Sorge, Manuel and Yedidsion, Harel},
  title =	{{How Hard Is It to Satisfy (Almost) All Roommates?}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{35:1--35:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.35},
  URN =		{urn:nbn:de:0030-drops-90398},
  doi =		{10.4230/LIPIcs.ICALP.2018.35},
  annote =	{Keywords: NP-hard problems, Data reduction rules, Kernelizations, Parameterized complexity analysis and algorithmics}
}
Document
A Quadratic Size-Hierarchy Theorem for Small-Depth Multilinear Formulas

Authors: Suryajith Chillara, Nutan Limaye, and Srikanth Srinivasan


Abstract
We show explicit separations between the expressive powers of multilinear formulas of small-depth and all polynomial sizes. Formally, for any s = s(n) = n^{O(1)} and any delta>0, we construct explicit families of multilinear polynomials P_n in F[x_1,...,x_n] that have multilinear formulas of size s and depth three but no multilinear formulas of size s^{1/2-delta} and depth o(log n/log log n). As far as we know, this is the first such result for an algebraic model of computation. Our proof can be viewed as a derandomization of a lower bound technique of Raz (JACM 2009) using epsilon-biased spaces.

Cite as

Suryajith Chillara, Nutan Limaye, and Srikanth Srinivasan. A Quadratic Size-Hierarchy Theorem for Small-Depth Multilinear Formulas. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 36:1-36:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chillara_et_al:LIPIcs.ICALP.2018.36,
  author =	{Chillara, Suryajith and Limaye, Nutan and Srinivasan, Srikanth},
  title =	{{A Quadratic Size-Hierarchy Theorem for Small-Depth Multilinear Formulas}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{36:1--36:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.36},
  URN =		{urn:nbn:de:0030-drops-90401},
  doi =		{10.4230/LIPIcs.ICALP.2018.36},
  annote =	{Keywords: Algebraic circuit complexity, Multilinear formulas, Lower Bounds}
}
Document
Restricted Max-Min Fair Allocation

Authors: Siu-Wing Cheng and Yuchen Mao


Abstract
The restricted max-min fair allocation problem seeks an allocation of resources to players that maximizes the minimum total value obtained by any player. It is NP-hard to approximate the problem to a ratio less than 2. Comparing the current best algorithm for estimating the optimal value with the current best for constructing an allocation, there is quite a gap between the ratios that can be achieved in polynomial time: 4+delta for estimation and 6 + 2 sqrt{10} + delta ~~ 12.325 + delta for construction, where delta is an arbitrarily small constant greater than 0. We propose an algorithm that constructs an allocation with value within a factor 6 + delta from the optimum for any constant delta > 0. The running time is polynomial in the input size for any constant delta chosen.

Cite as

Siu-Wing Cheng and Yuchen Mao. Restricted Max-Min Fair Allocation. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 37:1-37:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{cheng_et_al:LIPIcs.ICALP.2018.37,
  author =	{Cheng, Siu-Wing and Mao, Yuchen},
  title =	{{Restricted Max-Min Fair Allocation}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{37:1--37:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.37},
  URN =		{urn:nbn:de:0030-drops-90418},
  doi =		{10.4230/LIPIcs.ICALP.2018.37},
  annote =	{Keywords: Fair allocation, approximation, local search}
}
Document
Improved Approximation for Node-Disjoint Paths in Grids with Sources on the Boundary

Authors: Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat


Abstract
We study the classical Node-Disjoint Paths (NDP) problem: given an undirected n-vertex graph G, together with a set {(s_1,t_1),...,(s_k,t_k)} of pairs of its vertices, called source-destination, or demand pairs, find a maximum-cardinality set {P} of mutually node-disjoint paths that connect the demand pairs. The best current approximation for the problem is achieved by a simple greedy O(sqrt{n})-approximation algorithm. Until recently, the best negative result was an Omega(log^{1/2-epsilon}n)-hardness of approximation, for any fixed epsilon, under standard complexity assumptions. A special case of the problem, where the underlying graph is a grid, has been studied extensively. The best current approximation algorithm for this special case achieves an O~(n^{1/4})-approximation factor. On the negative side, a recent result by the authors shows that NDP is hard to approximate to within factor 2^{Omega(sqrt{log n})}, even if the underlying graph is a subgraph of a grid, and all source vertices lie on the grid boundary. In a very recent follow-up work, the authors further show that NDP in grid graphs is hard to approximate to within factor Omega(2^{log^{1-epsilon}n}) for any constant epsilon under standard complexity assumptions, and to within factor n^{Omega(1/(log log n)^2)} under randomized ETH. In this paper we study the NDP problem in grid graphs, where all source vertices {s_1,...,s_k} appear on the grid boundary. Our main result is an efficient randomized 2^{O(sqrt{log n}* log log n)}-approximation algorithm for this problem. Our result in a sense complements the 2^{Omega(sqrt{log n})}-hardness of approximation for sub-graphs of grids with sources lying on the grid boundary, and should be contrasted with the above-mentioned almost polynomial hardness of approximation of NDP in grid graphs (where the sources and the destinations may lie anywhere in the grid). Much of the work on approximation algorithms for NDP relies on the multicommodity flow relaxation of the problem, which is known to have an Omega(sqrt n) integrality gap, even in grid graphs, with all source and destination vertices lying on the grid boundary. Our work departs from this paradigm, and uses a (completely different) linear program only to select the pairs to be routed, while the routing itself is computed by other methods.

Cite as

Julia Chuzhoy, David H. K. Kim, and Rachit Nimavat. Improved Approximation for Node-Disjoint Paths in Grids with Sources on the Boundary. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 38:1-38:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chuzhoy_et_al:LIPIcs.ICALP.2018.38,
  author =	{Chuzhoy, Julia and Kim, David H. K. and Nimavat, Rachit},
  title =	{{Improved Approximation for Node-Disjoint Paths in Grids with Sources on the Boundary}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{38:1--38:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.38},
  URN =		{urn:nbn:de:0030-drops-90423},
  doi =		{10.4230/LIPIcs.ICALP.2018.38},
  annote =	{Keywords: Node-disjoint paths, approximation algorithms, routing and layout}
}
Document
Optimal Hashing in External Memory

Authors: Alex Conway, Martín Farach-Colton, and Philip Shilane


Abstract
Hash tables are a ubiquitous class of dictionary data structures. However, standard hash table implementations do not translate well into the external memory model, because they do not incorporate locality for insertions. Iacono and Pătraşcu established an update/query tradeoff curve for external-memory hash tables: a hash table that performs insertions in O(lambda/B) amortized IOs requires Omega(log_lambda N) expected IOs for queries, where N is the number of items that can be stored in the data structure, B is the size of a memory transfer, M is the size of memory, and lambda is a tuning parameter. They provide a complicated hashing data structure, which we call the IP hash table, that meets this curve for lambda that is Omega(log log M + log_M N). In this paper, we present a simpler external-memory hash table, the Bundle of Arrays Hash Table (BOA), that is optimal for a narrower range of lambda. The simplicity of BOAs allows them to be readily modified to achieve the following results: - A new external-memory data structure, the Bundle of Trees Hash Table (BOT), that matches the performance of the IP hash table, while retaining some of the simplicity of the BOAs. - The Cache-Oblivious Bundle of Trees Hash Table (COBOT), the first cache-oblivious hash table. This data structure matches the optimality of BOTs and IP hash tables over the same range of lambda.
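
A toy calculation of the tradeoff curve (plain arithmetic with made-up parameters, only to illustrate the formula quoted above): spending more IOs per insertion, i.e. choosing a larger lambda, buys cheaper queries, and vice versa.

import math

def io_tradeoff(N, B, lam):
    # ~lam/B amortized IOs per insertion against ~log_lam N expected IOs per query.
    return lam / B, math.log(N, lam)

for lam in (2, 64, 4096):
    ins, qry = io_tradeoff(N=2**30, B=2**12, lam=lam)
    print(f"lambda={lam:>4}: {ins:.4f} IOs/insert vs {qry:.1f} IOs/query")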

Cite as

Alex Conway, Martín Farach-Colton, and Philip Shilane. Optimal Hashing in External Memory. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 39:1-39:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{conway_et_al:LIPIcs.ICALP.2018.39,
  author =	{Conway, Alex and Farach-Colton, Mart{\'\i}n and Shilane, Philip},
  title =	{{Optimal Hashing in External Memory}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{39:1--39:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.39},
  URN =		{urn:nbn:de:0030-drops-90436},
  doi =		{10.4230/LIPIcs.ICALP.2018.39},
  annote =	{Keywords: hash tables, external memory algorthims, cache-oblivious algorithms, asymmetric data structures}
}
Document
Lovász Meets Weisfeiler and Leman

Authors: Holger Dell, Martin Grohe, and Gaurav Rattan


Abstract
In this paper, we relate a beautiful theory by Lovász with a popular heuristic algorithm for the graph isomorphism problem, namely the color refinement algorithm and its k-dimensional generalization known as the Weisfeiler-Leman algorithm. We prove that two graphs G and H are indistinguishable by the color refinement algorithm if and only if, for all trees T, the number Hom(T,G) of homomorphisms from T to G equals the corresponding number Hom(T,H) for H. There is a natural system of linear equations whose nonnegative integer solutions correspond to the isomorphisms between two graphs. The nonnegative real solutions to this system are called fractional isomorphisms, and two graphs are fractionally isomorphic if and only if the color refinement algorithm cannot distinguish them (Tinhofer 1986, 1991). We show that, if we drop the nonnegativity constraints, that is, if we look for arbitrary real solutions, then a solution to the linear system exists if and only if, for all t, the two graphs have the same number of length-t walks. We lift the results for trees to an equivalence between numbers of homomorphisms from graphs of tree width k, the k-dimensional Weisfeiler-Leman algorithm, and the level-k Sherali-Adams relaxation of our linear program. We also obtain a partial result for graphs of bounded path width and solutions to our system where we drop the nonnegativity constraints. A consequence of our results is a quasi-linear time algorithm to decide whether, for two given graphs G and H, there is a tree T with Hom(T,G)!=Hom(T,H).
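
The color refinement (1-dimensional Weisfeiler-Leman) procedure that the theorem refers to admits a very short implementation. The sketch below is a standard textbook version, not code from the paper: it runs refinement on the disjoint union of the two graphs and compares the resulting color histograms, which is the usual way to test indistinguishability. Graphs are assumed to be given as adjacency dicts.

def cr_indistinguishable(adj_g, adj_h):
    # Color refinement (1-WL) on the disjoint union of G and H.
    # Returns True iff color refinement cannot distinguish the two graphs.
    union = {("G", v): [("G", u) for u in nb] for v, nb in adj_g.items()}
    union.update({("H", v): [("H", u) for u in nb] for v, nb in adj_h.items()})
    colors = {v: 0 for v in union}
    while True:
        # new signature: (own color, sorted multiset of neighbours' colors)
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in union[v])))
               for v in union}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in union}
        if new == colors:      # partition is stable
            break
        colors = new
    hist = lambda tag: sorted(c for (t, _), c in colors.items() if t == tag)
    return hist("G") == hist("H")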

Cite as

Holger Dell, Martin Grohe, and Gaurav Rattan. Lovász Meets Weisfeiler and Leman. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 40:1-40:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{dell_et_al:LIPIcs.ICALP.2018.40,
  author =	{Dell, Holger and Grohe, Martin and Rattan, Gaurav},
  title =	{{Lov\'{a}sz Meets Weisfeiler and Leman}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{40:1--40:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.40},
  URN =		{urn:nbn:de:0030-drops-90444},
  doi =		{10.4230/LIPIcs.ICALP.2018.40},
  annote =	{Keywords: graph isomorphism, graph homomorphism numbers, tree width}
}
Document
Sample-Optimal Identity Testing with High Probability

Authors: Ilias Diakonikolas, Themis Gouleakis, John Peebles, and Eric Price


Abstract
We study the problem of testing identity against a given distribution with a focus on the high confidence regime. More precisely, given samples from an unknown distribution p over n elements, an explicitly given distribution q, and parameters 0 < epsilon, delta < 1, we wish to distinguish, with probability at least 1-delta, whether the distributions are identical versus epsilon-far in total variation distance. Most prior work focused on the case that delta = Omega(1), for which the sample complexity of identity testing is known to be Theta(sqrt{n}/epsilon^2). Given such an algorithm, one can achieve arbitrarily small values of delta via black-box amplification, which multiplies the required number of samples by Theta(log(1/delta)). We show that black-box amplification is suboptimal for any delta = o(1), and give a new identity tester that achieves the optimal sample complexity. Our new upper and lower bounds show that the optimal sample complexity of identity testing is Theta((1/epsilon^2) (sqrt{n log(1/delta)} + log(1/delta))) for any n, epsilon, and delta. For the special case of uniformity testing, where the given distribution is the uniform distribution U_n over the domain, our new tester is surprisingly simple: to test whether p = U_n versus d_{TV}(p, U_n) >= epsilon, we simply threshold d_{TV}(hat{p}, U_n), where hat{p} is the empirical probability distribution. The fact that this simple "plug-in" estimator is sample-optimal is surprising, even in the constant delta case. Indeed, it was believed that such a tester would not attain sublinear sample complexity even for constant values of epsilon and delta. An important contribution of this work lies in the analysis techniques that we introduce in this context. First, we exploit an underlying strong convexity property to bound from below the expectation gap in the completeness and soundness cases. Second, we give a new, fast method for obtaining provably correct empirical estimates of the true worst-case failure probability for a broad class of uniformity testing statistics over all possible input distributions - including all previously studied statistics for this problem. We believe that our novel analysis techniques will be useful for other distribution testing problems as well.
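
The plug-in uniformity tester described in the abstract is easy to state in code. The sketch below computes the empirical distribution and thresholds its total variation distance to U_n; the threshold tau is a stand-in that would have to be calibrated as in the paper's analysis, and the function name is illustrative.

from collections import Counter

def uniformity_test_plugin(samples, n, tau):
    # Plug-in tester: accept "p = U_n" iff d_TV(hat{p}, U_n) <= tau.
    # samples: list of observed elements from {0, ..., n-1}.
    m = len(samples)
    counts = Counter(samples)
    # d_TV(hat{p}, U_n) = (1/2) * sum_i |hat{p}(i) - 1/n|
    tv = 0.5 * sum(abs(counts.get(i, 0) / m - 1.0 / n) for i in range(n))
    return tv <= tau  # True: consistent with uniform; False: declared epsilon-far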

Cite as

Ilias Diakonikolas, Themis Gouleakis, John Peebles, and Eric Price. Sample-Optimal Identity Testing with High Probability. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 41:1-41:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{diakonikolas_et_al:LIPIcs.ICALP.2018.41,
  author =	{Diakonikolas, Ilias and Gouleakis, Themis and Peebles, John and Price, Eric},
  title =	{{Sample-Optimal Identity Testing with High Probability}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{41:1--41:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.41},
  URN =		{urn:nbn:de:0030-drops-90459},
  doi =		{10.4230/LIPIcs.ICALP.2018.41},
  annote =	{Keywords: distribution testing, property testing, sample complexity}
}
Document
Approximating All-Pair Bounded-Leg Shortest Path and APSP-AF in Truly-Subcubic Time

Authors: Ran Duan and Hanlin Ren


Abstract
In the bounded-leg shortest path (BLSP) problem, we are given a weighted graph G with nonnegative edge lengths, and we want to answer queries of the form "what's the shortest path from u to v, where only edges of length <= L are considered?". A more general problem is the APSP-AF (all-pair shortest path for all flows) problem, in which each edge has two weights - a length d and a capacity f, and a query asks about the shortest path from u to v where only edges of capacity >= f are considered. In this article we give an O~(n^{(omega+3)/2}epsilon^{-3/2}log W) time algorithm to compute a data structure that answers APSP-AF queries in O(log(epsilon^{-1}log (nW))) time and achieves a (1+epsilon)-approximation, where omega < 2.373 is the exponent of time complexity of matrix multiplication, W is the upper bound of integer edge lengths, and n is the number of vertices. This is the first truly-subcubic time algorithm for these problems on dense graphs. Our algorithm utilizes the O(n^{(omega+3)/2}) time max-min product algorithm [Duan and Pettie 2009]. Since the all-pair bottleneck path (APBP) problem, which is equivalent to max-min product, can be seen as all-pair reachability for all flows, our approach indeed shows that these problems are almost equivalent in the approximation sense.
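
To make the query semantics concrete, here is a naive per-query baseline (restrict the graph to edges of length at most L and run Dijkstra). It answers one bounded-leg query in O(m log n) time and is only meant to illustrate the problem, not the data structure of the paper; names and the edge-list representation are assumptions.

import heapq

def bounded_leg_shortest_path(n, edges, u, v, L):
    # Naive baseline for one BLSP query: Dijkstra on the subgraph that keeps
    # only edges of length <= L. edges: list of (a, b, length) for an
    # undirected graph on vertices 0..n-1. Returns the distance or None.
    adj = [[] for _ in range(n)]
    for a, b, w in edges:
        if w <= L:
            adj[a].append((b, w))
            adj[b].append((a, w))
    dist = [float("inf")] * n
    dist[u] = 0.0
    heap = [(0.0, u)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue
        if x == v:
            return d
        for y, w in adj[x]:
            if d + w < dist[y]:
                dist[y] = d + w
                heapq.heappush(heap, (d + w, y))
    return None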

Cite as

Ran Duan and Hanlin Ren. Approximating All-Pair Bounded-Leg Shortest Path and APSP-AF in Truly-Subcubic Time. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 42:1-42:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{duan_et_al:LIPIcs.ICALP.2018.42,
  author =	{Duan, Ran and Ren, Hanlin},
  title =	{{Approximating All-Pair Bounded-Leg Shortest Path and APSP-AF in Truly-Subcubic Time}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{42:1--42:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.42},
  URN =		{urn:nbn:de:0030-drops-90467},
  doi =		{10.4230/LIPIcs.ICALP.2018.42},
  annote =	{Keywords: Graph Theory, Approximation Algorithms, Combinatorial Optimization}
}
Document
Single-Source Bottleneck Path Algorithm Faster than Sorting for Sparse Graphs

Authors: Ran Duan, Kaifeng Lyu, and Yuanhang Xie


Abstract
In a directed graph G=(V,E) with a capacity on every edge, a bottleneck path (or widest path) between two vertices is a path maximizing the minimum capacity of edges in the path. For the single-source all-destination version of this problem in directed graphs, the previous best algorithm runs in O(m+n log n) (m=|E| and n=|V|) time, by Dijkstra search with a Fibonacci heap [Fredman and Tarjan 1987]. We improve this time bound to O(m sqrt{log n}+sqrt{mn log n log log n}), which is O(n sqrt{log n log log n}) when m=O(n); thus it is the first algorithm that breaks the time bound of the classic Fibonacci-heap approach when m=o(n sqrt{log n}). The algorithm is Las Vegas randomized. By contrast, the s-t bottleneck path problem admits an algorithm with running time O(m beta(m,n)) [Chechik et al. 2016], where beta(m,n)=min{k >= 1: log^{(k)}n <= m/n}.
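
For reference, the classical Dijkstra-style baseline that this result improves upon maximizes the minimum edge capacity along each path. The sketch below uses Python's heapq rather than a Fibonacci heap (so it runs in O(m log n) here) and assumes positive capacities and an adjacency-list representation; it is the textbook baseline, not the paper's algorithm.

import heapq

def widest_paths_from(source, n, adj):
    # Single-source bottleneck (widest) paths. adj[u] is a list of
    # (v, capacity) for directed edges u -> v. Returns width[v] = the maximum,
    # over all u -> ... -> v paths, of the minimum capacity on the path.
    width = [0.0] * n
    width[source] = float("inf")
    heap = [(-width[source], source)]   # max-heap via negated widths
    while heap:
        w, u = heapq.heappop(heap)
        w = -w
        if w < width[u]:
            continue
        for v, cap in adj[u]:
            cand = min(w, cap)
            if cand > width[v]:
                width[v] = cand
                heapq.heappush(heap, (-cand, v))
    return width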

Cite as

Ran Duan, Kaifeng Lyu, and Yuanhang Xie. Single-Source Bottleneck Path Algorithm Faster than Sorting for Sparse Graphs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 43:1-43:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{duan_et_al:LIPIcs.ICALP.2018.43,
  author =	{Duan, Ran and Lyu, Kaifeng and Xie, Yuanhang},
  title =	{{Single-Source Bottleneck Path Algorithm Faster than Sorting for Sparse Graphs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{43:1--43:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.43},
  URN =		{urn:nbn:de:0030-drops-90475},
  doi =		{10.4230/LIPIcs.ICALP.2018.43},
  annote =	{Keywords: Graph Algorithm, Bottleneck Path, Combinatorial Optimization}
}
Document
Improved Time Bounds for All Pairs Non-decreasing Paths in General Digraphs

Authors: Ran Duan, Yong Gu, and Le Zhang


Abstract
We present improved algorithms for solving the All Pairs Non-decreasing Paths (APNP) problem on weighted digraphs. Currently, the best upper bound on APNP is O~(n^{(9+omega)/4})=O(n^{2.844}), obtained by Vassilevska Williams [TALG 2010 and SODA'08], where omega<2.373 is the usual exponent of matrix multiplication. Our first algorithm improves the time bound to O~(n^{2+omega/3})=O(n^{2.791}). The algorithm determines, for every pair of vertices s, t, the minimum last edge weight on a non-decreasing path from s to t, where a non-decreasing path is a path on which the edge weights form a non-decreasing sequence. The proposed algorithm exploits combinatorial properties of non-decreasing paths. We also present a slightly improved algorithm with running time O(n^{2.78}).

Cite as

Ran Duan, Yong Gu, and Le Zhang. Improved Time Bounds for All Pairs Non-decreasing Paths in General Digraphs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 44:1-44:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{duan_et_al:LIPIcs.ICALP.2018.44,
  author =	{Duan, Ran and Gu, Yong and Zhang, Le},
  title =	{{Improved Time Bounds for All Pairs Non-decreasing Paths in General Digraphs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{44:1--44:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.44},
  URN =		{urn:nbn:de:0030-drops-90487},
  doi =		{10.4230/LIPIcs.ICALP.2018.44},
  annote =	{Keywords: Graph algorithms, Matrix multiplication, Non-decreasing paths}
}
Document
Edit Distance between Unrooted Trees in Cubic Time

Authors: Bartlomiej Dudek and Pawel Gawrychowski


Abstract
Edit distance between trees is a natural generalization of the classical edit distance between strings, in which the allowed elementary operations are contraction, uncontraction and relabeling of an edge. Demaine et al. [ACM Trans. on Algorithms, 6(1), 2009] showed how to compute the edit distance between rooted trees on n nodes in O(n^3) time. However, generalizing their method to unrooted trees seems quite problematic, and the most efficient known solution remains the earlier O(n^3 log n) time algorithm by Klein [ESA 1998]. Given the lack of progress on improving this complexity, it might appear that unrooted trees are simply more difficult than rooted trees. We show that this is, in fact, not the case, and edit distance between unrooted trees on n nodes can be computed in O(n^3) time. A significantly faster solution is unlikely to exist, as Bringmann et al. [SODA 2018] proved that the complexity of computing the edit distance between rooted trees cannot be decreased to O(n^{3-epsilon}) unless some popular conjecture fails, and the lower bound easily extends to unrooted trees. We also show that for two unrooted trees of size m and n, where m <= n, our algorithm can be modified to run in O(nm^2(1+log(n/m))) time. This, again, matches the complexity achieved by Demaine et al. for rooted trees, who also showed that this is optimal if we restrict ourselves to the so-called decomposition algorithms.

Cite as

Bartlomiej Dudek and Pawel Gawrychowski. Edit Distance between Unrooted Trees in Cubic Time. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 45:1-45:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{dudek_et_al:LIPIcs.ICALP.2018.45,
  author =	{Dudek, Bartlomiej and Gawrychowski, Pawel},
  title =	{{Edit Distance between Unrooted Trees in Cubic Time}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{45:1--45:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.45},
  URN =		{urn:nbn:de:0030-drops-90492},
  doi =		{10.4230/LIPIcs.ICALP.2018.45},
  annote =	{Keywords: tree edit distance, dynamic programming, heavy light decomposition}
}
Document
A Note on Two-Colorability of Nonuniform Hypergraphs

Authors: Lech Duraj, Grzegorz Gutowski, and Jakub Kozik


Abstract
For a hypergraph H, let q(H) denote the expected number of monochromatic edges when the color of each vertex in H is sampled uniformly at random from a set of size 2. Let s_{min}(H) denote the minimum size of an edge in H. Erdős asked in 1963 whether there exists an unbounded function g(k) such that any hypergraph H with s_{min}(H) >= k and q(H) <= g(k) is two-colorable. Beck in 1978 answered this question in the affirmative for a function g(k) = Theta(log^* k). We improve this result by showing that, for an absolute constant delta>0, a version of the random greedy coloring procedure is likely to find a proper two-coloring for any hypergraph H with s_{min}(H) >= k and q(H) <= delta * log k.
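
The abstract does not spell out the coloring procedure, so the following is only a generic random greedy recoloring sketch for intuition: start from a uniformly random coloring and locally flip colors that reduce the number of monochromatic edges. The precise procedure analyzed in the paper may differ; all names are illustrative.

import random

def random_greedy_two_coloring(vertices, edges, seed=0):
    # vertices: iterable of vertex ids; edges: list of vertex collections.
    # Returns (coloring, True/False whether the coloring is proper).
    rng = random.Random(seed)
    color = {v: rng.randint(0, 1) for v in vertices}

    def mono_count():
        return sum(1 for e in edges if len({color[v] for v in e}) == 1)

    order = list(vertices)
    rng.shuffle(order)
    for v in order:
        before = mono_count()
        color[v] ^= 1                # try flipping v's color
        if mono_count() >= before:
            color[v] ^= 1            # revert: the flip did not help
    return color, mono_count() == 0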

Cite as

Lech Duraj, Grzegorz Gutowski, and Jakub Kozik. A Note on Two-Colorability of Nonuniform Hypergraphs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 46:1-46:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{duraj_et_al:LIPIcs.ICALP.2018.46,
  author =	{Duraj, Lech and Gutowski, Grzegorz and Kozik, Jakub},
  title =	{{A Note on Two-Colorability of Nonuniform Hypergraphs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{46:1--46:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.46},
  URN =		{urn:nbn:de:0030-drops-90505},
  doi =		{10.4230/LIPIcs.ICALP.2018.46},
  annote =	{Keywords: Property B, Nonuniform Hypergraphs, Hypergraph Coloring, Random Greedy Coloring}
}
Document
Additive Non-Approximability of Chromatic Number in Proper Minor-Closed Classes

Authors: Zdenek Dvorák and Ken-ichi Kawarabayashi


Abstract
Robin Thomas asked whether for every proper minor-closed class G, there exists a polynomial-time algorithm approximating the chromatic number of graphs from G up to a constant additive error independent of the class G. We show this is not the case: unless P=NP, for every integer k >= 1, there is no polynomial-time algorithm to color a K_{4k+1}-minor-free graph G using at most chi(G)+k-1 colors. More generally, for every k >= 1 and 1 <= beta <= 4/3, there is no polynomial-time algorithm to color a K_{4k+1}-minor-free graph G using fewer than beta chi(G)+(4-3 beta)k colors. As far as we know, this is the first non-trivial non-approximability result regarding the chromatic number in proper minor-closed classes. We also give a somewhat weaker non-approximability bound for K_{4k+1}-minor-free graphs with no cliques of size 4. On the positive side, we present an additive approximation algorithm whose error depends on the apex number of the forbidden minor, and an algorithm with additive error 6 under the additional assumption that the graph has no 4-cycles.

Cite as

Zdenek Dvorák and Ken-ichi Kawarabayashi. Additive Non-Approximability of Chromatic Number in Proper Minor-Closed Classes. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 47:1-47:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{dvorak_et_al:LIPIcs.ICALP.2018.47,
  author =	{Dvor\'{a}k, Zdenek and Kawarabayashi, Ken-ichi},
  title =	{{Additive Non-Approximability of Chromatic Number in Proper Minor-Closed Classes}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{47:1--47:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.47},
  URN =		{urn:nbn:de:0030-drops-90510},
  doi =		{10.4230/LIPIcs.ICALP.2018.47},
  annote =	{Keywords: non-approximability, chromatic number, minor-closed classes}
}
Document
How to Navigate Through Obstacles?

Authors: Eduard Eiben and Iyad Kanj


Abstract
Given a set of obstacles and two points in the plane, is there a path between the two points that does not cross more than k different obstacles? This is a fundamental problem that has undergone a tremendous amount of work by researchers in various areas, including computational geometry, graph theory, wireless computing, and motion planning. It is known to be NP-hard, even when the obstacles are very simple geometric shapes (e.g., unit-length line segments). The problem can be formulated and generalized into the following graph problem: Given a planar graph G whose vertices are colored by color sets, two designated vertices s, t in V(G), and k in N, is there an s-t path in G that uses at most k colors? If each obstacle is connected, the resulting graph satisfies the color-connectivity property, namely that each color induces a connected subgraph. We study the complexity and design algorithms for the above graph problem with an eye on its geometric applications. We prove a set of hardness results, among which a result showing that the color-connectivity property is crucial for any hope for fixed-parameter tractable (FPT) algorithms, as without it, the problem is W[SAT]-hard parameterized by k. Previous results only implied that the problem is W[2]-hard. A corollary of this result is that, unless W[2] = FPT, the problem cannot be approximated in FPT time to within a factor that is a function of k. By describing a generic plane embedding of the graph instances, we show that our hardness results translate to the geometric instances of the problem. We then focus on graphs satisfying the color-connectivity property. By exploiting the planarity of the graph and the connectivity of the colors, we develop topological results that allow us to prove that, for any vertex v, there exists a set of paths whose cardinality is upper bounded by a function of k, that "represents" the valid s-t paths containing subsets of colors from v. We employ these structural results to design an FPT algorithm for the problem parameterized by both k and the treewidth of the graph, and extend this result further to obtain an FPT algorithm for the parameterization by both k and the length of the path. The latter result generalizes and explains previous FPT results for various obstacle shapes, such as unit disks and fat regions.
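
To illustrate the graph formulation of the obstacle problem (an s-t path using at most k distinct colors), here is a brute-force baseline that searches over states (vertex, set of colors used). It is exponential in the number of colors and only meant to pin down the problem; it is not one of the FPT algorithms of the paper, and the representation is an assumption.

from collections import deque

def colored_st_path(adj, color_sets, s, t, k):
    # adj: dict vertex -> neighbours; color_sets: dict vertex -> set of colors
    # covering that vertex. Returns True iff some s-t path touches at most k
    # distinct colors in total.
    start = frozenset(color_sets[s])
    if len(start) > k:
        return False
    seen = {(s, start)}
    q = deque([(s, start)])
    while q:
        v, used = q.popleft()
        if v == t:
            return True
        for w in adj[v]:
            nxt = used | frozenset(color_sets[w])
            state = (w, nxt)
            if len(nxt) <= k and state not in seen:
                seen.add(state)
                q.append(state)
    return False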

Cite as

Eduard Eiben and Iyad Kanj. How to Navigate Through Obstacles?. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 48:1-48:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{eiben_et_al:LIPIcs.ICALP.2018.48,
  author =	{Eiben, Eduard and Kanj, Iyad},
  title =	{{How to Navigate Through Obstacles?}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{48:1--48:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.48},
  URN =		{urn:nbn:de:0030-drops-90528},
  doi =		{10.4230/LIPIcs.ICALP.2018.48},
  annote =	{Keywords: parameterized complexity and algorithms, motion planning, barrier coverage, barrier resilience, colored path, minimum constraint removal, planar graphs}
}
Document
Faster Algorithms for Integer Programs with Block Structure

Authors: Friedrich Eisenbrand, Christoph Hunkenschröder, and Kim-Manuel Klein


Abstract
We consider integer programming problems max {c^Tx : A x = b, l <= x <= u, x in Z^{nt}} where A has a (recursive) block-structure generalizing n-fold integer programs which recently received considerable attention in the literature. An n-fold IP is an integer program where A consists of n repetitions of submatrices A in Z^{r × t} on the top horizontal part and n repetitions of a matrix B in Z^{s × t} on the diagonal below the top part. Instead of allowing only two types of block matrices, one for the horizontal line and one for the diagonal, we generalize the n-fold setting to allow for arbitrary matrices in every block. We show that such an integer program can be solved in time n^2t^2 phi x (r s delta)^{O(rs^2+ sr^2)} (ignoring logarithmic factors). Here delta is an upper bound on the largest absolute value of an entry of A and phi is the largest binary encoding length of a coefficient of c. This improves upon the previously best algorithm of Hemmecke, Onn and Romanchuk that runs in time n^3t^3 phi x delta^{O(st(r+t))}. In particular, our algorithm is not exponential in the number t of columns of A and B. Our algorithm is based on a new upper bound on the l_1-norm of an element of the Graver basis of an integer matrix and on a proximity bound between the LP and IP optimal solutions tailored for IPs with block structure. These new bounds rely on the Steinitz Lemma. Furthermore, we extend our techniques to the recently introduced tree-fold IPs, where we again present a more efficient algorithm in a generalized setting.
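
The n-fold block structure described above is easy to visualize by assembling the constraint matrix explicitly. The numpy sketch below builds the matrix from the two block types; the concrete blocks in the example are arbitrary illustrative data, not taken from the paper.

import numpy as np

def nfold_matrix(A, B, n):
    # n-fold IP constraint matrix: n copies of A (r x t) side by side on top,
    # and n copies of B (s x t) on a block diagonal below.
    # Result has r + n*s rows and n*t columns.
    top = np.hstack([A] * n)
    bottom = np.kron(np.eye(n, dtype=int), B)
    return np.vstack([top, bottom])

# tiny example with r = 1, s = 2, t = 3, n = 2
A = np.array([[1, 2, 3]])
B = np.array([[1, 0, 1],
              [0, 1, 1]])
print(nfold_matrix(A, B, 2))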

Cite as

Friedrich Eisenbrand, Christoph Hunkenschröder, and Kim-Manuel Klein. Faster Algorithms for Integer Programs with Block Structure. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 49:1-49:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{eisenbrand_et_al:LIPIcs.ICALP.2018.49,
  author =	{Eisenbrand, Friedrich and Hunkenschr\"{o}der, Christoph and Klein, Kim-Manuel},
  title =	{{Faster Algorithms for Integer Programs with Block Structure}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{49:1--49:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.49},
  URN =		{urn:nbn:de:0030-drops-90537},
  doi =		{10.4230/LIPIcs.ICALP.2018.49},
  annote =	{Keywords: n-fold, Tree-fold, Integer Programming}
}
Document
On the Probe Complexity of Local Computation Algorithms

Authors: Uriel Feige, Boaz Patt-Shamir, and Shai Vardi


Abstract
In the Local Computation Algorithms (LCA) model, the algorithm is asked to compute a part of the output by reading as little as possible from the input. For example, an LCA for coloring a graph is given a vertex name (as a "query"), and it should output the color assigned to that vertex after inquiring about some part of the graph topology using "probes"; all outputs must be consistent with the same coloring. LCAs are useful when the input is huge, and the output as a whole is not needed simultaneously. Most previous work on LCAs was limited to bounded-degree graphs, which seems inevitable because probes are of the form "what vertex is at the other end of edge i of vertex v?". In this work we study LCAs for unbounded-degree graphs. In particular, such LCAs are expected to probe the graph a number of times that is significantly smaller than the maximum, average, or even minimum degree. We show that there are problems that have very efficient LCAs on any graph - specifically, we show that there is an LCA for the weak coloring problem (where a coloring is legal if every vertex has a neighbor with a different color) that uses log^* n+O(1) probes to reply to any query. As another way of dealing with large degrees, we propose a more powerful type of probe which we call a strong probe: given a vertex name, it returns a list of its neighbors. Lower bounds for strong probes are stronger than ones in the edge probe model (which we call weak probes). Our main result in this model is that roughly Omega(sqrt{n}) strong probes are required to compute a maximal matching. Our findings include interesting separations between closely related problems. For weak probes, we show that while weak 3-coloring can be done with probe complexity log^* n+O(1), weak 2-coloring has probe complexity Omega(log n/log log n). For strong probes, our negative result for maximal matching is complemented by an LCA for (1-epsilon)-approximate maximum matching on regular graphs that uses O(1) strong probes, for any constant epsilon>0.

Cite as

Uriel Feige, Boaz Patt-Shamir, and Shai Vardi. On the Probe Complexity of Local Computation Algorithms. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 50:1-50:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{feige_et_al:LIPIcs.ICALP.2018.50,
  author =	{Feige, Uriel and Patt-Shamir, Boaz and Vardi, Shai},
  title =	{{On the Probe Complexity of Local Computation Algorithms}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{50:1--50:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.50},
  URN =		{urn:nbn:de:0030-drops-90543},
  doi =		{10.4230/LIPIcs.ICALP.2018.50},
  annote =	{Keywords: Local computation algorithms, sublinear algorithms}
}
Document
Fully-Dynamic Bin Packing with Little Repacking

Authors: Björn Feldkord, Matthias Feldotto, Anupam Gupta, Guru Guruganesh, Amit Kumar, Sören Riechers, and David Wajc


Abstract
We study the classic bin packing problem in a fully-dynamic setting, where new items can arrive and old items may depart. We want algorithms with low asymptotic competitive ratio while repacking items sparingly between updates. Formally, each item i has a movement cost c_i >= 0, and we want to use alpha * OPT bins and incur a movement cost gamma * c_i, either in the worst case, or in an amortized sense, for alpha, gamma as small as possible. We call gamma the recourse of the algorithm. This is motivated by cloud storage applications, where fully-dynamic bin packing models the problem of data backup to minimize the number of disks used, as well as communication incurred in moving file backups between disks. Since the set of files changes over time, we could recompute a solution periodically from scratch, but this would give a high number of disk rewrites, incurring a high energy cost and possible wear and tear of the disks. In this work, we present optimal tradeoffs between number of bins used and number of items repacked, as well as natural extensions of the latter measure.
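
As a point of reference for the tradeoff between bins used and repacking, the sketch below implements a zero-recourse baseline: arrivals are placed by First Fit and never moved again, departures simply free space. It is not one of the paper's algorithms; the event format is an assumption, and the movement cost c_i is taken to be the item size purely for concreteness.

def first_fit_dynamic(events, capacity=1.0):
    # events: list of ('arrive', item_id, size) or ('depart', item_id).
    # Returns (number of non-empty bins at the end, total movement cost),
    # where each item is moved exactly once, at arrival (zero recourse).
    bins = []          # each bin: dict item_id -> size
    location = {}      # item_id -> index of its bin
    movement_cost = 0.0
    for ev in events:
        if ev[0] == 'arrive':
            _, item, size = ev
            for i, b in enumerate(bins):
                if sum(b.values()) + size <= capacity:
                    b[item] = size
                    location[item] = i
                    break
            else:
                bins.append({item: size})
                location[item] = len(bins) - 1
            movement_cost += size   # c_i taken as the size, for illustration
        else:
            _, item = ev
            del bins[location.pop(item)][item]
    used = sum(1 for b in bins if b)
    return used, movement_cost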

Cite as

Björn Feldkord, Matthias Feldotto, Anupam Gupta, Guru Guruganesh, Amit Kumar, Sören Riechers, and David Wajc. Fully-Dynamic Bin Packing with Little Repacking. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 51:1-51:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{feldkord_et_al:LIPIcs.ICALP.2018.51,
  author =	{Feldkord, Bj\"{o}rn and Feldotto, Matthias and Gupta, Anupam and Guruganesh, Guru and Kumar, Amit and Riechers, S\"{o}ren and Wajc, David},
  title =	{{Fully-Dynamic Bin Packing with Little Repacking}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{51:1--51:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.51},
  URN =		{urn:nbn:de:0030-drops-90556},
  doi =		{10.4230/LIPIcs.ICALP.2018.51},
  annote =	{Keywords: Bin Packing, Fully Dynamic, Recourse, Tradeoffs}
}
Document
A Sublinear Tester for Outerplanarity (and Other Forbidden Minors) With One-Sided Error

Authors: Hendrik Fichtenberger, Reut Levi, Yadu Vasudev, and Maximilian Wötzel


Abstract
We consider one-sided error property testing of F-minor freeness in bounded-degree graphs for any finite family of graphs F that contains a minor of K_{2,k}, the k-circus graph, or the (k x 2)-grid for any k in N. This includes, for instance, testing whether a graph is outerplanar or a cactus graph. The query complexity of our algorithm in terms of the number of vertices in the graph, n, is O~(n^{2/3} / epsilon^5). Czumaj et al. (2014) showed that cycle-freeness and C_k-minor freeness can be tested with query complexity O~(sqrt{n}) by using random walks, and that testing H-minor freeness for any H that contains a cycle requires Omega(sqrt{n}) queries. In contrast to these results, we analyze the structure of the graph and show that either we can find a subgraph of sublinear size that includes the forbidden minor H, or we can find a pair of disjoint subsets of vertices whose edge-cut is large, which induces an H-minor.

Cite as

Hendrik Fichtenberger, Reut Levi, Yadu Vasudev, and Maximilian Wötzel. A Sublinear Tester for Outerplanarity (and Other Forbidden Minors) With One-Sided Error. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 52:1-52:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{fichtenberger_et_al:LIPIcs.ICALP.2018.52,
  author =	{Fichtenberger, Hendrik and Levi, Reut and Vasudev, Yadu and W\"{o}tzel, Maximilian},
  title =	{{A Sublinear Tester for Outerplanarity (and Other Forbidden Minors) With One-Sided Error}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{52:1--52:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.52},
  URN =		{urn:nbn:de:0030-drops-90563},
  doi =		{10.4230/LIPIcs.ICALP.2018.52},
  annote =	{Keywords: graph property testing, minor-free graphs}
}
Document
Parameterized Low-Rank Binary Matrix Approximation

Authors: Fedor V. Fomin, Petr A. Golovach, and Fahad Panolan


Abstract
We provide a number of algorithmic results for the following family of problems: For a given binary m x n matrix A and a nonnegative integer k, decide whether there is a "simple" binary matrix B which differs from A in at most k entries. For an integer r, the "simplicity" of B is characterized as follows. - Binary r-Means: Matrix B has at most r different columns. This problem is known to be NP-complete already for r=2. We show that the problem is solvable in time 2^{O(k log k)}*(nm)^O(1) and thus is fixed-parameter tractable parameterized by k. We complement this result by showing that, when parameterized by r and k, the problem admits an algorithm of running time 2^{O(r^{3/2}* sqrt{k log k})}(nm)^O(1), which is subexponential in k for r in o((k/log k)^{1/3}). - Low GF(2)-Rank Approximation: Matrix B is of GF(2)-rank at most r. This problem is known to be NP-complete already for r=1. It is also known to be W[1]-hard when parameterized by k. Interestingly, when parameterized by r and k, the problem is not only fixed-parameter tractable, but it is solvable in time 2^{O(r^{3/2}* sqrt{k log k})}(nm)^O(1), which is subexponential in k for r in o((k/log k)^{1/3}). - Low Boolean-Rank Approximation: Matrix B is of Boolean rank at most r. The problem is known to be NP-complete for k=0 as well as for r=1. We show that it is solvable in time 2^{O(r2^r * sqrt{k log k})}(nm)^O(1), which is again subexponential in k.
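
For the Low GF(2)-Rank Approximation variant, the feasibility of a candidate matrix B is easy to check directly from the definition: B must have GF(2)-rank at most r and differ from A in at most k entries. The sketch below is such a checker (standard Gaussian elimination over GF(2)); it is not part of the paper's algorithms, and the function names are illustrative.

import numpy as np

def gf2_rank(M):
    # Rank of a binary matrix over GF(2), by Gaussian elimination.
    M = (np.array(M, dtype=np.uint8) % 2).copy()
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # swap pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]               # eliminate column c elsewhere
        rank += 1
    return rank

def is_feasible_gf2_approx(A, B, r, k):
    # Candidate check: GF(2)-rank(B) <= r and |{(i,j): A[i,j] != B[i,j]}| <= k.
    A, B = np.array(A) % 2, np.array(B) % 2
    return gf2_rank(B) <= r and int((A != B).sum()) <= k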

Cite as

Fedor V. Fomin, Petr A. Golovach, and Fahad Panolan. Parameterized Low-Rank Binary Matrix Approximation. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 53:1-53:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.ICALP.2018.53,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Panolan, Fahad},
  title =	{{Parameterized Low-Rank Binary Matrix Approximation}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{53:1--53:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.53},
  URN =		{urn:nbn:de:0030-drops-90571},
  doi =		{10.4230/LIPIcs.ICALP.2018.53},
  annote =	{Keywords: Binary matrices, clustering, low-rank approximation, fixed-parameter tractability}
}
Document
Towards Blackbox Identity Testing of Log-Variate Circuits

Authors: Michael A. Forbes, Sumanta Ghosh, and Nitin Saxena


Abstract
Derandomization of blackbox identity testing reduces to extremely special circuit models. After a line of work, it is known that focusing on circuits with constant depth and constantly many variables is enough (Agrawal, Ghosh, Saxena, STOC'18) to get to general hitting-sets and circuit lower bounds. This inspires us to study circuits with few variables, e.g., logarithmic in the size s. We give the first poly(s)-time blackbox identity test for n=O(log s) variate size-s circuits that have poly(s)-dimensional partial derivative space; e.g., depth-3 diagonal circuits (or Sigma wedge Sigma^n). The former model is well-studied (Nisan, Wigderson, FOCS'95) but no poly(s2^n)-time identity test was known before us. We introduce the concept of cone-closed basis isolation and prove its usefulness in studying log-variate circuits. It subsumes the previous notions of rank-concentration studied extensively in the context of ROABP models.

Cite as

Michael A. Forbes, Sumanta Ghosh, and Nitin Saxena. Towards Blackbox Identity Testing of Log-Variate Circuits. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 54:1-54:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{forbes_et_al:LIPIcs.ICALP.2018.54,
  author =	{Forbes, Michael A. and Ghosh, Sumanta and Saxena, Nitin},
  title =	{{Towards Blackbox Identity Testing of Log-Variate Circuits}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{54:1--54:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.54},
  URN =		{urn:nbn:de:0030-drops-90582},
  doi =		{10.4230/LIPIcs.ICALP.2018.54},
  annote =	{Keywords: hitting-set, depth-3, diagonal, derandomization, polynomial identity testing, log-variate, concentration, cone closed, basis isolation}
}
Document
Finding Cliques in Social Networks: A New Distribution-Free Model

Authors: Jacob Fox, Tim Roughgarden, C. Seshadhri, Fan Wei, and Nicole Wein


Abstract
We propose a new distribution-free model of social networks. Our definitions are motivated by one of the most universal signatures of social networks, triadic closure - the property that pairs of vertices with common neighbors tend to be adjacent. Our most basic definition is that of a c-closed graph, where for every pair of vertices u,v with at least c common neighbors, u and v are adjacent. We study the classic problem of enumerating all maximal cliques, an important task in social network analysis. We prove that this problem is fixed-parameter tractable with respect to c on c-closed graphs. Our results carry over to weakly c-closed graphs, which only require a vertex deletion ordering that avoids pairs of non-adjacent vertices with c common neighbors. Numerical experiments show that well-studied social networks tend to be weakly c-closed for modest values of c.
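
The c-closed condition is easy to check directly from the definition in the abstract: every pair of vertices with at least c common neighbors must be adjacent. A minimal sketch (adjacency-dict representation and names are assumptions):

from itertools import combinations

def is_c_closed(adj, c):
    # adj: dict vertex -> set of neighbours (undirected graph).
    # c-closed: every non-adjacent pair has fewer than c common neighbours.
    for u, v in combinations(adj, 2):
        if v not in adj[u] and len(adj[u] & adj[v]) >= c:
            return False
    return True

def closure_parameter(adj):
    # Smallest c for which the graph is c-closed.
    worst = 0
    for u, v in combinations(adj, 2):
        if v not in adj[u]:
            worst = max(worst, len(adj[u] & adj[v]))
    return worst + 1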

Cite as

Jacob Fox, Tim Roughgarden, C. Seshadhri, Fan Wei, and Nicole Wein. Finding Cliques in Social Networks: A New Distribution-Free Model. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 55:1-55:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{fox_et_al:LIPIcs.ICALP.2018.55,
  author =	{Fox, Jacob and Roughgarden, Tim and Seshadhri, C. and Wei, Fan and Wein, Nicole},
  title =	{{Finding Cliques in Social Networks: A New Distribution-Free Model}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{55:1--55:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.55},
  URN =		{urn:nbn:de:0030-drops-90596},
  doi =		{10.4230/LIPIcs.ICALP.2018.55},
  annote =	{Keywords: Graph algorithms, social networks, fixed-parameter tractability}
}
Document
A PTAS for a Class of Stochastic Dynamic Programs

Authors: Hao Fu, Jian Li, and Pan Xu


Abstract
We develop a framework for obtaining polynomial time approximation schemes (PTAS) for a class of stochastic dynamic programs. Using our framework, we obtain the first PTAS for the following stochastic combinatorial optimization problems: 1) Probemax [Munagala, 2016]: We are given a set of n items, each item i in [n] has a value X_i which is an independent random variable with a known (discrete) distribution pi_i. We can probe a subset P subseteq [n] of items sequentially. Each time after probing an item i, we observe its value realization, which follows the distribution pi_i. We can adaptively probe at most m items and each item can be probed at most once. The reward is the maximum among the m realized values. Our goal is to design an adaptive probing policy such that the expected value of the reward is maximized. To the best of our knowledge, the best known approximation ratio is 1-1/e, due to Asadpour et al. [Asadpour and Nazerzadeh, 2015]. We also obtain a PTAS for some generalizations and variants of the problem. 2) Committed Pandora's Box [Weitzman, 1979; Singla, 2018]: We are given a set of n boxes. For each box i in [n], the cost c_i is deterministic and the value X_i is an independent random variable with a known (discrete) distribution pi_i. Opening a box i incurs a cost of c_i. We can adaptively choose to open the boxes (and observe their values) or stop. We want to maximize the expectation of the realized value of the last opened box minus the total opening cost. 3) Stochastic Target [İlhan et al., 2011]: Given a predetermined target T and n items, we can adaptively insert the items into a knapsack and insert at most m items. Each item i has a value X_i which is an independent random variable with a known (discrete) distribution. Our goal is to design an adaptive policy such that the probability of the total values of all items inserted being larger than or equal to T is maximized. We provide the first bi-criteria PTAS for the problem. 4) Stochastic Blackjack Knapsack [Levin and Vainer, 2014]: We are given a knapsack of capacity C and probability distributions of n independent random variables X_i. Each item i in [n] has a size X_i and a profit p_i. We can adaptively insert the items into a knapsack, as long as the capacity constraint is not violated. We want to maximize the expected total profit of all inserted items. If the capacity constraint is violated, we lose all the profit. We provide the first bi-criteria PTAS for the problem.

Cite as

Hao Fu, Jian Li, and Pan Xu. A PTAS for a Class of Stochastic Dynamic Programs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 56:1-56:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{fu_et_al:LIPIcs.ICALP.2018.56,
  author =	{Fu, Hao and Li, Jian and Xu, Pan},
  title =	{{A PTAS for a Class of Stochastic Dynamic Programs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{56:1--56:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.56},
  URN =		{urn:nbn:de:0030-drops-90609},
  doi =		{10.4230/LIPIcs.ICALP.2018.56},
  annote =	{Keywords: stochastic optimization, dynamic program, markov decision process, block policy, approximation algorithm}
}
Document
Semi-Supervised Algorithms for Approximately Optimal and Accurate Clustering

Authors: Buddhima Gamlath, Sangxia Huang, and Ola Svensson


Abstract
We study k-means clustering in a semi-supervised setting. Given an oracle that returns whether two given points belong to the same cluster in a fixed optimal clustering, we investigate the following question: how many oracle queries are sufficient to efficiently recover a clustering that, with probability at least (1 - delta), simultaneously has a cost of at most (1 + epsilon) times the optimal cost and an accuracy of at least (1 - epsilon)? We show how to achieve such a clustering on n points with O((k^2 log n) * m(Q, epsilon^4, delta / (k log n))) oracle queries, when the k clusters can be learned with an epsilon' error and a failure probability delta' using m(Q, epsilon', delta') labeled samples in the supervised setting, where Q is the set of candidate cluster centers. We show that m(Q, epsilon', delta') is small both for k-means instances in Euclidean space and for those in finite metric spaces. We further show that, for the Euclidean k-means instances, we can avoid the dependency on n in the query complexity at the expense of an increased dependency on k: specifically, we give a slightly more involved algorithm that uses O(k^4/(epsilon^2 delta) + (k^9/epsilon^4) log(1/delta) + k * m(R^r, epsilon^4/k, delta)) oracle queries. We also show that the number of queries needed for (1 - epsilon)-accuracy in Euclidean k-means must linearly depend on the dimension of the underlying Euclidean space, and for finite metric space k-means, we show that it must at least be logarithmic in the number of candidate centers. This shows that our query complexities capture the right dependencies on the respective parameters.

Cite as

Buddhima Gamlath, Sangxia Huang, and Ola Svensson. Semi-Supervised Algorithms for Approximately Optimal and Accurate Clustering. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 57:1-57:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gamlath_et_al:LIPIcs.ICALP.2018.57,
  author =	{Gamlath, Buddhima and Huang, Sangxia and Svensson, Ola},
  title =	{{Semi-Supervised Algorithms for Approximately Optimal and Accurate Clustering}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{57:1--57:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.57},
  URN =		{urn:nbn:de:0030-drops-90612},
  doi =		{10.4230/LIPIcs.ICALP.2018.57},
  annote =	{Keywords: Clustering, Semi-supervised Learning, Approximation Algorithms, k-Means, k-Median}
}
Document
High Probability Frequency Moment Sketches

Authors: Sumit Ganguly and David P. Woodruff


Abstract
We consider the problem of sketching the p-th frequency moment of a vector, p>2, with multiplicative error at most 1 +/- epsilon and with high confidence 1-delta. Despite the long sequence of work on this problem, tight bounds on this quantity are only known for constant delta. While one can obtain an upper bound with error probability delta by repeating a sketching algorithm with constant error probability O(log(1/delta)) times in parallel, and taking the median of the outputs, we show this is a suboptimal algorithm! Namely, we show optimal upper and lower bounds of Theta(n^{1-2/p} log(1/delta) + n^{1-2/p} log^{2/p} (1/delta) log n) on the sketching dimension, for any constant approximation. Our result should be contrasted with results for estimating frequency moments for 1 <= p <= 2, for which we show the optimal algorithm for general delta is obtained by repeating the optimal algorithm for constant error probability O(log(1/delta)) times and taking the median output. We also obtain a matching lower bound for this problem, up to constant factors.
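
The black-box amplification that the abstract discusses (and shows to be suboptimal for p > 2) is the standard median trick. The generic sketch below boosts any constant-error estimator to confidence 1 - delta; the repetition constant is a safe illustrative choice, not a tight one from the paper.

import math
import statistics

def median_amplify(estimator, delta, fail_prob=1/3):
    # Run an estimator that is correct with probability >= 1 - fail_prob
    # independently O(log(1/delta)) times and return the median; by a
    # Chernoff/Hoeffding bound the median is correct with prob. >= 1 - delta.
    # (This is the black-box scheme the paper shows is suboptimal for F_p, p > 2.)
    reps = max(1, math.ceil(48 * math.log(1.0 / delta)))  # generous constant
    return statistics.median(estimator() for _ in range(reps))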

Cite as

Sumit Ganguly and David P. Woodruff. High Probability Frequency Moment Sketches. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 58:1-58:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{ganguly_et_al:LIPIcs.ICALP.2018.58,
  author =	{Ganguly, Sumit and Woodruff, David P.},
  title =	{{High Probability Frequency Moment Sketches}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{58:1--58:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.58},
  URN =		{urn:nbn:de:0030-drops-90623},
  doi =		{10.4230/LIPIcs.ICALP.2018.58},
  annote =	{Keywords: Data Streams, Frequency Moments, High Confidence}
}
Document
Quasi-PTAS for Scheduling with Precedences using LP Hierarchies

Authors: Shashwat Garg


Abstract
A central problem in scheduling is to schedule n unit size jobs with precedence constraints on m identical machines so as to minimize the makespan. For m=3, it is not even known if the problem is NP-hard and this is one of the last open problems from the book of Garey and Johnson. We show that for fixed m and epsilon, polylog(n) rounds of the Sherali-Adams hierarchy applied to a natural LP of the problem provides a (1+epsilon)-approximation algorithm running in quasi-polynomial time. This improves over the recent result of Levey and Rothvoss, who used r=(log n)^{O(log log n)} rounds of Sherali-Adams in order to get a (1+epsilon)-approximation algorithm with a running time of n^{O(r)}.

Cite as

Shashwat Garg. Quasi-PTAS for Scheduling with Precedences using LP Hierarchies. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 59:1-59:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{garg:LIPIcs.ICALP.2018.59,
  author =	{Garg, Shashwat},
  title =	{{Quasi-PTAS for Scheduling with Precedences using LP Hierarchies}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{59:1--59:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.59},
  URN =		{urn:nbn:de:0030-drops-90638},
  doi =		{10.4230/LIPIcs.ICALP.2018.59},
  annote =	{Keywords: Approximation algorithms, hierarchies, scheduling, rounding techniques}
}
Document
ARRIVAL: Next Stop in CLS

Authors: Bernd Gärtner, Thomas Dueholm Hansen, Pavel Hubácek, Karel Král, Hagar Mosaad, and Veronika Slívová


Abstract
We study the computational complexity of Arrival, a zero-player game on n-vertex switch graphs introduced by Dohrau, Gärtner, Kohler, Matousek, and Welzl. They showed that the problem of deciding termination of this game is contained in NP ∩ coNP. Karthik C. S. recently introduced a search variant of Arrival and showed that it is in the complexity class PLS. In this work, we significantly improve the known upper bounds for both the decision and the search variants of Arrival. First, we resolve a question suggested by Dohrau et al. and show that the decision variant of Arrival is in UP ∩ coUP. Second, we prove that the search variant of Arrival is contained in CLS. Third, we give a randomized O(1.4143^n)-time algorithm to solve both variants. Our main technical contributions are (a) an efficiently verifiable characterization of the unique witness for termination of the Arrival game, and (b) an efficient way of sampling from the state space of the game. We show that the problem of finding the unique witness is contained in CLS, whereas it was previously conjectured to be FPSPACE-complete. The efficient sampling procedure yields the first algorithm for the problem that has expected runtime O(c^n) with c<2.
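
A direct simulation of the switch-graph process makes the game concrete; this is only an illustration of the problem statement (the run may take exponentially many steps, which is why the complexity question is interesting), not any of the algorithms above, and the helper names are made up.

def arrival_run(even_succ, odd_succ, origin, destination, max_steps=10**6):
    """Simulate the ARRIVAL zero-player game.

    Each vertex v has two outgoing edges, even_succ[v] and odd_succ[v].
    The train starts at `origin`; on each visit to v it leaves along the
    currently active edge and the switch at v flips.  Returns True if the
    train reaches `destination` within max_steps, None if the step budget
    is exhausted (the run can be exponentially long)."""
    use_even = {v: True for v in even_succ}   # current switch positions
    v = origin
    for _ in range(max_steps):
        if v == destination:
            return True
        nxt = even_succ[v] if use_even[v] else odd_succ[v]
        use_even[v] = not use_even[v]
        v = nxt
    return None

# Tiny example: vertex 0 switches between 1 and 2, vertex 1 between 2 and 0.
print(arrival_run(even_succ={0: 1, 1: 2}, odd_succ={0: 2, 1: 0},
                  origin=0, destination=2))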

Cite as

Bernd Gärtner, Thomas Dueholm Hansen, Pavel Hubácek, Karel Král, Hagar Mosaad, and Veronika Slívová. ARRIVAL: Next Stop in CLS. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 60:1-60:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gartner_et_al:LIPIcs.ICALP.2018.60,
  author =	{G\"{a}rtner, Bernd and Hansen, Thomas Dueholm and Hub\'{a}cek, Pavel and Kr\'{a}l, Karel and Mosaad, Hagar and Sl{\'\i}vov\'{a}, Veronika},
  title =	{{ARRIVAL: Next Stop in CLS}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{60:1--60:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.60},
  URN =		{urn:nbn:de:0030-drops-90641},
  doi =		{10.4230/LIPIcs.ICALP.2018.60},
  annote =	{Keywords: CLS, switch graphs, zero-player game, UP n coUP}
}
Document
Improved Bounds for Shortest Paths in Dense Distance Graphs

Authors: Pawel Gawrychowski and Adam Karczmarz


Abstract
We study the problem of computing shortest paths in so-called dense distance graphs, a basic building block for designing efficient planar graph algorithms. Let G be a plane graph with a distinguished set ∂G of boundary vertices lying on a constant number of faces of G. A distance clique of G is a complete graph on ∂G encoding all-pairs distances between these vertices. A dense distance graph is a union of possibly many unrelated distance cliques. Fakcharoenphol and Rao [Fakcharoenphol and Rao, 2006] proposed an efficient implementation of Dijkstra's algorithm (later called FR-Dijkstra) computing single-source shortest paths in a dense distance graph. Their algorithm spends O(b log^2{n}) time per distance clique with b vertices, even though a clique has b^2 edges. Here, n is the total number of vertices of the dense distance graph. The invention of FR-Dijkstra was instrumental in obtaining results for planar graphs such as nearly-linear time algorithms for multiple-source-multiple-sink maximum flow and dynamic distance oracles with sublinear update and query bounds. At the heart of FR-Dijkstra lies a data structure updating distance labels and extracting minimum labeled vertices in O(log^2{n}) amortized time per vertex. We show an improved data structure with O((log^2{n})/(log^2 log n)) amortized bounds. This is the first improvement over the data structure of Fakcharoenphol and Rao in more than 15 years. It yields improved bounds for all problems on planar graphs for which computing shortest paths in dense distance graphs is currently a bottleneck.

Cite as

Pawel Gawrychowski and Adam Karczmarz. Improved Bounds for Shortest Paths in Dense Distance Graphs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 61:1-61:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gawrychowski_et_al:LIPIcs.ICALP.2018.61,
  author =	{Gawrychowski, Pawel and Karczmarz, Adam},
  title =	{{Improved Bounds for Shortest Paths in Dense Distance Graphs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{61:1--61:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.61},
  URN =		{urn:nbn:de:0030-drops-90654},
  doi =		{10.4230/LIPIcs.ICALP.2018.61},
  annote =	{Keywords: shortest paths, dense distance graph, planar graph, Monge matrix}
}
Document
Towards Unified Approximate Pattern Matching for Hamming and L_1 Distance

Authors: Pawel Gawrychowski and Przemyslaw Uznanski


Abstract
Computing the distance between a given pattern of length m and a text of length n is defined as calculating, for every m-substring of the text, the distance between the pattern and the substring. This naturally generalizes the standard notion of exact pattern matching to incorporate a dissimilarity score. For both Hamming and L_1 distance, only relatively slow O~(n sqrt{m}) solutions are known for this generalization. This can be overcome by relaxing the question. For Hamming distance, the usual relaxation is to consider the k-bounded variant, where distances exceeding k are reported as infinity, while for L_1 distance asking for a (1 +/- epsilon)-approximation seems more natural. For k-bounded Hamming distance, Amir et al. [J. Algorithms 2004] showed an O~(n sqrt{k}) time algorithm, and Clifford et al. [SODA 2016] designed an O~((m+k^2) n/m) time solution. We provide a smooth time trade-off between these bounds by exhibiting an O~((m+k sqrt{m}) n/m) time algorithm. We complement the trade-off with a matching conditional lower bound, showing that a significantly faster combinatorial algorithm is not possible, unless the combinatorial matrix multiplication conjecture fails. We also exhibit a series of reductions that together allow us to achieve essentially the same complexity for k-bounded L_1 distance. Finally, for (1 +/- epsilon)-approximate L_1 distance, the running time of the best previously known algorithm of Lipsky and Porat [Algorithmica 2011] was O(epsilon^{-2} n). We improve this to O~(epsilon^{-1} n), thus essentially matching the complexity of the best known algorithm for (1 +/- epsilon)-approximate Hamming distance.
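
For reference, the unrelaxed problem has an obvious O(nm)-time baseline, which the algorithms discussed above improve on; a minimal sketch with illustrative names:

def distance_profile(pattern, text, dist):
    """Naive O(n*m) computation of the distance between `pattern` and every
    m-substring of `text`, for an arbitrary per-position cost `dist`.  The
    algorithms discussed above beat this baseline for Hamming and L1 distance."""
    m, n = len(pattern), len(text)
    return [sum(dist(pattern[j], text[i + j]) for j in range(m))
            for i in range(n - m + 1)]

hamming = lambda a, b: int(a != b)
l1 = lambda a, b: abs(a - b)

print(distance_profile("aba", "ababbba", hamming))  # [0, 3, 1, 2, 1]
print(distance_profile([1, 3], [1, 2, 3, 5], l1))   # [1, 1, 4]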

Cite as

Pawel Gawrychowski and Przemyslaw Uznanski. Towards Unified Approximate Pattern Matching for Hamming and L_1 Distance. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 62:1-62:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gawrychowski_et_al:LIPIcs.ICALP.2018.62,
  author =	{Gawrychowski, Pawel and Uznanski, Przemyslaw},
  title =	{{Towards Unified Approximate Pattern Matching for Hamming and L\underline1 Distance}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{62:1--62:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.62},
  URN =		{urn:nbn:de:0030-drops-90669},
  doi =		{10.4230/LIPIcs.ICALP.2018.62},
  annote =	{Keywords: approximate pattern matching, conditional lower bounds, L\underline1 distance, Hamming distance}
}
Document
A Faster Construction of Greedy Consensus Trees

Authors: Pawel Gawrychowski, Gad M. Landau, Wing-Kin Sung, and Oren Weimann


Abstract
A consensus tree is a phylogenetic tree that captures the similarity between a set of conflicting phylogenetic trees. The problem of computing a consensus tree is a major step in phylogenetic tree reconstruction. It is also central for predicting a species tree from a set of gene trees, as indicated recently in [Nature 2013]. This paper focuses on two of the most well-known and widely used consensus tree methods: the greedy consensus tree and the frequency difference consensus tree. Given k conflicting trees each with n leaves, the previous fastest algorithms for these problems were O(k n^2) for the greedy consensus tree [J. ACM 2016] and O~(min{k n^2, k^2n}) for the frequency difference consensus tree [ACM TCBB 2016]. We improve these running times to O~(k n^{1.5}) and O~(k n) respectively.
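
The greedy consensus tree is commonly built by scanning clusters (sets of leaves below an edge) in decreasing order of how many input trees contain them and keeping each cluster that is compatible with those already kept. Below is a minimal sketch of that baseline, operating directly on leaf-set clusters rather than building the tree, with illustrative names and a toy input; it is not the paper's faster construction.

from collections import Counter

def greedy_consensus_clusters(trees_as_clusters):
    """Greedy consensus on sets of clusters (one set of frozenset leaf-clusters
    per input tree).  Clusters are considered in decreasing order of frequency;
    a cluster is kept if it is pairwise compatible (nested or disjoint) with
    everything kept so far.  Ties are broken arbitrarily."""
    counts = Counter(c for clusters in trees_as_clusters for c in clusters)
    accepted = []
    def compatible(a, b):
        return a <= b or b <= a or not (a & b)
    for cluster, _ in counts.most_common():
        if all(compatible(cluster, c) for c in accepted):
            accepted.append(cluster)
    return accepted

t1 = {frozenset('ab'), frozenset('abc')}
t2 = {frozenset('ab'), frozenset('abd')}
t3 = {frozenset('cd'), frozenset('abc')}
print(greedy_consensus_clusters([t1, t2, t3]))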

Cite as

Pawel Gawrychowski, Gad M. Landau, Wing-Kin Sung, and Oren Weimann. A Faster Construction of Greedy Consensus Trees. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 63:1-63:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gawrychowski_et_al:LIPIcs.ICALP.2018.63,
  author =	{Gawrychowski, Pawel and Landau, Gad M. and Sung, Wing-Kin and Weimann, Oren},
  title =	{{A Faster Construction of Greedy Consensus Trees}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{63:1--63:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.63},
  URN =		{urn:nbn:de:0030-drops-90676},
  doi =		{10.4230/LIPIcs.ICALP.2018.63},
  annote =	{Keywords: phylogenetic trees, greedy consensus trees, dynamic trees}
}
Document
A Faster FPTAS for #Knapsack

Authors: Pawel Gawrychowski, Liran Markin, and Oren Weimann


Abstract
Given a set W = {w_1,..., w_n} of non-negative integer weights and an integer C, the #Knapsack problem asks to count the number of distinct subsets of W whose total weight is at most C. In the more general integer version of the problem, the subsets are multisets. That is, we are also given a set {u_1,..., u_n} and we are allowed to take up to u_i items of weight w_i. We present a deterministic FPTAS for #Knapsack running in O(n^{2.5}epsilon^{-1.5}log(n epsilon^{-1})log (n epsilon)) time. The previous best deterministic algorithm [FOCS 2011] runs in O(n^3 epsilon^{-1} log(n epsilon^{-1})) time (see also [ESA 2014] for a logarithmic factor improvement). The previous best randomized algorithm [STOC 2003] runs in O(n^{2.5} sqrt{log (n epsilon^{-1})} + epsilon^{-2} n^2) time. Therefore, for the case of constant epsilon, we close the gap between the O~(n^{2.5}) randomized algorithm and the O~(n^3) deterministic algorithm. For the integer version with U = max_i {u_i}, we present a deterministic FPTAS running in O(n^{2.5}epsilon^{-1.5}log(n epsilon^{-1} log U)log (n epsilon) log^2 U) time. The previous best deterministic algorithm [TCS 2016] runs in O(n^3 epsilon^{-1}log(n epsilon^{-1} log U) log^2 U) time.
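
As a point of reference, exact #Knapsack has a standard pseudo-polynomial counting DP running in O(nC) time, which the FPTAS above replaces with a bound polynomial in n and 1/epsilon; a minimal sketch:

def count_knapsack_exact(weights, C):
    """Exact #Knapsack by dynamic programming: dp[c] is the number of subsets
    of the weights seen so far with total weight exactly c.  Runs in O(n*C)
    time, which is pseudo-polynomial in the input size."""
    dp = [0] * (C + 1)
    dp[0] = 1                      # the empty subset
    for w in weights:
        for c in range(C, w - 1, -1):
            dp[c] += dp[c - w]
    return sum(dp)                 # subsets of total weight at most C

print(count_knapsack_exact([1, 2, 3], 3))  # 5: {}, {1}, {2}, {3}, {1,2}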

Cite as

Pawel Gawrychowski, Liran Markin, and Oren Weimann. A Faster FPTAS for #Knapsack. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 64:1-64:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gawrychowski_et_al:LIPIcs.ICALP.2018.64,
  author =	{Gawrychowski, Pawel and Markin, Liran and Weimann, Oren},
  title =	{{A Faster FPTAS for #Knapsack}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{64:1--64:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.64},
  URN =		{urn:nbn:de:0030-drops-90687},
  doi =		{10.4230/LIPIcs.ICALP.2018.64},
  annote =	{Keywords: knapsack, approximate counting, K-approximating sets and functions}
}
Document
Towards Optimal Approximate Streaming Pattern Matching by Matching Multiple Patterns in Multiple Streams

Authors: Shay Golan, Tsvi Kopelowitz, and Ely Porat


Abstract
Recently, there has been a growing focus in solving approximate pattern matching problems in the streaming model. Of particular interest are the pattern matching with k-mismatches (KMM) problem and the pattern matching with w-wildcards (PMWC) problem. Motivated by reductions from these problems in the streaming model to the dictionary matching problem, this paper focuses on designing algorithms for the dictionary matching problem in the multi-stream model where there are several independent streams of data (as opposed to just one in the streaming model), and the memory complexity of an algorithm is expressed using two quantities: (1) a read-only shared memory storage area which is shared among all the streams, and (2) local stream memory that each stream stores separately. In the dictionary matching problem in the multi-stream model the goal is to preprocess a dictionary D={P_1,P_2,...,P_d} of d=|D| patterns (strings with maximum length m over alphabet Sigma) into a data structure stored in shared memory, so that given multiple independent streaming texts (where characters arrive one at a time) the algorithm reports occurrences of patterns from D in each one of the texts as soon as they appear. We design two efficient algorithms for the dictionary matching problem in the multi-stream model. The first algorithm works when all the patterns in D have the same length m and costs O(d log m) words in shared memory, O(log m log d) words in stream memory, and O(log m) time per character. The second algorithm works for general D, but the time cost per character becomes O(log m+log d log log d). We also demonstrate the usefulness of our first algorithm in solving both the KMM problem and PMWC problem in the streaming model. In particular, we obtain the first almost optimal (up to poly-log factors) algorithm for the PMWC problem in the streaming model. We also design a new algorithm for the KMM problem in the streaming model that, up to poly-log factors, has the same bounds as the most recent results that use different techniques. Moreover, for most inputs, our algorithm for KMM is significantly faster on average.
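
To illustrate the shared-memory versus per-stream-memory split (only as a naive baseline with far worse bounds than the algorithms above), one can keep the dictionary in shared memory and a length-m window per stream; the names and the toy example below are illustrative.

class StreamMatcher:
    """Naive multi-stream dictionary matching for patterns of equal length m.

    The dictionary (shared, read-only) is simply a set of the patterns; each
    stream keeps only its last m characters as local memory.  This mirrors the
    shared-memory / stream-memory split from the abstract, but with O(d*m)
    shared words and O(m) local words instead of the paper's bounds."""

    def __init__(self, patterns):
        assert len({len(p) for p in patterns}) == 1, "equal-length patterns only"
        self.m = len(next(iter(patterns)))
        self.dictionary = set(patterns)          # shared memory

    def new_stream(self):
        return []                                # local memory of one stream

    def feed(self, local, ch):
        """Feed one character of a stream; return True iff a pattern ends here."""
        local.append(ch)
        if len(local) > self.m:
            local.pop(0)
        return len(local) == self.m and "".join(local) in self.dictionary

matcher = StreamMatcher({"abc", "bca"})
s1, s2 = matcher.new_stream(), matcher.new_stream()
print([matcher.feed(s1, c) for c in "zabca"])   # [False, False, False, True, True]
print([matcher.feed(s2, c) for c in "bcabc"])   # [False, False, True, False, True]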

Cite as

Shay Golan, Tsvi Kopelowitz, and Ely Porat. Towards Optimal Approximate Streaming Pattern Matching by Matching Multiple Patterns in Multiple Streams. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 65:1-65:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{golan_et_al:LIPIcs.ICALP.2018.65,
  author =	{Golan, Shay and Kopelowitz, Tsvi and Porat, Ely},
  title =	{{Towards Optimal Approximate Streaming Pattern Matching by Matching Multiple Patterns in Multiple Streams}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{65:1--65:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.65},
  URN =		{urn:nbn:de:0030-drops-90690},
  doi =		{10.4230/LIPIcs.ICALP.2018.65},
  annote =	{Keywords: Streaming approximate pattern matching, Dictionary matching}
}
Document
Gray Codes and Symmetric Chains

Authors: Petr Gregor, Sven Jäger, Torsten Mütze, Joe Sawada, and Kaja Wille


Abstract
We consider the problem of constructing a cyclic listing of all bitstrings of length 2n+1 with Hamming weights in the interval [n+1-l,n+l], where 1 <= l <= n+1, by flipping a single bit in each step. This is a far-ranging generalization of the well-known middle two levels problem (l=1). We provide a solution for the case l=2 and solve a relaxed version of the problem for general values of l, by constructing cycle factors for those instances. Our proof uses symmetric chain decompositions of the hypercube, a concept known from the theory of posets, and we present several new constructions of such decompositions. In particular, we construct four pairwise edge-disjoint symmetric chain decompositions of the n-dimensional hypercube for any n >= 12.
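
For the extreme case l = n+1 the weight window is vacuous and the classical reflected Gray code already gives such a cyclic listing; the sketch below (illustrative helper names) generates it and checks the single-bit-flip property, which is the invariant the paper's constructions must also maintain.

def reflected_gray_code(n):
    """All 2^n bitstrings of length n in reflected Gray code order: consecutive
    strings (cyclically) differ in exactly one bit."""
    return [format(i ^ (i >> 1), '0{}b'.format(n)) for i in range(1 << n)]

def is_single_flip_cycle(words):
    """Check that consecutive words (including last -> first) differ in one bit."""
    def diff(a, b):
        return sum(x != y for x, y in zip(a, b))
    return all(diff(words[i], words[(i + 1) % len(words)]) == 1
               for i in range(len(words)))

codes = reflected_gray_code(5)          # length 2n+1 with n = 2, so l = n+1 = 3
print(is_single_flip_cycle(codes))      # True: the weight window [0,5] is vacuous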

Cite as

Petr Gregor, Sven Jäger, Torsten Mütze, Joe Sawada, and Kaja Wille. Gray Codes and Symmetric Chains. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 66:1-66:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gregor_et_al:LIPIcs.ICALP.2018.66,
  author =	{Gregor, Petr and J\"{a}ger, Sven and M\"{u}tze, Torsten and Sawada, Joe and Wille, Kaja},
  title =	{{Gray Codes and Symmetric Chains}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{66:1--66:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.66},
  URN =		{urn:nbn:de:0030-drops-90703},
  doi =		{10.4230/LIPIcs.ICALP.2018.66},
  annote =	{Keywords: Gray code, Hamilton cycle, hypercube, poset, symmetric chain}
}
Document
An Improved Isomorphism Test for Bounded-Tree-Width Graphs

Authors: Martin Grohe, Daniel Neuen, Pascal Schweitzer, and Daniel Wiebking


Abstract
We give a new fpt algorithm testing isomorphism of n-vertex graphs of tree width k in time 2^{k polylog(k)} poly n, improving the fpt algorithm due to Lokshtanov, Pilipczuk, Pilipczuk, and Saurabh (FOCS 2014), which runs in time 2^{O(k^5 log k)}poly n. Based on an improved version of the isomorphism-invariant graph decomposition technique introduced by Lokshtanov et al., we prove restrictions on the structure of the automorphism groups of graphs of tree width k. Our algorithm then makes heavy use of the group theoretic techniques introduced by Luks (JCSS 1982) in his isomorphism test for bounded degree graphs and Babai (STOC 2016) in his quasipolynomial isomorphism test. In fact, we even use Babai's algorithm as a black box in one place. We give a second algorithm which, at the price of a slightly worse run time 2^{O(k^2 log k)}poly n, avoids the use of Babai's algorithm and, more importantly, has the additional benefit that it can also be used as a canonization algorithm.

Cite as

Martin Grohe, Daniel Neuen, Pascal Schweitzer, and Daniel Wiebking. An Improved Isomorphism Test for Bounded-Tree-Width Graphs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 67:1-67:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{grohe_et_al:LIPIcs.ICALP.2018.67,
  author =	{Grohe, Martin and Neuen, Daniel and Schweitzer, Pascal and Wiebking, Daniel},
  title =	{{An Improved Isomorphism Test for Bounded-Tree-Width Graphs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{67:1--67:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.67},
  URN =		{urn:nbn:de:0030-drops-90714},
  doi =		{10.4230/LIPIcs.ICALP.2018.67},
  annote =	{Keywords: graph isomorphism, graph canonization, tree width, decompositions}
}
Document
A Polynomial-Time Approximation Algorithm for All-Terminal Network Reliability

Authors: Heng Guo and Mark Jerrum


Abstract
We give a fully polynomial-time randomized approximation scheme (FPRAS) for the all-terminal network reliability problem, which is to determine the probability that, in an undirected graph, assuming each edge fails independently, the remaining graph is still connected. Our main contribution is to confirm a conjecture by Gorodezky and Pak (Random Struct. Algorithms, 2014), that the expected running time of the "cluster-popping" algorithm in bi-directed graphs is bounded by a polynomial in the size of the input.
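
For contrast with the FPRAS, the naive Monte Carlo estimator simply samples edge failures and checks connectivity; it needs prohibitively many samples when the probability being estimated is very small. A minimal sketch with illustrative names:

import random

def connected(vertices, edges):
    """Check connectivity of the graph (vertices, edges) via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in vertices}) == 1

def reliability_monte_carlo(vertices, edges, fail_prob, samples=100000):
    """Naive estimator of all-terminal reliability: the probability that the
    graph stays connected when every edge fails independently with the given
    probability."""
    hits = 0
    for _ in range(samples):
        surviving = [e for e in edges if random.random() > fail_prob]
        hits += connected(vertices, surviving)
    return hits / samples

# A 4-cycle with edge failure probability 0.1.
print(reliability_monte_carlo([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)], 0.1))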

Cite as

Heng Guo and Mark Jerrum. A Polynomial-Time Approximation Algorithm for All-Terminal Network Reliability. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 68:1-68:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{guo_et_al:LIPIcs.ICALP.2018.68,
  author =	{Guo, Heng and Jerrum, Mark},
  title =	{{A Polynomial-Time Approximation Algorithm for All-Terminal Network Reliability}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{68:1--68:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.68},
  URN =		{urn:nbn:de:0030-drops-90727},
  doi =		{10.4230/LIPIcs.ICALP.2018.68},
  annote =	{Keywords: Approximate counting, Network Reliability, Sampling, Markov chains}
}
Document
Perfect Simulation of the Hard Disks Model by Partial Rejection Sampling

Authors: Heng Guo and Mark Jerrum


Abstract
We present a perfect simulation of the hard disks model via the partial rejection sampling method. Provided the density of disks is not too high, the method produces exact samples in O(log n) rounds, where n is the expected number of disks. The method extends easily to the hard spheres model in d>2 dimensions. In order to apply the partial rejection method to this continuous setting, we provide an alternative perspective of its correctness and run-time analysis that is valid for general state spaces.
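
Not the partial rejection sampler of the paper, but the naive global rejection baseline it improves on: propose all disk centers at once and reject the entire configuration on any overlap. A minimal sketch with illustrative parameters:

import math
import random

def sample_hard_disks_rejection(n, radius, trials=100000):
    """Plain (global) rejection sampling for n hard disks of the given radius
    in the unit square: propose all centers uniformly and reject the whole
    configuration if any two disks overlap.  This is only the naive baseline;
    the partial rejection sampling method above resamples just the offending
    disks, which is what makes perfect simulation efficient at higher densities."""
    for _ in range(trials):
        centers = [(random.random(), random.random()) for _ in range(n)]
        ok = all(math.dist(centers[i], centers[j]) >= 2 * radius
                 for i in range(n) for j in range(i + 1, n))
        if ok:
            return centers
    raise RuntimeError("no valid configuration found; density too high?")

print(sample_hard_disks_rejection(n=5, radius=0.05))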

Cite as

Heng Guo and Mark Jerrum. Perfect Simulation of the Hard Disks Model by Partial Rejection Sampling. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 69:1-69:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{guo_et_al:LIPIcs.ICALP.2018.69,
  author =	{Guo, Heng and Jerrum, Mark},
  title =	{{Perfect Simulation of the Hard Disks Model by Partial Rejection Sampling}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{69:1--69:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.69},
  URN =		{urn:nbn:de:0030-drops-90739},
  doi =		{10.4230/LIPIcs.ICALP.2018.69},
  annote =	{Keywords: Hard disks model, Sampling, Markov chains}
}
Document
Non-Preemptive Flow-Time Minimization via Rejections

Authors: Anupam Gupta, Amit Kumar, and Jason Li


Abstract
We consider the online problem of minimizing weighted flow-time on unrelated machines. Although much is known about this problem in the resource-augmentation setting, these results assume that jobs can be preempted. We give the first constant-competitive algorithm for the non-preemptive setting in the rejection model. In this rejection model, we are allowed to reject an epsilon-fraction of the total weight of jobs, and compare the resulting flow-time to that of the offline optimum which is required to schedule all jobs. This is arguably the weakest assumption under which such a result is known for weighted flow-time on unrelated machines. While our algorithms are simple, we need a delicate argument to bound the flow-time. Indeed, we use the dual-fitting framework, with considerably more machinery, to certify that the cost of our algorithm is within a constant of the optimum while only a small fraction of the jobs are rejected.

Cite as

Anupam Gupta, Amit Kumar, and Jason Li. Non-Preemptive Flow-Time Minimization via Rejections. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 70:1-70:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gupta_et_al:LIPIcs.ICALP.2018.70,
  author =	{Gupta, Anupam and Kumar, Amit and Li, Jason},
  title =	{{Non-Preemptive Flow-Time Minimization via Rejections}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{70:1--70:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.70},
  URN =		{urn:nbn:de:0030-drops-90740},
  doi =		{10.4230/LIPIcs.ICALP.2018.70},
  annote =	{Keywords: Scheduling, Rejection, Unrelated Machines, Non-Preemptive}
}
Document
Maximizing Profit with Convex Costs in the Random-order Model

Authors: Anupam Gupta, Ruta Mehta, and Marco Molinaro


Abstract
Suppose a set of requests arrives online: each request gives some value v_i if accepted, but requires using some amount of each of d resources. Our cost is a convex function of the vector of total utilization of these d resources. Which requests should we accept to maximize our profit, i.e., the sum of values of the accepted demands, minus the convex cost? We consider this problem in the random-order (a.k.a. secretary) model, and show an O(d)-competitive algorithm for the case where the convex cost function is also supermodular. If the set of accepted demands must also be independent in a given matroid, we give an O(d^3 alpha)-competitive algorithm for the supermodular case, and an improved O(d^2 alpha) if the convex cost function is also separable. Here alpha is the competitive ratio of the best algorithm for the submodular secretary problem. These results extend and improve the previous results known for this problem. Our techniques are simple but use powerful ideas from convex duality, which give clean interpretations of existing work, and allow us to give the extensions and improvements.

Cite as

Anupam Gupta, Ruta Mehta, and Marco Molinaro. Maximizing Profit with Convex Costs in the Random-order Model. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 71:1-71:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gupta_et_al:LIPIcs.ICALP.2018.71,
  author =	{Gupta, Anupam and Mehta, Ruta and Molinaro, Marco},
  title =	{{Maximizing Profit with Convex Costs in the Random-order Model}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{71:1--71:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.71},
  URN =		{urn:nbn:de:0030-drops-90751},
  doi =		{10.4230/LIPIcs.ICALP.2018.71},
  annote =	{Keywords: Online algorithms, secretary problem, random order, convex duality}
}
Document
Generic Single Edge Fault Tolerant Exact Distance Oracle

Authors: Manoj Gupta and Aditi Singh


Abstract
Given an undirected unweighted graph G and a source set S of |S| = sigma sources, we want to build a data structure which can process the following query Q(s,t,e): find the shortest distance from s to t avoiding an edge e, where s in S and t in V. When sigma=n, Demetrescu, Thorup, Chowdhury and Ramachandran (SIAM Journal of Computing, 2008) designed an algorithm with O~(n^2) space and O(1) query time. A natural open question is to generalize this result to any number of sources. Recently, Bilò et al. (STACS 2018) designed a data structure of size O~(sigma^{1/2} n^{3/2}) with a query time of O(sqrt{n sigma}) for the above problem. We improve their result by designing a data structure of size O~(sigma^{1/2} n^{3/2}) that can answer queries in O~(1) time. In the related problem of finding a fault tolerant subgraph, Parter and Peleg (ESA 2013) showed that if detours of replacement paths ending at a vertex t are disjoint, then the number of such paths is O(sqrt{n sigma}). This eventually gives a bound of O(n sqrt{n sigma}) = O(sigma^{1/2} n^{3/2}) for their problem. Disjointness of detours is a crucial property used in the above result. We show a similar result for a subset of replacement paths which may not be disjoint. This result is the crux of our paper and may be of independent interest.
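
A single query Q(s,t,e) can of course be answered from scratch by a BFS that ignores the failed edge; the data structures above exist precisely to avoid this O(V+E) per-query cost. A minimal sketch of that naive baseline (illustrative names):

from collections import deque

def dist_avoiding_edge(adj, s, t, banned):
    """Shortest s-t distance in an unweighted undirected graph, avoiding one
    edge.  This naive per-query BFS takes O(V + E) time; the point of the data
    structures discussed above is to answer such queries in ~O(1) after
    preprocessing."""
    banned = frozenset(banned)
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if {u, v} == banned or v in dist:
                continue
            dist[v] = dist[u] + 1
            queue.append(v)
    return float('inf')

# A 4-cycle 0-1-2-3-0: avoiding edge (0,1) forces the long way around.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(dist_avoiding_edge(adj, 0, 1, (0, 1)))  # 3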

Cite as

Manoj Gupta and Aditi Singh. Generic Single Edge Fault Tolerant Exact Distance Oracle. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 72:1-72:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gupta_et_al:LIPIcs.ICALP.2018.72,
  author =	{Gupta, Manoj and Singh, Aditi},
  title =	{{Generic Single Edge Fault Tolerant Exact Distance Oracle}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{72:1--72:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.72},
  URN =		{urn:nbn:de:0030-drops-90766},
  doi =		{10.4230/LIPIcs.ICALP.2018.72},
  annote =	{Keywords: Fault Tolerant Algorithms, Graph Algorithms, Distance Oracles, Data-Structures}
}
Document
An Exponential Separation Between MA and AM Proofs of Proximity

Authors: Tom Gur, Yang P. Liu, and Ron D. Rothblum


Abstract
Interactive proofs of proximity allow a sublinear-time verifier to check that a given input is close to the language, using a small amount of communication with a powerful (but untrusted) prover. In this work we consider two natural minimally interactive variants of such proof systems, in which the prover only sends a single message, referred to as the proof. The first variant, known as MA-proofs of Proximity (MAP), is fully non-interactive, meaning that the proof is a function of the input only. The second variant, known as AM-proofs of Proximity (AMP), allows the proof to additionally depend on the verifier's (entire) random string. The complexity of both MAPs and AMPs is the total number of bits that the verifier observes - namely, the sum of the proof length and query complexity. Our main result is an exponential separation between the power of MAPs and AMPs. Specifically, we exhibit an explicit and natural property Pi that admits an AMP with complexity O(log n), whereas any MAP for Pi has complexity Omega~(n^{1/4}), where n denotes the length of the input in bits. Our MAP lower bound also yields an alternate proof, which is more general and arguably much simpler, for a recent result of Fischer et al. (ITCS, 2014). Lastly, we also consider the notion of oblivious proofs of proximity, in which the verifier's queries are oblivious to the proof. In this setting we show that AMPs can only be quadratically stronger than MAPs. As an application of this result, we show an exponential separation between the power of public and private coin for oblivious interactive proofs of proximity.

Cite as

Tom Gur, Yang P. Liu, and Ron D. Rothblum. An Exponential Separation Between MA and AM Proofs of Proximity. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 73:1-73:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gur_et_al:LIPIcs.ICALP.2018.73,
  author =	{Gur, Tom and Liu, Yang P. and Rothblum, Ron D.},
  title =	{{An Exponential Separation Between MA and AM Proofs of Proximity}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{73:1--73:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.73},
  URN =		{urn:nbn:de:0030-drops-90772},
  doi =		{10.4230/LIPIcs.ICALP.2018.73},
  annote =	{Keywords: Property testing, Probabilistic proof systems, Proofs of proximity}
}
Document
Isolating a Vertex via Lattices: Polytopes with Totally Unimodular Faces

Authors: Rohit Gurjar, Thomas Thierauf, and Nisheeth K. Vishnoi


Abstract
We present a geometric approach towards derandomizing the {Isolation Lemma} by Mulmuley, Vazirani, and Vazirani. In particular, our approach produces a quasi-polynomial family of weights, where each weight is an integer and quasi-polynomially bounded, that can isolate a vertex in any 0/1 polytope for which each face lies in an affine space defined by a totally unimodular matrix. This includes the polytopes given by totally unimodular constraints and generalizes the recent derandomization of the Isolation Lemma for {bipartite perfect matching} and {matroid intersection}. We prove our result by associating a {lattice} to each face of the polytope and showing that if there is a totally unimodular kernel matrix for this lattice, then the number of vectors of length within 3/2 of the shortest vector in it is polynomially bounded. The proof of this latter geometric fact is combinatorial and follows from a polynomial bound on the number of circuits of size within 3/2 of the shortest circuit in a regular matroid. This is the technical core of the paper and relies on a variant of Seymour's decomposition theorem for regular matroids. It generalizes an influential result by Karger on the number of minimum cuts in a graph to regular matroids.

Cite as

Rohit Gurjar, Thomas Thierauf, and Nisheeth K. Vishnoi. Isolating a Vertex via Lattices: Polytopes with Totally Unimodular Faces. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 74:1-74:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gurjar_et_al:LIPIcs.ICALP.2018.74,
  author =	{Gurjar, Rohit and Thierauf, Thomas and Vishnoi, Nisheeth K.},
  title =	{{Isolating a Vertex via Lattices: Polytopes with Totally Unimodular Faces}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{74:1--74:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.74},
  URN =		{urn:nbn:de:0030-drops-90782},
  doi =		{10.4230/LIPIcs.ICALP.2018.74},
  annote =	{Keywords: Derandomization, Isolation Lemma, Total unimodularity, Near-shortest vectors in Lattices, Regular matroids}
}
Document
Synchronization Strings: Channel Simulations and Interactive Coding for Insertions and Deletions

Authors: Bernhard Haeupler, Amirbehshad Shahrasbi, and Ellen Vitercik


Abstract
We present many new results related to reliable (interactive) communication over insertion-deletion channels. Synchronization errors, such as insertions and deletions, strictly generalize the usual symbol corruption errors and are much harder to protect against. We show how to hide the complications of synchronization errors in many applications by introducing very general channel simulations which efficiently transform an insertion-deletion channel into a regular symbol corruption channel with an error rate larger by a constant factor and a slightly smaller alphabet. We utilize and generalize synchronization string based methods which were recently introduced as a tool to design essentially optimal error correcting codes for insertion-deletion channels. Our channel simulations depend on the fact that, at the cost of increasing the error rate by a constant factor, synchronization strings can be decoded in a streaming manner that preserves linearity of time. Interestingly, we provide a lower bound showing that this constant factor cannot be improved to 1+epsilon, in contrast to what is achievable for error correcting codes. Our channel simulations drastically and cleanly generalize the applicability of synchronization strings. We provide new interactive coding schemes which simulate any interactive two-party protocol over an insertion-deletion channel. Our results improve over the interactive coding schemes of Braverman et al. [TransInf `17] and Sherstov and Wu [FOCS `17] which achieve a small constant rate and require exponential time computations with respect to computational and communication complexities. We provide the first computationally efficient interactive coding schemes for synchronization errors, the first coding scheme with a rate approaching one for small noise rates, and also the first coding scheme that works over arbitrarily small alphabet sizes. We also show tight connections between synchronization strings and edit-distance tree codes which allow us to transfer results from tree codes directly to edit-distance tree codes. Finally, using our channel simulations, we provide an explicit low-rate binary insertion-deletion code that improves over the state-of-the-art codes by Guruswami and Wang [TransInf `17] in terms of rate-distance trade-off.

Cite as

Bernhard Haeupler, Amirbehshad Shahrasbi, and Ellen Vitercik. Synchronization Strings: Channel Simulations and Interactive Coding for Insertions and Deletions. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 75:1-75:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{haeupler_et_al:LIPIcs.ICALP.2018.75,
  author =	{Haeupler, Bernhard and Shahrasbi, Amirbehshad and Vitercik, Ellen},
  title =	{{Synchronization Strings: Channel Simulations and Interactive Coding for Insertions and Deletions}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{75:1--75:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.75},
  URN =		{urn:nbn:de:0030-drops-90794},
  doi =		{10.4230/LIPIcs.ICALP.2018.75},
  annote =	{Keywords: Synchronization Strings, Channel Simulation, Coding for Interactive Communication}
}
Document
Synchronization Strings: List Decoding for Insertions and Deletions

Authors: Bernhard Haeupler, Amirbehshad Shahrasbi, and Madhu Sudan


Abstract
We study codes that are list-decodable under insertions and deletions ("insdel codes"). Specifically, we consider the setting where, given a codeword x of length n over some finite alphabet Sigma of size q, delta * n codeword symbols may be adversarially deleted and gamma * n symbols may be adversarially inserted to yield a corrupted word w. A code is said to be list-decodable if there is an (efficient) algorithm that, given w, reports a small list of codewords that include the original codeword x. Given delta and gamma we study what is the rate R for which there exists a constant q and list size L such that there exist codes of rate R correcting delta-fraction deletions and gamma-fraction insertions while reporting lists of size at most L. Using the concept of synchronization strings, introduced by the first two authors [Proc. STOC 2017], we show some surprising results. We show that for every 0 <= delta < 1, every 0 <= gamma < infty and every epsilon > 0 there exist codes of rate 1 - delta - epsilon and constant alphabet (so q = O_{delta,gamma,epsilon}(1)) and sub-logarithmic list sizes. Furthermore, our codes are accompanied by efficient (polynomial time) decoding algorithms. We stress that the fraction of insertions can be arbitrarily large (more than 100%), and the rate is independent of this parameter. We also prove several tight bounds on the parameters of list-decodable insdel codes. In particular, we show that the alphabet size of insdel codes needs to be exponentially large in epsilon^{-1}, where epsilon is the gap to capacity above. Our result even applies to settings where the unique-decoding capacity equals the list-decoding capacity and when it does so, it shows that the alphabet size needs to be exponentially large in the gap to capacity. This is in sharp contrast to the Hamming error model, where an alphabet size polynomial in epsilon^{-1} suffices for unique decoding. This lower bound also shows that the exponential dependence on the alphabet size in previous works that constructed insdel codes is actually necessary! Our result sheds light on the remarkable asymmetry between the impact of insertions and deletions from the point of view of error-correction: Whereas deletions cost in the rate of the code, insertion costs are borne by the adversary and not the code! Our results also highlight the dominance of the model of insertions and deletions over the Hamming model: A Hamming error is equal to one insertion and one deletion (at the same location). Thus the effect of delta-fraction Hamming errors can be simulated by delta-fraction of deletions and delta-fraction of insertions - but insdel codes can deal with many more insertions without loss in rate (though at the price of higher alphabet size).

Cite as

Bernhard Haeupler, Amirbehshad Shahrasbi, and Madhu Sudan. Synchronization Strings: List Decoding for Insertions and Deletions. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 76:1-76:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{haeupler_et_al:LIPIcs.ICALP.2018.76,
  author =	{Haeupler, Bernhard and Shahrasbi, Amirbehshad and Sudan, Madhu},
  title =	{{Synchronization Strings: List Decoding for Insertions and Deletions}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{76:1--76:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.76},
  URN =		{urn:nbn:de:0030-drops-90807},
  doi =		{10.4230/LIPIcs.ICALP.2018.76},
  annote =	{Keywords: List Decoding, Insertions and Deletions, Synchronization Strings}
}
Document
Approximate Sparse Linear Regression

Authors: Sariel Har-Peled, Piotr Indyk, and Sepideh Mahabadi


Abstract
In the Sparse Linear Regression (SLR) problem, given a d x n matrix M and a d-dimensional query q, the goal is to compute a k-sparse n-dimensional vector tau such that the error ||M tau - q|| is minimized. This problem is equivalent to the following geometric problem: given a set P of n points and a query point q in d dimensions, find the closest k-dimensional subspace to q that is spanned by a subset of k points in P. In this paper, we present data-structures/algorithms and conditional lower bounds for several variants of this problem (such as finding the closest induced k dimensional flat/simplex instead of a subspace). In particular, we present approximation algorithms for the online variants of the above problems with query time O~(n^{k-1}), which are of interest in the "low sparsity regime" where k is small, e.g., 2 or 3. For k=d, this matches, up to polylogarithmic factors, the lower bound that relies on the affinely degenerate conjecture (i.e., deciding if n points in R^d contain d+1 points contained in a hyperplane takes Omega(n^d) time). Moreover, our algorithms involve formulating and solving several geometric subproblems, which we believe to be of independent interest.
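
The geometric formulation suggests an obvious exact brute-force method for tiny k: try every k-subset of the points (columns), solve the restricted least-squares problem, and keep the best. This roughly n^k-time baseline, sketched below with numpy (illustrative names and a toy instance), is what the approximation algorithms above aim to beat in the low sparsity regime.

import itertools
import numpy as np

def sparse_regression_bruteforce(M, q, k):
    """Exact Sparse Linear Regression by brute force: for every subset of k
    columns of the d x n matrix M, solve the least-squares problem restricted
    to those columns and keep the best solution."""
    d, n = M.shape
    best_err, best_tau = float('inf'), None
    for cols in itertools.combinations(range(n), k):
        sub = M[:, cols]
        coeffs, *_ = np.linalg.lstsq(sub, q, rcond=None)
        err = np.linalg.norm(sub @ coeffs - q)
        if err < best_err:
            tau = np.zeros(n)
            tau[list(cols)] = coeffs
            best_err, best_tau = err, tau
    return best_tau, best_err

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 8))
q = 2 * M[:, 1] - 3 * M[:, 6]           # exactly 2-sparse in the columns of M
tau, err = sparse_regression_bruteforce(M, q, k=2)
print(np.nonzero(tau)[0], round(err, 6))  # [1 6] 0.0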

Cite as

Sariel Har-Peled, Piotr Indyk, and Sepideh Mahabadi. Approximate Sparse Linear Regression. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 77:1-77:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{harpeled_et_al:LIPIcs.ICALP.2018.77,
  author =	{Har-Peled, Sariel and Indyk, Piotr and Mahabadi, Sepideh},
  title =	{{Approximate Sparse Linear Regression}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{77:1--77:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.77},
  URN =		{urn:nbn:de:0030-drops-90816},
  doi =		{10.4230/LIPIcs.ICALP.2018.77},
  annote =	{Keywords: Sparse Linear Regression, Approximate Nearest Neighbor, Sparse Recovery, Nearest Induced Flat, Nearest Subspace Search}
}
Document
A Polynomial Time Algorithm to Compute Geodesics in CAT(0) Cubical Complexes

Authors: Koyo Hayashi


Abstract
This paper presents the first polynomial time algorithm to compute geodesics in a CAT(0) cubical complex in general dimension. The algorithm is a simple iterative method to update breakpoints of a path joining two points using Miller, Owen and Provan's algorithm (Adv. in Appl. Math, 2015) as a subroutine. Our algorithm is applicable to any CAT(0) space in which geodesics between two close points can be computed, not limited to CAT(0) cubical complexes.

Cite as

Koyo Hayashi. A Polynomial Time Algorithm to Compute Geodesics in CAT(0) Cubical Complexes. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 78:1-78:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{hayashi:LIPIcs.ICALP.2018.78,
  author =	{Hayashi, Koyo},
  title =	{{A Polynomial Time Algorithm to Compute Geodesics in CAT(0) Cubical Complexes}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{78:1--78:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.78},
  URN =		{urn:nbn:de:0030-drops-90822},
  doi =		{10.4230/LIPIcs.ICALP.2018.78},
  annote =	{Keywords: Geodesic, CAT(0) Space, Cubical Complex}
}
Document
Online Vertex-Weighted Bipartite Matching: Beating 1-1/e with Random Arrivals

Authors: Zhiyi Huang, Zhihao Gavin Tang, Xiaowei Wu, and Yuhao Zhang


Abstract
We introduce a weighted version of the ranking algorithm by Karp et al. (STOC 1990), and prove a competitive ratio of 0.6534 for the vertex-weighted online bipartite matching problem when online vertices arrive in random order. Our result shows that random arrivals help beat the 1-1/e barrier even in the vertex-weighted case. We build on the randomized primal-dual framework by Devanur et al. (SODA 2013) and design a two-dimensional gain sharing function, which depends not only on the rank of the offline vertex, but also on the arrival time of the online vertex. To our knowledge, this is the first competitive ratio strictly larger than 1-1/e for an online bipartite matching problem achieved under the randomized primal-dual framework. Our algorithm has a natural interpretation that offline vertices offer a larger portion of their weights to the online vertices as time goes by, and each online vertex matches the neighbor with the highest offer at its arrival.
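
For orientation, the unweighted ranking algorithm the paper generalizes can be stated in a few lines: offline vertices draw a random rank once, and each arriving online vertex matches its best-ranked free neighbor. The sketch below is that classical version only (with illustrative names), not the weighted primal-dual variant of the paper.

import random

def ranking_matching(offline, neighbors, online_order):
    """The classic RANKING algorithm for (unweighted) online bipartite
    matching: offline vertices get a uniformly random ranking once, and each
    arriving online vertex is matched to its unmatched neighbor of best rank.
    Returns a dict mapping matched offline vertices to online vertices."""
    rank = {v: random.random() for v in offline}
    match = {}
    for u in online_order:
        free = [v for v in neighbors.get(u, []) if v not in match]
        if free:
            match[min(free, key=rank.get)] = u
    return match

neighbors = {'x': ['a', 'b'], 'y': ['a'], 'z': ['b', 'c']}
print(ranking_matching(['a', 'b', 'c'], neighbors, ['x', 'y', 'z']))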

Cite as

Zhiyi Huang, Zhihao Gavin Tang, Xiaowei Wu, and Yuhao Zhang. Online Vertex-Weighted Bipartite Matching: Beating 1-1/e with Random Arrivals. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 79:1-79:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{huang_et_al:LIPIcs.ICALP.2018.79,
  author =	{Huang, Zhiyi and Tang, Zhihao Gavin and Wu, Xiaowei and Zhang, Yuhao},
  title =	{{Online Vertex-Weighted Bipartite Matching: Beating 1-1/e with Random Arrivals}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{79:1--79:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.79},
  URN =		{urn:nbn:de:0030-drops-90830},
  doi =		{10.4230/LIPIcs.ICALP.2018.79},
  annote =	{Keywords: Vertex Weighted, Online Bipartite Matching, Randomized Primal-Dual}
}
Document
Finding Branch-Decompositions of Matroids, Hypergraphs, and More

Authors: Jisu Jeong, Eun Jung Kim, and Sang-il Oum


Abstract
Given n subspaces of a finite-dimensional vector space over a fixed finite field F, we wish to find a "branch-decomposition" of these subspaces of width at most k, that is, a subcubic tree T with n leaves mapped bijectively to the subspaces such that for every edge e of T, the sum of subspaces associated to the leaves in one component of T-e and the sum of subspaces associated to the leaves in the other component have an intersection of dimension at most k. This problem includes the problems of computing branch-width of F-represented matroids, rank-width of graphs, branch-width of hypergraphs, and carving-width of graphs. We present a fixed-parameter algorithm to construct such a branch-decomposition of width at most k, if it exists, for input subspaces of a finite-dimensional vector space over F. Our algorithm is analogous to the algorithm of Bodlaender and Kloks (1996) on tree-width of graphs. To extend their framework to branch-decompositions of vector spaces, we developed highly generic tools for branch-decompositions on vector spaces. For this problem, a fixed-parameter algorithm was known due to Hliněný and Oum (2008). But their method is highly indirect. Their algorithm uses the non-trivial fact by Geelen et al. (2003) that the number of forbidden minors is finite and uses the algorithm of Hliněný (2006) for checking monadic second-order formulas on F-represented matroids of small branch-width. Our result does not depend on such a fact and is completely self-contained, and yet matches their asymptotic running time for each fixed k.

Cite as

Jisu Jeong, Eun Jung Kim, and Sang-il Oum. Finding Branch-Decompositions of Matroids, Hypergraphs, and More. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 80:1-80:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{jeong_et_al:LIPIcs.ICALP.2018.80,
  author =	{Jeong, Jisu and Kim, Eun Jung and Oum, Sang-il},
  title =	{{Finding Branch-Decompositions of Matroids, Hypergraphs, and More}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{80:1--80:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.80},
  URN =		{urn:nbn:de:0030-drops-90849},
  doi =		{10.4230/LIPIcs.ICALP.2018.80},
  annote =	{Keywords: branch-width, rank-width, carving-width, fixed-parameter tractability}
}
Document
Optimally Sorting Evolving Data

Authors: Juan Jose Besa, William E. Devanny, David Eppstein, Michael T. Goodrich, and Timothy Johnson


Abstract
We give optimal sorting algorithms in the evolving data framework, where an algorithm's input data is changing while the algorithm is executing. In this framework, instead of producing a final output, an algorithm attempts to maintain an output close to the correct output for the current state of the data, repeatedly updating its best estimate of a correct output over time. We show that a simple repeated insertion-sort algorithm can maintain an O(n) Kendall tau distance, with high probability, between a maintained list and an underlying total order of n items in an evolving data model where each comparison is followed by a swap between a random consecutive pair of items in the underlying total order. This result is asymptotically optimal, since there is an Omega(n) lower bound for Kendall tau distance for this problem. Our result closes the gap between this lower bound and the previous best algorithm for this problem, which maintains a Kendall tau distance of O(n log log n) with high probability. It also confirms previous experimental results that suggested that insertion sort tends to perform better than quicksort in practice.
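
As a toy illustration of the model (not the paper's analysis), the following sketch runs repeated insertion-sort passes while the underlying order performs one random adjacent swap after every comparison, and then reports the Kendall tau distance of the maintained list; all names are illustrative.

    import random

    def kendall_tau(perm, rank):
        # number of pairs whose relative order in perm disagrees with the true order
        n = len(perm)
        return sum(1 for i in range(n) for j in range(i + 1, n)
                   if rank[perm[i]] > rank[perm[j]])

    def evolving_insertion_sort(n=100, passes=20):
        true_order = list(range(n))                      # the (evolving) underlying total order
        rank = {x: i for i, x in enumerate(true_order)}
        maintained = list(range(n))
        random.shuffle(maintained)

        def compare(a, b):
            result = rank[a] < rank[b]                   # answer w.r.t. the current true order
            i = random.randrange(n - 1)                  # ... then a random adjacent pair swaps
            u, v = true_order[i], true_order[i + 1]
            true_order[i], true_order[i + 1] = v, u
            rank[u], rank[v] = i + 1, i
            return result

        for _ in range(passes):                          # repeated insertion-sort passes
            for i in range(1, n):
                j = i
                while j > 0 and compare(maintained[j], maintained[j - 1]):
                    maintained[j], maintained[j - 1] = maintained[j - 1], maintained[j]
                    j -= 1
        return kendall_tau(maintained, rank)

    print(evolving_insertion_sort())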

Cite as

Juan Jose Besa, William E. Devanny, David Eppstein, Michael T. Goodrich, and Timothy Johnson. Optimally Sorting Evolving Data. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 81:1-81:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{besa_et_al:LIPIcs.ICALP.2018.81,
  author =	{Besa, Juan Jose and Devanny, William E. and Eppstein, David and Goodrich, Michael T. and Johnson, Timothy},
  title =	{{Optimally Sorting Evolving Data}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{81:1--81:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.81},
  URN =		{urn:nbn:de:0030-drops-90858},
  doi =		{10.4230/LIPIcs.ICALP.2018.81},
  annote =	{Keywords: Sorting, Evolving data, Insertion sort}
}
Document
Generalized Comparison Trees for Point-Location Problems

Authors: Daniel M. Kane, Shachar Lovett, and Shay Moran


Abstract
Let H be an arbitrary family of hyperplanes in d dimensions. We show that the point-location problem for H can be solved by a linear decision tree that only uses a special type of queries called generalized comparison queries. These queries correspond to hyperplanes that can be written as a linear combination of two hyperplanes from H; in particular, if all hyperplanes in H are k-sparse then generalized comparisons are 2k-sparse. The depth of the obtained linear decision tree is polynomial in d and logarithmic in |H|, which is comparable to previous results in the literature that use general linear queries. This extends the study of comparison trees from a previous work by the authors [Kane {et al.}, FOCS 2017]. The main benefit is that using generalized comparison queries allows one to overcome limitations that apply to the more restricted type of comparison queries. Our analysis combines a seminal result of Forster regarding sets in isotropic position [Forster, JCSS 2002], the margin-based inference dimension analysis for comparison queries from [Kane {et al.}, FOCS 2017], and compactness arguments.

Cite as

Daniel M. Kane, Shachar Lovett, and Shay Moran. Generalized Comparison Trees for Point-Location Problems. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 82:1-82:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{kane_et_al:LIPIcs.ICALP.2018.82,
  author =	{Kane, Daniel M. and Lovett, Shachar and Moran, Shay},
  title =	{{Generalized Comparison Trees for Point-Location Problems}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{82:1--82:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.82},
  URN =		{urn:nbn:de:0030-drops-90862},
  doi =		{10.4230/LIPIcs.ICALP.2018.82},
  annote =	{Keywords: linear decision trees, comparison queries, point location problems}
}
Document
Stabilizing Weighted Graphs

Authors: Zhuan Khye Koh and Laura Sanità


Abstract
An edge-weighted graph G=(V,E) is called stable if the value of a maximum-weight matching equals the value of a maximum-weight fractional matching. Stable graphs play an important role in some interesting game theory problems, such as network bargaining games and cooperative matching games, because they characterize instances which admit stable outcomes. Motivated by this, in the last few years many researchers have investigated the algorithmic problem of turning a given graph into a stable one, via edge- and vertex-removal operations. However, all the algorithmic results developed in the literature so far only hold for unweighted instances, i.e., assuming unit weights on the edges of G. We give the first polynomial-time algorithm to find a minimum cardinality subset of vertices whose removal from G yields a stable graph, for any weighted graph G. The algorithm is combinatorial and exploits new structural properties of basic fractional matchings, which are of independent interest. In particular, one of the main ingredients of our result is the development of a polynomial-time algorithm to compute a basic maximum-weight fractional matching with a minimum number of odd cycles in its support. This generalizes a fundamental and classical result on unweighted matchings given by Balas more than 30 years ago, which we expect to prove useful beyond this particular application. In contrast, we show that the problem of finding a minimum cardinality subset of edges whose removal from a weighted graph G yields a stable graph does not admit any constant-factor approximation algorithm, unless P=NP. In this setting, we develop an O(Delta)-approximation algorithm for the problem, where Delta is the maximum degree of a node in G.
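
To make the stability notion concrete, here is a brute-force check for tiny instances (a sketch only, not the paper's algorithm): since basic solutions of the fractional matching LP are half-integral, restricting edge values to {0, 1/2, 1} suffices to find the fractional optimum.

    from itertools import product

    def max_matching_value(vertices, weighted_edges, levels):
        # Brute-force maximum of sum(w_e * x_e) under degree constraints,
        # with every x_e restricted to the given levels. Tiny instances only.
        best = 0.0
        for x in product(levels, repeat=len(weighted_edges)):
            load = {v: 0.0 for v in vertices}
            for (u, v, _), xe in zip(weighted_edges, x):
                load[u] += xe
                load[v] += xe
            if all(l <= 1.0 + 1e-9 for l in load.values()):
                best = max(best, sum(xe * w for (_, _, w), xe in zip(weighted_edges, x)))
        return best

    def is_stable(vertices, weighted_edges):
        frac = max_matching_value(vertices, weighted_edges, (0.0, 0.5, 1.0))   # half-integral LP optimum
        integral = max_matching_value(vertices, weighted_edges, (0.0, 1.0))    # best matching
        return abs(frac - integral) < 1e-9

    # A unit-weight triangle is not stable: fractional optimum 1.5 vs matching value 1.
    print(is_stable([1, 2, 3], [(1, 2, 1.0), (2, 3, 1.0), (1, 3, 1.0)]))       # False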

Cite as

Zhuan Khye Koh and Laura Sanità. Stabilizing Weighted Graphs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 83:1-83:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{koh_et_al:LIPIcs.ICALP.2018.83,
  author =	{Koh, Zhuan Khye and Sanit\`{a}, Laura},
  title =	{{Stabilizing Weighted Graphs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{83:1--83:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.83},
  URN =		{urn:nbn:de:0030-drops-90877},
  doi =		{10.4230/LIPIcs.ICALP.2018.83},
  annote =	{Keywords: combinatorial optimization, network bargaining, cooperative game}
}
Document
Spectrally Robust Graph Isomorphism

Authors: Alexandra Kolla, Ioannis Koutis, Vivek Madan, and Ali Kemal Sinop


Abstract
We initiate the study of spectral generalizations of the graph isomorphism problem, including the following two variants. The Spectral Graph Dominance (SGD) problem: on input of two graphs G and H, does there exist a permutation pi such that G preceq pi(H)? The Spectrally Robust Graph Isomorphism (kappa-SRGI) problem: on input of two graphs G and H, find the smallest number kappa over all permutations pi such that pi(H) preceq G preceq kappa c pi(H) for some c. SRGI is a natural formulation of the network alignment problem that has various applications, most notably in computational biology. Here G preceq c H means that for all vectors x we have x^T L_G x <= c x^T L_H x, where L_G is the Laplacian of G. We prove NP-hardness for SGD. We also present a kappa^3-approximation algorithm for SRGI for the case when both G and H are bounded-degree trees. The algorithm runs in polynomial time when kappa is a constant.
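
To make the dominance relation concrete: G preceq c H is equivalent to c*L_H - L_G being positive semidefinite, which can be checked numerically as in this small sketch (names are illustrative).

    import numpy as np

    def laplacian(adj):
        A = np.asarray(adj, dtype=float)
        return np.diag(A.sum(axis=1)) - A

    def dominated_by(adj_G, adj_H, c, tol=1e-9):
        # G preceq c H  <=>  x^T L_G x <= c x^T L_H x for all x  <=>  c*L_H - L_G is PSD
        M = c * laplacian(adj_H) - laplacian(adj_G)
        return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

    # Example: a path on 3 vertices is dominated by the triangle on the same vertices (c = 1).
    path     = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
    triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
    print(dominated_by(path, triangle, c=1))    # True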

Cite as

Alexandra Kolla, Ioannis Koutis, Vivek Madan, and Ali Kemal Sinop. Spectrally Robust Graph Isomorphism. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 84:1-84:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{kolla_et_al:LIPIcs.ICALP.2018.84,
  author =	{Kolla, Alexandra and Koutis, Ioannis and Madan, Vivek and Sinop, Ali Kemal},
  title =	{{Spectrally Robust Graph Isomorphism}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{84:1--84:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.84},
  URN =		{urn:nbn:de:0030-drops-90887},
  doi =		{10.4230/LIPIcs.ICALP.2018.84},
  annote =	{Keywords: Network Alignment, Graph Isomorphism, Graph Similarity}
}
Document
A Parameterized Strongly Polynomial Algorithm for Block Structured Integer Programs

Authors: Martin Koutecký, Asaf Levin, and Shmuel Onn


Abstract
The theory of n-fold integer programming has been recently emerging as an important tool in parameterized complexity. The input to an n-fold integer program (IP) consists of a parameter A, a dimension n, and numerical data of binary encoding length L. It was known for some time that such programs can be solved in polynomial time using O(n^{g(A)}L) arithmetic operations, where g is an exponential function of the parameter. In 2013 it was shown that they can be solved in fixed-parameter tractable time using O(f(A)n^3L) arithmetic operations for a single-exponential function f. This, and a faster algorithm for a special case of combinatorial n-fold IP, have led to several very recent breakthroughs in the parameterized complexity of scheduling, stringology, and computational social choice. In 2015 it was shown that they can be solved in strongly polynomial time using O(n^{g(A)}) arithmetic operations. Here we establish a result which subsumes all three of the above results by showing that n-fold IP can be solved in strongly polynomial fixed-parameter tractable time using O(f(A)n^6 log n) arithmetic operations. In fact, our results are much more general, briefly outlined as follows. - There is a strongly polynomial algorithm for integer linear programming (ILP) whenever a so-called Graver-best oracle is realizable for it. - Graver-best oracles for the large classes of multi-stage stochastic and tree-fold ILPs can be realized in fixed-parameter tractable time. Together with the previous oracle algorithm, this newly shows two large classes of ILP to be strongly polynomial; in contrast, only few classes of ILP were previously known to be strongly polynomial. - We show that ILP is fixed-parameter tractable parameterized by the largest coefficient |A|_infty and the primal or dual treedepth of A, and that this parameterization cannot be relaxed, signifying substantial progress in understanding the parameterized complexity of ILP.
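
For orientation, the constraint matrix of an n-fold IP has the block structure below, assuming the standard convention that the parameter A is read as a pair of blocks (A_1, A_2); the multi-stage stochastic and tree-fold classes treated in the paper generalize this shape.

    \[
      E^{(n)} \;=\;
      \begin{pmatrix}
        A_1 & A_1 & \cdots & A_1 \\
        A_2 &     &        &     \\
            & A_2 &        &     \\
            &     & \ddots &     \\
            &     &        & A_2
      \end{pmatrix},
      \qquad
      \min\{\, w x \;:\; E^{(n)} x = b,\ l \le x \le u,\ x \text{ integer} \,\}.
    \]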

Cite as

Martin Koutecký, Asaf Levin, and Shmuel Onn. A Parameterized Strongly Polynomial Algorithm for Block Structured Integer Programs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 85:1-85:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{koutecky_et_al:LIPIcs.ICALP.2018.85,
  author =	{Kouteck\'{y}, Martin and Levin, Asaf and Onn, Shmuel},
  title =	{{A Parameterized Strongly Polynomial Algorithm for Block Structured Integer Programs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{85:1--85:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.85},
  URN =		{urn:nbn:de:0030-drops-90898},
  doi =		{10.4230/LIPIcs.ICALP.2018.85},
  annote =	{Keywords: integer programming, parameterized complexity, Graver basis, n-fold integer programming}
}
Document
Finer Tight Bounds for Coloring on Clique-Width

Authors: Michael Lampis


Abstract
We revisit the complexity of the classical k-Coloring problem parameterized by clique-width. This is a very well-studied problem that becomes highly intractable when the number of colors k is large. However, much less is known about its complexity for small, concrete values of k. In this paper, we completely determine the complexity of k-Coloring parameterized by clique-width for any fixed k, under the SETH. Specifically, we show that for all k >= 3 and epsilon > 0, k-Coloring cannot be solved in time O^*((2^k-2-epsilon)^{cw}), and give an algorithm running in time O^*((2^k-2)^{cw}). Thus, if the SETH is true, 2^k-2 is the "correct" base of the exponent for every k. Along the way, we also consider the complexity of k-Coloring parameterized by the related parameter modular treewidth (mtw). In this case we show that the "correct" running time, under the SETH, is O^*({k choose floor[k/2]}^{mtw}). If we base our results on a weaker assumption (the ETH), they imply that k-Coloring cannot be solved in time n^{o(cw)}, even on instances with O(log n) colors.

Cite as

Michael Lampis. Finer Tight Bounds for Coloring on Clique-Width. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 86:1-86:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{lampis:LIPIcs.ICALP.2018.86,
  author =	{Lampis, Michael},
  title =	{{Finer Tight Bounds for Coloring on Clique-Width}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{86:1--86:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.86},
  URN =		{urn:nbn:de:0030-drops-90903},
  doi =		{10.4230/LIPIcs.ICALP.2018.86},
  annote =	{Keywords: Clique-width, SETH, Coloring}
}
Document
A Centralized Local Algorithm for the Sparse Spanning Graph Problem

Authors: Christoph Lenzen and Reut Levi


Abstract
Constructing a sparse spanning subgraph is a fundamental primitive in graph theory. In this paper, we study this problem in the Centralized Local model, where the goal is to decide whether an edge is part of the spanning subgraph by examining only a small part of the input; yet, answers must be globally consistent and independent of prior queries. Unfortunately, maximally sparse spanning subgraphs, i.e., spanning trees, cannot be constructed efficiently in this model. Therefore, we settle for a spanning subgraph containing at most (1+epsilon)n edges (where n is the number of vertices and epsilon is a given approximation/sparsity parameter). We achieve a query complexity of O~(poly(Delta/epsilon)n^{2/3}), where Delta is the maximum degree of the input graph. Our algorithm is the first to do so on arbitrary bounded-degree graphs. Moreover, we achieve the additional property that our algorithm outputs a spanning subgraph of bounded stretch, i.e., distances are approximately preserved. With high probability, for each deleted edge there is a path of O(log n * (Delta+log n)/epsilon) hops in the output that connects its endpoints.

Cite as

Christoph Lenzen and Reut Levi. A Centralized Local Algorithm for the Sparse Spanning Graph Problem. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 87:1-87:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{lenzen_et_al:LIPIcs.ICALP.2018.87,
  author =	{Lenzen, Christoph and Levi, Reut},
  title =	{{A Centralized Local Algorithm for the Sparse Spanning Graph Problem}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{87:1--87:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.87},
  URN =		{urn:nbn:de:0030-drops-90919},
  doi =		{10.4230/LIPIcs.ICALP.2018.87},
  annote =	{Keywords: local, spanning graph, sparse}
}
Document
Chain, Generalization of Covering Code, and Deterministic Algorithm for k-SAT

Authors: Sixue Liu


Abstract
We present the current fastest deterministic algorithm for k-SAT, improving the upper bound (2-2/k)^{n + o(n)} due to Moser and Scheder in STOC 2011. The algorithm combines a branching algorithm with the derandomized local search, whose analysis relies on a special sequence of clauses called chain, and a generalization of covering code based on linear programming. We also provide a more intelligent branching algorithm for 3-SAT to establish the upper bound 1.32793^n, improved from 1.3303^n.

Cite as

Sixue Liu. Chain, Generalization of Covering Code, and Deterministic Algorithm for k-SAT. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 88:1-88:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{liu:LIPIcs.ICALP.2018.88,
  author =	{Liu, Sixue},
  title =	{{Chain, Generalization of Covering Code, and Deterministic Algorithm for k-SAT}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{88:1--88:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.88},
  URN =		{urn:nbn:de:0030-drops-90925},
  doi =		{10.4230/LIPIcs.ICALP.2018.88},
  annote =	{Keywords: Satisfiability, derandomization, local search}
}
Document
Stable-Matching Voronoi Diagrams: Combinatorial Complexity and Algorithms

Authors: Gill Barequet, David Eppstein, Michael T. Goodrich, and Nil Mamano


Abstract
We study algorithms and combinatorial complexity bounds for stable-matching Voronoi diagrams, where a set, S, of n point sites in the plane determines a stable matching between the points in R^2 and the sites in S such that (i) the points prefer sites closer to them and sites prefer points closer to them, and (ii) each site has a quota indicating the area of the set of points that can be matched to it. Thus, a stable-matching Voronoi diagram is a solution to the classic post office problem with the added (realistic) constraint that each post office has a limit on the size of its jurisdiction. Previous work provided existence and uniqueness proofs, but did not analyze its combinatorial or algorithmic complexity. We show that a stable-matching Voronoi diagram of n sites has O(n^{2+epsilon}) faces and edges, for any epsilon>0, and show that this bound is almost tight by giving a family of diagrams with Theta(n^2) faces and edges. We also provide a discrete algorithm for constructing it in O(n^3+n^2f(n)) time, where f(n) is the runtime of a geometric primitive that can be performed in the real-RAM model or can be approximated numerically. This is necessary, as the diagram cannot be computed exactly in an algebraic model of computation.
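
As a toy discrete sketch of the underlying matching (not the paper's O(n^3 + n^2 f(n)) construction): when both sides rank by distance, repeatedly assigning the globally closest pair of an unassigned grid cell and a site with remaining quota yields the stable matching; the grid discretization and all names below are illustrative.

    def stable_grid_matching(sites, grid_w, grid_h, quota):
        # Greedy closest-pair assignment between grid cells and sites with quotas.
        cells = [(x, y) for x in range(grid_w) for y in range(grid_h)]
        pairs = sorted(((cx - sx) ** 2 + (cy - sy) ** 2, (cx, cy), s)
                       for (cx, cy) in cells
                       for s, (sx, sy) in enumerate(sites))
        remaining = {s: quota for s in range(len(sites))}
        owner = {}
        for _, cell, s in pairs:
            if cell not in owner and remaining[s] > 0:
                owner[cell] = s
                remaining[s] -= 1
        return owner

    # Two sites on a 4x4 grid, each allowed to claim half of the 16 cells.
    print(stable_grid_matching([(0, 0), (3, 3)], 4, 4, quota=8))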

Cite as

Gill Barequet, David Eppstein, Michael T. Goodrich, and Nil Mamano. Stable-Matching Voronoi Diagrams: Combinatorial Complexity and Algorithms. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 89:1-89:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{barequet_et_al:LIPIcs.ICALP.2018.89,
  author =	{Barequet, Gill and Eppstein, David and Goodrich, Michael T. and Mamano, Nil},
  title =	{{Stable-Matching Voronoi Diagrams: Combinatorial Complexity and Algorithms}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{89:1--89:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.89},
  URN =		{urn:nbn:de:0030-drops-90937},
  doi =		{10.4230/LIPIcs.ICALP.2018.89},
  annote =	{Keywords: Voronoi diagram, stable matching, combinatorial complexity, lower bounds}
}
Document
Improved Algorithms for Adaptive Compressed Sensing

Authors: Vasileios Nakos, Xiaofei Shi, David P. Woodruff, and Hongyang Zhang


Abstract
In the problem of adaptive compressed sensing, one wants to estimate an approximately k-sparse vector x in R^n from m linear measurements A_1 x, A_2 x,..., A_m x, where A_i can be chosen based on the outcomes A_1 x,..., A_{i-1} x of previous measurements. The goal is to output a vector hat{x} for which |x-hat{x}|_p <= C * min_{k-sparse x'} |x-x'|_q, with probability at least 2/3, where C > 0 is an approximation factor. Indyk, Price and Woodruff (FOCS'11) gave an algorithm for p=q=2 for C = 1+epsilon with O((k/epsilon) loglog (n/k)) measurements and O(log^*(k) loglog (n)) rounds of adaptivity. We first improve their bounds, obtaining a scheme with O(k * loglog (n/k) + (k/epsilon) * loglog(1/epsilon)) measurements and O(log^*(k) loglog (n)) rounds, as well as a scheme with O((k/epsilon) * loglog (n log (n/k))) measurements and an optimal O(loglog (n)) rounds. We then provide novel adaptive compressed sensing schemes with improved bounds for (p,p) for every 0 < p < 2. We show that the improvement from O(k log(n/k)) measurements to O(k log log (n/k)) measurements in the adaptive setting can persist with a better epsilon-dependence for other values of p and q. For example, when (p,q) = (1,1), we obtain O(k/sqrt{epsilon} * log log n log^3 (1/epsilon)) measurements. We obtain nearly matching lower bounds, showing our algorithms are close to optimal. Along the way, we also obtain the first nearly-optimal bounds for (p,p) schemes for every 0 < p < 2 even in the non-adaptive setting.

Cite as

Vasileios Nakos, Xiaofei Shi, David P. Woodruff, and Hongyang Zhang. Improved Algorithms for Adaptive Compressed Sensing. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 90:1-90:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{nakos_et_al:LIPIcs.ICALP.2018.90,
  author =	{Nakos, Vasileios and Shi, Xiaofei and Woodruff, David P. and Zhang, Hongyang},
  title =	{{Improved Algorithms for Adaptive Compressed Sensing}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{90:1--90:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.90},
  URN =		{urn:nbn:de:0030-drops-90945},
  doi =		{10.4230/LIPIcs.ICALP.2018.90},
  annote =	{Keywords: Compressed Sensing, Adaptivity, High-Dimensional Vectors}
}
Document
Approximate Low-Weight Check Codes and Circuit Lower Bounds for Noisy Ground States

Authors: Chinmay Nirkhe, Umesh Vazirani, and Henry Yuen


Abstract
The No Low-Energy Trivial States (NLTS) conjecture of Freedman and Hastings (Quantum Information and Computation 2014), which asserts the existence of local Hamiltonians whose low-energy states cannot be generated by constant-depth quantum circuits, identifies a fundamental obstacle to resolving the quantum PCP conjecture. Progress towards the NLTS conjecture was made by Eldar and Harrow (Foundations of Computer Science 2017), who proved a closely related theorem called No Low-Error Trivial States (NLETS). In this paper, we give a much simpler proof of the NLETS theorem and use the same technique to establish superpolynomial circuit size lower bounds for noisy ground states of local Hamiltonians (assuming QCMA != QMA), resolving an open question of Eldar and Harrow. We discuss the new light our results cast on the relationship between NLTS and NLETS. Finally, our techniques imply the existence of approximate quantum low-weight check (qLWC) codes with linear rate, linear distance, and constant weight checks. These codes are similar to quantum LDPC codes except (1) each particle may participate in a large number of checks, and (2) errors only need to be corrected up to fidelity 1 - 1/poly(n). This stands in contrast to the best-known stabilizer LDPC codes due to Freedman, Meyer, and Luo which achieve a distance of O(sqrt{n log n}). The principal technique used in our results is to leverage the Feynman-Kitaev clock construction to approximately embed a subspace of states defined by a circuit as the ground space of a local Hamiltonian.

Cite as

Chinmay Nirkhe, Umesh Vazirani, and Henry Yuen. Approximate Low-Weight Check Codes and Circuit Lower Bounds for Noisy Ground States. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 91:1-91:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{nirkhe_et_al:LIPIcs.ICALP.2018.91,
  author =	{Nirkhe, Chinmay and Vazirani, Umesh and Yuen, Henry},
  title =	{{Approximate Low-Weight Check Codes and Circuit Lower Bounds for Noisy Ground States}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{91:1--91:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.91},
  URN =		{urn:nbn:de:0030-drops-90950},
  doi =		{10.4230/LIPIcs.ICALP.2018.91},
  annote =	{Keywords: quantum pcps, local hamiltonians, error-correcting codes}
}
Document
Fully Dynamic MIS in Uniformly Sparse Graphs

Authors: Krzysztof Onak, Baruch Schieber, Shay Solomon, and Nicole Wein


Abstract
We consider the problem of maintaining a maximal independent set (MIS) in a dynamic graph subject to edge insertions and deletions. Recently, Assadi, Onak, Schieber and Solomon (STOC 2018) showed that an MIS can be maintained in sublinear (in the dynamically changing number of edges) amortized update time. In this paper we significantly improve the update time for uniformly sparse graphs. Specifically, for graphs with arboricity alpha, the amortized update time of our algorithm is O(alpha^2 * log^2 n), where n is the number of vertices. For low arboricity graphs, which include, for example, minor-free graphs as well as some classes of "real world" graphs, our update time is polylogarithmic. Our update time improves the result of Assadi et al. for all graphs with arboricity bounded by m^{3/8 - epsilon}, for any constant epsilon > 0. This covers much of the range of possible values for arboricity, as the arboricity of a general graph cannot exceed m^{1/2}.
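
As a baseline for what the paper improves, the straightforward way to keep a maximal independent set under edge updates is to repair it locally around the affected endpoints, as in this sketch (illustrative only; the paper's algorithm achieves O(alpha^2 * log^2 n) amortized update time via a more refined scheme).

    class DynamicMIS:
        # Naive fully dynamic maintenance of a maximal independent set.
        def __init__(self, vertices):
            self.adj = {v: set() for v in vertices}
            self.mis = set(vertices)            # empty graph: every vertex is independent

        def _covered(self, v):
            return v in self.mis or any(w in self.mis for w in self.adj[v])

        def insert(self, u, v):
            self.adj[u].add(v); self.adj[v].add(u)
            if u in self.mis and v in self.mis:
                self.mis.discard(v)             # break the conflict
                for w in self.adj[v]:           # greedily re-cover v's neighbourhood
                    if not self._covered(w):
                        self.mis.add(w)

        def delete(self, u, v):
            self.adj[u].discard(v); self.adj[v].discard(u)
            for w in (u, v):                    # an endpoint may have lost its only MIS neighbour
                if not self._covered(w):
                    self.mis.add(w)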

Cite as

Krzysztof Onak, Baruch Schieber, Shay Solomon, and Nicole Wein. Fully Dynamic MIS in Uniformly Sparse Graphs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 92:1-92:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{onak_et_al:LIPIcs.ICALP.2018.92,
  author =	{Onak, Krzysztof and Schieber, Baruch and Solomon, Shay and Wein, Nicole},
  title =	{{Fully Dynamic MIS in Uniformly Sparse Graphs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{92:1--92:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.92},
  URN =		{urn:nbn:de:0030-drops-90968},
  doi =		{10.4230/LIPIcs.ICALP.2018.92},
  annote =	{Keywords: dynamic graph algorithms, independent set, sparse graphs, graph arboricity}
}
Document
Strictly Balancing Matrices in Polynomial Time Using Osborne's Iteration

Authors: Rafail Ostrovsky, Yuval Rabani, and Arman Yousefi


Abstract
Osborne's iteration is a method for balancing n x n matrices which is widely used in linear algebra packages, as balancing preserves eigenvalues and stabilizes their numerical computation. The iteration can be implemented in any norm over R^n, but it is normally used in the L_2 norm. The choice of norm not only affects the desired balance condition, but also defines the iterated balancing step itself. In this paper we focus on Osborne's iteration in any L_p norm, where p < infty. We design a specific implementation of Osborne's iteration in any L_p norm that converges to a strictly epsilon-balanced matrix in O~(epsilon^{-2}n^{9} K) iterations, where K measures, roughly, the number of bits required to represent the entries of the input matrix. This is the first result that proves a variant of Osborne's iteration in the L_2 norm (or any L_p norm, p < infty) strictly balances matrices in polynomial time. This is a substantial improvement over our recent result (in SODA 2017) that showed weak balancing in L_p norms. Previously, Schulman and Sinclair (STOC 2015) showed strict balancing of another variant of Osborne's iteration in the L_infty norm. Their result does not imply any bounds on strict balancing in other norms.
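
For reference, the basic cyclic Osborne update in the L_p norm looks as follows; this is only the textbook iteration, not the paper's specific implementation and stopping rule that guarantee polynomial-time strict epsilon-balancing.

    import numpy as np

    def osborne_balance(A, p=2, sweeps=50, eps=1e-3):
        # Diagonal similarity scaling D A D^{-1}: preserves eigenvalues while
        # equalizing the off-diagonal row and column L_p norms at each index.
        A = np.array(A, dtype=float)
        n = A.shape[0]
        for _ in range(sweeps):
            balanced = True
            for i in range(n):
                row = np.delete(A[i, :], i)          # off-diagonal entries of row i
                col = np.delete(A[:, i], i)          # off-diagonal entries of column i
                r = np.linalg.norm(row, ord=p)
                c = np.linalg.norm(col, ord=p)
                if r == 0 or c == 0:
                    continue
                alpha = (c / r) ** 0.5               # makes the new row and column norms equal
                A[i, :] *= alpha
                A[:, i] /= alpha
                if abs(r - c) > eps * (r + c):
                    balanced = False
            if balanced:
                break
        return A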

Cite as

Rafail Ostrovsky, Yuval Rabani, and Arman Yousefi. Strictly Balancing Matrices in Polynomial Time Using Osborne's Iteration. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 93:1-93:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{ostrovsky_et_al:LIPIcs.ICALP.2018.93,
  author =	{Ostrovsky, Rafail and Rabani, Yuval and Yousefi, Arman},
  title =	{{Strictly Balancing Matrices in Polynomial Time Using Osborne's Iteration}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{93:1--93:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.93},
  URN =		{urn:nbn:de:0030-drops-90976},
  doi =		{10.4230/LIPIcs.ICALP.2018.93},
  annote =	{Keywords: Numerical Linear Algebra, Optimization}
}
Document
Parameterized Algorithms for Zero Extension and Metric Labelling Problems

Authors: Felix Reidl and Magnus Wahlström


Abstract
We consider the problems Zero Extension and Metric Labelling under the paradigm of parameterized complexity. These are natural, well-studied problems with important applications, but have previously not received much attention from this area. Depending on the chosen cost function mu, we find that different algorithmic approaches can be applied to design FPT-algorithms: for arbitrary mu we parameterize by the number of edges that cross the cut (not the cost) and show how to solve Zero Extension in time O(|D|^{O(k^2)} n^4 log n) using randomized contractions. We improve this running time with respect to both parameter and input size to O(|D|^{O(k)} m) in the case where mu is a metric. We further show that the problem admits a polynomial sparsifier, that is, a kernel of size O(k^{|D|+1}) that is independent of the metric mu. With the stronger condition that mu is described by the distances of leaves in a tree, we parameterize by a gap parameter (q - p) between the cost of a true solution q and a `discrete relaxation' p and achieve a running time of O(|D|^{q-p} |T|m + |T|phi(n,m)) where T is the size of the tree over which mu is defined and phi(n,m) is the running time of a max-flow computation. We achieve a similar result for the more general Metric Labelling, while also allowing mu to be the distance metric between an arbitrary subset of nodes in a tree using tools from the theory of VCSPs. We expect the methods used in the latter result to have further applications.

Cite as

Felix Reidl and Magnus Wahlström. Parameterized Algorithms for Zero Extension and Metric Labelling Problems. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 94:1-94:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{reidl_et_al:LIPIcs.ICALP.2018.94,
  author =	{Reidl, Felix and Wahlstr\"{o}m, Magnus},
  title =	{{Parameterized Algorithms for Zero Extension and Metric Labelling Problems}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{94:1--94:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.94},
  URN =		{urn:nbn:de:0030-drops-90989},
  doi =		{10.4230/LIPIcs.ICALP.2018.94},
  annote =	{Keywords: FPT, VCSP, cut problem, gap parameter}
}
Document
An Operational Characterization of Mutual Information in Algorithmic Information Theory

Authors: Andrei Romashchenko and Marius Zimand


Abstract
We show that the mutual information, in the sense of Kolmogorov complexity, of any pair of strings x and y is equal, up to logarithmic precision, to the length of the longest shared secret key that two parties, one having x and the complexity profile of the pair and the other one having y and the complexity profile of the pair, can establish via a probabilistic protocol with interaction on a public channel. For l > 2, the longest shared secret that can be established from a tuple of strings (x_1, ..., x_l) by l parties, each one having one component of the tuple and the complexity profile of the tuple, is equal, up to logarithmic precision, to the complexity of the tuple minus the minimum communication necessary for distributing the tuple to all parties. We establish the communication complexity of secret key agreement protocols that produce a secret key of maximal length, for protocols with public randomness. We also show that if the communication complexity drops below the established threshold then only very short secret keys can be obtained.

Cite as

Andrei Romashchenko and Marius Zimand. An Operational Characterization of Mutual Information in Algorithmic Information Theory. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 95:1-95:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{romashchenko_et_al:LIPIcs.ICALP.2018.95,
  author =	{Romashchenko, Andrei and Zimand, Marius},
  title =	{{An Operational Characterization of Mutual Information in Algorithmic Information Theory}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{95:1--95:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.95},
  URN =		{urn:nbn:de:0030-drops-90998},
  doi =		{10.4230/LIPIcs.ICALP.2018.95},
  annote =	{Keywords: Kolmogorov complexity, mutual information, communication complexity, secret key agreement}
}
Document
Privacy Preserving Clustering with Constraints

Authors: Clemens Rösner and Melanie Schmidt


Abstract
The k-center problem is a classical combinatorial optimization problem which asks to find k centers such that the maximum distance of any input point in a set P to its assigned center is minimized. The problem allows for elegant 2-approximations. However, the situation becomes significantly more difficult when constraints are added to the problem. We raise the question whether general methods can be derived to turn an approximation algorithm for a clustering problem with some constraints into an approximation algorithm that respects one constraint more. Our constraint of choice is privacy: here, we are asked to open a center only when at least l clients will be assigned to it. We show how to combine privacy with several other constraints.
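
For background, the classical farthest-point greedy gives the elegant 2-approximation for unconstrained k-center mentioned above; the paper's contribution is adding the privacy constraint (each open center must serve at least l clients) on top of such algorithms. A minimal sketch of that unconstrained greedy, with an illustrative point set:

    import math

    def gonzalez_k_center(points, k):
        # Farthest-point greedy: a classical 2-approximation for (unconstrained) k-center.
        centers = [points[0]]                                   # arbitrary first center
        while len(centers) < k:
            farthest = max(points, key=lambda p: min(math.dist(p, c) for c in centers))
            centers.append(farthest)
        radius = max(min(math.dist(p, c) for c in centers) for p in points)
        return centers, radius

    pts = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0)]
    print(gonzalez_k_center(pts, k=3))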

Cite as

Clemens Rösner and Melanie Schmidt. Privacy Preserving Clustering with Constraints. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 96:1-96:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{rosner_et_al:LIPIcs.ICALP.2018.96,
  author =	{R\"{o}sner, Clemens and Schmidt, Melanie},
  title =	{{Privacy Preserving Clustering with Constraints}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{96:1--96:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.96},
  URN =		{urn:nbn:de:0030-drops-91008},
  doi =		{10.4230/LIPIcs.ICALP.2018.96},
  annote =	{Keywords: Clustering, k-center, Constraints, Privacy, Lower Bounds, Fairness}
}
Document
NC Algorithms for Weighted Planar Perfect Matching and Related Problems

Authors: Piotr Sankowski


Abstract
Consider a planar graph G=(V,E) with polynomially bounded edge weight function w:E -> [0, poly(n)]. The main results of this paper are NC algorithms for finding a minimum weight perfect matching in G. In order to solve this problem we develop a new relatively simple but versatile framework that is combinatorial in spirit. It handles the combinatorial structure of matchings directly and needs to only know weights of appropriately defined matchings from algebraic subroutines. Moreover, using novel planarity preserving reductions, we show how to find: a maximum weight matching in G when G is bipartite; a maximum multiple-source multiple-sink flow in G where c:E -> [1, poly(n)] is a polynomially bounded edge capacity function; a minimum weight f-factor in G where f:V -> [1, poly(n)]; and a min-cost flow in G where c:E -> [1, poly(n)] is a polynomially bounded edge capacity function and b:V -> [1, poly(n)] is a polynomially bounded vertex demand function. No NC algorithms were previously known for these problems.

Cite as

Piotr Sankowski. NC Algorithms for Weighted Planar Perfect Matching and Related Problems. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 97:1-97:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{sankowski:LIPIcs.ICALP.2018.97,
  author =	{Sankowski, Piotr},
  title =	{{NC Algorithms for Weighted Planar Perfect Matching and Related Problems}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{97:1--97:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.97},
  URN =		{urn:nbn:de:0030-drops-91011},
  doi =		{10.4230/LIPIcs.ICALP.2018.97},
  annote =	{Keywords: planar graph, NC algorithms, maximum cardinality matching, maximum weight matching, min-cost flow, maximum multiple-source multiple-sink flow, f-factors}
}
Document
Computing Tutte Paths

Authors: Andreas Schmid and Jens M. Schmidt


Abstract
Tutte paths are one of the most successful tools for attacking problems on long cycles in planar graphs. Unfortunately, results based on them are non-constructive, as their proofs inherently use an induction on overlapping subgraphs and these overlaps prevent any attempt to bound the running time by a polynomial. For special cases, however, computational results on Tutte paths are known: for 4-connected planar graphs, Tutte paths are in fact Hamiltonian paths and Chiba and Nishizeki [N. Chiba and T. Nishizeki, 1989] showed how to compute such paths in linear time. For 3-connected planar graphs, Tutte paths have a significantly more complicated structure, and it has only recently been shown that they can be computed in polynomial time [A. Schmid and J. M. Schmidt, 2015]. However, Tutte paths are defined for general 2-connected planar graphs and this is what most applications need. In this unrestricted setting, no computational results for Tutte paths are known. We give the first efficient algorithm that computes a Tutte path (in this unrestricted setting). One of the strongest existence results about such Tutte paths is due to Sanders [D. P. Sanders, 1997], which allows one to prescribe the end vertices and an intermediate edge of the desired path. Encompassing and strengthening all previous computational results on Tutte paths, we show how to compute such a special Tutte path efficiently. Our method refines both the existence result of Thomassen [C. Thomassen, 1983] and that of Sanders [D. P. Sanders, 1997], and, using a novel iterative decomposition along 2-separators, avoids the situation in which the subgraphs arising in the inductive proof intersect in more than one edge. Finally, we show that our algorithm runs in time O(n^2).

Cite as

Andreas Schmid and Jens M. Schmidt. Computing Tutte Paths. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 98:1-98:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{schmid_et_al:LIPIcs.ICALP.2018.98,
  author =	{Schmid, Andreas and Schmidt, Jens M.},
  title =	{{Computing Tutte Paths}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{98:1--98:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.98},
  URN =		{urn:nbn:de:0030-drops-91029},
  doi =		{10.4230/LIPIcs.ICALP.2018.98},
  annote =	{Keywords: Tutte Path, Tutte Cycle, 2-Connected Planar Graph, Hamiltonian Cycle}
}
Document
A New Approximation Guarantee for Monotone Submodular Function Maximization via Discrete Convexity

Authors: Tasuku Soma and Yuichi Yoshida


Abstract
In monotone submodular function maximization, approximation guarantees based on the curvature of the objective function have been extensively studied in the literature. However, the notion of curvature is often pessimistic, and we rarely obtain improved approximation guarantees, even for very simple objective functions. In this paper, we provide a novel approximation guarantee by extracting an M^{natural}-concave function h:2^E -> R_+, a notion in discrete convex analysis, from the objective function f:2^E -> R_+. We introduce a novel notion called the M^{natural}-concave curvature of a given set function f, which measures how much f deviates from an M^{natural}-concave function, and show that we can obtain a (1-gamma/e-epsilon)-approximation to the problem of maximizing f under a cardinality constraint in polynomial time, where gamma is the value of the M^{natural}-concave curvature and epsilon > 0 is an arbitrary constant. Then, we show that we can obtain nontrivial approximation guarantees for various problems by applying the proposed algorithm.
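
For orientation, the standard greedy for maximizing a monotone submodular function under a cardinality constraint (whose guarantee the paper refines via the M^{natural}-concave curvature gamma) can be sketched as follows; the coverage objective in the example is purely illustrative.

    def greedy_max(f, ground_set, k):
        # Standard greedy for max f(S) s.t. |S| <= k, with f monotone submodular,
        # given as a set function on subsets of ground_set.
        S = set()
        for _ in range(k):
            best, best_gain = None, 0.0
            for e in ground_set - S:
                gain = f(S | {e}) - f(S)             # marginal gain of adding e
                if gain > best_gain:
                    best, best_gain = e, gain
            if best is None:
                break
            S.add(best)
        return S

    # Example: a small coverage function (monotone submodular).
    sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
    f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
    print(greedy_max(f, set(sets), 2))               # {1, 3}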

Cite as

Tasuku Soma and Yuichi Yoshida. A New Approximation Guarantee for Monotone Submodular Function Maximization via Discrete Convexity. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 99:1-99:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{soma_et_al:LIPIcs.ICALP.2018.99,
  author =	{Soma, Tasuku and Yoshida, Yuichi},
  title =	{{A New Approximation Guarantee for Monotone Submodular Function Maximization via Discrete Convexity}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{99:1--99:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.99},
  URN =		{urn:nbn:de:0030-drops-91033},
  doi =		{10.4230/LIPIcs.ICALP.2018.99},
  annote =	{Keywords: Submodular Function, Approximation Algorithm, Discrete Convex Analysis}
}
Document
Ring Packing and Amortized FHEW Bootstrapping

Authors: Daniele Micciancio and Jessica Sorrell


Abstract
The FHEW fully homomorphic encryption scheme (Ducas and Micciancio, Eurocrypt 2015) offers very fast homomorphic NAND-gate computations (on encrypted data) and a relatively fast refreshing procedure that allows one to homomorphically evaluate arbitrary NAND boolean circuits. Unfortunately, the refreshing procedure needs to be executed after every single NAND computation, and each refreshing operates on a single encrypted bit, greatly decreasing the overall throughput of the scheme. We give a new refreshing procedure that simultaneously refreshes n FHEW ciphertexts, at a cost comparable to a single-bit FHEW refreshing operation. As a result, the cost of each refreshing is amortized over n encrypted bits, improving the throughput for the homomorphic evaluation of boolean circuits roughly by a factor of n.

Cite as

Daniele Micciancio and Jessica Sorrell. Ring Packing and Amortized FHEW Bootstrapping. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 100:1-100:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{miccianco_et_al:LIPIcs.ICALP.2018.100,
  author =	{Micciancio, Daniele and Sorrell, Jessica},
  title =	{{Ring Packing and Amortized FHEW Bootstrapping}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{100:1--100:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.100},
  URN =		{urn:nbn:de:0030-drops-91047},
  doi =		{10.4230/LIPIcs.ICALP.2018.100},
  annote =	{Keywords: homomorphic encryption, bootstrapping, lattice-based cryptography}
}
Document
Semi-random Graphs with Planted Sparse Vertex Cuts: Algorithms for Exact and Approximate Recovery

Authors: Anand Louis and Rakesh Venkat


Abstract
The problem of computing the vertex expansion of a graph is an NP-hard problem. The current best worst-case approximation guarantees for computing the vertex expansion of a graph are an O(sqrt{log n})-approximation algorithm due to Feige et al. [Uriel Feige et al., 2008], and an O(sqrt{OPT log d}) bound for graphs having vertex degrees at most d due to Louis et al. [Louis et al., 2013]. We study a natural semi-random model of graphs with sparse vertex cuts. For certain ranges of parameters, we give an algorithm to recover the planted sparse vertex cut exactly. For a larger range of parameters, we give a constant factor bi-criteria approximation algorithm to compute the graph's balanced vertex expansion. Our algorithms are based on studying a semidefinite programming relaxation for the balanced vertex expansion of the graph. Besides providing a family of instances that will help us better understand the complexity of computing vertex expansion, our model can also be used in the study of community detection, where only a few nodes from each community interact with nodes from other communities. There has been a lot of work on studying random and semi-random graphs with planted sparse edge cuts. To the best of our knowledge, our model of semi-random graphs with planted sparse vertex cuts has not been studied before.

Cite as

Anand Louis and Rakesh Venkat. Semi-random Graphs with Planted Sparse Vertex Cuts: Algorithms for Exact and Approximate Recovery. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 101:1-101:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{louis_et_al:LIPIcs.ICALP.2018.101,
  author =	{Louis, Anand and Venkat, Rakesh},
  title =	{{Semi-random Graphs with Planted Sparse Vertex Cuts: Algorithms for Exact and Approximate Recovery}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{101:1--101:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.101},
  URN =		{urn:nbn:de:0030-drops-91057},
  doi =		{10.4230/LIPIcs.ICALP.2018.101},
  annote =	{Keywords: Semi-Random models, Vertex Expansion, Approximation Algorithms, Beyond Worst Case Analysis}
}
Document
Load Thresholds for Cuckoo Hashing with Overlapping Blocks

Authors: Stefan Walzer


Abstract

Cite as

Stefan Walzer. Load Thresholds for Cuckoo Hashing with Overlapping Blocks. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 102:1-102:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{walzer:LIPIcs.ICALP.2018.102,
  author =	{Walzer, Stefan},
  title =	{{Load Thresholds for Cuckoo Hashing with Overlapping Blocks}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{102:1--102:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.102},
  URN =		{urn:nbn:de:0030-drops-91068},
  doi =		{10.4230/LIPIcs.ICALP.2018.102},
  annote =	{Keywords: Cuckoo Hashing, Unaligned Blocks, Hypergraph Orientability, Load Thresholds, Randomised Algorithms}
}
Document
Brief Announcement
Brief Announcement: On Secure m-Party Computation, Commuting Permutation Systems and Unassisted Non-Interactive MPC

Authors: Navneet Agarwal, Sanat Anand, and Manoj Prabhakaran


Abstract
A fundamental problem in the theory of secure multi-party computation (MPC) is to characterize functions with more than 2 parties which admit MPC protocols with information-theoretic security against passive corruption. This question has seen little progress since the work of Chor and Ishai (2001), which demonstrated difficulties in resolving it. In this work, we make significant progress towards resolving this question in the important case of aggregating functionalities, in which m parties P1,...,Pm hold inputs x1,...,xm and an aggregating party P0 must learn f(x1,...,xm). We give a necessary condition and a slightly stronger sufficient condition for f to admit a secure protocol. Both the conditions are stated in terms of an algebraic structure we introduce called Commuting Permutations Systems (CPS), which may be of independent combinatorial interest. When our sufficiency condition is met, we obtain a perfectly secure protocol with minimal interaction, that fits the model of Non-Interactive MPC or NIMPC (Beimel et al., 2014), but without the need for a trusted party to generate correlated randomness. We define Unassisted Non-Interactive MPC (UNIMPC) to capture this variant. We also present an NIMPC protocol for all functionalities, which is simpler and more efficient than the one given in the prior work.

Cite as

Navneet Agarwal, Sanat Anand, and Manoj Prabhakaran. Brief Announcement: On Secure m-Party Computation, Commuting Permutation Systems and Unassisted Non-Interactive MPC. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 103:1-103:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{agarwal_et_al:LIPIcs.ICALP.2018.103,
  author =	{Agarwal, Navneet and Anand, Sanat and Prabhakaran, Manoj},
  title =	{{Brief Announcement: On Secure m-Party Computation, Commuting Permutation Systems and Unassisted Non-Interactive MPC}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{103:1--103:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.103},
  URN =		{urn:nbn:de:0030-drops-91079},
  doi =		{10.4230/LIPIcs.ICALP.2018.103},
  annote =	{Keywords: Secure Multi-Party Computation, Combinatorial Characterization, Latin Hypercube, Permutation Hypercube Complex}
}
Document
Brief Announcement
Brief Announcement: Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network

Authors: Amy Babay, Michael Dinitz, and Zeyu Zhang


Abstract
We consider the Shallow-Light Steiner Network problem from a fixed-parameter perspective. Given a graph G, a distance bound L, and p pairs of vertices {(s_i,t_i)}_{i in [p]}, the objective is to find a minimum-cost subgraph G' such that s_i and t_i have distance at most L in G' (for every i in [p]). Our main result is on the fixed-parameter tractability of this problem for parameter p. We exactly characterize the demand structures that make the problem "easy", and give FPT algorithms for those cases. In all other cases, we show that the problem is W[1]-hard. We also extend our results to handle general edge lengths and costs, precisely characterizing which demands allow for good FPT approximation algorithms and which demands remain W[1]-hard even to approximate.

Cite as

Amy Babay, Michael Dinitz, and Zeyu Zhang. Brief Announcement: Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 104:1-104:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{babay_et_al:LIPIcs.ICALP.2018.104,
  author =	{Babay, Amy and Dinitz, Michael and Zhang, Zeyu},
  title =	{{Brief Announcement: Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{104:1--104:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.104},
  URN =		{urn:nbn:de:0030-drops-91083},
  doi =		{10.4230/LIPIcs.ICALP.2018.104},
  annote =	{Keywords: Shallow-Light, Steiner Network, Fixed-Parameter Tractability}
}
Document
Brief Announcement
Brief Announcement: Zero-Knowledge Protocols for Search Problems

Authors: Ben Berger and Zvika Brakerski


Abstract
We consider natural ways to extend the notion of Zero-Knowledge (ZK) Proofs beyond decision problems. Specifically, we consider search problems, and define zero-knowledge proofs in this context as interactive protocols in which the prover can establish the correctness of a solution to a given instance without the verifier learning anything beyond the intended solution, even if the verifier deviates from the protocol. The goal of this work is to initiate a study of Search Zero-Knowledge (search-ZK), the class of search problems for which such systems exist. This class trivially contains search problems where the validity of a solution can be efficiently verified (using a single message proof containing only the solution). A slightly less obvious, but still straightforward, way to obtain zero-knowledge proofs for search problems is to let the prover send a solution and prove in zero-knowledge that the instance-solution pair is valid. However, there may be other ways to obtain such zero-knowledge proofs, and they may be more advantageous. In fact, we prove that there are search problems for which the aforementioned approach fails, but search zero-knowledge protocols still exist. On the other hand, we show sufficient conditions for search problems under which some form of zero-knowledge can be obtained using the straightforward way.

Cite as

Ben Berger and Zvika Brakerski. Brief Announcement: Zero-Knowledge Protocols for Search Problems. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 105:1-105:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{berger_et_al:LIPIcs.ICALP.2018.105,
  author =	{Berger, Ben and Brakerski, Zvika},
  title =	{{Brief Announcement: Zero-Knowledge Protocols for Search Problems}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{105:1--105:5},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.105},
  URN =		{urn:nbn:de:0030-drops-91099},
  doi =		{10.4230/LIPIcs.ICALP.2018.105},
  annote =	{Keywords: Zero-Knowledge, Search Problems, Interactive Proofs}
}
Document
Brief Announcement
Brief Announcement: Relaxed Locally Correctable Codes in Computationally Bounded Channels

Authors: Jeremiah Blocki, Venkata Gandikota, Elena Grigorescu, and Samson Zhou


Abstract
We study variants of locally decodable and locally correctable codes in computationally bounded, adversarial channels, under the assumption that collision-resistant hash functions exist, and with no public-key or private-key cryptographic setup. Specifically, we provide constructions of relaxed locally correctable and relaxed locally decodable codes over the binary alphabet, with constant information rate, and poly-logarithmic locality. Our constructions compare favorably with existing schemes built under much stronger cryptographic assumptions, and with their classical analogues in the computationally unbounded, Hamming channel. Our constructions crucially employ collision-resistant hash functions and local expander graphs, extending ideas from recent cryptographic constructions of memory-hard functions.

Cite as

Jeremiah Blocki, Venkata Gandikota, Elena Grigorescu, and Samson Zhou. Brief Announcement: Relaxed Locally Correctable Codes in Computationally Bounded Channels. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 106:1-106:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{blocki_et_al:LIPIcs.ICALP.2018.106,
  author =	{Blocki, Jeremiah and Gandikota, Venkata and Grigorescu, Elena and Zhou, Samson},
  title =	{{Brief Announcement: Relaxed Locally Correctable Codes in Computationally Bounded Channels}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{106:1--106:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.106},
  URN =		{urn:nbn:de:0030-drops-91102},
  doi =		{10.4230/LIPIcs.ICALP.2018.106},
  annote =	{Keywords: Relaxed locally correctable codes, computationally bounded channels, local expanders}
}
Document
Brief Announcement
Brief Announcement: Approximation Schemes for Geometric Coverage Problems

Authors: Steven Chaplick, Minati De, Alexander Ravsky, and Joachim Spoerhase


Abstract
In this announcement, we show that the classical Maximum Coverage problem (MC) admits a PTAS via local search in essentially all cases where the corresponding instances of Set Cover (SC) admit a PTAS via the local search approach by Mustafa and Ray [Nabil H. Mustafa and Saurabh Ray, 2010]. As a corollary, we answer an open question by Badanidiyuru, Kleinberg, and Lee [Ashwinkumar Badanidiyuru et al., 2012] regarding half-spaces in R^3 thereby settling the existence of PTASs for essentially all natural cases of geometric MC problems. As an intermediate result, we show a color-balanced version of the classical planar subdivision theorem by Frederickson [Greg N. Frederickson, 1987]. We believe that some of our ideas may be useful for analyzing local search in other settings involving a hard cardinality constraint.
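
As a concrete, non-geometric point of reference for the local-search approach mentioned above, the following sketch repeatedly swaps one chosen set for an unchosen one while this increases coverage. The instance, the budget k, and the single-swap rule are illustrative assumptions, not the paper's PTAS (which concerns geometric instances and larger swaps analysed via balanced separators).

from itertools import combinations

def local_search_max_coverage(sets, k):
    # Swap-based local search for Maximum Coverage: keep k sets and,
    # while some single swap increases the number of covered elements,
    # perform it.  Illustrative only.
    chosen = list(range(min(k, len(sets))))
    def covered(idx):
        return set().union(*(sets[i] for i in idx)) if idx else set()
    improved = True
    while improved:
        improved = False
        for i in chosen:
            for j in range(len(sets)):
                if j in chosen:
                    continue
                candidate = [j if x == i else x for x in chosen]
                if len(covered(candidate)) > len(covered(chosen)):
                    chosen = candidate
                    improved = True
    return chosen, covered(chosen)

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(local_search_max_coverage(sets, 2))   # ([0, 2], {1, 2, 3, 4, 5, 6})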

Cite as

Steven Chaplick, Minati De, Alexander Ravsky, and Joachim Spoerhase. Brief Announcement: Approximation Schemes for Geometric Coverage Problems. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 107:1-107:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chaplick_et_al:LIPIcs.ICALP.2018.107,
  author =	{Chaplick, Steven and De, Minati and Ravsky, Alexander and Spoerhase, Joachim},
  title =	{{Brief Announcement: Approximation Schemes for Geometric Coverage Problems}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{107:1--107:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.107},
  URN =		{urn:nbn:de:0030-drops-91113},
  doi =		{10.4230/LIPIcs.ICALP.2018.107},
  annote =	{Keywords: balanced separators, maximum coverage, local search, approximation scheme, geometric approximation algorithms}
}
Document
Brief Announcement
Brief Announcement: Bayesian Auctions with Efficient Queries

Authors: Jing Chen, Bo Li, Yingkai Li, and Pinyan Lu


Abstract
Generating good revenue is one of the most important problems in Bayesian auction design, and many (approximately) optimal dominant-strategy incentive compatible (DSIC) Bayesian mechanisms have been constructed for various auction settings. However, most existing studies do not consider the complexity for the seller to carry out the mechanism. It is assumed that the seller knows "each single bit" of the distributions and is able to optimize perfectly based on the entire distributions. Unfortunately, this is a strong assumption and may not hold in reality: for example, when the value distributions have exponentially large supports or do not have succinct representations. In this work we consider, for the first time, the query complexity of Bayesian mechanisms. We only allow the seller to have limited oracle accesses to the players' value distributions, via quantile queries and value queries. For a large class of auction settings, we prove logarithmic lower bounds on the query complexity needed for any DSIC Bayesian mechanism to achieve any constant approximation to the optimal revenue. For single-item auctions and multi-item auctions with unit-demand or additive valuation functions, we prove tight upper bounds via efficient query schemes, without requiring the distributions to be regular or have monotone hazard rate. Thus, in those auction settings the seller needs to access much less than the full distributions in order to achieve approximately optimal revenue.
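
As a toy illustration of the oracle model described above, the sketch below exposes a discrete value distribution only through two maps, one from quantiles to values and one from values to quantiles. The example distribution and the function names are assumptions, and which of the paper's two query names corresponds to which direction is deliberately left open here.

import bisect

# Hypothetical discrete value distribution, given by its sorted support
# and cumulative probabilities Pr[X <= values[i]] (made-up numbers).
values = [1, 2, 5, 10]
cdf    = [0.4, 0.7, 0.9, 1.0]

def value_of_quantile(q):
    # smallest value v in the support with Pr[X <= v] >= q
    return values[bisect.bisect_left(cdf, q)]

def quantile_of_value(v):
    # Pr[X <= v] for a queried value v
    i = bisect.bisect_right(values, v)
    return cdf[i - 1] if i > 0 else 0.0

print(value_of_quantile(0.8), quantile_of_value(5))   # 5 0.9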

Cite as

Jing Chen, Bo Li, Yingkai Li, and Pinyan Lu. Brief Announcement: Bayesian Auctions with Efficient Queries. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 108:1-108:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ICALP.2018.108,
  author =	{Chen, Jing and Li, Bo and Li, Yingkai and Lu, Pinyan},
  title =	{{Brief Announcement: Bayesian Auctions with Efficient Queries}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{108:1--108:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.108},
  URN =		{urn:nbn:de:0030-drops-91124},
  doi =		{10.4230/LIPIcs.ICALP.2018.108},
  annote =	{Keywords: The complexity of Bayesian mechanisms, quantile queries, value queries}
}
Document
Brief Announcement
Brief Announcement: Hamming Distance Completeness and Sparse Matrix Multiplication

Authors: Daniel Graf, Karim Labib, and Przemyslaw Uznanski


Abstract
We show that a broad class of (+, diamond) vector products (for binary integer functions diamond) are equivalent under one-to-polylog reductions to the computation of the Hamming distance. Examples include: the dominance product, the threshold product and l_{2p+1} distances for constant p. Our results imply equivalence (up to poly log n factors) between complexity of computation of All Pairs: Hamming Distances, l_{2p+1} Distances, Dominance Products and Threshold Products. As a consequence, Yuster's (SODA'09) algorithm improves not only Matousek's (IPL'91), but also the results of Indyk, Lewenstein, Lipsky and Porat (ICALP'04) and Min, Kao and Zhu (COCOON'09). Furthermore, our reductions apply to the pattern matching setting, showing equivalence (up to poly log n factors) between pattern matching under Hamming Distance, l_{2p+1} Distance, Dominance Product and Threshold Product, with current best upper bounds due to results of Abrahamson (SICOMP'87), Amir and Farach (Ann. Math. Artif. Intell.'91), Atallah and Duket (IPL'11), Clifford, Clifford and Iliopoulos (CPM'05) and Amir, Lipsky, Porat and Umanski (CPM'05). The resulting algorithms for l_{2p+1} Pattern Matching and All Pairs l_{2p+1}, for 2p+1 = 3,5,7,... are new. Additionally, we show that the complexity of AllPairsHammingDistances (and thus of other aforementioned AllPairs- problems) is within poly log n from the time it takes to multiply matrices n x (n * d) and (n * d) x n, each with (n * d) non-zero entries. This means that the current upper bounds by Yuster (SODA'09) cannot be improved without improving the sparse matrix multiplication algorithm by Yuster and Zwick (ACM TALG'05) and vice versa.
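
The connection to matrix multiplication can be made concrete with a short numpy sketch of the folklore identity for 0/1 vectors (an illustration, not the announcement's reduction): all pairwise Hamming distances are obtained from two matrix products.

import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.integers(0, 2, size=(n, d))   # binary row vectors
Y = rng.integers(0, 2, size=(n, d))

# H[i, j] = number of coordinates where X[i] and Y[j] differ
H = X @ (1 - Y).T + (1 - X) @ Y.T

# sanity check against the direct definition
H_direct = np.array([[np.sum(X[i] != Y[j]) for j in range(n)] for i in range(n)])
assert (H == H_direct).all()
print(H)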

Cite as

Daniel Graf, Karim Labib, and Przemyslaw Uznanski. Brief Announcement: Hamming Distance Completeness and Sparse Matrix Multiplication. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 109:1-109:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{graf_et_al:LIPIcs.ICALP.2018.109,
  author =	{Graf, Daniel and Labib, Karim and Uznanski, Przemyslaw},
  title =	{{Brief Announcement: Hamming Distance Completeness and Sparse Matrix Multiplication}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{109:1--109:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.109},
  URN =		{urn:nbn:de:0030-drops-91134},
  doi =		{10.4230/LIPIcs.ICALP.2018.109},
  annote =	{Keywords: fine-grained complexity, matrix multiplication, high dimensional geometry, pattern matching}
}
Document
Brief Announcement
Brief Announcement: Treewidth Modulator: Emergency Exit for DFVS

Authors: Daniel Lokshtanov, M. S. Ramanujan, Saket Saurabh, Roohani Sharma, and Meirav Zehavi


Abstract
In the Directed Feedback Vertex Set (DFVS) problem, we are given as input a directed graph D and an integer k, and the objective is to check whether there exists a set S of at most k vertices such that F=D-S is a directed acyclic graph (DAG). Determining whether DFVS admits a polynomial kernel (parameterized by the solution size) is one of the most important open problems in parameterized complexity. In this article, we give a polynomial kernel for DFVS parameterized by the solution size plus the size of any treewidth-eta modulator, for any positive integer eta. We also give a polynomial kernel for the problem, which we call Vertex Deletion to treewidth-eta DAG, where given as input a directed graph D and a positive integer k, the objective is to decide whether there exists a set of at most k vertices, say S, such that D-S is a DAG and the treewidth of D-S is at most eta.

Cite as

Daniel Lokshtanov, M. S. Ramanujan, Saket Saurabh, Roohani Sharma, and Meirav Zehavi. Brief Announcement: Treewidth Modulator: Emergency Exit for DFVS. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 110:1-110:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{lokshtanov_et_al:LIPIcs.ICALP.2018.110,
  author =	{Lokshtanov, Daniel and Ramanujan, M. S. and Saurabh, Saket and Sharma, Roohani and Zehavi, Meirav},
  title =	{{Brief Announcement: Treewidth Modulator: Emergency Exit for DFVS}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{110:1--110:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.110},
  URN =		{urn:nbn:de:0030-drops-91146},
  doi =		{10.4230/LIPIcs.ICALP.2018.110},
  annote =	{Keywords: Polynomial Kernel, Directed Feedback Vertex Set, Treewidth Modulator}
}
Document
Brief Announcement
Brief Announcement: Erasure-Resilience Versus Tolerance to Errors

Authors: Sofya Raskhodnikova and Nithin Varma


Abstract
We describe work in progress on providing a separation between erasure-resilient and tolerant property testing. Specifically, we are able to exhibit a property which is testable (with the number of queries independent of the length of the input) in the presence of erasures, but is not testable tolerantly.

Cite as

Sofya Raskhodnikova and Nithin Varma. Brief Announcement: Erasure-Resilience Versus Tolerance to Errors. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 111:1-111:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{raskhodnikova_et_al:LIPIcs.ICALP.2018.111,
  author =	{Raskhodnikova, Sofya and Varma, Nithin},
  title =	{{Brief Announcement: Erasure-Resilience Versus Tolerance to Errors}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{111:1--111:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.111},
  URN =		{urn:nbn:de:0030-drops-91154},
  doi =		{10.4230/LIPIcs.ICALP.2018.111},
  annote =	{Keywords: Property testing, erasures, tolerance to errors, model separation}
}
Document
Brief Announcement
Brief Announcement: Bounded-Degree Cut is Fixed-Parameter Tractable

Authors: Mingyu Xiao and Hiroshi Nagamochi


Abstract
In the bounded-degree cut problem, we are given a multigraph G=(V,E), two disjoint vertex subsets A,B subseteq V, two functions u_A, u_B:V -> {0,1,...,|E|} on V, and an integer k >= 0. The task is to determine whether there is a minimal (A,B)-cut (V_A,V_B) of size at most k such that the degree of each vertex v in V_A in the induced subgraph G[V_A] is at most u_A(v) and the degree of each vertex v in V_B in the induced subgraph G[V_B] is at most u_B(v). In this paper, we show that the bounded-degree cut problem is fixed-parameter tractable by giving a 2^{18k}|G|^{O(1)}-time algorithm. This is the first single-exponential FPT algorithm for this problem. The core of the algorithm lies in two new lemmas based on important cuts, which give some upper bounds on the number of candidates for vertex subsets in one part of a minimal cut satisfying some properties. These lemmas can be used to design fixed-parameter tractable algorithms for more related problems.
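
For concreteness, the small verifier below reflects one reading of the definition above: edges crossing the partition are counted as the cut size, and minimality of the cut is not checked. It is an illustrative assumption, not code from the paper.

def is_valid_bounded_degree_cut(edges, V_A, V_B, A, B, u_A, u_B, k):
    # edges: list of (u, v) pairs of a multigraph; (V_A, V_B): candidate partition
    V_A, V_B = set(V_A), set(V_B)
    if not (set(A) <= V_A and set(B) <= V_B and V_A.isdisjoint(V_B)):
        return False
    crossing = sum(1 for u, v in edges if (u in V_A) != (v in V_A))
    if crossing > k:                       # cut size bound
        return False
    def induced_degrees(side):
        deg = {v: 0 for v in side}
        for u, v in edges:
            if u in side and v in side:    # edge of the induced subgraph
                deg[u] += 1
                deg[v] += 1
        return deg
    degA, degB = induced_degrees(V_A), induced_degrees(V_B)
    return all(degA[v] <= u_A(v) for v in V_A) and all(degB[v] <= u_B(v) for v in V_B)

# Example: path 1-2-3-4 with A={1}, B={4}, unit degree bounds, k=1
print(is_valid_bounded_degree_cut([(1, 2), (2, 3), (3, 4)],
                                  {1, 2}, {3, 4}, {1}, {4},
                                  lambda v: 1, lambda v: 1, 1))   # True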

Cite as

Mingyu Xiao and Hiroshi Nagamochi. Brief Announcement: Bounded-Degree Cut is Fixed-Parameter Tractable. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 112:1-112:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{xiao_et_al:LIPIcs.ICALP.2018.112,
  author =	{Xiao, Mingyu and Nagamochi, Hiroshi},
  title =	{{Brief Announcement: Bounded-Degree Cut is Fixed-Parameter Tractable}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{112:1--112:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.112},
  URN =		{urn:nbn:de:0030-drops-91164},
  doi =		{10.4230/LIPIcs.ICALP.2018.112},
  annote =	{Keywords: FPT, Important Cuts, Graph Cuts, Graph Algorithms}
}
Document
Almost Sure Productivity

Authors: Alejandro Aguirre, Gilles Barthe, Justin Hsu, and Alexandra Silva


Abstract
We introduce Almost Sure Productivity (ASP), a probabilistic generalization of the productivity condition for coinductively defined structures. Intuitively, a probabilistic coinductive stream or tree is ASP if it produces infinitely many outputs with probability 1. Formally, we define ASP using a final coalgebra semantics of programs inspired by Kerstan and König. Then, we introduce a core language for probabilistic streams and trees, and provide two approaches to verify ASP: a syntactic sufficient criterion, and a decision procedure by reduction to model-checking LTL formulas on probabilistic pushdown automata.

Cite as

Alejandro Aguirre, Gilles Barthe, Justin Hsu, and Alexandra Silva. Almost Sure Productivity. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 113:1-113:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{aguirre_et_al:LIPIcs.ICALP.2018.113,
  author =	{Aguirre, Alejandro and Barthe, Gilles and Hsu, Justin and Silva, Alexandra},
  title =	{{Almost Sure Productivity}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{113:1--113:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.113},
  URN =		{urn:nbn:de:0030-drops-91174},
  doi =		{10.4230/LIPIcs.ICALP.2018.113},
  annote =	{Keywords: Coinduction, Probabilistic Programming, Productivity}
}
Document
O-Minimal Invariants for Linear Loops

Authors: Shaull Almagor, Dmitry Chistikov, Joël Ouaknine, and James Worrell


Abstract
The termination analysis of linear loops plays a key rôle in several areas of computer science, including program verification and abstract interpretation. Such deceptively simple questions also relate to a number of deep open problems, such as the decidability of the Skolem and Positivity Problems for linear recurrence sequences, or equivalently reachability questions for discrete-time linear dynamical systems. In this paper, we introduce the class of o-minimal invariants, which is broader than any previously considered, and study the decidability of the existence and algorithmic synthesis of such invariants as certificates of non-termination for linear loops equipped with a large class of halting conditions. We establish two main decidability results, one of them conditional on Schanuel's conjecture in transcendental number theory.

Cite as

Shaull Almagor, Dmitry Chistikov, Joël Ouaknine, and James Worrell. O-Minimal Invariants for Linear Loops. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 114:1-114:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{almagor_et_al:LIPIcs.ICALP.2018.114,
  author =	{Almagor, Shaull and Chistikov, Dmitry and Ouaknine, Jo\"{e}l and Worrell, James},
  title =	{{O-Minimal Invariants for Linear Loops}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{114:1--114:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.114},
  URN =		{urn:nbn:de:0030-drops-91188},
  doi =		{10.4230/LIPIcs.ICALP.2018.114},
  annote =	{Keywords: Invariants, linear loops, linear dynamical systems, non-termination, o-minimality}
}
Document
Topological Sorting with Regular Constraints

Authors: Antoine Amarilli and Charles Paperman


Abstract
We introduce the constrained topological sorting problem (CTS): given a regular language K and a directed acyclic graph G with labeled vertices, determine if G has a topological sort that forms a word in K. This natural problem applies to several settings, e.g., scheduling with costs or verifying concurrent programs. We consider the problem CTS[K] where the target language K is fixed, and study its complexity depending on K. We show that CTS[K] is tractable when K falls in several language families, e.g., unions of monomials, which can be used for pattern matching. However, we show that CTS[K] is NP-hard for K = (ab)^* and introduce a shuffle reduction technique to show hardness for more languages. We also study the special case of the constrained shuffle problem (CSh), where the input graph is a disjoint union of strings, and show that CSh[K] is additionally tractable when K is a group language or a union of district group monomials. We conjecture that a dichotomy should hold on the complexity of CTS[K] or CSh[K] depending on K, and substantiate this by proving a coarser dichotomy under a different problem phrasing which ensures that tractable languages are closed under common operators.
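
A brute-force reference check for CTS[K] (exponential in the number of vertices and purely for illustration; the instance and the DFA encoding below are assumptions, not the paper's constructions) enumerates topological sorts and runs a DFA for K on the induced word.

from itertools import permutations

def has_constrained_topsort(labels, edges, delta, start, accepting):
    verts = list(labels)
    for order in permutations(verts):
        pos = {v: i for i, v in enumerate(order)}
        if all(pos[u] < pos[v] for u, v in edges):   # order is a topological sort
            state = start
            for v in order:                          # run the DFA on its word
                state = delta[state][labels[v]]
            if state in accepting:
                return True
    return False

# Example: K = (ab)^*, DAG made of the two disjoint edges 1->2 and 3->4
labels = {1: 'a', 2: 'b', 3: 'a', 4: 'b'}
edges = [(1, 2), (3, 4)]
delta = {0: {'a': 1, 'b': 2}, 1: {'a': 2, 'b': 0}, 2: {'a': 2, 'b': 2}}
print(has_constrained_topsort(labels, edges, delta, 0, {0}))   # True, e.g. via "abab"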

Cite as

Antoine Amarilli and Charles Paperman. Topological Sorting with Regular Constraints. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 115:1-115:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{amarilli_et_al:LIPIcs.ICALP.2018.115,
  author =	{Amarilli, Antoine and Paperman, Charles},
  title =	{{Topological Sorting with Regular Constraints}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{115:1--115:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.115},
  URN =		{urn:nbn:de:0030-drops-91193},
  doi =		{10.4230/LIPIcs.ICALP.2018.115},
  annote =	{Keywords: Topological sorting, shuffle problem, regular language}
}
Document
On Zero-One and Convergence Laws for Graphs Embeddable on a Fixed Surface

Authors: Albert Atserias, Stephan Kreutzer, and Marc Noy


Abstract
We show that for no surface except for the plane does monadic second-order logic (MSO) have a zero-one law - and not even a convergence law - on the class of (connected) graphs embeddable on the surface. In addition we show that every rational in [0,1] is the limiting probability of some MSO formula. This strongly refutes a conjecture by Heinig et al. (2014) who proved a convergence law for planar graphs, and a zero-one law for connected planar graphs, and also identified the so-called gaps of [0,1]: the subintervals that are not limiting probabilities of any MSO formula. The proof relies on a combination of methods from structural graph theory, especially large face-width embeddings of graphs on surfaces, analytic combinatorics, and finite model theory, and several parts of the proof may be of independent interest. In particular, we identify precisely the properties that make the zero-one law work on planar graphs but fail for every other surface.

Cite as

Albert Atserias, Stephan Kreutzer, and Marc Noy. On Zero-One and Convergence Laws for Graphs Embeddable on a Fixed Surface. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 116:1-116:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{atserias_et_al:LIPIcs.ICALP.2018.116,
  author =	{Atserias, Albert and Kreutzer, Stephan and Noy, Marc},
  title =	{{On Zero-One and Convergence Laws for Graphs Embeddable on a Fixed Surface}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{116:1--116:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.116},
  URN =		{urn:nbn:de:0030-drops-91206},
  doi =		{10.4230/LIPIcs.ICALP.2018.116},
  annote =	{Keywords: topological graph theory, monadic second-order logic, random graphs, zero-one law, convergence law}
}
Document
Bisimulation Invariant Monadic-Second Order Logic in the Finite

Authors: Achim Blumensath and Felix Wolf


Abstract
We consider bisimulation-invariant monadic second-order logic over various classes of finite transition systems. We present several combinatorial characterisations of when the expressive power of this fragment coincides with that of the modal mu-calculus. Using these characterisations we prove for some simple classes of transition systems that this is indeed the case. In particular, we show that, over the class of all finite transition systems with Cantor-Bendixson rank at most k, bisimulation-invariant MSO coincides with L_mu.

Cite as

Achim Blumensath and Felix Wolf. Bisimulation Invariant Monadic-Second Order Logic in the Finite. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 117:1-117:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{blumensath_et_al:LIPIcs.ICALP.2018.117,
  author =	{Blumensath, Achim and Wolf, Felix},
  title =	{{Bisimulation Invariant Monadic-Second Order Logic in the Finite}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{117:1--117:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.117},
  URN =		{urn:nbn:de:0030-drops-91215},
  doi =		{10.4230/LIPIcs.ICALP.2018.117},
  annote =	{Keywords: bisimulation, monadic second-order logic, composition method}
}
Document
Binary Reachability of Timed Pushdown Automata via Quantifier Elimination and Cyclic Order Atoms

Authors: Lorenzo Clemente and Slawomir Lasota


Abstract
We study an expressive model of timed pushdown automata extended with modular and fractional clock constraints. We show that the binary reachability relation is effectively expressible in hybrid linear arithmetic with a rational and an integer sort. This subsumes analogous expressibility results previously known for finite and pushdown timed automata with untimed stack. As key technical tools, we use quantifier elimination for a fragment of hybrid linear arithmetic and for cyclic order atoms, and a reduction to register pushdown automata over cyclic order atoms.

Cite as

Lorenzo Clemente and Slawomir Lasota. Binary Reachability of Timed Pushdown Automata via Quantifier Elimination and Cyclic Order Atoms. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 118:1-118:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{clemente_et_al:LIPIcs.ICALP.2018.118,
  author =	{Clemente, Lorenzo and Lasota, Slawomir},
  title =	{{Binary Reachability of Timed Pushdown Automata via Quantifier Elimination and Cyclic Order Atoms}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{118:1--118:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.118},
  URN =		{urn:nbn:de:0030-drops-91228},
  doi =		{10.4230/LIPIcs.ICALP.2018.118},
  annote =	{Keywords: timed automata, reachability relation, timed pushdown automata, linear arithmetic}
}
Document
Unboundedness Problems for Languages of Vector Addition Systems

Authors: Wojciech Czerwinski, Piotr Hofman, and Georg Zetzsche


Abstract
A vector addition system (VAS) with an initial and a final marking and transition labels induces a language. In part because the reachability problem in VAS remains far from being well-understood, it is difficult to devise decision procedures for such languages. This is especially true for checking properties that state the existence of infinitely many words of a particular shape. Informally, we call these unboundedness properties. We present a simple set of axioms for predicates that can express unboundedness properties. Our main result is that such a predicate is decidable for VAS languages as soon as it is decidable for regular languages. Among other results, this allows us to show decidability of (i) separability by bounded regular languages, (ii) unboundedness of occurring factors from a language K with mild conditions on K, and (iii) universality of the set of factors.

Cite as

Wojciech Czerwinski, Piotr Hofman, and Georg Zetzsche. Unboundedness Problems for Languages of Vector Addition Systems. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 119:1-119:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{czerwinski_et_al:LIPIcs.ICALP.2018.119,
  author =	{Czerwinski, Wojciech and Hofman, Piotr and Zetzsche, Georg},
  title =	{{Unboundedness Problems for Languages of Vector Addition Systems}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{119:1--119:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.119},
  URN =		{urn:nbn:de:0030-drops-91235},
  doi =		{10.4230/LIPIcs.ICALP.2018.119},
  annote =	{Keywords: vector addition systems, decision problems, unboundedness, separability}
}
Document
Reachability and Distances under Multiple Changes

Authors: Samir Datta, Anish Mukherjee, Nils Vortmeier, and Thomas Zeume


Abstract
Recently it was shown that the transitive closure of a directed graph can be updated using first-order formulas after insertions and deletions of single edges in the dynamic descriptive complexity framework by Dong, Su, and Topor, and Patnaik and Immerman. In other words, Reachability is in DynFO. In this article we extend the framework to changes of multiple edges at a time, and study the Reachability and Distance queries under these changes. We show that the former problem can be maintained in DynFO(+, x) under changes affecting O({log n}/{log log n}) nodes, for graphs with n nodes. If the update formulas may use a majority quantifier then both Reachability and Distance can be maintained under changes that affect O(log^c n) nodes, for fixed c in N. Some preliminary results towards showing that distances are in DynFO are discussed.

Cite as

Samir Datta, Anish Mukherjee, Nils Vortmeier, and Thomas Zeume. Reachability and Distances under Multiple Changes. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 120:1-120:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{datta_et_al:LIPIcs.ICALP.2018.120,
  author =	{Datta, Samir and Mukherjee, Anish and Vortmeier, Nils and Zeume, Thomas},
  title =	{{Reachability and Distances under Multiple Changes}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{120:1--120:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.120},
  URN =		{urn:nbn:de:0030-drops-91245},
  doi =		{10.4230/LIPIcs.ICALP.2018.120},
  annote =	{Keywords: dynamic complexity, reachability, distances, complex changes}
}
Document
When is Containment Decidable for Probabilistic Automata?

Authors: Laure Daviaud, Marcin Jurdzinski, Ranko Lazic, Filip Mazowiecki, Guillermo A. Pérez, and James Worrell


Abstract
The containment problem for quantitative automata is the natural quantitative generalisation of the classical language inclusion problem for Boolean automata. We study it for probabilistic automata, where it is known to be undecidable in general. We restrict our study to the class of probabilistic automata with bounded ambiguity. There, we show decidability (subject to Schanuel's conjecture) when one of the automata is assumed to be unambiguous while the other one is allowed to be finitely ambiguous. Furthermore, we show that this is close to the most general decidable fragment of this problem by proving that it is already undecidable if one of the automata is allowed to be linearly ambiguous.
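
For reference, the quantity compared in the containment problem is the standard acceptance probability of a word, computed below for a small made-up probabilistic automaton (a sketch of the definition, not of the paper's decision procedure).

import numpy as np

def acceptance_probability(initial, matrices, final, word):
    # initial: row distribution over states; matrices[letter]: row-stochastic
    # transition matrix; final: 0/1 indicator vector of accepting states.
    v = initial.copy()
    for letter in word:
        v = v @ matrices[letter]
    return float(v @ final)

initial = np.array([1.0, 0.0])
final = np.array([0.0, 1.0])
matrices = {
    'a': np.array([[0.5, 0.5], [0.0, 1.0]]),
    'b': np.array([[1.0, 0.0], [0.3, 0.7]]),
}
# Containment asks whether P_A(w) <= P_B(w) for every word w.
print(acceptance_probability(initial, matrices, final, "ab"))   # 0.35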

Cite as

Laure Daviaud, Marcin Jurdzinski, Ranko Lazic, Filip Mazowiecki, Guillermo A. Pérez, and James Worrell. When is Containment Decidable for Probabilistic Automata?. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 121:1-121:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{daviaud_et_al:LIPIcs.ICALP.2018.121,
  author =	{Daviaud, Laure and Jurdzinski, Marcin and Lazic, Ranko and Mazowiecki, Filip and P\'{e}rez, Guillermo A. and Worrell, James},
  title =	{{When is Containment Decidable for Probabilistic Automata?}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{121:1--121:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.121},
  URN =		{urn:nbn:de:0030-drops-91251},
  doi =		{10.4230/LIPIcs.ICALP.2018.121},
  annote =	{Keywords: Probabilistic automata, Containment, Emptiness, Ambiguity}
}
Document
On the Complexity of Infinite Advice Strings

Authors: Gaëtan Douéneau-Tabot


Abstract
We investigate in this paper a notion of comparison between infinite strings. In a general way, if M is a computation model (e.g. Turing machines) and C a class of objects (e.g. languages), the complexity of an infinite word alpha can be measured with respect to the amount of objects from C that are presentable with machines from M using alpha as an oracle. In our case, the model M is finite automata and the objects C are either recognized languages or presentable structures, known respectively as advice regular languages and advice automatic structures. This leads to several different classifications of infinite words that are studied in detail; we also derive logical and computational equivalent measures. Our main results explore the connections between classes of advice automatic structures, MSO-transductions and two-way transducers. They suggest a closer study of the resulting hierarchy over infinite words.

Cite as

Gaëtan Douéneau-Tabot. On the Complexity of Infinite Advice Strings. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 122:1-122:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{doueneautabot:LIPIcs.ICALP.2018.122,
  author =	{Dou\'{e}neau-Tabot, Ga\"{e}tan},
  title =	{{On the Complexity of Infinite Advice Strings}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{122:1--122:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.122},
  URN =		{urn:nbn:de:0030-drops-91260},
  doi =		{10.4230/LIPIcs.ICALP.2018.122},
  annote =	{Keywords: infinite words, advice automata, automatic structures, transducers}
}
Document
Resynchronizing Classes of Word Relations

Authors: María Emilia Descotte, Diego Figueira, and Gabriele Puppis


Abstract
A natural approach to define binary word relations over a finite alphabet A is through two-tape finite state automata that recognize regular languages over {1, 2} x A, where (i,a) is interpreted as reading letter a from tape i. Accordingly, a word w of such a language L denotes the pair (u_1,u_2) in A^* x A^* in which u_i is the projection of w onto i-labelled letters. While this formalism defines the well-studied class of Rational relations (a.k.a. non-deterministic finite state transducers), enforcing restrictions on the reading regime from the tapes, which we call synchronization, yields various sub-classes of relations. Such synchronization restrictions are imposed through regular properties on the projection of the language onto {1,2}. In this way, for each regular language C subseteq {1,2}^*, one obtains a class Rel({C}) of relations. Regular, Recognizable, and length-preserving rational relations are all examples of classes that can be defined in this way. We study the problem of containment for synchronized classes of relations: given C,D subseteq {1,2}^*, is Rel({C}) subseteq Rel({D})? We show a characterization in terms of C and D which gives a decidability procedure to test for class inclusion. This also yields a procedure to re-synchronize languages from {1, 2} x A preserving the denoted relation whenever the inclusion holds.
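
A minimal decoding sketch of the formalism above (the example word is made up): a word over {1,2} x A denotes the pair of its tape projections, and its synchronization is the projection onto {1,2}.

def decode(word):
    # word: sequence of (tape, letter) pairs over {1, 2} x A
    u1 = "".join(a for i, a in word if i == 1)
    u2 = "".join(a for i, a in word if i == 2)
    sync = "".join(str(i) for i, _ in word)
    return (u1, u2), sync

w = [(1, 'a'), (2, 'b'), (1, 'a'), (2, 'a')]
print(decode(w))   # (('aa', 'ba'), '1212'): a letter-by-letter, length-preserving synchronization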

Cite as

María Emilia Descotte, Diego Figueira, and Gabriele Puppis. Resynchronizing Classes of Word Relations. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 123:1-123:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{descotte_et_al:LIPIcs.ICALP.2018.123,
  author =	{Descotte, Mar{\'\i}a Emilia and Figueira, Diego and Puppis, Gabriele},
  title =	{{Resynchronizing Classes of Word Relations}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{123:1--123:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.123},
  URN =		{urn:nbn:de:0030-drops-91270},
  doi =		{10.4230/LIPIcs.ICALP.2018.123},
  annote =	{Keywords: synchronized word relations, containment, resynchronization}
}
Document
Reachability Switching Games

Authors: John Fearnley, Martin Gairing, Matthias Mnich, and Rahul Savani


Abstract
In this paper, we study the problem of deciding the winner of reachability switching games. We study zero-, one-, and two-player variants of these games. We show that the zero-player case is NL-hard, the one-player case is NP-complete, and that the two-player case is PSPACE-hard and in EXPTIME. For the zero-player case, we also show P-hardness for a succinctly-represented model that maintains the upper bound of NP ∩ coNP. For the one- and two-player cases, our results hold in both the natural, explicit model and succinctly-represented model. We also study the structure of winning strategies in these games, and in particular we show that exponential memory is required in both the one- and two-player settings.
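
The zero-player dynamics referred to above can be simulated directly as a deterministic walk in which each node forwards the token to its successors in round-robin order; the "switching" rule, the small instance, and the step bound below are illustrative assumptions, not the paper's hardness construction.

def zero_player_reaches(succ, source, target, max_steps=10**6):
    pointer = {v: 0 for v in succ}        # the "switch" at each node
    v = source
    for _ in range(max_steps):
        if v == target:
            return True
        outs = succ[v]
        if not outs:
            return False                  # stuck at a sink other than the target
        nxt = outs[pointer[v]]
        pointer[v] = (pointer[v] + 1) % len(outs)
        v = nxt
    return False

succ = {0: [1, 2], 1: [0], 2: [3], 3: []}
print(zero_player_reaches(succ, 0, 3))    # True: 0 -> 1 -> 0 -> 2 -> 3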

Cite as

John Fearnley, Martin Gairing, Matthias Mnich, and Rahul Savani. Reachability Switching Games. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 124:1-124:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{fearnley_et_al:LIPIcs.ICALP.2018.124,
  author =	{Fearnley, John and Gairing, Martin and Mnich, Matthias and Savani, Rahul},
  title =	{{Reachability Switching Games}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{124:1--124:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.124},
  URN =		{urn:nbn:de:0030-drops-91282},
  doi =		{10.4230/LIPIcs.ICALP.2018.124},
  annote =	{Keywords: Deterministic Random Walks, Model Checking, Reachability, Simple Stochastic Game, Switching Systems}
}
Document
Costs and Rewards in Priced Timed Automata

Authors: Martin Fränzle, Mahsa Shirmohammadi, Mani Swaminathan, and James Worrell


Abstract
We consider Pareto analysis of reachable states of multi-priced timed automata (MPTA): timed automata equipped with multiple observers that keep track of costs (to be minimised) and rewards (to be maximised) along a computation. Each observer has a constant non-negative derivative which may depend on the location of the MPTA. We study the Pareto Domination Problem, which asks whether it is possible to reach a target location via a run in which the accumulated costs and rewards Pareto dominate a given objective vector. We show that this problem is undecidable in general, but decidable for MPTA with at most three observers. For MPTA whose observers are all costs or all rewards, we show that the Pareto Domination Problem is PSPACE-complete. We also consider an epsilon-approximate Pareto Domination Problem that is decidable without restricting the number and types of observers. We develop connections between MPTA and Diophantine equations. Undecidability of the Pareto Domination Problem is shown by reduction from Hilbert's 10th Problem, while decidability for three observers is shown by a translation to a fragment of arithmetic involving quadratic forms.

Cite as

Martin Fränzle, Mahsa Shirmohammadi, Mani Swaminathan, and James Worrell. Costs and Rewards in Priced Timed Automata. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 125:1-125:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{franzle_et_al:LIPIcs.ICALP.2018.125,
  author =	{Fr\"{a}nzle, Martin and Shirmohammadi, Mahsa and Swaminathan, Mani and Worrell, James},
  title =	{{Costs and Rewards in Priced Timed Automata}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{125:1--125:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.125},
  URN =		{urn:nbn:de:0030-drops-91297},
  doi =		{10.4230/LIPIcs.ICALP.2018.125},
  annote =	{Keywords: Priced Timed Automata, Pareto Domination, Diophantine Equations}
}
Document
First-Order Interpretations of Bounded Expansion Classes

Authors: Jakub Gajarský, Stephan Kreutzer, Jaroslav Nesetril, Patrice Ossona de Mendez, Michal Pilipczuk, Sebastian Siebertz, and Szymon Torunczyk


Abstract
The notion of bounded expansion captures uniform sparsity of graph classes and renders tractable various algorithmic problems that are hard in general. In particular, the model-checking problem for first-order logic is fixed-parameter tractable over such graph classes. With the aim of generalizing such results to dense graphs, we introduce classes of graphs with structurally bounded expansion, defined as first-order interpretations of classes of bounded expansion. As a first step towards their algorithmic treatment, we provide a characterization analogous to the characterization of classes of bounded expansion via low treedepth decompositions, replacing treedepth by its dense analogue called shrubdepth.

Cite as

Jakub Gajarský, Stephan Kreutzer, Jaroslav Nesetril, Patrice Ossona de Mendez, Michal Pilipczuk, Sebastian Siebertz, and Szymon Torunczyk. First-Order Interpretations of Bounded Expansion Classes. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 126:1-126:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gajarsky_et_al:LIPIcs.ICALP.2018.126,
  author =	{Gajarsk\'{y}, Jakub and Kreutzer, Stephan and Nesetril, Jaroslav and Ossona de Mendez, Patrice and Pilipczuk, Michal and Siebertz, Sebastian and Torunczyk, Szymon},
  title =	{{First-Order Interpretations of Bounded Expansion Classes}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{126:1--126:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.126},
  URN =		{urn:nbn:de:0030-drops-91300},
  doi =		{10.4230/LIPIcs.ICALP.2018.126},
  annote =	{Keywords: Logical interpretations/transductions, structurally sparse graphs, bounded expansion}
}
Document
Randomized Sliding Window Algorithms for Regular Languages

Authors: Moses Ganardi, Danny Hucke, and Markus Lohrey


Abstract
A sliding window algorithm receives a stream of symbols and has to output at each time instant a certain value which only depends on the last n symbols. If the algorithm is randomized, then at each time instant it produces an incorrect output with probability at most epsilon, which is a constant error bound. This work proposes a more relaxed definition of correctness which is parameterized by the error bound epsilon and the failure ratio phi: a randomized sliding window algorithm is required to err with probability at most epsilon at a portion of 1-phi of all time instants of an input stream. This work continues the investigation of sliding window algorithms for regular languages. In previous works a trichotomy theorem was shown for deterministic algorithms: the optimal space complexity is either constant, logarithmic or linear in the window size. The main results of this paper concern three natural settings (randomized algorithms with failure ratio zero and randomized/deterministic algorithms with bounded failure ratio) and provide natural language-theoretic characterizations of the space complexity classes.
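To make the deterministic space classes concrete, here is a toy sliding window algorithm, written for this summary rather than taken from the paper, for the regular language a* over {a, b}: the window of the last n symbols lies in a* exactly when the most recent b occurred more than n steps ago, so storing one timestamp (O(log n) bits) suffices.

# Toy deterministic sliding window algorithm for L = a* over {a, b}:
# the window (last n symbols) lies in a* iff no 'b' occurred in the last n steps.
# Only the timestamp of the most recent 'b' is stored, i.e. O(log n) bits.

def sliding_window_a_star(stream, n):
    last_b = -n - 1                     # pretend the last 'b' is far in the past
    for t, symbol in enumerate(stream):
        if symbol == 'b':
            last_b = t
        yield t - last_b >= n           # True iff window content is in a*

print(list(sliding_window_a_star("aaabaaaa", n=3)))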

Cite as

Moses Ganardi, Danny Hucke, and Markus Lohrey. Randomized Sliding Window Algorithms for Regular Languages. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 127:1-127:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{ganardi_et_al:LIPIcs.ICALP.2018.127,
  author =	{Ganardi, Moses and Hucke, Danny and Lohrey, Markus},
  title =	{{Randomized Sliding Window Algorithms for Regular Languages}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{127:1--127:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.127},
  URN =		{urn:nbn:de:0030-drops-91317},
  doi =		{10.4230/LIPIcs.ICALP.2018.127},
  annote =	{Keywords: sliding windows, regular languages, randomized complexity}
}
Document
Aperiodic Points in Z²-subshifts

Authors: Anael Grandjean, Benjamin Hellouin de Menibus, and Pascal Vanier


Abstract
We consider the structure of aperiodic points in Z^2-subshifts, and in particular the positions at which they fail to be periodic. We prove that if a Z^2-subshift contains points whose smallest period is arbitrarily large, then it contains an aperiodic point. This lets us characterise the computational difficulty of deciding if a Z^2-subshift of finite type contains an aperiodic point. Another consequence is that Z^2-subshifts with no aperiodic point have a very strong dynamical structure and are almost topologically conjugate to some Z-subshift. Finally, we use this result to characterise sets of possible slopes of periodicity for Z^3-subshifts of finite type.

Cite as

Anael Grandjean, Benjamin Hellouin de Menibus, and Pascal Vanier. Aperiodic Points in Z²-subshifts. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 128:1-128:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{grandjean_et_al:LIPIcs.ICALP.2018.128,
  author =	{Grandjean, Anael and Hellouin de Menibus, Benjamin and Vanier, Pascal},
  title =	{{Aperiodic Points in Z²-subshifts}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{128:1--128:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.128},
  URN =		{urn:nbn:de:0030-drops-91323},
  doi =		{10.4230/LIPIcs.ICALP.2018.128},
  annote =	{Keywords: Subshifts of finite type, Wang tiles, periodicity, aperiodicity, computability, tilings}
}
Document
Semicomputable Geometry

Authors: Mathieu Hoyrup, Diego Nava Saucedo, and Don M. Stull


Abstract
Computability and semicomputability of compact subsets of Euclidean spaces are important notions that have been investigated for many classes of sets, including fractals (Julia sets, the Mandelbrot set) and objects with geometrical or topological constraints (embeddings of a sphere). In this paper we investigate one of the simplest classes, namely the filled triangles in the plane. We study the properties of the parameters of semicomputable triangles, such as the coordinates of their vertices. This problem is surprisingly rich. We introduce and develop a notion of semicomputability of points of the plane which is a generalization in dimension 2 of the left-c.e. and right-c.e. numbers. We relate this notion to Solovay reducibility. We show that semicomputable triangles admit no finite parametrization, for some notion of parametrization.

Cite as

Mathieu Hoyrup, Diego Nava Saucedo, and Don M. Stull. Semicomputable Geometry. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 129:1-129:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{hoyrup_et_al:LIPIcs.ICALP.2018.129,
  author =	{Hoyrup, Mathieu and Nava Saucedo, Diego and Stull, Don M.},
  title =	{{Semicomputable Geometry}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{129:1--129:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.129},
  URN =		{urn:nbn:de:0030-drops-91336},
  doi =		{10.4230/LIPIcs.ICALP.2018.129},
  annote =	{Keywords: Computable set, Semicomputable set, Solovay reducibility, Left-ce real, Genericity}
}
Document
On Computing the Total Variation Distance of Hidden Markov Models

Authors: Stefan Kiefer


Abstract
We prove results on the decidability and complexity of computing the total variation distance (equivalently, the L_1-distance) of hidden Markov models (equivalently, labelled Markov chains). This distance measures the difference between the distributions on words that two hidden Markov models induce. The main results are: (1) it is undecidable whether the distance is greater than a given threshold; (2) approximation is #P-hard and in PSPACE.
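As a rough illustration of the quantity being studied, the sketch below evaluates the total variation distance restricted to words of a fixed length k between two labelled Markov chains. This is only a lower bound on the distance over all words, takes time exponential in k, and is not an algorithm from the paper; the chains and parameters are made up.

# Total variation distance between two labelled Markov chains, restricted to
# words of length k (a lower bound on the true distance; illustrative only).
from itertools import product

def word_prob(init, trans, word):
    # trans[(state, label)] = list of (next_state, probability)
    dist = dict(init)                               # state -> probability
    for a in word:
        nxt = {}
        for s, p in dist.items():
            for t, q in trans.get((s, a), []):
                nxt[t] = nxt.get(t, 0.0) + p * q
        dist = nxt
    return sum(dist.values())

def tv_distance_k(chain1, chain2, alphabet, k):
    return 0.5 * sum(abs(word_prob(*chain1, w) - word_prob(*chain2, w))
                     for w in product(alphabet, repeat=k))

# Two single-state chains emitting 'a'/'b' with different biases.
c1 = ({0: 1.0}, {(0, 'a'): [(0, 0.9)], (0, 'b'): [(0, 0.1)]})
c2 = ({0: 1.0}, {(0, 'a'): [(0, 0.5)], (0, 'b'): [(0, 0.5)]})
print(tv_distance_k(c1, c2, "ab", k=2))             # 0.56 for this toy pair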

Cite as

Stefan Kiefer. On Computing the Total Variation Distance of Hidden Markov Models. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 130:1-130:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{kiefer:LIPIcs.ICALP.2018.130,
  author =	{Kiefer, Stefan},
  title =	{{On Computing the Total Variation Distance of Hidden Markov Models}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{130:1--130:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.130},
  URN =		{urn:nbn:de:0030-drops-91344},
  doi =		{10.4230/LIPIcs.ICALP.2018.130},
  annote =	{Keywords: Labelled Markov Chains, Hidden Markov Models, Distance, Decidability, Complexity}
}
Document
To Infinity and Beyond

Authors: Ines Klimann


Abstract
We prove that if a group generated by a bireversible Mealy automaton contains an element of infinite order, then it must have exponential growth. As a direct consequence, no infinite virtually nilpotent group can be generated by a bireversible Mealy automaton.

Cite as

Ines Klimann. To Infinity and Beyond. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 131:1-131:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{klimann:LIPIcs.ICALP.2018.131,
  author =	{Klimann, Ines},
  title =	{{To Infinity and Beyond}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{131:1--131:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.131},
  URN =		{urn:nbn:de:0030-drops-91359},
  doi =		{10.4230/LIPIcs.ICALP.2018.131},
  annote =	{Keywords: automaton groups, growth of a group, exponential growth}
}
Document
On the Identity Problem for the Special Linear Group and the Heisenberg Group

Authors: Sang-Ki Ko, Reino Niskanen, and Igor Potapov


Abstract
We study the identity problem for matrices, i.e., whether the identity matrix is in a semigroup generated by a given set of generators. In particular, we consider the identity problem for the special linear group, following the recent NP-completeness result for SL(2,Z) and the undecidability result for SL(4,Z) generated by 48 matrices. First we show that there is no embedding from pairs of words into 3 x 3 integer matrices with determinant one, i.e., into SL(3,Z), extending the previously known result that there is no embedding into C^{2 x 2}. Apart from its theoretical importance, this result can be seen as strong evidence that the computational problems in SL(3,Z) are decidable: it excludes the most natural possibility of encoding the Post correspondence problem into SL(3,Z), where matrix products extended by right multiplication correspond to a Turing machine simulation. Then we show that the identity problem is decidable in polynomial time for an important subgroup of SL(3,Z), the Heisenberg group H(3,Z). Furthermore, we extend the decidability result to H(n,Q) for any dimension n. Finally, we tighten the gap on the decidability question for this long-standing open problem by improving the undecidability result for the identity problem in SL(4,Z), substantially reducing the bound on the size of the generator set from 48 to 8 by developing a novel reduction technique.

Cite as

Sang-Ki Ko, Reino Niskanen, and Igor Potapov. On the Identity Problem for the Special Linear Group and the Heisenberg Group. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 132:1-132:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{ko_et_al:LIPIcs.ICALP.2018.132,
  author =	{Ko, Sang-Ki and Niskanen, Reino and Potapov, Igor},
  title =	{{On the Identity Problem for the Special Linear Group and the Heisenberg Group}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{132:1--132:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.132},
  URN =		{urn:nbn:de:0030-drops-91367},
  doi =		{10.4230/LIPIcs.ICALP.2018.132},
  annote =	{Keywords: matrix semigroup, identity problem, special linear group, Heisenberg group, decidability}
}
Document
Gaifman Normal Forms for Counting Extensions of First-Order Logic

Authors: Dietrich Kuske and Nicole Schweikardt


Abstract
We consider the extension of first-order logic FO by unary counting quantifiers and generalise the notion of Gaifman normal form from FO to this setting. For formulas that use only ultimately periodic counting quantifiers, we provide an algorithm that computes equivalent formulas in Gaifman normal form. We also show that this is not possible for formulas using at least one quantifier that is not ultimately periodic. Now let d be a degree bound. We show that for any formula phi with arbitrary counting quantifiers, there is a formula gamma in Gaifman normal form that is equivalent to phi on all finite structures of degree <= d. If the quantifiers of phi are decidable (decidable in elementary time, ultimately periodic), gamma can be constructed effectively (in elementary time, in worst-case optimal 3-fold exponential time). For the setting with unrestricted degree we show that by using our Gaifman normal form for formulas with only ultimately periodic counting quantifiers, a known fixed-parameter tractability result for FO on classes of structures of bounded local tree-width can be lifted to the extension of FO with ultimately periodic counting quantifiers (a logic equally expressive as FO+MOD, i.e., first-order logic with modulo-counting quantifiers).

Cite as

Dietrich Kuske and Nicole Schweikardt. Gaifman Normal Forms for Counting Extensions of First-Order Logic. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 133:1-133:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{kuske_et_al:LIPIcs.ICALP.2018.133,
  author =	{Kuske, Dietrich and Schweikardt, Nicole},
  title =	{{Gaifman Normal Forms for Counting Extensions of First-Order Logic}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{133:1--133:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.133},
  URN =		{urn:nbn:de:0030-drops-91375},
  doi =		{10.4230/LIPIcs.ICALP.2018.133},
  annote =	{Keywords: Finite model theory, Gaifman locality, modulo-counting quantifiers, fixed parameter tractable model-checking}
}
Document
Polynomial Vector Addition Systems With States

Authors: Jérôme Leroux


Abstract
The reachability problem for vector addition systems is one of the most difficult and central problems in theoretical computer science. The problem is known to be decidable, but despite intense investigation during the last four decades, the exact complexity is still open. For some sub-classes, the complexity of the reachability problem is known. Structurally bounded vector addition systems, the class of vector addition systems with finite reachability sets from any initial configuration, is one of those classes. In fact, the reachability problem was shown to be polynomial-space complete for that class by Praveen and Lodaya in 2008. Surprisingly, extending this property to vector addition systems with states is open. In fact, there exist vector addition systems with states that are structurally bounded but have sets of reachable configurations of Ackermannian size. It follows that the reachability problem for that class is between exponential space and Ackermannian. In this paper we introduce the class of polynomial vector addition systems with states, defined as the class of vector addition systems with states in which the size of the reachable configurations is bounded polynomially in the size of the initial ones. We prove that the reachability problem for polynomial vector addition systems is exponential-space complete. Additionally, we show that we can decide in polynomial time whether a vector addition system with states is polynomial. This characterization introduces the notion of iteration scheme, with potential applications to the reachability problem for general vector addition systems.
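To fix intuition about configurations and reachable sets (this sketch is ours, not the paper's): a vector addition system with states moves between configurations, each a control state together with a vector of non-negative counters, by adding integer vectors along transitions; the class studied here is the one where the size of every reachable configuration is polynomial in the size of the initial one.

# Breadth-first exploration of a VASS's reachable configurations, truncated at a
# counter bound so the search terminates. Toy sketch; the system is made up.
from collections import deque

def reachable(transitions, start, bound):
    seen, queue = {start}, deque([start])
    while queue:
        state, counters = queue.popleft()
        for (src, delta, dst) in transitions:
            if src != state:
                continue
            nxt = tuple(c + d for c, d in zip(counters, delta))
            if all(0 <= c <= bound for c in nxt) and (dst, nxt) not in seen:
                seen.add((dst, nxt))
                queue.append((dst, nxt))
    return seen

# One state, two counters: repeatedly trade one unit of counter 0 for two of counter 1.
transitions = [("q", (-1, 2), "q")]
print(sorted(reachable(transitions, ("q", (3, 0)), bound=10)))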

Cite as

Jérôme Leroux. Polynomial Vector Addition Systems With States. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 134:1-134:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{leroux:LIPIcs.ICALP.2018.134,
  author =	{Leroux, J\'{e}r\^{o}me},
  title =	{{Polynomial Vector Addition Systems With States}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{134:1--134:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.134},
  URN =		{urn:nbn:de:0030-drops-91387},
  doi =		{10.4230/LIPIcs.ICALP.2018.134},
  annote =	{Keywords: Vector additions system with states, Reachability problem, Formal verification, Infinite state system, Linear algebra, Kosaraju-Sullivan algorithm}
}
Document
Reducing CMSO Model Checking to Highly Connected Graphs

Authors: Daniel Lokshtanov, M. S. Ramanujan, Saket Saurabh, and Meirav Zehavi


Abstract
Given a Counting Monadic Second Order (CMSO) sentence psi, the CMSO[psi] problem is defined as follows. The input to CMSO[psi] is a graph G, and the objective is to determine whether G |= psi. Our main theorem states that for every CMSO sentence psi, if CMSO[psi] is solvable in polynomial time on "globally highly connected graphs", then CMSO[psi] is solvable in polynomial time (on general graphs). We demonstrate the utility of our theorem in the design of parameterized algorithms. Specifically, we show that technical problem-specific ingredients of a powerful method for designing parameterized algorithms, recursive understanding, can be replaced by a black-box invocation of our main theorem. We also show that our theorem can be easily deployed to show fixed-parameter tractability of a wide range of problems, where the input is a graph G and the task is to find a connected induced subgraph of G such that "few" vertices in this subgraph have neighbors outside the subgraph, and additionally the subgraph has a CMSO-definable property.

Cite as

Daniel Lokshtanov, M. S. Ramanujan, Saket Saurabh, and Meirav Zehavi. Reducing CMSO Model Checking to Highly Connected Graphs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 135:1-135:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{lokshtanov_et_al:LIPIcs.ICALP.2018.135,
  author =	{Lokshtanov, Daniel and Ramanujan, M. S. and Saurabh, Saket and Zehavi, Meirav},
  title =	{{Reducing CMSO Model Checking to Highly Connected Graphs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{135:1--135:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.135},
  URN =		{urn:nbn:de:0030-drops-91391},
  doi =		{10.4230/LIPIcs.ICALP.2018.135},
  annote =	{Keywords: Fixed Parameter Tractability Model Checking Recursive Understanding}
}
Document
An Optimal Bound on the Solution Sets of One-Variable Word Equations and its Consequences

Authors: Dirk Nowotka and Aleksi Saarela


Abstract
We solve two long-standing open problems on word equations. Firstly, we prove that a one-variable word equation with constants has either at most three or an infinite number of solutions. The existence of such a bound had been conjectured, and the bound three is optimal. Secondly, we consider independent systems of three-variable word equations without constants. If such a system has a nonperiodic solution, then its size is at most 17. Although probably not optimal, this is the first finite bound found. However, the conjecture that the bound is actually two remains open.

Cite as

Dirk Nowotka and Aleksi Saarela. An Optimal Bound on the Solution Sets of One-Variable Word Equations and its Consequences. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 136:1-136:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{nowotka_et_al:LIPIcs.ICALP.2018.136,
  author =	{Nowotka, Dirk and Saarela, Aleksi},
  title =	{{An Optimal Bound on the Solution Sets of One-Variable Word Equations and its Consequences}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{136:1--136:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.136},
  URN =		{urn:nbn:de:0030-drops-91404},
  doi =		{10.4230/LIPIcs.ICALP.2018.136},
  annote =	{Keywords: combinatorics on words, word equations, systems of equations}
}
Document
Separating Without Any Ambiguity

Authors: Thomas Place and Marc Zeitoun


Abstract
We investigate a standard operator on classes of languages: unambiguous polynomial closure. We show that if C is a class of regular languages having some mild properties, the membership problem for its unambiguous polynomial closure UPol(C) reduces to the same problem for C. We give a new, self-contained and elementary proof of this result. We also show that unambiguous polynomial closure coincides with alternating left and right deterministic closure. Finally, if additionally C is finite, we show that the separation and covering problems are decidable for UPol(C).

Cite as

Thomas Place and Marc Zeitoun. Separating Without Any Ambiguity. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 137:1-137:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{place_et_al:LIPIcs.ICALP.2018.137,
  author =	{Place, Thomas and Zeitoun, Marc},
  title =	{{Separating Without Any Ambiguity}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{137:1--137:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.137},
  URN =		{urn:nbn:de:0030-drops-91419},
  doi =		{10.4230/LIPIcs.ICALP.2018.137},
  annote =	{Keywords: Regular languages, separation problem, decidable characterizations}
}
Document
A Superpolynomial Lower Bound for the Size of Non-Deterministic Complement of an Unambiguous Automaton

Authors: Mikhail Raskin


Abstract
Unambiguous non-deterministic finite automata (UFA) are non-deterministic automata (over finite words) such that there is at most one accepting run over each input. Such automata are known to be potentially exponentially more succinct than deterministic automata, and non-deterministic automata can be exponentially more succinct than them. In this paper we establish a superpolynomial lower bound for the state complexity of the translation of a UFA to a non-deterministic automaton for the complement language. This disproves the formerly conjectured polynomial upper bound for this translation. The lower bound only involves a one-letter alphabet, and makes use of random graph methods. The same proof also shows that the translation of sweeping automata to non-deterministic automata is superpolynomial.

Cite as

Mikhail Raskin. A Superpolynomial Lower Bound for the Size of Non-Deterministic Complement of an Unambiguous Automaton. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 138:1-138:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{raskin:LIPIcs.ICALP.2018.138,
  author =	{Raskin, Mikhail},
  title =	{{A Superpolynomial Lower Bound for the Size of Non-Deterministic Complement of an Unambiguous Automaton}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{138:1--138:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.138},
  URN =		{urn:nbn:de:0030-drops-91428},
  doi =		{10.4230/LIPIcs.ICALP.2018.138},
  annote =	{Keywords: unambiguous automata, language complement, lower bound}
}
Document
The Isomorphism Problem for Finite Extensions of Free Groups Is In PSPACE

Authors: Géraud Sénizergues and Armin Weiß


Abstract
We present an algorithm for the following problem: given a context-free grammar for the word problem of a virtually free group G, compute a finite graph of groups with finite vertex groups whose fundamental group is G. Our algorithm is non-deterministic and runs in doubly exponential time. It follows that the isomorphism problem for context-free groups can be solved in doubly exponential space. Moreover, if, instead of a grammar, a finite extension of a free group is given as input, the construction of the graph of groups is in NP and, consequently, the isomorphism problem is in PSPACE.

Cite as

Géraud Sénizergues and Armin Weiß. The Isomorphism Problem for Finite Extensions of Free Groups Is In PSPACE. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 139:1-139:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{senizergues_et_al:LIPIcs.ICALP.2018.139,
  author =	{S\'{e}nizergues, G\'{e}raud and Wei{\ss}, Armin},
  title =	{{The Isomorphism Problem for Finite Extensions of Free Groups Is In PSPACE}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{139:1--139:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.139},
  URN =		{urn:nbn:de:0030-drops-91437},
  doi =		{10.4230/LIPIcs.ICALP.2018.139},
  annote =	{Keywords: virtually free groups, context-free groups, isomorphism problem, structure tree, graph of groups}
}
Document
Unambiguous Languages Exhaust the Index Hierarchy

Authors: Michal Skrzypczak


Abstract
This work is a study of the expressive power of unambiguity in the case of automata over infinite trees. An automaton is called unambiguous if it has at most one accepting run on every input; the language of such an automaton is called an unambiguous language. It is known that not every regular language of infinite trees is unambiguous. Beyond that, very little is known about which regular tree languages are unambiguous. This paper answers the question of whether unambiguous languages are of bounded complexity among all regular tree languages. The notion of complexity is the canonical one, called the (parity or Rabin/Mostowski) index hierarchy. The answer is negative, as exhibited by a family of examples of unambiguous languages that cannot be recognised by alternating parity tree automata of any bounded range of priorities. Hardness of the examples is based on the theory of signatures, previously studied by Walukiewicz. The technical core of the article is a definition of the canonical signatures together with a parity game that compares signatures of a given pair of parity games (of the same index).

Cite as

Michal Skrzypczak. Unambiguous Languages Exhaust the Index Hierarchy. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 140:1-140:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{skrzypczak:LIPIcs.ICALP.2018.140,
  author =	{Skrzypczak, Michal},
  title =	{{Unambiguous Languages Exhaust the Index Hierarchy}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{140:1--140:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.140},
  URN =		{urn:nbn:de:0030-drops-91442},
  doi =		{10.4230/LIPIcs.ICALP.2018.140},
  annote =	{Keywords: unambiguous automata, parity games, infinite trees, index hierarchy}
}
Document
The Beta-Bernoulli process and algebraic effects

Authors: Sam Staton, Dario Stein, Hongseok Yang, Nathanael L. Ackerman, Cameron E. Freer, and Daniel M. Roy


Abstract
In this paper we use the framework of algebraic effects from programming language theory to analyze the Beta-Bernoulli process, a standard building block in Bayesian models. Our analysis reveals the importance of abstract data types, and two types of program equations, called commutativity and discardability. We develop an equational theory of terms that use the Beta-Bernoulli process, and show that the theory is complete with respect to the measure-theoretic semantics, and also in the syntactic sense of Post. Our analysis has a potential for being generalized to other stochastic processes relevant to Bayesian modelling, yielding new understanding of these processes from the perspective of programming.
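For readers unfamiliar with the building block itself, a Beta-Bernoulli process can be sampled either by drawing a latent bias from a Beta distribution once, or, equivalently by exchangeability, by sequential Polya-urn style updates. The following sketch shows both; it is a generic illustration, not the paper's algebraic presentation.

# Two equivalent ways to sample an exchangeable Beta-Bernoulli sequence.
import random

def beta_bernoulli_latent(a, b, n):
    p = random.betavariate(a, b)                    # latent bias drawn once
    return [random.random() < p for _ in range(n)]

def beta_bernoulli_urn(a, b, n):
    succ, fail, out = a, b, []
    for _ in range(n):                              # Polya-urn / sequential form
        x = random.random() < succ / (succ + fail)
        out.append(x)
        succ, fail = succ + x, fail + (1 - x)       # reinforce the observed outcome
    return out

print(beta_bernoulli_urn(1.0, 1.0, 10))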

Cite as

Sam Staton, Dario Stein, Hongseok Yang, Nathanael L. Ackerman, Cameron E. Freer, and Daniel M. Roy. The Beta-Bernoulli process and algebraic effects. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 141:1-141:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{staton_et_al:LIPIcs.ICALP.2018.141,
  author =	{Staton, Sam and Stein, Dario and Yang, Hongseok and Ackerman, Nathanael L. and Freer, Cameron E. and Roy, Daniel M.},
  title =	{{The Beta-Bernoulli process and algebraic effects}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{141:1--141:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.141},
  URN =		{urn:nbn:de:0030-drops-91456},
  doi =		{10.4230/LIPIcs.ICALP.2018.141},
  annote =	{Keywords: Beta-Bernoulli process, Algebraic effects, Probabilistic programming, Exchangeability}
}
Document
Uniformization Problems for Synchronizations of Automatic Relations on Words

Authors: Sarah Winter


Abstract
A uniformization of a binary relation is a function that is contained in the relation and has the same domain as the relation. The synthesis problem asks for effective uniformization for classes of relations and functions that can be implemented in a specific way. We consider the synthesis problem for automatic relations over finite words (also called regular or synchronized rational relations) by functions implemented by specific classes of sequential transducers. It is known that the problem "Given an automatic relation, does it have a uniformization by a subsequential transducer?" is decidable in the two variants where the uniformization can either be implemented by an arbitrary subsequential transducer or it has to be implemented by a synchronous transducer. We introduce a new variant of this problem in which the allowed input/output behavior of the subsequential transducer is specified by a set of synchronizations and prove decidability for a specific class of synchronizations.

Cite as

Sarah Winter. Uniformization Problems for Synchronizations of Automatic Relations on Words. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 142:1-142:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{winter:LIPIcs.ICALP.2018.142,
  author =	{Winter, Sarah},
  title =	{{Uniformization Problems for Synchronizations of Automatic Relations on Words}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{142:1--142:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.142},
  URN =		{urn:nbn:de:0030-drops-91463},
  doi =		{10.4230/LIPIcs.ICALP.2018.142},
  annote =	{Keywords: automatic relation, uniformization, synchronization, transducer}
}
Document
Congestion-Free Rerouting of Flows on DAGs

Authors: Saeed Akhoondian Amiri, Szymon Dudycz, Stefan Schmid, and Sebastian Wiederrecht


Abstract
Changing a given configuration in a graph into another one is known as a reconfiguration problem. Such problems have recently received much interest in the context of algorithmic graph theory. We initiate the theoretical study of the following reconfiguration problem: How to reroute k unsplittable flows of a certain demand in a capacitated network from their current paths to their respective new paths, in a congestion-free manner? This problem finds immediate applications, e.g., in traffic engineering in computer networks. We show that the problem is generally NP-hard already for k=2 flows, which motivates us to study rerouting on a most basic class of flow graphs, namely DAGs. Interestingly, we find that for general k, deciding whether an unsplittable multi-commodity flow rerouting schedule exists is NP-hard even on DAGs. Our main contribution is a polynomial-time (fixed-parameter tractable) algorithm to solve the route update problem for a bounded number of flows on DAGs. At the heart of our algorithm lies a novel decomposition of the flow network that allows us to express and resolve reconfiguration dependencies among flows.
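As a toy illustration of the feasibility question, rather than of the paper's algorithm, the following checker verifies that migrating flows one at a time, in a given order, from their old paths to their new paths never exceeds an edge capacity; on the tiny made-up instance below no order works, which hints at why the problem is hard.

# Check that updating flows one at a time (in the given order) from old to new
# paths never violates edge capacities. Illustrative sketch with made-up data.

def edge_load(paths_in_use, demand):
    load = {}
    for f, path in paths_in_use.items():
        for e in zip(path, path[1:]):
            load[e] = load.get(e, 0) + demand[f]
    return load

def schedule_is_congestion_free(old, new, demand, capacity, order):
    current = dict(old)
    for step in [None] + order:           # also check the initial configuration
        if step is not None:
            current[step] = new[step]     # flow `step` switches to its new path
        for e, l in edge_load(current, demand).items():
            if l > capacity.get(e, 0):
                return False
    return True

old = {"f1": ["s", "a", "t"], "f2": ["s", "b", "t"]}
new = {"f1": ["s", "b", "t"], "f2": ["s", "a", "t"]}
demand = {"f1": 1, "f2": 1}
capacity = {("s", "a"): 1, ("a", "t"): 1, ("s", "b"): 1, ("b", "t"): 1}
print(schedule_is_congestion_free(old, new, demand, capacity, ["f1", "f2"]))  # False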

Cite as

Saeed Akhoondian Amiri, Szymon Dudycz, Stefan Schmid, and Sebastian Wiederrecht. Congestion-Free Rerouting of Flows on DAGs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 143:1-143:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{akhoondianamiri_et_al:LIPIcs.ICALP.2018.143,
  author =	{Akhoondian Amiri, Saeed and Dudycz, Szymon and Schmid, Stefan and Wiederrecht, Sebastian},
  title =	{{Congestion-Free Rerouting of Flows on DAGs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{143:1--143:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.143},
  URN =		{urn:nbn:de:0030-drops-91471},
  doi =		{10.4230/LIPIcs.ICALP.2018.143},
  annote =	{Keywords: Unsplittable Flows, Reconfiguration, DAGs, FPT, NP-Hardness}
}
Document
Practical and Provably Secure Onion Routing

Authors: Megumi Ando, Anna Lysyanskaya, and Eli Upfal


Abstract
In an onion routing protocol, messages travel through several intermediaries before arriving at their destinations; they are wrapped in layers of encryption (hence they are called "onions"). The goal is to make it hard to establish who sent the message. It is a practical and widespread tool for creating anonymous channels. For the standard adversary models - passive and active - we present practical and provably secure onion routing protocols. Akin to Tor, in our protocols each party independently chooses the routing paths for his onions. For security parameter lambda, our differentially private solution for the active adversary takes O(log^2 lambda) rounds and requires every participant to transmit O(log^{4} lambda) onions in every round.

Cite as

Megumi Ando, Anna Lysyanskaya, and Eli Upfal. Practical and Provably Secure Onion Routing. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 144:1-144:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{ando_et_al:LIPIcs.ICALP.2018.144,
  author =	{Ando, Megumi and Lysyanskaya, Anna and Upfal, Eli},
  title =	{{Practical and Provably Secure Onion Routing}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{144:1--144:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.144},
  URN =		{urn:nbn:de:0030-drops-91482},
  doi =		{10.4230/LIPIcs.ICALP.2018.144},
  annote =	{Keywords: Anonymity, traffic analysis, statistical privacy, differential privacy}
}
Document
Resolving SINR Queries in a Dynamic Setting

Authors: Boris Aronov, Gali Bar-On, and Matthew J. Katz


Abstract
We consider a set of transmitters broadcasting simultaneously on the same frequency under the SINR model. Transmission power may vary from one transmitter to another, and a signal's strength decreases (path loss or path attenuation) by some constant power alpha of the distance traveled. Roughly, a receiver at a given location can hear a specific transmitter only if the transmitter's signal is stronger than the signal of all other transmitters, combined. An SINR query is to determine whether a receiver at a given location can hear any transmitter, and if yes, which one. An approximate answer to an SINR query is such that one gets a definite yes or definite no, when the ratio between the strongest signal and all other signals combined is well above or well below the reception threshold, while the answer in the intermediate range is allowed to be either yes or no. We describe several compact data structures that support approximate SINR queries in the plane in a dynamic context, i.e., where both queries and updates (insertion or deletion of a transmitter) can be performed efficiently.
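For concreteness, the reception rule described above can be transcribed directly: a receiver at x hears transmitter i when the received power P_i / d(x, t_i)^alpha is at least beta times the sum of all other received powers. The sketch below performs this exact (non-approximate) test on made-up data; it is illustrative only and unrelated to the paper's data structures.

# SINR reception test: receiver at x hears transmitter i iff
#   P_i / d(x, t_i)^alpha  >=  beta * sum_{j != i} P_j / d(x, t_j)^alpha.
import math

def received_power(p, tx, x, alpha):
    return p / math.dist(tx, x) ** alpha

def sinr_query(transmitters, x, alpha, beta):
    powers = [received_power(p, t, x, alpha) for p, t in transmitters]
    total = sum(powers)
    for i, s in enumerate(powers):
        if s >= beta * (total - s):     # strongest signal beats the combined interference
            return i                    # index of the transmitter that is heard
    return None

transmitters = [(4.0, (0.0, 0.0)), (1.0, (3.0, 0.0))]   # (power, position)
print(sinr_query(transmitters, x=(1.0, 0.0), alpha=2.0, beta=2.0))   # hears transmitter 0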

Cite as

Boris Aronov, Gali Bar-On, and Matthew J. Katz. Resolving SINR Queries in a Dynamic Setting. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 145:1-145:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{aronov_et_al:LIPIcs.ICALP.2018.145,
  author =	{Aronov, Boris and Bar-On, Gali and Katz, Matthew J.},
  title =	{{Resolving SINR Queries in a Dynamic Setting}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{145:1--145:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.145},
  URN =		{urn:nbn:de:0030-drops-91495},
  doi =		{10.4230/LIPIcs.ICALP.2018.145},
  annote =	{Keywords: Wireless networks, SINR, dynamic insertion and deletion, interference cancellation, range searching}
}
Document
Uniform Mixed Equilibria in Network Congestion Games with Link Failures

Authors: Vittorio Bilò, Luca Moscardelli, and Cosimo Vinci


Abstract
Motivated by possible applications in fault-tolerant routing, we introduce the notion of uniform mixed equilibria in network congestion games with adversarial link failures, where players need to route traffic from a source to a destination node. Given an integer rho >= 1, a rho-uniform mixed strategy is a mixed strategy in which a player plays exactly rho edge disjoint paths with uniform probabilities, so that a rho-uniform mixed equilibrium is a tuple of rho-uniform mixed strategies, one for each player, in which no player can lower her cost by deviating to another rho-uniform mixed strategy. For games with weighted players and affine latency functions, we show existence of rho-uniform mixed equilibria and provide a tight characterization of their price of anarchy. For games with unweighted players, instead, we extend the existential guarantee to any class of latency functions and, restricted to games with affine latencies, we derive a tight characterization of both the prices of anarchy and stability.

Cite as

Vittorio Bilò, Luca Moscardelli, and Cosimo Vinci. Uniform Mixed Equilibria in Network Congestion Games with Link Failures. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 146:1-146:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{bilo_et_al:LIPIcs.ICALP.2018.146,
  author =	{Bil\`{o}, Vittorio and Moscardelli, Luca and Vinci, Cosimo},
  title =	{{Uniform Mixed Equilibria in Network Congestion Games with Link Failures}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{146:1--146:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.146},
  URN =		{urn:nbn:de:0030-drops-91508},
  doi =		{10.4230/LIPIcs.ICALP.2018.146},
  annote =	{Keywords: Network Congestion Games, Fault-Tolerant Routing, Nash Equilibria, Price of Anarchy, Price of Stability}
}
Document
Byzantine Gathering in Polynomial Time

Authors: Sébastien Bouchard, Yoann Dieudonné, and Anissa Lamani


Abstract
Gathering a group of mobile agents is a fundamental task in the field of distributed and mobile systems. It can become drastically more difficult to achieve when some agents are subject to faults, especially Byzantine ones, which are known to be the worst faults to handle. In this paper we study, from a deterministic point of view, the task of Byzantine gathering in a network modeled as a graph. In other words, despite the presence of Byzantine agents, all the other (good) agents, starting from possibly different nodes and applying the same deterministic algorithm, have to meet at the same node in finite time and stop moving. An adversary chooses the initial nodes of the agents (the number of agents may be larger than the number of nodes) and assigns a different positive integer (called label) to each of them. Initially, each agent knows its label. The agents move in synchronous rounds and can communicate with each other only when located at the same node. Within the team, f of the agents are Byzantine. A Byzantine agent acts in an unpredictable and arbitrary way. For example, it can choose an arbitrary port when it moves, can convey arbitrary information to other agents, and can change its label in every round, in particular by forging the label of another agent or by creating a completely new one. Besides its label, which corresponds to local knowledge, an agent is assigned some global knowledge denoted by GK that is common to all agents. In the literature, the Byzantine gathering problem has been analyzed in arbitrary n-node graphs by considering the scenario when GK=(n,f) and the scenario when GK=f. In the first (resp. second) scenario, it has been shown that the minimum number of good agents guaranteeing deterministic gathering of all of them is f+1 (resp. f+2). However, for both these scenarios, all the existing deterministic algorithms, whether or not they are optimal in terms of the required number of good agents, have the major disadvantage of having a time complexity that is exponential in n and L, where L is the value of the largest label belonging to a good agent. In this paper, we seek to design a deterministic solution for Byzantine gathering that makes a concession on the proportion of Byzantine agents within the team, but that offers a significantly lower complexity. We also seek to use global knowledge whose binary representation has small length (which we also call its size). In this respect, assuming that the agents form a strong team, i.e., a team in which the number of good agents is at least some prescribed value that is quadratic in f, we give positive and negative results. On the positive side, we show an algorithm that solves Byzantine gathering with all strong teams in all graphs of size at most n, for any integers n and f, in a time polynomial in n and the length |l_{min}| of the binary representation of the smallest label of a good agent. The algorithm works using global knowledge of size O(log log log n), which is of optimal order of magnitude in our context to reach a time complexity that is polynomial in n and |l_{min}|. Indeed, on the negative side, we show that there is no deterministic algorithm solving Byzantine gathering with all strong teams, in all graphs of size at most n, in a time polynomial in n and |l_{min}| and using global knowledge of size o(log log log n).

Cite as

Sébastien Bouchard, Yoann Dieudonné, and Anissa Lamani. Byzantine Gathering in Polynomial Time. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 147:1-147:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{bouchard_et_al:LIPIcs.ICALP.2018.147,
  author =	{Bouchard, S\'{e}bastien and Dieudonn\'{e}, Yoann and Lamani, Anissa},
  title =	{{Byzantine Gathering in Polynomial Time}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{147:1--147:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.147},
  URN =		{urn:nbn:de:0030-drops-91511},
  doi =		{10.4230/LIPIcs.ICALP.2018.147},
  annote =	{Keywords: gathering, deterministic algorithm, mobile agent, Byzantine fault, polynomial time}
}
Document
Temporal Vertex Cover with a Sliding Time Window

Authors: Eleni C. Akrida, George B. Mertzios, Paul G. Spirakis, and Viktor Zamaraev


Abstract
Modern, inherently dynamic systems are usually characterized by a network structure, i.e. an underlying graph topology, which is subject to discrete changes over time. Given a static underlying graph G, a temporal graph can be represented via an assignment of a set of integer time-labels to every edge of G, indicating the discrete time steps when this edge is active. While most of the recent theoretical research on temporal graphs has focused on the notion of a temporal path and other "path-related" temporal notions, only a few attempts have been made to investigate "non-path" temporal graph problems. In this paper, motivated by applications in sensor and in transportation networks, we introduce and study two natural temporal extensions of the classical problem Vertex Cover. In our first problem, Temporal Vertex Cover, the aim is to cover every edge at least once during the lifetime of the temporal graph, where an edge can only be covered by one of its endpoints at a time step when it is active. In our second, more pragmatic variation, Sliding Window Temporal Vertex Cover, we are also given a natural number Delta, and our aim is to cover every edge at least once at every Delta consecutive time steps. In both cases we wish to minimize the total number of "vertex appearances" that are needed to cover the whole graph. We present a thorough investigation of the computational complexity and approximability of these two temporal covering problems. In particular, we provide strong hardness results, complemented by various approximation and exact algorithms. Some of our algorithms are polynomial-time, while others are asymptotically almost optimal under the Exponential Time Hypothesis (ETH) and other plausible complexity assumptions.
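As a small illustration of the sliding-window variant (a checker written for this summary, not an algorithm from the paper), a set of vertex appearances is feasible when, in every window of Delta consecutive time steps, each edge is covered at least once by an endpoint present at a step at which the edge is active.

# Verify a Sliding Window Temporal Vertex Cover: every edge must be covered in
# every window of Delta consecutive time steps. Illustrative sketch only.

def is_sw_tvc(edges, lifetime, delta, appearances):
    # edges: {(u, v): set of active time steps}; appearances: set of (vertex, time)
    for start in range(1, lifetime - delta + 2):
        window = range(start, start + delta)
        for (u, v), active in edges.items():
            if not any(t in active and ((u, t) in appearances or (v, t) in appearances)
                       for t in window):
                return False
    return True

edges = {("a", "b"): {1, 3}, ("b", "c"): {2, 4}}
appearances = {("b", 1), ("b", 2), ("b", 3), ("b", 4)}
print(is_sw_tvc(edges, lifetime=4, delta=2, appearances=appearances))   # True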

Cite as

Eleni C. Akrida, George B. Mertzios, Paul G. Spirakis, and Viktor Zamaraev. Temporal Vertex Cover with a Sliding Time Window. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 148:1-148:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{akrida_et_al:LIPIcs.ICALP.2018.148,
  author =	{Akrida, Eleni C. and Mertzios, George B. and Spirakis, Paul G. and Zamaraev, Viktor},
  title =	{{Temporal Vertex Cover with a Sliding Time Window}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{148:1--148:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.148},
  URN =		{urn:nbn:de:0030-drops-91522},
  doi =		{10.4230/LIPIcs.ICALP.2018.148},
  annote =	{Keywords: Temporal networks, temporal vertex cover, APX-hard, approximation algorithm, Exponential Time Hypothesis}
}
Document
On the Complexity of Sampling Vertices Uniformly from a Graph

Authors: Flavio Chierichetti and Shahrzad Haddadan


Abstract
We study a number of graph exploration problems in the following natural scenario: an algorithm starts exploring an undirected graph from some seed vertex; the algorithm, for an arbitrary vertex v that it is aware of, can ask an oracle to return the set of the neighbors of v. (In the case of social networks, a call to this oracle corresponds to downloading the profile page of user v.) The goal of the algorithm is to either learn something (e.g., average degree) about the graph, or to return some random function of the graph (e.g., a uniform-at-random vertex), while accessing/downloading as few vertices of the graph as possible. Motivated by practical applications, we study the complexities of a variety of problems in terms of the graph's mixing time t_{mix} and average degree d_{avg} - two measures that are believed to be quite small in real-world social networks, and that have often been used in the applied literature to bound the performance of online exploration algorithms. Our main result is that the algorithm has to access Omega (t_{mix} d_{avg} epsilon^{-2} ln delta^{-1}) vertices to obtain, with probability at least 1-delta, an epsilon additive approximation of the average of a bounded function on the vertices of a graph - this lower bound matches the performance of an algorithm that was proposed in the literature. We also give tight bounds for the problem of returning a close-to-uniform-at-random vertex from the graph. Finally, we give lower bounds for the problems of estimating the average degree of the graph, and the number of vertices of the graph.
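 
One standard technique for drawing a close-to-uniform vertex with only neighbor-oracle access is a Metropolis-Hastings random walk, which reweights a simple random walk so that its stationary distribution is uniform over the vertices; the number of steps needed is governed by the walk's mixing time. The sketch below illustrates this generic technique, not the specific algorithms analyzed in the paper; the oracle `neighbors`, the seed vertex and the step count are assumptions.

    import random

    def mh_uniform_vertex(neighbors, seed, steps, rng=random):
        """Metropolis-Hastings walk whose stationary distribution is uniform
        over the vertices; neighbors(v) is the oracle returning v's neighbor set."""
        v = seed
        for _ in range(steps):
            nbrs = list(neighbors(v))
            u = rng.choice(nbrs)                     # propose a uniform random neighbor
            deg_v, deg_u = len(nbrs), len(neighbors(u))
            # accept with probability min(1, deg(v)/deg(u)); this correction makes
            # the chain's stationary distribution uniform over all vertices
            if rng.random() < min(1.0, deg_v / deg_u):
                v = u
        return v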

Cite as

Flavio Chierichetti and Shahrzad Haddadan. On the Complexity of Sampling Vertices Uniformly from a Graph. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 149:1-149:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chierichetti_et_al:LIPIcs.ICALP.2018.149,
  author =	{Chierichetti, Flavio and Haddadan, Shahrzad},
  title =	{{On the Complexity of Sampling Vertices Uniformly from a Graph}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{149:1--149:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.149},
  URN =		{urn:nbn:de:0030-drops-91538},
  doi =		{10.4230/LIPIcs.ICALP.2018.149},
  annote =	{Keywords: Social Networks, Sampling, Graph Exploration, Lower Bounds}
}
Document
The Price of Stability of Weighted Congestion Games

Authors: George Christodoulou, Martin Gairing, Yiannis Giannakopoulos, and Paul G. Spirakis


Abstract
We give exponential lower bounds on the Price of Stability (PoS) of weighted congestion games with polynomial cost functions. In particular, for any positive integer d we construct rather simple games with cost functions of degree at most d which have a PoS of at least Omega(Phi_d)^{d+1}, where Phi_d ~ d/ln d is the unique positive root of equation x^{d+1}=(x+1)^d. This essentially closes the huge gap between Theta(d) and Phi_d^{d+1} and asymptotically matches the Price of Anarchy upper bound. We further show that the PoS remains exponential even for singleton games. More generally, we also provide a lower bound of Omega((1+1/alpha)^d/d) on the PoS of alpha-approximate Nash equilibria, even for singleton games. All our lower bounds extend to network congestion games, and hold for mixed and correlated equilibria as well. On the positive side, we give a general upper bound on the PoS of alpha-approximate Nash equilibria, which is sensitive to the range W of the player weights and the approximation parameter alpha. We do this by explicitly constructing a novel approximate potential function, based on Faulhaber's formula, that generalizes Rosenthal's potential in a continuous, analytic way. From the general theorem, we deduce two interesting corollaries. First, we derive the existence of an approximate pure Nash equilibrium with PoS at most (d+3)/2; the equilibrium's approximation parameter ranges from Theta(1) to d+1 in a smooth way with respect to W. Secondly, we show that for unweighted congestion games, the PoS of alpha-approximate Nash equilibria is at most (d+1)/alpha.
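
For context, Rosenthal's exact potential for an unweighted congestion game with strategy profile s is Phi(s) = sum_e sum_{i=1}^{n_e(s)} c_e(i), where n_e(s) is the number of players using resource e; the approximate potential constructed in the paper generalizes this to weighted players via Faulhaber's formula. A minimal sketch of the classical unweighted potential follows (the toy cost functions and profile are illustrative assumptions).

    from collections import Counter

    def rosenthal_potential(profile, cost):
        """profile: dict player -> set of resources chosen;
        cost: dict resource -> function mapping a load to a cost.
        Returns Phi(s) = sum_e sum_{i=1}^{n_e} c_e(i) for the unweighted game."""
        load = Counter(r for strategy in profile.values() for r in strategy)
        return sum(sum(cost[e](i) for i in range(1, load[e] + 1)) for e in load)

    # toy example: two players share resource 'a', costs c(x) = x^2
    profile = {"p1": {"a", "b"}, "p2": {"a"}}
    cost = {"a": lambda x: x ** 2, "b": lambda x: x ** 2}
    print(rosenthal_potential(profile, cost))  # 1 + 4 (resource a) + 1 (resource b) = 6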

Cite as

George Christodoulou, Martin Gairing, Yiannis Giannakopoulos, and Paul G. Spirakis. The Price of Stability of Weighted Congestion Games. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 150:1-150:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{christodoulou_et_al:LIPIcs.ICALP.2018.150,
  author =	{Christodoulou, George and Gairing, Martin and Giannakopoulos, Yiannis and Spirakis, Paul G.},
  title =	{{The Price of Stability of Weighted Congestion Games}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{150:1--150:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.150},
  URN =		{urn:nbn:de:0030-drops-91541},
  doi =		{10.4230/LIPIcs.ICALP.2018.150},
  annote =	{Keywords: Congestion games, price of stability, Nash equilibrium, approximate equilibrium, potential games}
}
Document
Demand-Independent Optimal Tolls

Authors: Riccardo Colini-Baldeschi, Max Klimm, and Marco Scarsini


Abstract
Wardrop equilibria in nonatomic congestion games are in general inefficient as they do not induce an optimal flow that minimizes the total travel time. Network tolls are a prominent and popular way to induce an optimum flow in equilibrium. The classical approach to find such tolls is marginal cost pricing which requires the exact knowledge of the demand on the network. In this paper, we investigate under which conditions demand-independent optimum tolls exist that induce the system optimum flow for any travel demand in the network. We give several characterizations for the existence of such tolls both in terms of the cost structure and the network structure of the game. Specifically we show that demand-independent optimum tolls exist if and only if the edge cost functions are shifted monomials as used by the Bureau of Public Roads. Moreover, non-negative demand-independent optimum tolls exist when the network is a directed acyclic multi-graph. Finally, we show that any network with a single origin-destination pair admits demand-independent optimum tolls that, although not necessarily non-negative, satisfy a budget constraint.
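
For comparison, the classical marginal-cost toll on an edge with cost function c_e and target flow x_e is tau_e = x_e * c_e'(x_e); for a shifted monomial (BPR-style) cost c_e(x) = a_e + b_e x^d this equals d * b_e * x_e^d, which depends on the realized flow and hence on the demand - exactly the dependence that demand-independent tolls avoid. A minimal sketch of this classical formula (the edge parameters below are illustrative assumptions):

    def marginal_cost_toll(a, b, d, x):
        """Marginal-cost toll for the shifted monomial cost c(x) = a + b * x**d,
        evaluated at flow x: toll = x * c'(x) = d * b * x**d."""
        return d * b * x ** d

    # illustrative edge: c(x) = 1 + 2*x^4, tolled at flow x = 1.5
    print(marginal_cost_toll(a=1.0, b=2.0, d=4, x=1.5))  # 4 * 2 * 1.5**4 = 40.5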

Cite as

Riccardo Colini-Baldeschi, Max Klimm, and Marco Scarsini. Demand-Independent Optimal Tolls. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 151:1-151:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{colinibaldeschi_et_al:LIPIcs.ICALP.2018.151,
  author =	{Colini-Baldeschi, Riccardo and Klimm, Max and Scarsini, Marco},
  title =	{{Demand-Independent Optimal Tolls}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{151:1--151:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.151},
  URN =		{urn:nbn:de:0030-drops-91553},
  doi =		{10.4230/LIPIcs.ICALP.2018.151},
  annote =	{Keywords: nonatomic congestion games, efficiency of equlibria, tolls}
}
Document
Greedy Algorithms for Online Survivable Network Design

Authors: Sina Dehghani, Soheil Ehsani, MohammadTaghi Hajiaghayi, Vahid Liaghat, and Saeed Seddighin


Abstract
In an instance of the network design problem, we are given a graph G=(V,E), an edge-cost function c:E -> R^{>= 0}, and a connectivity criterion. The goal is to find a minimum-cost subgraph H of G that meets the connectivity requirements. An important family of this class is the survivable network design problem (SNDP): given non-negative integers r_{u v} for each pair u,v in V, the solution subgraph H should contain r_{u v} edge-disjoint paths for each pair u and v. While this problem is known to admit good approximation algorithms in the offline case, the problem is much harder in the online setting. Gupta, Krishnaswamy, and Ravi [Gupta et al., 2012] (STOC'09) are the first to consider the online survivable network design problem. They demonstrate an algorithm with competitive ratio of O(k log^3 n), where k=max_{u,v} r_{u v}. Note that the competitive ratio of the algorithm by Gupta et al. grows linearly in k. Since then, an important open problem in the online community [Naor et al., 2011; Gupta et al., 2012] is whether the linear dependence on k can be reduced to a logarithmic dependency. Consider an online greedy algorithm that connects every demand by adding a minimum cost set of edges to H. Surprisingly, we show that this greedy algorithm significantly improves the competitive ratio when a congestion of 2 is allowed on the edges or when the model is stochastic. While our algorithm is fairly simple, our analysis requires a deep understanding of k-connected graphs. In particular, we prove that the greedy algorithm is O(log^2 n log k)-competitive if one satisfies every demand between u and v by r_{uv}/2 edge-disjoint paths. The spirit of our result is similar to the work of Chuzhoy and Li [Chuzhoy and Li, 2012] (FOCS'12), in which the authors give a polylogarithmic approximation algorithm for edge-disjoint paths with congestion 2. Moreover, we study the greedy algorithm in the online stochastic setting. We consider the i.i.d. model, where each online demand is drawn from a single probability distribution, the unknown i.i.d. model, where every demand is drawn from a single but unknown probability distribution, and the prophet model in which online demands are drawn from (possibly) different probability distributions. Through a different analysis, we prove that a similar greedy algorithm is constant competitive for the i.i.d. and the prophet models. Also, the greedy algorithm is O(log n)-competitive for the unknown i.i.d. model, which is almost tight due to the lower bound of [Garg et al., 2008] for single connectivity.
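
The greedy rule studied here is easy to state in the single-connectivity case: when a demand (u, v) arrives, buy a cheapest edge set connecting u and v given everything bought so far, i.e., a shortest u-v path in which already-purchased edges count as free. The sketch below shows only this greedy step, not the paper's analysis or its congestion-2 variant; the adjacency-list format and Dijkstra routine are assumptions.

    import heapq

    def greedy_connect(adj, bought, u, v):
        """adj: dict node -> list of (neighbor, cost); bought: set of frozenset edges.
        Greedy online step: buy a cheapest u-v path, already-bought edges are free."""
        def w(a, b, c):
            return 0.0 if frozenset((a, b)) in bought else c

        dist, prev = {u: 0.0}, {}
        pq = [(0.0, u)]
        while pq:                                   # Dijkstra with discounted weights
            d, x = heapq.heappop(pq)
            if d > dist.get(x, float("inf")):
                continue
            for y, c in adj[x]:
                nd = d + w(x, y, c)
                if nd < dist.get(y, float("inf")):
                    dist[y], prev[y] = nd, x
                    heapq.heappush(pq, (nd, y))
        x = v                                       # assumes v is reachable from u
        while x != u:                               # walk back and mark the path as bought
            bought.add(frozenset((x, prev[x])))
            x = prev[x]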

Cite as

Sina Dehghani, Soheil Ehsani, MohammadTaghi Hajiaghayi, Vahid Liaghat, and Saeed Seddighin. Greedy Algorithms for Online Survivable Network Design. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 152:1-152:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{dehghani_et_al:LIPIcs.ICALP.2018.152,
  author =	{Dehghani, Sina and Ehsani, Soheil and Hajiaghayi, MohammadTaghi and Liaghat, Vahid and Seddighin, Saeed},
  title =	{{Greedy Algorithms for Online Survivable Network Design}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{152:1--152:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.152},
  URN =		{urn:nbn:de:0030-drops-91569},
  doi =		{10.4230/LIPIcs.ICALP.2018.152},
  annote =	{Keywords: survivable network design, online, greedy}
}
Document
Algorithms for Noisy Broadcast with Erasures

Authors: Ofer Grossman, Bernhard Haeupler, and Sidhanth Mohanty


Abstract
The noisy broadcast model was first studied by [Gallager, 1988] where an n-character input is distributed among n processors, so that each processor receives one input bit. Computation proceeds in rounds, where in each round each processor broadcasts a single character, and each reception is corrupted independently at random with some probability p. [Gallager, 1988] gave an algorithm for all processors to learn the input in O(log log n) rounds with high probability. Later, a matching lower bound of Omega(log log n) was given by [Goyal et al., 2008]. We study a relaxed version of this model where each reception is erased and replaced with a `?' independently with probability p, so the processors have knowledge of whether a bit has been corrupted. In this relaxed model, we break past the lower bound of [Goyal et al., 2008] and obtain an O(log^* n)-round algorithm for all processors to learn the input with high probability. We also show an O(1)-round algorithm for the same problem when the alphabet size is Omega(poly(n)).

Cite as

Ofer Grossman, Bernhard Haeupler, and Sidhanth Mohanty. Algorithms for Noisy Broadcast with Erasures. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 153:1-153:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{grossman_et_al:LIPIcs.ICALP.2018.153,
  author =	{Grossman, Ofer and Haeupler, Bernhard and Mohanty, Sidhanth},
  title =	{{Algorithms for Noisy Broadcast with Erasures}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{153:1--153:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.153},
  URN =		{urn:nbn:de:0030-drops-91576},
  doi =		{10.4230/LIPIcs.ICALP.2018.153},
  annote =	{Keywords: noisy broadcast, error correction, erasures, distributed computing with noise}
}
Document
Efficient Black-Box Reductions for Separable Cost Sharing

Authors: Tobias Harks, Martin Hoefer, Anja Huber, and Manuel Surek


Abstract
In cost sharing games with delays, a set of agents jointly uses a finite subset of resources. Each resource has a fixed cost that has to be shared by the players, and each agent has a non-shareable player-specific delay for each resource. A prominent example is uncapacitated facility location (UFL), where facilities need to be opened (at a shareable cost) and clients want to connect to opened facilities. Each client pays a cost share and his non-shareable physical connection cost. Given any profile of subsets used by the agents, a separable cost sharing protocol determines cost shares that satisfy budget balance on every resource and separability over the resources. Moreover, a separable protocol guarantees existence of pure Nash equilibria in the induced strategic game for the agents. In this paper, we study separable cost sharing protocols in several general combinatorial domains. We provide black-box reductions to reduce the design of a separable cost sharing protocol to the design of an approximation algorithm for the underlying cost minimization problem. In this way, we obtain new separable cost sharing protocols in games based on arbitrary player-specific matroids, single-source connection games without delays, and connection games on n-series-parallel graphs with delays. All these reductions are efficiently computable - given an initial allocation profile, we obtain a profile of no larger cost and separable cost shares turning the profile into a pure Nash equilibrium. Hence, in these domains any approximation algorithm can be used to obtain a separable cost sharing protocol with a price of stability bounded by the approximation factor.

Cite as

Tobias Harks, Martin Hoefer, Anja Huber, and Manuel Surek. Efficient Black-Box Reductions for Separable Cost Sharing. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 154:1-154:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{harks_et_al:LIPIcs.ICALP.2018.154,
  author =	{Harks, Tobias and Hoefer, Martin and Huber, Anja and Surek, Manuel},
  title =	{{Efficient Black-Box Reductions for Separable Cost Sharing}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{154:1--154:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.154},
  URN =		{urn:nbn:de:0030-drops-91587},
  doi =		{10.4230/LIPIcs.ICALP.2018.154},
  annote =	{Keywords: Cost Sharing, Price of Stability, Matroids, Connection Games}
}
Document
Price of Anarchy for Mechanisms with Risk-Averse Agents

Authors: Thomas Kesselheim and Bojana Kodric


Abstract
We study the price of anarchy of mechanisms in the presence of risk-averse agents. Previous work has focused on agents with quasilinear utilities, possibly with a budget. Our model subsumes this as a special case but also captures that agents might be less sensitive to payments than in the risk-neutral model. We show that many positive price-of-anarchy results proved in the smoothness framework continue to hold in the more general risk-averse setting. A sufficient condition is that agents can never end up with negative quasilinear utility after playing an undominated strategy. This is true, e.g., for first-price and second-price auctions. For all-pay auctions, similar results do not hold: We show that there are Bayes-Nash equilibria with arbitrarily bad social welfare compared to the optimum.

Cite as

Thomas Kesselheim and Bojana Kodric. Price of Anarchy for Mechanisms with Risk-Averse Agents. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 155:1-155:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{kesselheim_et_al:LIPIcs.ICALP.2018.155,
  author =	{Kesselheim, Thomas and Kodric, Bojana},
  title =	{{Price of Anarchy for Mechanisms with Risk-Averse Agents}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{155:1--155:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.155},
  URN =		{urn:nbn:de:0030-drops-91599},
  doi =		{10.4230/LIPIcs.ICALP.2018.155},
  annote =	{Keywords: Mechanism Design, Price of Anarchy, Risk Aversion, Smoothness}
}
Document
Polynomial Counting in Anonymous Dynamic Networks with Applications to Anonymous Dynamic Algebraic Computations

Authors: Dariusz R. Kowalski and Miguel A. Mosteiro


Abstract
Starting with the work of Michail, Chatzigiannakis, and Spirakis [Michail et al., 2013], the problem of Counting the number of nodes in Anonymous Dynamic Networks has attracted a lot of attention. The problem is challenging because nodes are indistinguishable (they lack identifiers and execute the same program) and the topology may change arbitrarily from round to round of communication, as long as the network is connected in each round. The problem is central in distributed computing as the number of participants is frequently needed to make important decisions, such as termination, agreement, synchronization, and many others. A variety of algorithms built on top of mass-distribution techniques have been presented, analyzed, and also experimentally evaluated; some of them assumed additional knowledge of network characteristics, such as bounded degree or given upper bound on the network size. However, the question of whether Counting can be solved deterministically in sub-exponential time remained open. In this work, we answer this question positively by presenting Methodical Counting, which runs in polynomial time and requires no knowledge of network characteristics. Moreover, we also show how to extend Methodical Counting to compute the sum of input values and more complex functions without extra cost. Our analysis leverages previous work on random walks in evolving graphs, combined with carefully chosen alarms in the algorithm that control the process and its parameters. To the best of our knowledge, our Counting algorithm and its extensions to other algebraic and Boolean functions are the first that can be implemented in practice with worst-case guarantees.

Cite as

Dariusz R. Kowalski and Miguel A. Mosteiro. Polynomial Counting in Anonymous Dynamic Networks with Applications to Anonymous Dynamic Algebraic Computations. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 156:1-156:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{kowalski_et_al:LIPIcs.ICALP.2018.156,
  author =	{Kowalski, Dariusz R. and Mosteiro, Miguel A.},
  title =	{{Polynomial Counting in Anonymous Dynamic Networks with Applications to Anonymous Dynamic Algebraic Computations}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{156:1--156:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.156},
  URN =		{urn:nbn:de:0030-drops-91602},
  doi =		{10.4230/LIPIcs.ICALP.2018.156},
  annote =	{Keywords: Anonymous Dynamic Networks, Counting, Boolean functions, distributed algorithms, deterministic algorithms}
}
Document
The Unfortunate-Flow Problem

Authors: Orna Kupferman and Gal Vardi


Abstract
In the traditional maximum-flow problem, the goal is to transfer maximum flow in a network by directing, in each vertex in the network, incoming flow into outgoing edges. The problem is one of the most fundamental problems in TCS, with applications in numerous domains. The fact that a maximal-flow algorithm directs the flow in all the vertices of the network corresponds to a setting in which the authority has control in all vertices. Many applications in which the maximal-flow problem is applied involve an adversarial setting, where the authority does not have such control. We introduce and study the unfortunate flow problem, which studies the flow that is guaranteed to reach the target when the edges that leave the source are saturated, yet the most unfortunate decisions are taken in the vertices. When the incoming flow to a vertex is greater than the outgoing capacity, flow is lost. The problem models evacuation scenarios where traffic is stuck due to jams in junctions and communication networks where packets are dropped in overloaded routers. We study the theoretical properties of unfortunate flows, show that the unfortunate-flow problem is co-NP-complete and point to polynomial fragments. We introduce and study interesting variants of the problem: integral unfortunate flow, where the flow along edges must be integral, controlled unfortunate flow, where the edges from the source need not be saturated and may be controlled, and no-loss controlled unfortunate flow, where the controlled flow must not be lost.

Cite as

Orna Kupferman and Gal Vardi. The Unfortunate-Flow Problem. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 157:1-157:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{kupferman_et_al:LIPIcs.ICALP.2018.157,
  author =	{Kupferman, Orna and Vardi, Gal},
  title =	{{The Unfortunate-Flow Problem}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{157:1--157:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.157},
  URN =		{urn:nbn:de:0030-drops-91613},
  doi =		{10.4230/LIPIcs.ICALP.2018.157},
  annote =	{Keywords: Flow Network, Graph Algorithms, Games}
}
Document
Spanning Trees With Edge Conflicts and Wireless Connectivity

Authors: Magnús M. Halldórsson, Guy Kortsarz, Pradipta Mitra, and Tigran Tonoyan


Abstract
We introduce the problem of finding a spanning tree along with a partition of the tree edges into the fewest number of feasible sets, where constraints on the edges define feasibility. The motivation comes from wireless networking, where we seek to model the irregularities seen in actual wireless environments. Not all node pairs may be able to communicate, even if geographically close - thus, the available pairs are specified with a link graph L=(V,E). Also, signal attenuation need not follow a nice geometric formula - hence, interference is modeled by a conflict (hyper)graph C=(E,F) on the links. The objective is to maximize the efficiency of the communication, or equivalently, to minimize the length of a schedule of the tree edges in the form of a coloring. We find that in spite of all this generality, the problem can be approximated linearly in terms of a versatile parameter, the inductive independence of the interference graph. Specifically, we give a simple algorithm that attains an O(rho log n)-approximation, where n is the number of nodes and rho is the inductive independence, and show that near-linear dependence on rho is also necessary. We also treat an extension to Steiner trees, modeling multicasting, and obtain a comparable result. Our results suggest that several canonical assumptions of geometry, regularity and "niceness" in wireless settings can sometimes be relaxed without a significant hit in algorithm performance.

Cite as

Magnús M. Halldórsson, Guy Kortsarz, Pradipta Mitra, and Tigran Tonoyan. Spanning Trees With Edge Conflicts and Wireless Connectivity. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 158:1-158:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{halldorsson_et_al:LIPIcs.ICALP.2018.158,
  author =	{Halld\'{o}rsson, Magn\'{u}s M. and Kortsarz, Guy and Mitra, Pradipta and Tonoyan, Tigran},
  title =	{{Spanning Trees With Edge Conflicts and Wireless Connectivity}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{158:1--158:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.158},
  URN =		{urn:nbn:de:0030-drops-91627},
  doi =		{10.4230/LIPIcs.ICALP.2018.158},
  annote =	{Keywords: spanning trees, wireless capacity, aggregation, approximation algorithms}
}
Document
Eigenvector Computation and Community Detection in Asynchronous Gossip Models

Authors: Frederik Mallmann-Trenn, Cameron Musco, and Christopher Musco


Abstract
We give a simple distributed algorithm for computing adjacency matrix eigenvectors for the communication graph in an asynchronous gossip model. We show how to use this algorithm to give state-of-the-art asynchronous community detection algorithms when the communication graph is drawn from the well-studied stochastic block model. Our methods also apply to a natural alternative model of randomized communication, where nodes within a community communicate more frequently than nodes in different communities. Our analysis simplifies and generalizes prior work by forging a connection between asynchronous eigenvector computation and Oja's algorithm for streaming principal component analysis. We hope that our work serves as a starting point for building further connections between the analysis of stochastic iterative methods, like Oja's algorithm, and work on asynchronous and gossip-type algorithms for distributed computation.
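
Oja's algorithm, the streaming PCA method the analysis builds on, keeps a unit vector w and, for each sample x, takes a step of size eta in the direction x * (x^T w) before renormalizing; in the limit w converges to a top eigenvector of E[x x^T]. Below is a minimal centralized sketch of this classical rule (the step size and synthetic data are assumptions); the paper's contribution is showing how an analogous update can be carried out asynchronously by gossiping nodes.

    import numpy as np

    def oja_top_eigenvector(samples, eta=0.05, seed=0):
        """Oja's rule: streaming estimate of the top eigenvector of E[x x^T]."""
        rng = np.random.default_rng(seed)
        w = rng.standard_normal(samples.shape[1])
        w /= np.linalg.norm(w)
        for x in samples:
            w += eta * x * (x @ w)     # move toward x, scaled by the projection <x, w>
            w /= np.linalg.norm(w)     # project back onto the unit sphere
        return w

    # toy check on correlated 2-d Gaussian data
    X = np.random.default_rng(1).multivariate_normal([0, 0], [[3, 1], [1, 1]], size=5000)
    print(oja_top_eigenvector(X))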

Cite as

Frederik Mallmann-Trenn, Cameron Musco, and Christopher Musco. Eigenvector Computation and Community Detection in Asynchronous Gossip Models. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 159:1-159:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{mallmanntrenn_et_al:LIPIcs.ICALP.2018.159,
  author =	{Mallmann-Trenn, Frederik and Musco, Cameron and Musco, Christopher},
  title =	{{Eigenvector Computation and Community Detection in Asynchronous Gossip Models}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{159:1--159:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.159},
  URN =		{urn:nbn:de:0030-drops-91639},
  doi =		{10.4230/LIPIcs.ICALP.2018.159},
  annote =	{Keywords: block model, community detection, distributed clustering, eigenvector computation, gossip algorithms, population protocols}
}
Document
(Delta+1) Coloring in the Congested Clique Model

Authors: Merav Parter


Abstract
In this paper, we present improved algorithms for the (Delta+1) (vertex) coloring problem in the Congested Clique model of distributed computing. In this model, the input is a graph on n nodes, initially each node knows only its incident edges, and per round each two nodes can exchange O(log n) bits of information. Our key result is a randomized (Delta+1) vertex coloring algorithm that works in O(log log Delta * log^* Delta)-rounds. This is achieved by combining the recent breakthrough result of [Chang-Li-Pettie, STOC'18] in the {LOCAL} model and a degree reduction technique. We also get the following results with high probability: (1) (Delta+1)-coloring for Delta=O((n/log n)^{1-epsilon}) for any epsilon in (0,1), within O(log(1/epsilon)log^* Delta) rounds, and (2) (Delta+Delta^{1/2+o(1)})-coloring within O(log^* Delta) rounds. Turning to deterministic algorithms, we show a (Delta+1)-coloring algorithm that works in O(log Delta) rounds. Our new bounds provide exponential improvements over the state of the art.
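
A standard building block behind randomized (Delta+1)-coloring is the one-round color trial: every uncolored vertex proposes a color from its remaining palette and keeps it only if no neighbor proposed or already holds that color; with palettes of size Delta+1, each uncolored vertex succeeds with constant probability per round. The sketch below shows this generic primitive only, not the paper's Congested Clique algorithm; the graph and palette representation are assumptions.

    import random

    def color_trial_round(adj, palette, color, rng=random):
        """One round of the randomized color trial.
        adj: dict v -> set of neighbors; palette: dict v -> set of allowed colors
        (non-empty for uncolored v); color: dict v -> assigned color or None."""
        proposal = {v: rng.choice(sorted(palette[v]))
                    for v in adj if color[v] is None}
        for v, c in proposal.items():
            if all(proposal.get(u) != c and color[u] != c for u in adj[v]):
                color[v] = c                      # keep the color: no conflict
                for u in adj[v]:
                    palette[u].discard(c)         # neighbors drop it from their palettes
        return color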

Cite as

Merav Parter. (Delta+1) Coloring in the Congested Clique Model. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 160:1-160:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{parter:LIPIcs.ICALP.2018.160,
  author =	{Parter, Merav},
  title =	{{(Delta+1) Coloring in the Congested Clique Model}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{160:1--160:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.160},
  URN =		{urn:nbn:de:0030-drops-91640},
  doi =		{10.4230/LIPIcs.ICALP.2018.160},
  annote =	{Keywords: Distributed Graph Algorithms, Coloring, Congested Clique}
}
Document
CacheShuffle: A Family of Oblivious Shuffles

Authors: Sarvar Patel, Giuseppe Persiano, and Kevin Yeo


Abstract
We consider oblivious two-party protocols where a client outsources N blocks of private data to a server. The client wishes to access the data to perform operations in such a way that the access pattern does not leak information about the data and the operations. In this context, we consider oblivious shuffling with a focus on bandwidth efficient protocols for clients with small local memory. In the shuffling problem, the N outsourced blocks, B_1,...,B_N, are stored on the server according to an initial permutation pi. The client wishes to reshuffle the blocks according to permutation sigma. Oblivious shuffling is a building block in several applications that hide patterns of data access. In this paper, we introduce a generalization of the oblivious shuffling problem, the K-oblivious shuffling problem, and provide bandwidth efficient algorithms for a wide range of client storage requirements. The task of a K-oblivious shuffling algorithm is to shuffle N encrypted blocks that were previously randomly allocated on the server in such a way that an adversarial server learns nothing about either the new allocation of blocks or the block contents. The security guarantee must hold when an adversary has partial information on the initial placement of a subset of K <= N revealed blocks. The notion of oblivious shuffling is obtained for K=N. We first study the N-oblivious shuffling problem and start by presenting CacheShuffleRoot, which is tailored for clients with O(sqrt{N}) blocks of memory and uses approximately 4N blocks of bandwidth. CacheShuffleRoot is a 4x improvement over the previous best known N-oblivious shuffle for practical sizes of N. We then generalize CacheShuffleRoot to CacheShuffle, which can be instantiated for any client memory size S and requires O(N log_S N) blocks of bandwidth. Next, we present K-oblivious shuffling algorithms that require 2N + f(K,S) blocks of bandwidth for all K and a wide range of S. Any extra bandwidth above the 2N lower bound depends solely on K and S. Specifically, for clients with O(K) blocks of memory, we present KCacheShuffleBasic that uses exactly 2N blocks of bandwidth. For clients with memory S <= K, we present KCacheShuffle, which requires 2N + O(K log_S K) blocks of bandwidth. Finally, motivated by applications to ORAMs, we consider the case where the server stores D dummy blocks whose contents are irrelevant in addition to the N real blocks. For this case, we design algorithm KCacheShuffleDummy that shuffles N+D blocks with K revealed blocks using O(K) blocks of client storage and approximately D+2N blocks of bandwidth.

Cite as

Sarvar Patel, Giuseppe Persiano, and Kevin Yeo. CacheShuffle: A Family of Oblivious Shuffles. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 161:1-161:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{patel_et_al:LIPIcs.ICALP.2018.161,
  author =	{Patel, Sarvar and Persiano, Giuseppe and Yeo, Kevin},
  title =	{{CacheShuffle: A Family of Oblivious Shuffles}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{161:1--161:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.161},
  URN =		{urn:nbn:de:0030-drops-91651},
  doi =		{10.4230/LIPIcs.ICALP.2018.161},
  annote =	{Keywords: Shuffling, Data-Oblivious Algorithms}
}
Document
Brief Announcement
Brief Announcement: MapReduce Algorithms for Massive Trees

Authors: MohammadHossein Bateni, Soheil Behnezhad, Mahsa Derakhshan, MohammadTaghi Hajiaghayi, and Vahab Mirrokni


Abstract
Solving large-scale graph problems is a fundamental task in many real-world applications, and it is an increasingly important problem in data analysis. Despite the large effort in designing scalable graph algorithms, many classic graph problems lack algorithms that require only a sublinear number of machines and space in the input size. Specifically, when the input graph is large and sparse, which is indeed the case for many real-world graphs, it becomes impossible to store and access all the vertices in one machine - something that is often taken for granted in designing algorithms for massive graphs. The theoretical model that we consider is the Massively Parallel Communications (MPC) model, which is a popular theoretical model of MapReduce-like systems. In this paper, we give an algorithmic framework to adapt a large family of dynamic programs on MPC. We start by introducing two classes of dynamic programming problems, namely "(poly log)-expressible" and "linear-expressible" problems. We show that both classes can be solved efficiently using a sublinear number of machines and a sublinear memory per machine. To achieve this result, we introduce a series of techniques that can be plugged together. To illustrate the generality of our framework, we implement, in O(log n) rounds of MPC, the dynamic programming solution of fundamental problems such as minimum bisection, k-spanning tree, maximum independent set, longest path, etc., when the input graph is a tree.
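
A representative member of the dynamic-programming family in question is maximum-weight independent set on a tree: every vertex carries two values, the best solution in its subtree with and without the vertex itself, combined bottom-up from its children. The sequential baseline below only illustrates the kind of recurrence such a framework parallelizes on MPC; the rooted-tree representation is an assumption.

    def tree_max_independent_set(children, weight, root):
        """children: dict v -> list of children; weight: dict v -> nonnegative weight.
        Returns the maximum weight of an independent set in the rooted tree."""
        take, skip = {}, {}

        def solve(v):                      # bottom-up over the subtree of v
            take[v], skip[v] = weight[v], 0
            for c in children.get(v, []):
                solve(c)
                take[v] += skip[c]         # if v is taken, its children must be skipped
                skip[v] += max(take[c], skip[c])
            return max(take[v], skip[v])

        return solve(root)

    # toy tree: root 0 with children 1 and 2; vertex 1 has child 3
    children = {0: [1, 2], 1: [3]}
    weight = {0: 2, 1: 5, 2: 1, 3: 4}
    print(tree_max_independent_set(children, weight, 0))  # best set {1, 2} or {0, 3}: 6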

Cite as

MohammadHossein Bateni, Soheil Behnezhad, Mahsa Derakhshan, MohammadTaghi Hajiaghayi, and Vahab Mirrokni. Brief Announcement: MapReduce Algorithms for Massive Trees. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 162:1-162:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{bateni_et_al:LIPIcs.ICALP.2018.162,
  author =	{Bateni, MohammadHossein and Behnezhad, Soheil and Derakhshan, Mahsa and Hajiaghayi, MohammadTaghi and Mirrokni, Vahab},
  title =	{{Brief Announcement: MapReduce Algorithms for Massive Trees}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{162:1--162:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.162},
  URN =		{urn:nbn:de:0030-drops-91666},
  doi =		{10.4230/LIPIcs.ICALP.2018.162},
  annote =	{Keywords: MapReduce, Trees}
}
Document
Brief Announcement
Brief Announcement: Give Me Some Slack: Efficient Network Measurements

Authors: Ran Ben Basat, Gil Einziger, and Roy Friedman


Abstract
Many networking applications require timely access to recent network measurements, which can be captured using a sliding window model. Maintaining such measurements is a challenging task due to the fast line speed and scarcity of fast memory in routers. In this work, we study the impact of allowing slack in the window size on the asymptotic requirements of sliding window problems. That is, the algorithm can dynamically adjust the window size between W and W(1+tau) where tau is a small positive parameter. We demonstrate this model's attractiveness by showing that it enables efficient algorithms for problems such as Maximum and General-Summing that require Omega(W) bits even for constant factor approximations in the exact sliding window model. Additionally, for problems that admit sub-linear approximation algorithms such as Basic-Summing and Count-Distinct, the slack model enables a further asymptotic improvement. The main focus of our paper [Ben Basat et al., 2017] is on the widely studied Basic-Summing problem of computing the sum of the last W integers from {0,1,...,R} in a stream. While it is known that Omega(W log R) bits are needed in the exact window model, we show that approximate windows allow an exponential space reduction for constant tau. Specifically, for tau=Theta(1), we present a space lower bound of Omega(log(RW)) bits. Additionally, we show an Omega(log(W/epsilon)) lower bound for RW epsilon additive approximations and an Omega(log(W/epsilon) + log log R) bits lower bound for (1+epsilon) multiplicative approximations. Our work is the first to study this problem in the exact and additive approximation settings. For all settings, we provide memory optimal algorithms that operate in worst case constant time. This strictly improves on the work of [Mayur Datar et al., 2002] for (1+epsilon)-multiplicative approximation, which requires O(epsilon^{-1} log(RW) log log(RW)) space and performs updates in O(log(RW)) worst case time. Finally, we show asymptotic improvements for the Count-Distinct, General-Summing and Maximum problems.
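
The benefit of slack is easiest to see for Basic-Summing: group the stream into blocks of roughly tau*W items, keep one running sum per block, and answer with the sum of the retained complete blocks plus the current partial block; the reported sum then always covers between W and about W(1+tau) of the most recent items while storing only O(1/tau) counters. The sketch below illustrates this generic bucketing idea, not the paper's memory-optimal constant-time algorithms; the rounding of the block size is an assumption.

    from collections import deque
    from math import ceil

    class SlackWindowSum:
        """Approximate sliding-window sum: the reported sum always covers between
        W and roughly W*(1+tau) of the most recent items, using O(1/tau) counters."""
        def __init__(self, W, tau):
            self.block = max(1, int(tau * W))     # items per block
            self.k = ceil(W / self.block)         # complete blocks to retain
            self.blocks = deque()                 # sums of recent complete blocks
            self.partial, self.count = 0, 0

        def add(self, x):
            self.partial += x
            self.count += 1
            if self.count == self.block:          # block is full: seal it
                self.blocks.append(self.partial)
                if len(self.blocks) > self.k:
                    self.blocks.popleft()         # expire the oldest block
                self.partial, self.count = 0, 0

        def query(self):
            return sum(self.blocks) + self.partial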

Cite as

Ran Ben Basat, Gil Einziger, and Roy Friedman. Brief Announcement: Give Me Some Slack: Efficient Network Measurements. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 163:1-163:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{benbasat_et_al:LIPIcs.ICALP.2018.163,
  author =	{Ben Basat, Ran and Einziger, Gil and Friedman, Roy},
  title =	{{Brief Announcement: Give Me Some Slack: Efficient Network Measurements}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{163:1--163:5},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.163},
  URN =		{urn:nbn:de:0030-drops-91672},
  doi =		{10.4230/LIPIcs.ICALP.2018.163},
  annote =	{Keywords: Streaming, Algorithms, Sliding window, Lower bounds}
}
Document
Brief Announcement
Brief Announcement: Towards an Abstract Model of User Retention Dynamics

Authors: Eli Ben-Sasson and Eden Saig


Abstract
A theoretical model is suggested for abstracting the interaction between an expert system and its users, with a focus on reputation and incentive compatibility. The model assumes users interact with the system while keeping in mind a single "retention parameter" that measures the strength of their belief in its predictive power, and the system's objective is to reinforce and maximize this parameter through "informative" and "correct" predictions. We define a natural class of retentive scoring rules to model the way users update their retention parameter and thus evaluate the experts they interact with. Assuming agents in the model have an incentive to report their true belief, these rules are shown to be tightly connected to truth-eliciting "proper scoring rules" studied in Decision Theory. The difference between users and experts is modeled by imposing different limits on their predictive abilities, characterized by a parameter called memory span. We prove the monotonicity theorem ("more knowledge is better"), which shows that experts with larger memory span retain better in expectation. Finally, we focus on the intrinsic properties of phenomena that are amenable to collaborative discovery with an expert system. Assuming user types (or "identities") are sampled from a distribution D, the retention complexity of D is the minimal initial retention value (or "strength of faith") that a user must have before approaching the expert, in order for the expert to retain that user throughout the collaborative discovery, during which the user "discovers" his true "identity". We then take a first step towards relating retention complexity to other established computational complexity measures by studying retention dynamics when D is a uniform distribution over a linear space.
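
For reference, a proper scoring rule is one under which reporting one's true belief maximizes the expected score; the quadratic (Brier) rule is the classical example. The sketch below only illustrates this standard notion, to which the paper's retentive scoring rules are related; the grid search is an illustrative check, not part of the paper.

    def brier_score(p, outcome):
        """Quadratic (Brier) scoring rule for a binary event, as a reward:
        higher is better; outcome is 0 or 1, p is the reported probability of 1."""
        return -((p - outcome) ** 2)

    def expected_score(report, true_prob):
        return (true_prob * brier_score(report, 1)
                + (1 - true_prob) * brier_score(report, 0))

    # propriety: the expected score is maximized by reporting the true probability
    q = 0.7
    best = max((expected_score(p / 100, q), p / 100) for p in range(101))
    print(best)   # maximized at report = 0.70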

Cite as

Eli Ben-Sasson and Eden Saig. Brief Announcement: Towards an Abstract Model of User Retention Dynamics. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 164:1-164:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{bensasson_et_al:LIPIcs.ICALP.2018.164,
  author =	{Ben-Sasson, Eli and Saig, Eden},
  title =	{{Brief Announcement: Towards an Abstract Model of User Retention Dynamics}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{164:1--164:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.164},
  URN =		{urn:nbn:de:0030-drops-91682},
  doi =		{10.4230/LIPIcs.ICALP.2018.164},
  annote =	{Keywords: information elicitation, proper scoring rules, retention complexity}
}
Document
Brief Announcement
Brief Announcement: Energy Constrained Depth First Search

Authors: Shantanu Das, Dariusz Dereniowski, and Przemyslaw Uznanski


Abstract
Depth first search is a natural algorithmic technique for constructing a closed route that visits all vertices of a graph. The length of such a route equals, in an edge-weighted tree, twice the total weight of all edges of the tree, and this is asymptotically optimal over all exploration strategies. This paper considers a variant of such search strategies where the length of each route is bounded by a positive integer B (e.g. due to limited energy resources of the searcher). The objective is to cover all the edges of a tree T using the minimum number of routes, each starting and ending at the root and each being of length at most B. To this end, we analyze the following natural greedy tree traversal process that is based on decomposing a depth first search traversal into a sequence of limited length routes. Given any arbitrary depth first search traversal R of the tree T, we cover R with routes R_1,...,R_l, each of length at most B, such that: R_i starts at the root, reaches directly the farthest point of R visited by R_{i-1}, then R_i continues along the path R as far as possible, and finally R_i returns to the root. We call the above algorithm piecemeal-DFS and we prove that it achieves the asymptotically minimal number of routes l, regardless of the choice of R. Our analysis also shows that the total length of the traversal (and thus the traversal time) of piecemeal-DFS is asymptotically minimum over all energy-constrained exploration strategies. The fact that R can be chosen arbitrarily means that the exploration strategy can be constructed in an online fashion when the input tree T is not known in advance. Each route R_i can be constructed without any knowledge of the yet unvisited part of T. Surprisingly, our results show that depth first search is efficient for energy constrained exploration of trees, even though it is known that the same does not hold for energy constrained exploration of arbitrary graphs.
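
The greedy chopping rule can be sketched directly: walk along the fixed traversal R, and close the current route as soon as taking the next step and then returning to the root would exceed the budget B; the next route starts by going straight back to the stopping point. In the sketch below, R is the sequence of visited vertices, step(i) is the cost of the i-th move along R, and to_root(v) is the cost of a direct path from v to the root; these interfaces are assumptions made for illustration.

    def piecemeal_routes(R, step, to_root, B):
        """Chop a DFS traversal R (list of vertices, R[0] = root) into routes of
        length at most B.  step(i) is the cost of moving R[i-1] -> R[i];
        to_root(v) is the cost of going from v straight back to the root.
        Returns the stopping index on R of each route."""
        stops, i = [], 0                   # i = index on R where the last route stopped
        while i < len(R) - 1:
            used = to_root(R[i])           # go directly to the previous stopping point
            j = i
            # advance along R while the budget still allows a return to the root
            while j + 1 < len(R) and used + step(j + 1) + to_root(R[j + 1]) <= B:
                used += step(j + 1)
                j += 1
            if j == i:
                raise ValueError("budget B too small to make progress")
            stops.append(j)
            i = j
        return stops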

Cite as

Shantanu Das, Dariusz Dereniowski, and Przemyslaw Uznanski. Brief Announcement: Energy Constrained Depth First Search. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 165:1-165:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{das_et_al:LIPIcs.ICALP.2018.165,
  author =	{Das, Shantanu and Dereniowski, Dariusz and Uznanski, Przemyslaw},
  title =	{{Brief Announcement: Energy Constrained Depth First Search}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{165:1--165:5},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.165},
  URN =		{urn:nbn:de:0030-drops-91693},
  doi =		{10.4230/LIPIcs.ICALP.2018.165},
  annote =	{Keywords: DFS traversal, distributed algorithm, graph exploration, piecemeal exploration, online exploration}
}
