LIPIcs, Volume 187

38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)



Event

STACS 2021, March 16-19, 2021, Saarbrücken, Germany (Virtual Conference)

Editors

Markus Bläser
  • Saarland University, Germany
Benjamin Monmege
  • Aix-Marseille University, France

Publication Details

  • published at: 2021-03-10
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-180-1
  • DBLP: db/conf/stacs/stacs2021

Documents
Document
Complete Volume
LIPIcs, Volume 187, STACS 2021, Complete Volume

Authors: Markus Bläser and Benjamin Monmege


Abstract
LIPIcs, Volume 187, STACS 2021, Complete Volume

Cite as

38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 1-988, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Proceedings{blaser_et_al:LIPIcs.STACS.2021,
  title =	{{LIPIcs, Volume 187, STACS 2021, Complete Volume}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{1--988},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021},
  URN =		{urn:nbn:de:0030-drops-136444},
  doi =		{10.4230/LIPIcs.STACS.2021},
  annote =	{Keywords: LIPIcs, Volume 187, STACS 2021, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Markus Bläser and Benjamin Monmege


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{blaser_et_al:LIPIcs.STACS.2021.0,
  author =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{0:i--0:xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.0},
  URN =		{urn:nbn:de:0030-drops-136459},
  doi =		{10.4230/LIPIcs.STACS.2021.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Optimization, Complexity and Invariant Theory (Invited Talk)

Authors: Peter Bürgisser


Abstract
Invariant and representation theory studies symmetries by means of group actions and is a well established source of unifying principles in mathematics and physics. Recent research suggests its relevance for complexity and optimization through quantitative and algorithmic questions. The goal of the talk is to give an introduction to new algorithmic and analysis techniques that extend convex optimization from the classical Euclidean setting to a general geodesic setting. We also point out surprising connections to a diverse set of problems in different areas of mathematics, statistics, computer science, and physics.

Cite as

Peter Bürgisser. Optimization, Complexity and Invariant Theory (Invited Talk). In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{burgisser:LIPIcs.STACS.2021.1,
  author =	{B\"{u}rgisser, Peter},
  title =	{{Optimization, Complexity and Invariant Theory}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{1:1--1:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.1},
  URN =		{urn:nbn:de:0030-drops-136460},
  doi =		{10.4230/LIPIcs.STACS.2021.1},
  annote =	{Keywords: geometric invariant theory, geodesic optimization, non-commutative optimization, null cone, operator scaling, moment polytope, orbit closure intersection, geometric programming}
}
Document
Invited Talk
First-Order Transductions of Graphs (Invited Talk)

Authors: Patrice Ossona de Mendez


Abstract
This paper is an extended abstract of my STACS 2021 talk "First-order transductions of graphs".

Cite as

Patrice Ossona de Mendez. First-Order Transductions of Graphs (Invited Talk). In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 2:1-2:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{ossonademendez:LIPIcs.STACS.2021.2,
  author =	{Ossona de Mendez, Patrice},
  title =	{{First-Order Transductions of Graphs}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{2:1--2:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.2},
  URN =		{urn:nbn:de:0030-drops-136473},
  doi =		{10.4230/LIPIcs.STACS.2021.2},
  annote =	{Keywords: Finite model theory, structural graph theory}
}
Document
Invited Talk
On the Fluted Fragment (Invited Talk)

Authors: Lidia Tendera


Abstract
The fluted fragment is a recently rediscovered decidable fragment of first-order logic whose history dates back to Quine and the 1960s. The fragment is defined by fixing simultaneously the order in which variables occur in atomic formulas and the order of quantification of variables; no further restrictions concerning, e.g., the number of variables, guardedness, or usage of negation apply. In the talk we review some motivation and the history of the fragment, discuss the differences between the fluted fragment and other decidable fragments of first-order logic, present its basic model-theoretic and algorithmic properties, and discuss recent work concerning the limits of decidability of its extensions.

Cite as

Lidia Tendera. On the Fluted Fragment (Invited Talk). In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{tendera:LIPIcs.STACS.2021.3,
  author =	{Tendera, Lidia},
  title =	{{On the Fluted Fragment}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.3},
  URN =		{urn:nbn:de:0030-drops-136481},
  doi =		{10.4230/LIPIcs.STACS.2021.3},
  annote =	{Keywords: decidability, fluted fragment, first-order logic, complexity, satisfiability, non-elementary}
}
Document
Improved (Provable) Algorithms for the Shortest Vector Problem via Bounded Distance Decoding

Authors: Divesh Aggarwal, Yanlin Chen, Rajendra Kumar, and Yixin Shen


Abstract
The most important computational problem on lattices is the Shortest Vector Problem (SVP). In this paper, we present new algorithms that improve the state-of-the-art for provable classical/quantum algorithms for SVP. We present the following results. 1) A new algorithm for SVP that provides a smooth tradeoff between time complexity and memory requirement. For any positive integer 4 ≤ q ≤ √n, our algorithm takes q^{13n+o(n)} time and requires poly(n)⋅ q^{16n/q²} memory. This tradeoff, which ranges from enumeration (q = √n) to sieving (q constant), is a consequence of a new time-memory tradeoff for Discrete Gaussian sampling above the smoothing parameter. 2) A quantum algorithm that runs in time 2^{0.9533n+o(n)} and requires 2^{0.5n+o(n)} classical memory and poly(n) qubits. This improves over the previously fastest classical (which is also the fastest quantum) algorithm due to [Divesh Aggarwal et al., 2015] that has time and space complexity 2^{n+o(n)}. 3) A classical algorithm for SVP that runs in 2^{1.741n+o(n)} time and 2^{0.5n+o(n)} space. This improves over an algorithm of [Yanlin Chen et al., 2018] that has the same space complexity. The time complexities of our classical and quantum algorithms are expressed using a quantity related to the kissing number of a lattice. A known upper bound on this quantity is 2^{0.402n}, but in practice for most lattices, it can be much smaller and even 2^o(n). In that case, our classical algorithm runs in time 2^{1.292n} and our quantum algorithm runs in time 2^{0.750n}.

Cite as

Divesh Aggarwal, Yanlin Chen, Rajendra Kumar, and Yixin Shen. Improved (Provable) Algorithms for the Shortest Vector Problem via Bounded Distance Decoding. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{aggarwal_et_al:LIPIcs.STACS.2021.4,
  author =	{Aggarwal, Divesh and Chen, Yanlin and Kumar, Rajendra and Shen, Yixin},
  title =	{{Improved (Provable) Algorithms for the Shortest Vector Problem via Bounded Distance Decoding}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.4},
  URN =		{urn:nbn:de:0030-drops-136494},
  doi =		{10.4230/LIPIcs.STACS.2021.4},
  annote =	{Keywords: Lattices, Shortest Vector Problem, Discrete Gaussian Sampling, Time-Space Tradeoff, Quantum computation, Bounded distance decoding}
}
Document
An FPT Algorithm for Elimination Distance to Bounded Degree Graphs

Authors: Akanksha Agrawal, Lawqueen Kanesh, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh


Abstract
In the literature on parameterized graph problems, there has been an increased effort in recent years aimed at exploring novel notions of graph edit-distance that are more powerful than the size of a modulator to a specific graph class. In this line of research, Bulian and Dawar [Algorithmica, 2016] introduced the notion of elimination distance and showed that deciding whether a given graph has elimination distance at most k to any minor-closed class of graphs is fixed-parameter tractable parameterized by k [Algorithmica, 2017]. They showed that Graph Isomorphism parameterized by the elimination distance to bounded degree graphs is fixed-parameter tractable and asked whether determining the elimination distance to the class of bounded degree graphs is fixed-parameter tractable. Recently, Lindermayr et al. [MFCS 2020] obtained a fixed-parameter algorithm for this problem in the special case where the input is restricted to K₅-minor free graphs. In this paper, we answer the question of Bulian and Dawar in the affirmative for general graphs. In fact, we give a more general result capturing elimination distance to any graph class characterized by a finite set of graphs as forbidden induced subgraphs.

Cite as

Akanksha Agrawal, Lawqueen Kanesh, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh. An FPT Algorithm for Elimination Distance to Bounded Degree Graphs. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 5:1-5:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{agrawal_et_al:LIPIcs.STACS.2021.5,
  author =	{Agrawal, Akanksha and Kanesh, Lawqueen and Panolan, Fahad and Ramanujan, M. S. and Saurabh, Saket},
  title =	{{An FPT Algorithm for Elimination Distance to Bounded Degree Graphs}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{5:1--5:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.5},
  URN =		{urn:nbn:de:0030-drops-136507},
  doi =		{10.4230/LIPIcs.STACS.2021.5},
  annote =	{Keywords: Elimination Distance, Fixed-parameter Tractability, Graph Modification}
}
Document
A Unified Framework of Quantum Walk Search

Authors: Simon Apers, András Gilyén, and Stacey Jeffery


Abstract
Many quantum algorithms critically rely on quantum walk search, or the use of quantum walks to speed up search problems on graphs. However, the main results on quantum walk search are scattered over different, incomparable frameworks, such as the hitting time framework, the MNRS framework, and the electric network framework. As a consequence, a number of pieces are currently missing. For example, recent work by Ambainis et al. (STOC'20) shows how quantum walks starting from the stationary distribution can always find elements quadratically faster. In contrast, the electric network framework allows quantum walks to start from an arbitrary initial state, but it only detects marked elements. We present a new quantum walk search framework that unifies and strengthens these frameworks, leading to a number of new results. For example, the new framework effectively finds marked elements in the electric network setting. The new framework also allows one to interpolate between the hitting time framework, minimizing the number of walk steps, and the MNRS framework, minimizing the number of times elements are checked for being marked. This allows for a more natural tradeoff between resources. In addition to quantum walks and phase estimation, our new algorithm makes use of quantum fast-forwarding, similar to the recent results by Ambainis et al. This perspective also enables us to derive more general complexity bounds on the quantum walk algorithms, e.g., based on Monte Carlo type bounds of the corresponding classical walk. As a final result, we show how in certain cases we can avoid the use of phase estimation and quantum fast-forwarding, answering an open question of Ambainis et al.

Cite as

Simon Apers, András Gilyén, and Stacey Jeffery. A Unified Framework of Quantum Walk Search. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{apers_et_al:LIPIcs.STACS.2021.6,
  author =	{Apers, Simon and Gily\'{e}n, Andr\'{a}s and Jeffery, Stacey},
  title =	{{A Unified Framework of Quantum Walk Search}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{6:1--6:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.6},
  URN =		{urn:nbn:de:0030-drops-136511},
  doi =		{10.4230/LIPIcs.STACS.2021.6},
  annote =	{Keywords: Quantum Algorithms, Quantum Walks, Graph Theory}
}
Document
Achieving Anonymity via Weak Lower Bound Constraints for k-Median and k-Means

Authors: Anna Arutyunova and Melanie Schmidt


Abstract
We study k-clustering problems with lower bounds, including k-median and k-means clustering with lower bounds. In addition to the point set P and the number of centers k, a k-clustering problem with (uniform) lower bounds gets a number B. The solution space is restricted to clusterings where every cluster has at least B points. We demonstrate how to approximate k-median with lower bounds via a reduction to facility location with lower bounds, for which O(1)-approximation algorithms are known. Then we propose a new constrained clustering problem with lower bounds where we allow points to be assigned multiple times (to different centers). This means that for every point, the clustering specifies a set of centers to which it is assigned. We call this clustering with weak lower bounds. We give an 8-approximation for k-median clustering with weak lower bounds and an O(1)-approximation for k-means with weak lower bounds. We conclude by showing that at a constant increase in the approximation factor, we can restrict the number of assignments of every point to 2 (or, if we allow fractional assignments, to 1+ε). This also leads to the first bicriteria approximation algorithm for k-means with (standard) lower bounds where bicriteria is interpreted in the sense that the lower bounds are violated by a constant factor. All algorithms in this paper run in time that is polynomial in n and k (and d for the Euclidean variants considered).

Cite as

Anna Arutyunova and Melanie Schmidt. Achieving Anonymity via Weak Lower Bound Constraints for k-Median and k-Means. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{arutyunova_et_al:LIPIcs.STACS.2021.7,
  author =	{Arutyunova, Anna and Schmidt, Melanie},
  title =	{{Achieving Anonymity via Weak Lower Bound Constraints for k-Median and k-Means}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{7:1--7:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.7},
  URN =		{urn:nbn:de:0030-drops-136529},
  doi =		{10.4230/LIPIcs.STACS.2021.7},
  annote =	{Keywords: Clustering with Constraints, lower Bounds, k-Means, Anonymity}
}
Document
Bidimensional Linear Recursive Sequences and Universality of Unambiguous Register Automata

Authors: Corentin Barloy and Lorenzo Clemente


Abstract
We study the universality and inclusion problems for register automata over equality data (A, =). We show that the universality problem L(B) = (Σ × A)^* and the inclusion problem L(A) ⊆ L(B) can be solved with 2-EXPTIME complexity when both automata are without guessing and B is unambiguous, improving on the currently best-known 2-EXPSPACE upper bound by Mottet and Quaas. When the number of registers of both automata is fixed, we obtain a lower EXPTIME complexity, also improving the EXPSPACE upper bound from Mottet and Quaas for a fixed number of registers. We reduce inclusion to universality, and then we reduce universality to the problem of counting the number of orbits of runs of the automaton. We show that the orbit-counting function satisfies a system of bidimensional linear recursive equations with polynomial coefficients (linrec), which generalises analogous recurrences for the Stirling numbers of the second kind, and then we show that universality reduces to the zeroness problem for linrec sequences. While such a counting approach is classical and has successfully been applied to unambiguous finite automata and grammars over finite alphabets, its application to register automata over infinite alphabets is novel. We provide two algorithms to decide the zeroness problem for bidimensional linear recursive sequences arising from orbit-counting functions. Both algorithms rely on techniques from linear non-commutative algebra. The first algorithm performs variable elimination and has elementary complexity. The second algorithm is a refined version of the first one and it relies on the computation of the Hermite normal form of matrices over a skew polynomial field. The second algorithm yields an EXPTIME decision procedure for the zeroness problem of linrec sequences, which in turn yields the claimed bounds for the universality and inclusion problems of register automata.
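
As an aside for readers unfamiliar with this kind of recurrence, the Stirling numbers of the second kind mentioned in the abstract already satisfy a bidimensional linear recurrence with polynomial coefficients, S(n,k) = k·S(n-1,k) + S(n-1,k-1); the following minimal Python sketch (an illustration of the "linrec" shape only, not code from the paper) computes it directly.

# Illustrative only: the classical recurrence for Stirling numbers of the
# second kind, a simple instance of a bidimensional linear recurrence with
# polynomial (here linear) coefficients.
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    """Number of partitions of an n-element set into k non-empty blocks."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

if __name__ == "__main__":
    print([stirling2(4, k) for k in range(5)])  # [0, 1, 7, 6, 1]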

Cite as

Corentin Barloy and Lorenzo Clemente. Bidimensional Linear Recursive Sequences and Universality of Unambiguous Register Automata. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 8:1-8:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{barloy_et_al:LIPIcs.STACS.2021.8,
  author =	{Barloy, Corentin and Clemente, Lorenzo},
  title =	{{Bidimensional Linear Recursive Sequences and Universality of Unambiguous Register Automata}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{8:1--8:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.8},
  URN =		{urn:nbn:de:0030-drops-136533},
  doi =		{10.4230/LIPIcs.STACS.2021.8},
  annote =	{Keywords: unambiguous register automata, universality and inclusion problems, multi-dimensional linear recurrence sequences}
}
Document
Tight Approximation Guarantees for Concave Coverage Problems

Authors: Siddharth Barman, Omar Fawzi, and Paul Fermé


Abstract
In the maximum coverage problem, we are given subsets T_1, …, T_m of a universe [n] along with an integer k and the objective is to find a subset S ⊆ [m] of size k that maximizes C(S) : = |⋃_{i ∈ S} T_i|. It is a classic result that the greedy algorithm for this problem achieves an optimal approximation ratio of 1-e^{-1}. In this work we consider a generalization of this problem wherein an element a can contribute by an amount that depends on the number of times it is covered. Given a concave, nondecreasing function φ, we define C^{φ}(S) := ∑_{a ∈ [n]}w_aφ(|S|_a), where |S|_a = |{i ∈ S : a ∈ T_i}|. The standard maximum coverage problem corresponds to taking φ(j) = min{j,1}. For any such φ, we provide an efficient algorithm that achieves an approximation ratio equal to the Poisson concavity ratio of φ, defined by α_{φ} : = min_{x ∈ ℕ^*} 𝔼[φ(Poi(x))] / φ(𝔼[Poi(x)]). Complementing this approximation guarantee, we establish a matching NP-hardness result when φ grows in a sublinear way. As special cases, we improve the result of [Siddharth Barman et al., 2020] about maximum multi-coverage, that was based on the unique games conjecture, and we recover the result of [Szymon Dudycz et al., 2020] on multi-winner approval-based voting for geometrically dominant rules. Our result goes beyond these special cases and we illustrate it with applications to distributed resource allocation problems, welfare maximization problems and approval-based voting for general rules.
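
As a quick numerical illustration of the Poisson concavity ratio defined above (a sketch of mine, not code from the paper), the snippet below estimates α_φ for the standard coverage function φ(j) = min{j,1}; since 𝔼[φ(Poi(x))] = 1-e^{-x} and φ(x) = 1 for every integer x ≥ 1, the minimum is attained at x = 1 and recovers the classical greedy ratio 1-e^{-1}.

# Estimate alpha_phi = min_{x in N*} E[phi(Poi(x))] / phi(E[Poi(x)]), using E[Poi(x)] = x.
import math

def expected_phi_poisson(phi, x, tail=200):
    # E[phi(Poi(x))] via a truncated sum; the Poisson pmf is updated iteratively.
    pmf, total = math.exp(-x), 0.0
    for k in range(tail):
        total += pmf * phi(k)
        pmf *= x / (k + 1)            # P[Poi(x) = k+1] from P[Poi(x) = k]
    return total

def poisson_concavity_ratio(phi, max_x=50):
    return min(expected_phi_poisson(phi, x) / phi(x) for x in range(1, max_x + 1))

if __name__ == "__main__":
    coverage = lambda j: min(j, 1)            # standard maximum coverage
    print(poisson_concavity_ratio(coverage))  # ~0.632, i.e. 1 - 1/e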

Cite as

Siddharth Barman, Omar Fawzi, and Paul Fermé. Tight Approximation Guarantees for Concave Coverage Problems. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 9:1-9:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{barman_et_al:LIPIcs.STACS.2021.9,
  author =	{Barman, Siddharth and Fawzi, Omar and Ferm\'{e}, Paul},
  title =	{{Tight Approximation Guarantees for Concave Coverage Problems}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{9:1--9:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.9},
  URN =		{urn:nbn:de:0030-drops-136543},
  doi =		{10.4230/LIPIcs.STACS.2021.9},
  annote =	{Keywords: Approximation Algorithms, Coverage Problems, Concave Function}
}
Document
Symmetric Promise Constraint Satisfaction Problems: Beyond the Boolean Case

Authors: Libor Barto, Diego Battistelli, and Kevin M. Berg


Abstract
The Promise Constraint Satisfaction Problem (PCSP) is a recently introduced vast generalization of the Constraint Satisfaction Problem (CSP). We investigate the computational complexity of a class of PCSPs beyond the most studied cases - approximation variants of satisfiability and graph coloring problems. We give an almost complete classification for the class of PCSPs of the form: given a 3-uniform hypergraph that has an admissible 2-coloring, find an admissible 3-coloring, where admissibility is given by a ternary symmetric relation. The only PCSP of this sort whose complexity is left open in this work is a natural hypergraph coloring problem, where admissibility is given by the relation "if two colors are equal, then the remaining one is higher."

Cite as

Libor Barto, Diego Battistelli, and Kevin M. Berg. Symmetric Promise Constraint Satisfaction Problems: Beyond the Boolean Case. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 10:1-10:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{barto_et_al:LIPIcs.STACS.2021.10,
  author =	{Barto, Libor and Battistelli, Diego and Berg, Kevin M.},
  title =	{{Symmetric Promise Constraint Satisfaction Problems: Beyond the Boolean Case}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{10:1--10:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.10},
  URN =		{urn:nbn:de:0030-drops-136557},
  doi =		{10.4230/LIPIcs.STACS.2021.10},
  annote =	{Keywords: constraint satisfaction problems, promise constraint satisfaction, Boolean PCSP, polymorphism}
}
Document
A Characterization of Wreath Products Where Knapsack Is Decidable

Authors: Pascal Bergsträßer, Moses Ganardi, and Georg Zetzsche


Abstract
The knapsack problem for groups was introduced by Miasnikov, Nikolaev, and Ushakov. It is defined for each finitely generated group G and takes as input group elements g_1,…,g_n,g ∈ G and asks whether there are x_1,…,x_n ≥ 0 with g_1^{x_1}⋯ g_n^{x_n} = g. We study the knapsack problem for wreath products G≀H of groups G and H. Our main result is a characterization of those wreath products G≀H for which the knapsack problem is decidable. The characterization is in terms of decidability properties of the individual factors G and H. To this end, we introduce two decision problems, the intersection knapsack problem and its restriction, the positive intersection knapsack problem. Moreover, we apply our main result to H₃(ℤ), the discrete Heisenberg group, and to Baumslag-Solitar groups BS(1,q) for q ≥ 1. First, we show that the knapsack problem is undecidable for G≀H₃(ℤ) for any G ≠ 1. This implies that for G ≠ 1 and for infinite and virtually nilpotent groups H, the knapsack problem for G≀H is decidable if and only if H is virtually abelian and solvability of systems of exponent equations is decidable for G. Second, we show that the knapsack problem is decidable for G≀BS(1,q) if and only if solvability of systems of exponent equations is decidable for G.
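
To make the problem statement concrete, here is a toy illustration (my example, not from the paper) in the simplest possible setting G = ℤ with positive generators, where g_1^{x_1}⋯g_n^{x_n} = g becomes x_1·a_1 + … + x_n·a_n = b and a standard dynamic program decides solvability; the paper studies the far harder setting of wreath products G≀H.

# Toy illustration of knapsack over the group Z with positive generators:
# decide whether target = x_1*a_1 + ... + x_n*a_n has a solution with x_i >= 0.
def knapsack_in_Z(generators, target):
    if target < 0 or any(a <= 0 for a in generators):
        raise ValueError("this illustration assumes positive generators and target >= 0")
    reachable = [False] * (target + 1)
    reachable[0] = True                       # the empty product is the identity 0
    for s in range(1, target + 1):
        reachable[s] = any(a <= s and reachable[s - a] for a in generators)
    return reachable[target]

if __name__ == "__main__":
    print(knapsack_in_Z([6, 10, 15], 49))     # True: 49 = 4*6 + 1*10 + 1*15
    print(knapsack_in_Z([6, 10, 15], 7))      # False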

Cite as

Pascal Bergsträßer, Moses Ganardi, and Georg Zetzsche. A Characterization of Wreath Products Where Knapsack Is Decidable. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 11:1-11:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bergstraer_et_al:LIPIcs.STACS.2021.11,
  author =	{Bergstr\"{a}{\ss}er, Pascal and Ganardi, Moses and Zetzsche, Georg},
  title =	{{A Characterization of Wreath Products Where Knapsack Is Decidable}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{11:1--11:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.11},
  URN =		{urn:nbn:de:0030-drops-136566},
  doi =		{10.4230/LIPIcs.STACS.2021.11},
  annote =	{Keywords: knapsack, wreath products, decision problems in group theory, decidability, discrete Heisenberg group, Baumslag-Solitar groups}
}
Document
Synchronizing Strongly Connected Partial DFAs

Authors: Mikhail V. Berlinkov, Robert Ferens, Andrew Ryzhikov, and Marek Szykuła


Abstract
We study synchronizing partial DFAs, which extend the classical concept of synchronizing complete DFAs and are a special case of synchronizing unambiguous NFAs. A partial DFA is called synchronizing if it has a word (called a reset word) whose action brings a non-empty subset of states to a unique state and is undefined for all other states. While in the general case the problem of checking whether a partial DFA is synchronizing is PSPACE-complete, we show that in the strongly connected case this problem can be efficiently reduced to the same problem for a complete DFA. Using combinatorial, algebraic, and formal language methods, we develop techniques that relate main synchronization problems for strongly connected partial DFAs with the same problems for complete DFAs. In particular, this includes the Černý and the rank conjectures, the problem of finding a reset word, and upper bounds on the length of the shortest reset words of literal automata of finite prefix codes. We conclude that solving fundamental synchronization problems is equally hard in both models, as an essential improvement of the results for one model implies an improvement for the other.
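
As an illustration of the definition only (not an algorithm from the paper), the following sketch checks whether a word is a reset word of a partial DFA given as a dictionary of defined transitions: the word must be defined on a non-empty set of states and send every such state to one and the same state.

# Check the reset-word condition for a partial DFA.
# delta maps (state, letter) -> state; missing keys mean the transition is undefined.
def is_reset_word(states, delta, word):
    endpoints = set()
    for q in states:
        p = q
        for a in word:
            p = delta.get((p, a))
            if p is None:                    # word undefined on q, so q is ignored
                break
        else:
            endpoints.add(p)                 # word defined on q, record where it ends
    # Defined on a non-empty subset of states, all mapped to a unique state.
    return len(endpoints) == 1

if __name__ == "__main__":
    states = {0, 1, 2}
    delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0, (2, 'b'): 0}
    print(is_reset_word(states, delta, "ab"))  # True: defined on {0, 1}, both end in 0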

Cite as

Mikhail V. Berlinkov, Robert Ferens, Andrew Ryzhikov, and Marek Szykuła. Synchronizing Strongly Connected Partial DFAs. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 12:1-12:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{berlinkov_et_al:LIPIcs.STACS.2021.12,
  author =	{Berlinkov, Mikhail V. and Ferens, Robert and Ryzhikov, Andrew and Szyku{\l}a, Marek},
  title =	{{Synchronizing Strongly Connected Partial DFAs}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{12:1--12:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.12},
  URN =		{urn:nbn:de:0030-drops-136579},
  doi =		{10.4230/LIPIcs.STACS.2021.12},
  annote =	{Keywords: \v{C}ern\'{y} conjecture, literal automaton, partial automaton, prefix code, rank conjecture, reset threshold, reset word, synchronizing automaton, synchronizing word}
}
Document
On Euclidean Steiner (1+ε)-Spanners

Authors: Sujoy Bhore and Csaba D. Tóth


Abstract
Lightness and sparsity are two natural parameters for Euclidean (1+ε)-spanners. Classical results show that, when the dimension d ∈ ℕ and ε > 0 are constant, every set S of n points in d-space admits a (1+ε)-spanner with O(n) edges and weight proportional to that of the Euclidean MST of S. Tight bounds on the dependence on ε > 0 for constant d ∈ ℕ have been established only recently. Le and Solomon (FOCS 2019) showed that Steiner points can substantially improve the lightness and sparsity of a (1+ε)-spanner. They gave upper bounds of Õ(ε^{-(d+1)/2}) for the minimum lightness in dimensions d ≥ 3, and Õ(ε^{-(d-1)/2}) for the minimum sparsity in d-space for all d ≥ 1. They obtained lower bounds only in the plane (d = 2). Le and Solomon (ESA 2020) also constructed Steiner (1+ε)-spanners of lightness O(ε^{-1}logΔ) in the plane, where Δ ∈ Ω(log n) is the spread of S, defined as the ratio between the maximum and minimum distance between a pair of points. In this work, we improve several bounds on the lightness and sparsity of Euclidean Steiner (1+ε)-spanners. Using a new geometric analysis, we establish lower bounds of Ω(ε^{-d/2}) for the lightness and Ω(ε^{-(d-1)/2}) for the sparsity of such spanners in Euclidean d-space for all d ≥ 2. We use the geometric insight from our lower bound analysis to construct Steiner (1+ε)-spanners of lightness O(ε^{-1}log n) for n points in the Euclidean plane.

Cite as

Sujoy Bhore and Csaba D. Tóth. On Euclidean Steiner (1+ε)-Spanners. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 13:1-13:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bhore_et_al:LIPIcs.STACS.2021.13,
  author =	{Bhore, Sujoy and T\'{o}th, Csaba D.},
  title =	{{On Euclidean Steiner (1+\epsilon)-Spanners}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{13:1--13:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.13},
  URN =		{urn:nbn:de:0030-drops-136586},
  doi =		{10.4230/LIPIcs.STACS.2021.13},
  annote =	{Keywords: Geometric spanner, (1+\epsilon)-spanner, lightness, sparsity, minimum weight}
}
Document
A Nearly Optimal Deterministic Online Algorithm for Non-Metric Facility Location

Authors: Marcin Bienkowski, Björn Feldkord, and Paweł Schmidt


Abstract
In the online non-metric variant of the facility location problem, there is a given graph consisting of a set F of facilities (each with a certain opening cost), a set C of potential clients, and weighted connections between them. The online part of the input is a sequence of clients from C, and in response to any requested client, an online algorithm may open an additional subset of facilities and must connect the given client to an open facility. We give an online, polynomial-time deterministic algorithm for this problem, with a competitive ratio of O(log |F| ⋅ (log |C| + log log |F|)). The result is optimal up to loglog factors. Our algorithm improves over the O((log |C| + log |F|) ⋅ (log |C| + log log |F|))-competitive construction that first reduces the facility location instance to a set cover one and then solves that instance using the deterministic algorithm by Alon et al. [TALG 2006]. This is an asymptotic improvement in a typical scenario where |F| ≪ |C|. We achieve this by a more direct approach: we design an algorithm for a fractional relaxation of the non-metric facility location problem with clustered facilities. To handle the constraints of such a non-covering LP, we combine the dual fitting and multiplicative weight updates approach. By maintaining certain additional monotonicity properties of the created fractional solution, we can handle the dependencies between facilities and connections in a rounding routine. Our result, combined with the algorithm by Naor et al. [FOCS 2011], yields the first deterministic algorithm for the online node-weighted Steiner tree problem. The resulting competitive ratio is O(log k ⋅ log² 𝓁) on graphs of 𝓁 nodes and k terminals.

Cite as

Marcin Bienkowski, Björn Feldkord, and Paweł Schmidt. A Nearly Optimal Deterministic Online Algorithm for Non-Metric Facility Location. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 14:1-14:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bienkowski_et_al:LIPIcs.STACS.2021.14,
  author =	{Bienkowski, Marcin and Feldkord, Bj\"{o}rn and Schmidt, Pawe{\l}},
  title =	{{A Nearly Optimal Deterministic Online Algorithm for Non-Metric Facility Location}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{14:1--14:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.14},
  URN =		{urn:nbn:de:0030-drops-136598},
  doi =		{10.4230/LIPIcs.STACS.2021.14},
  annote =	{Keywords: Online algorithms, deterministic rounding, linear programming, facility location, set cover}
}
Document
An Asymptotically Fast Polynomial Space Algorithm for Hamiltonicity Detection in Sparse Directed Graphs

Authors: Andreas Björklund


Abstract
We present a polynomial space Monte Carlo algorithm that, given a directed graph on n vertices and average outdegree δ, detects if the graph has a Hamiltonian cycle in 2^{n-Ω(n/δ)} time. This asymptotic scaling of the savings in the running time matches the fastest known exponential space algorithm by Björklund and Williams ICALP 2019. By comparison, the previously best polynomial space algorithm by Kowalik and Majewski IPEC 2020 guarantees a 2^{n-Ω(n/(2^δ))} time bound. Our algorithm combines for the first time the idea of obtaining a fingerprint of the presence of a Hamiltonian cycle through an inclusion-exclusion summation over the Laplacian of the graph from Björklund, Kaski, and Koutis ICALP 2017, with the idea of sieving for the non-zero terms in an inclusion-exclusion summation by listing solutions to systems of linear equations over ℤ₂ from Björklund and Husfeldt FOCS 2013.

Cite as

Andreas Björklund. An Asymptotically Fast Polynomial Space Algorithm for Hamiltonicity Detection in Sparse Directed Graphs. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 15:1-15:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bjorklund:LIPIcs.STACS.2021.15,
  author =	{Bj\"{o}rklund, Andreas},
  title =	{{An Asymptotically Fast Polynomial Space Algorithm for Hamiltonicity Detection in Sparse Directed Graphs}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{15:1--15:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.15},
  URN =		{urn:nbn:de:0030-drops-136601},
  doi =		{10.4230/LIPIcs.STACS.2021.15},
  annote =	{Keywords: Hamiltonian cycle, directed graph, worst case analysis algorithm}
}
Document
Online Simple Knapsack with Reservation Costs

Authors: Hans-Joachim Böckenhauer, Elisabet Burjons, Juraj Hromkovič, Henri Lotze, and Peter Rossmanith


Abstract
In the Online Simple Knapsack Problem we are given a knapsack of unit size 1. Items of size at most 1 are presented in an iterative fashion and an algorithm has to decide whether to permanently reject or include each item into the knapsack without any knowledge about the rest of the instance. The goal is then to pack the knapsack as full as possible. In this work, we introduce a third option additional to those of packing and rejecting an item, namely that of reserving an item for the cost of a fixed fraction α of its size. An algorithm may pay this fraction in order to postpone its decision on whether to include or reject the item until after the last item of the instance was presented. While the classical Online Simple Knapsack Problem does not admit any constantly bounded competitive ratio in the deterministic setting, we find that adding the possibility of reservation makes the problem constantly competitive, with varying competitive ratios depending on the value of α. We give upper and lower bounds for the whole range of reservation costs, with tight bounds for costs up to 1/6 (an area that is strictly 2-competitive) and for costs between √2-1 and 1 (an area that is strictly (2+α)-competitive up to ϕ-1 and strictly 1/(1-α)-competitive above ϕ-1), where ϕ is the golden ratio. With our analysis, we find a counterintuitive characteristic of the problem: Intuitively, one would expect that the possibility of rejecting items becomes more and more helpful for an online algorithm with growing reservation costs. However, for higher reservation costs above √2-1, an algorithm that is unable to reject any items tightly matches the lower bound and is thus the best possible. On the other hand, for any positive reservation cost smaller than 1/6, any algorithm that is unable to reject any items performs considerably worse than one that is able to reject.

Cite as

Hans-Joachim Böckenhauer, Elisabet Burjons, Juraj Hromkovič, Henri Lotze, and Peter Rossmanith. Online Simple Knapsack with Reservation Costs. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 16:1-16:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bockenhauer_et_al:LIPIcs.STACS.2021.16,
  author =	{B\"{o}ckenhauer, Hans-Joachim and Burjons, Elisabet and Hromkovi\v{c}, Juraj and Lotze, Henri and Rossmanith, Peter},
  title =	{{Online Simple Knapsack with Reservation Costs}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{16:1--16:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.16},
  URN =		{urn:nbn:de:0030-drops-136613},
  doi =		{10.4230/LIPIcs.STACS.2021.16},
  annote =	{Keywords: Online problem, Simple knapsack, Reservation costs}
}
Document
Inapproximability of Diameter in Super-Linear Time: Beyond the 5/3 Ratio

Authors: Édouard Bonnet


Abstract
We show, assuming the Strong Exponential Time Hypothesis, that for every ε > 0, approximating directed Diameter on m-arc graphs within ratio 7/4 - ε requires m^{4/3 - o(1)} time. Our construction uses non-negative edge weights but even holds for sparse digraphs, i.e., digraphs for which the number of vertices n and the number of arcs m satisfy m = Õ(n). This is the first result that conditionally rules out a near-linear time 5/3-approximation for a variant of Diameter.

Cite as

Édouard Bonnet. Inapproximability of Diameter in Super-Linear Time: Beyond the 5/3 Ratio. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 17:1-17:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bonnet:LIPIcs.STACS.2021.17,
  author =	{Bonnet, \'{E}douard},
  title =	{{Inapproximability of Diameter in Super-Linear Time: Beyond the 5/3 Ratio}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{17:1--17:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.17},
  URN =		{urn:nbn:de:0030-drops-136623},
  doi =		{10.4230/LIPIcs.STACS.2021.17},
  annote =	{Keywords: Diameter, inapproximability, SETH lower bounds, k-Orthogonal Vectors}
}
Document
The Approximation Ratio of the 2-Opt Heuristic for the Euclidean Traveling Salesman Problem

Authors: Ulrich A. Brodowsky and Stefan Hougardy


Abstract
The 2-Opt heuristic is a simple improvement heuristic for the Traveling Salesman Problem. It starts with an arbitrary tour and then repeatedly replaces two edges of the tour by two other edges, as long as this yields a shorter tour. We will prove that for Euclidean Traveling Salesman Problems with n cities the approximation ratio of the 2-Opt heuristic is Θ(log n / log log n). This improves the upper bound of O(log n) given by Chandra, Karloff, and Tovey [Barun Chandra et al., 1999] in 1999.
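
For readers who have not seen the heuristic in code, here is a minimal sketch of the 2-Opt improvement step described above (an illustration only, not the paper's analysis): whenever replacing two tour edges by two others, i.e. reversing a tour segment, strictly shortens the Euclidean tour, perform the swap, and repeat until no improving move remains.

# Minimal 2-Opt sketch for Euclidean TSP instances given as 2D points.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def two_opt(points, tour):
    n, improved = len(tour), True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue                  # these two edges share the vertex tour[0]
                a, b = points[tour[i]], points[tour[i + 1]]
                c, d = points[tour[j]], points[tour[(j + 1) % n]]
                # Replace edges (a,b) and (c,d) by (a,c) and (b,d) if strictly shorter.
                if dist(a, c) + dist(b, d) < dist(a, b) + dist(c, d) - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

if __name__ == "__main__":
    pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
    print(two_opt(pts, [0, 1, 2, 3]))         # crossing-free tour, e.g. [0, 1, 3, 2]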

Cite as

Ulrich A. Brodowsky and Stefan Hougardy. The Approximation Ratio of the 2-Opt Heuristic for the Euclidean Traveling Salesman Problem. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 18:1-18:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{brodowsky_et_al:LIPIcs.STACS.2021.18,
  author =	{Brodowsky, Ulrich A. and Hougardy, Stefan},
  title =	{{The Approximation Ratio of the 2-Opt Heuristic for the Euclidean Traveling Salesman Problem}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{18:1--18:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.18},
  URN =		{urn:nbn:de:0030-drops-136634},
  doi =		{10.4230/LIPIcs.STACS.2021.18},
  annote =	{Keywords: traveling salesman problem, metric TSP, Euclidean TSP, 2-Opt, approximation algorithm}
}
Document
A Framework of Quantum Strong Exponential-Time Hypotheses

Authors: Harry Buhrman, Subhasree Patro, and Florian Speelman


Abstract
The strong exponential-time hypothesis (SETH) is a commonly used conjecture in the field of complexity theory. It essentially states that determining whether a CNF formula is satisfiable cannot be done faster than exhaustive search over all possible assignments. This hypothesis and its variants gave rise to a fruitful field of research, fine-grained complexity, obtaining (mostly tight) lower bounds for many problems in P whose unconditional lower bounds are very likely beyond current techniques. In this work, we introduce an extensive framework of Quantum Strong Exponential-Time Hypotheses, as quantum analogues to what SETH is for classical computation. Using the QSETH framework, we are able to translate quantum query lower bounds on black-box problems to conditional quantum time lower bounds for many problems in P. As an example, we provide a conditional quantum time lower bound of Ω(n^1.5) for the Longest Common Subsequence and Edit Distance problems. We also show that the n² SETH-based lower bound for a recent scheme for Proofs of Useful Work carries over to the quantum setting using our framework, maintaining a quadratic gap between verifier and prover. Lastly, we show that the assumptions in our framework cannot be simplified further with relativizing proof techniques, as they are false in relativized worlds.

Cite as

Harry Buhrman, Subhasree Patro, and Florian Speelman. A Framework of Quantum Strong Exponential-Time Hypotheses. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 19:1-19:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{buhrman_et_al:LIPIcs.STACS.2021.19,
  author =	{Buhrman, Harry and Patro, Subhasree and Speelman, Florian},
  title =	{{A Framework of Quantum Strong Exponential-Time Hypotheses}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{19:1--19:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.19},
  URN =		{urn:nbn:de:0030-drops-136642},
  doi =		{10.4230/LIPIcs.STACS.2021.19},
  annote =	{Keywords: complexity theory, fine-grained complexity, longest common subsequence, edit distance, quantum query complexity, strong exponential-time hypothesis}
}
Document
The Complexity of the Distributed Constraint Satisfaction Problem

Authors: Silvia Butti and Victor Dalmau


Abstract
We study the complexity of the Distributed Constraint Satisfaction Problem (DCSP) on a synchronous, anonymous network from a theoretical standpoint. In this setting, variables and constraints are controlled by agents which communicate with each other by sending messages through fixed communication channels. Our results endorse the well-known fact from classical CSPs that the complexity of fixed-template computational problems depends on the template’s invariance under certain operations. Specifically, we show that DCSP(Γ) is polynomial-time tractable if and only if Γ is invariant under symmetric polymorphisms of all arities. Otherwise, there are no algorithms that solve DCSP(Γ) in finite time. We also show that the same condition holds for the search variant of DCSP. Collaterally, our results unveil a feature of the processes' neighbourhood in a distributed network, its iterated degree, which plays a major role in the analysis. We explore this notion establishing a tight connection with the basic linear programming relaxation of a CSP.

Cite as

Silvia Butti and Victor Dalmau. The Complexity of the Distributed Constraint Satisfaction Problem. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 20:1-20:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{butti_et_al:LIPIcs.STACS.2021.20,
  author =	{Butti, Silvia and Dalmau, Victor},
  title =	{{The Complexity of the Distributed Constraint Satisfaction Problem}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{20:1--20:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.20},
  URN =		{urn:nbn:de:0030-drops-136654},
  doi =		{10.4230/LIPIcs.STACS.2021.20},
  annote =	{Keywords: Constraint Satisfaction Problems, Distributed Algorithms, Polymorphisms}
}
Document
Distance Computations in the Hybrid Network Model via Oracle Simulations

Authors: Keren Censor-Hillel, Dean Leitersdorf, and Volodymyr Polosukhin


Abstract
The Hybrid network model was introduced in [Augustine et al., SODA '20] for laying down a theoretical foundation for networks which combine two possible modes of communication: One mode allows high-bandwidth communication with neighboring nodes, and the other allows low-bandwidth communication over few long-range connections at a time. This fundamentally abstracts networks such as hybrid data centers and class-based software-defined networks. Our technical contribution is a density-aware approach that allows us to simulate a set of oracles for an overlay skeleton graph over a Hybrid network. As applications of our oracle simulations, with additional machinery that we provide, we derive fast algorithms for fundamental distance-related tasks. One of our core contributions is an algorithm in the Hybrid model for computing exact weighted shortest paths from Õ(n^{1/3}) sources which completes in Õ(n^{1/3}) rounds w.h.p. This improves, in both the runtime and the number of sources, upon the algorithm of [Kuhn and Schneider, PODC ’20], which computes shortest paths from a single source in Õ(n^{2/5}) rounds w.h.p. We additionally show a 2-approximation for weighted diameter and a (1+ε)-approximation for unweighted diameter, both in Õ(n^{1/3}) rounds w.h.p., which is comparable to the Ω̃(n^{1/3}) lower bound of [Kuhn and Schneider, PODC ’20] for a (2-ε)-approximation for weighted diameter and an exact unweighted diameter. We also provide fast distance approximations from multiple sources and fast approximations for eccentricities.

Cite as

Keren Censor-Hillel, Dean Leitersdorf, and Volodymyr Polosukhin. Distance Computations in the Hybrid Network Model via Oracle Simulations. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 21:1-21:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{censorhillel_et_al:LIPIcs.STACS.2021.21,
  author =	{Censor-Hillel, Keren and Leitersdorf, Dean and Polosukhin, Volodymyr},
  title =	{{Distance Computations in the Hybrid Network Model via Oracle Simulations}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{21:1--21:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.21},
  URN =		{urn:nbn:de:0030-drops-136663},
  doi =		{10.4230/LIPIcs.STACS.2021.21},
  annote =	{Keywords: Distributed graph algorithms, Hybrid network model, Distance computations}
}
Document
Simple Multi-Pass Streaming Algorithms for Skyline Points and Extreme Points

Authors: Timothy M. Chan and Saladi Rahul


Abstract
In this paper, we present simple randomized multi-pass streaming algorithms for fundamental computational geometry problems of finding the skyline (maximal) points and the extreme points of the convex hull. For the skyline problem, one of our algorithms uses O(h) space and performs O(log n) passes, where h is the number of skyline points. This improves the space bound of the currently best known result by Das Sarma, Lall, Nanongkai, and Xu [VLDB'09] by a logarithmic factor. For the extreme points problem, we present the first non-trivial result for any constant dimension greater than two: an O(h log^{O(1)} n) space and O(log^d n) pass algorithm, where h is the number of extreme points. Finally, we argue why randomization seems unavoidable for these problems, by proving lower bounds on the performance of deterministic algorithms for a related problem of finding maximal elements in a poset.
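
For reference, the skyline (maximal) points are those not dominated by another point in both coordinates. The sketch below is only a quadratic-time, in-memory baseline for sanity-checking tiny inputs, not the streaming algorithm of the paper.

def skyline(points):
    # A point is a skyline (maximal) point if no other point is at least as
    # large in both coordinates.
    def dominated(p, q):
        return q != p and q[0] >= p[0] and q[1] >= p[1]
    return [p for p in points if not any(dominated(p, q) for q in points)]

print(skyline([(1, 5), (3, 4), (2, 2), (4, 1), (3, 3)]))   # [(1, 5), (3, 4), (4, 1)]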

Cite as

Timothy M. Chan and Saladi Rahul. Simple Multi-Pass Streaming Algorithms for Skyline Points and Extreme Points. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 22:1-22:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{chan_et_al:LIPIcs.STACS.2021.22,
  author =	{Chan, Timothy M. and Rahul, Saladi},
  title =	{{Simple Multi-Pass Streaming Algorithms for Skyline Points and Extreme Points}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{22:1--22:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.22},
  URN =		{urn:nbn:de:0030-drops-136674},
  doi =		{10.4230/LIPIcs.STACS.2021.22},
  annote =	{Keywords: multi-pass streaming algorithms, skyline, convex hull, extreme points, randomized algorithms}
}
Document
One-Tape Turing Machine and Branching Program Lower Bounds for MCSP

Authors: Mahdi Cheraghchi, Shuichi Hirahara, Dimitrios Myrisiotis, and Yuichi Yoshida


Abstract
For a size parameter s: ℕ → ℕ, the Minimum Circuit Size Problem (denoted by MCSP[s(n)]) is the problem of deciding whether the minimum circuit size of a given function f : {0,1}ⁿ → {0,1} (represented by a string of length N : = 2ⁿ) is at most a threshold s(n). A recent line of work exhibited "hardness magnification" phenomena for MCSP: A very weak lower bound for MCSP implies a breakthrough result in complexity theory. For example, McKay, Murray, and Williams (STOC 2019) implicitly showed that, for some constant μ₁ > 0, if MCSP[2^{μ₁⋅ n}] cannot be computed by a one-tape Turing machine (with an additional one-way read-only input tape) running in time N^{1.01}, then P≠NP. In this paper, we present the following new lower bounds against one-tape Turing machines and branching programs: 1) A randomized two-sided error one-tape Turing machine (with an additional one-way read-only input tape) cannot compute MCSP[2^{μ₂⋅n}] in time N^{1.99}, for some constant μ₂ > μ₁. 2) A non-deterministic (or parity) branching program of size o(N^{1.5}/log N) cannot compute MKTP, which is a time-bounded Kolmogorov complexity analogue of MCSP. This is shown by directly applying the Nečiporuk method to MKTP, which previously appeared to be difficult. 3) The size of any non-deterministic, co-non-deterministic, or parity branching program computing MCSP is at least N^{1.5-o(1)}. These results are the first non-trivial lower bounds for MCSP and MKTP against one-tape Turing machines and non-deterministic branching programs, and essentially match the best-known lower bounds for any explicit functions against these computational models. The first result is based on recent constructions of pseudorandom generators for read-once oblivious branching programs (ROBPs) and combinatorial rectangles (Forbes and Kelley, FOCS 2018; Viola 2019). En route, we obtain several related results: 1) There exists a (local) hitting set generator with seed length Õ(√N) secure against read-once polynomial-size non-deterministic branching programs on N-bit inputs. 2) Any read-once co-non-deterministic branching program computing MCSP must have size at least 2^Ω̃(N).

Cite as

Mahdi Cheraghchi, Shuichi Hirahara, Dimitrios Myrisiotis, and Yuichi Yoshida. One-Tape Turing Machine and Branching Program Lower Bounds for MCSP. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 23:1-23:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{cheraghchi_et_al:LIPIcs.STACS.2021.23,
  author =	{Cheraghchi, Mahdi and Hirahara, Shuichi and Myrisiotis, Dimitrios and Yoshida, Yuichi},
  title =	{{One-Tape Turing Machine and Branching Program Lower Bounds for MCSP}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{23:1--23:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.23},
  URN =		{urn:nbn:de:0030-drops-136681},
  doi =		{10.4230/LIPIcs.STACS.2021.23},
  annote =	{Keywords: Minimum Circuit Size Problem, Kolmogorov Complexity, One-Tape Turing Machines, Branching Programs, Lower Bounds, Pseudorandom Generators, Hitting Set Generators}
}
Document
Inference and Mutual Information on Random Factor Graphs

Authors: Amin Coja-Oghlan, Max Hahn-Klimroth, Philipp Loick, Noela Müller, Konstantinos Panagiotou, and Matija Pasch


Abstract
Random factor graphs provide a powerful framework for the study of inference problems such as decoding problems or the stochastic block model. Information-theoretically the key quantity of interest is the mutual information between the observed factor graph and the underlying ground truth around which the factor graph was created; in the stochastic block model, this would be the planted partition. The mutual information gauges whether and how well the ground truth can be inferred from the observable data. For a very general model of random factor graphs we verify a formula for the mutual information predicted by physics techniques. As an application we prove a conjecture about low-density generator matrix codes from [Montanari: IEEE Transactions on Information Theory 2005]. Further applications include phase transitions of the stochastic block model and the mixed k-spin model from physics.
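
As a reminder of the key quantity only (this is generic discrete information theory, not the formula verified in the paper), the mutual information of two random variables can be computed directly from their joint distribution:

import math

def mutual_information(joint):
    # joint[x][y] = P(X = x, Y = y); returns I(X; Y) in bits.
    px = {x: sum(row.values()) for x, row in joint.items()}
    py = {}
    for row in joint.values():
        for y, p in row.items():
            py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for x, row in joint.items() for y, p in row.items() if p > 0)

# Hypothetical toy channel: Y equals X with probability 0.9.
joint = {0: {0: 0.45, 1: 0.05}, 1: {0: 0.05, 1: 0.45}}
print(round(mutual_information(joint), 3))   # about 0.531 bits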

Cite as

Amin Coja-Oghlan, Max Hahn-Klimroth, Philipp Loick, Noela Müller, Konstantinos Panagiotou, and Matija Pasch. Inference and Mutual Information on Random Factor Graphs. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 24:1-24:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{cojaoghlan_et_al:LIPIcs.STACS.2021.24,
  author =	{Coja-Oghlan, Amin and Hahn-Klimroth, Max and Loick, Philipp and M\"{u}ller, Noela and Panagiotou, Konstantinos and Pasch, Matija},
  title =	{{Inference and Mutual Information on Random Factor Graphs}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{24:1--24:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.24},
  URN =		{urn:nbn:de:0030-drops-136692},
  doi =		{10.4230/LIPIcs.STACS.2021.24},
  annote =	{Keywords: Information theory, random factor graphs, inference problems, phase transitions}
}
Document
The Edit Distance to k-Subsequence Universality

Authors: Joel D. Day, Pamela Fleischmann, Maria Kosche, Tore Koß, Florin Manea, and Stefan Siemer


Abstract
A word u is a subsequence of another word w if u can be obtained from w by deleting some of its letters. In the early 1970s, Imre Simon defined the relation ∼_k (now called the Simon congruence) as follows: two words having exactly the same set of subsequences of length at most k are ∼_k-congruent. This relation was central in defining and analysing piecewise testable languages, but has found many applications in areas such as algorithmic learning theory, database theory, or computational linguistics. Recently, it was shown that testing whether two words are ∼_k-congruent can be done in optimal linear time. Thus, it is a natural next step to ask, for two words w and u which are not ∼_k-equivalent, what is the minimal number of edit operations that we need to perform on w in order to obtain a word which is ∼_k-equivalent to u. In this paper, we consider this problem in a particularly interesting setting: when u is a k-subsequence universal word. A word u with alph(u) = Σ is called k-subsequence universal if the set of subsequences of length k of u contains all possible words of length k over Σ. As such, our results are a series of efficient algorithms computing the edit distance from w to the language of k-subsequence universal words.
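
A brute-force check of k-subsequence universality straight from the definition (exponential in k, for tiny examples only; the paper's algorithms are far more efficient):

from itertools import product

def is_subsequence(u, w):
    it = iter(w)
    return all(c in it for c in u)

def is_k_universal(w, k):
    # Directly from the definition: every word of length k over alph(w)
    # must occur in w as a subsequence.
    alphabet = sorted(set(w))
    return all(is_subsequence(u, w) for u in product(alphabet, repeat=k))

print(is_k_universal("abcabc", 2))   # True: every length-2 word over {a, b, c} occurs
print(is_k_universal("abcabc", 3))   # False: "cba", for instance, does not occur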

Cite as

Joel D. Day, Pamela Fleischmann, Maria Kosche, Tore Koß, Florin Manea, and Stefan Siemer. The Edit Distance to k-Subsequence Universality. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 25:1-25:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{day_et_al:LIPIcs.STACS.2021.25,
  author =	{Day, Joel D. and Fleischmann, Pamela and Kosche, Maria and Ko{\ss}, Tore and Manea, Florin and Siemer, Stefan},
  title =	{{The Edit Distance to k-Subsequence Universality}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{25:1--25:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.25},
  URN =		{urn:nbn:de:0030-drops-136705},
  doi =		{10.4230/LIPIcs.STACS.2021.25},
  annote =	{Keywords: Subsequence, Scattered factor, Subword, Universality, k-subsequence universality, Edit distance, Efficient algorithms}
}
Document
Barrington Plays Cards: The Complexity of Card-Based Protocols

Authors: Pavel Dvořák and Michal Koucký


Abstract
In this paper we study the computational complexity of functions that have efficient card-based protocols. A study of card-based protocols was initiated by den Boer [den Boer, 1990] as a means for secure two-party computation. Our contribution is two-fold: We classify a large class of protocols with respect to the computational complexity of functions they compute, and we propose other encodings of inputs which require fewer cards than the usual 2-card representation.

Cite as

Pavel Dvořák and Michal Koucký. Barrington Plays Cards: The Complexity of Card-Based Protocols. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 26:1-26:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{dvorak_et_al:LIPIcs.STACS.2021.26,
  author =	{Dvo\v{r}\'{a}k, Pavel and Kouck\'{y}, Michal},
  title =	{{Barrington Plays Cards: The Complexity of Card-Based Protocols}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{26:1--26:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.26},
  URN =		{urn:nbn:de:0030-drops-136715},
  doi =		{10.4230/LIPIcs.STACS.2021.26},
  annote =	{Keywords: Efficient card-based protocol, Branching program, Turing machine}
}
Document
Round-Competitive Algorithms for Uncertainty Problems with Parallel Queries

Authors: Thomas Erlebach, Michael Hoffmann, and Murilo Santos de Lima


Abstract
The area of computing with uncertainty considers problems where some information about the input elements is uncertain, but can be obtained using queries. For example, instead of the weight of an element, we may be given an interval that is guaranteed to contain the weight, and a query can be performed to reveal the weight. While previous work has considered models where queries are asked either sequentially (adaptive model) or all at once (non-adaptive model), and the goal is to minimize the number of queries that are needed to solve the given problem, we propose and study a new model where k queries can be made in parallel in each round, and the goal is to minimize the number of query rounds. We use competitive analysis and present upper and lower bounds on the number of query rounds required by any algorithm in comparison with the optimal number of query rounds. Given a set of uncertain elements and a family of m subsets of that set, we present an algorithm for determining the value of the minimum of each of the subsets that requires at most (2+ε) ⋅ opt_k+O(1/(ε) ⋅ lg m) rounds for every 0 < ε < 1, where opt_k is the optimal number of rounds, as well as nearly matching lower bounds. For the problem of determining the i-th smallest value and identifying all elements with that value in a set of uncertain elements, we give a 2-round-competitive algorithm. We also show that the problem of sorting a family of sets of uncertain elements admits a 2-round-competitive algorithm and this is the best possible.

Cite as

Thomas Erlebach, Michael Hoffmann, and Murilo Santos de Lima. Round-Competitive Algorithms for Uncertainty Problems with Parallel Queries. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 27:1-27:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{erlebach_et_al:LIPIcs.STACS.2021.27,
  author =	{Erlebach, Thomas and Hoffmann, Michael and de Lima, Murilo Santos},
  title =	{{Round-Competitive Algorithms for Uncertainty Problems with Parallel Queries}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{27:1--27:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.27},
  URN =		{urn:nbn:de:0030-drops-136728},
  doi =		{10.4230/LIPIcs.STACS.2021.27},
  annote =	{Keywords: online algorithms, competitive analysis, explorable uncertainty, parallel algorithms, minimum problem, selection problem}
}
Document
Church Synthesis on Register Automata over Linearly Ordered Data Domains

Authors: Léo Exibard, Emmanuel Filiot, and Ayrat Khalimov


Abstract
Register automata are finite automata equipped with a finite set of registers in which they can store data, i.e. elements from an unbounded or infinite alphabet. They provide a simple formalism to specify the behaviour of reactive systems operating over data ω-words. We study the synthesis problem for specifications given as register automata over a linearly ordered data domain (e.g. (ℕ, ≤) or (ℚ, ≤)), which allow for comparison of data with regards to the linear order. To that end, we extend the classical Church synthesis game to infinite alphabets: two players, Adam and Eve, alternately play some data, and Eve wins whenever their interaction complies with the specification, which is a language of ω-words over ordered data. Such games are however undecidable, even when the specification is recognised by a deterministic register automaton. This is in contrast with the equality case, where the problem is only undecidable for nondeterministic and universal specifications. Thus, we study one-sided Church games, where Eve instead operates over a finite alphabet, while Adam still manipulates data. We show they are determined, and deciding the existence of a winning strategy is in ExpTime, both for ℚ and ℕ. This follows from a study of constraint sequences, which abstract the behaviour of register automata, and allow us to reduce Church games to ω-regular games. Lastly, we apply these results to the transducer synthesis problem for input-driven register automata, where each output data is restricted to be the content of some register, and show that if there exists an implementation, then there exists one which is a register transducer.

Cite as

Léo Exibard, Emmanuel Filiot, and Ayrat Khalimov. Church Synthesis on Register Automata over Linearly Ordered Data Domains. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 28:1-28:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{exibard_et_al:LIPIcs.STACS.2021.28,
  author =	{Exibard, L\'{e}o and Filiot, Emmanuel and Khalimov, Ayrat},
  title =	{{Church Synthesis on Register Automata over Linearly Ordered Data Domains}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{28:1--28:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.28},
  URN =		{urn:nbn:de:0030-drops-136735},
  doi =		{10.4230/LIPIcs.STACS.2021.28},
  annote =	{Keywords: Synthesis, Church Game, Register Automata, Transducers, Ordered Data Words}
}
Document
A Faster Algorithm for Finding Tarski Fixed Points

Authors: John Fearnley and Rahul Savani


Abstract
Dang et al. have given an algorithm that can find a Tarski fixed point in a k-dimensional lattice of width n using O(log^k n) queries [Chuangyin Dang et al., 2020]. Multiple authors have conjectured that this algorithm is optimal [Chuangyin Dang et al., 2020; Kousha Etessami et al., 2020], and indeed this has been proven for two-dimensional instances [Kousha Etessami et al., 2020]. We show that these conjectures are false in dimension three or higher by giving an O(log² n) query algorithm for the three-dimensional Tarski problem, which generalises to give an O(log^{k-1} n) query algorithm for the k-dimensional problem when k ≥ 3.
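
For intuition about the query model, in one dimension a Tarski fixed point of a monotone function on the lattice {1, …, n} can already be found with O(log n) value queries by binary search. The sketch below illustrates this one-dimensional special case only; it is not the k-dimensional algorithm of the paper.

def tarski_1d(f, n):
    # f: {1, ..., n} -> {1, ..., n} monotone (x <= y implies f(x) <= f(y)).
    # Invariant: f maps [lo, hi] into itself, so by the Knaster-Tarski
    # theorem the interval contains a fixed point.
    lo, hi = 1, n
    while True:
        mid = (lo + hi) // 2
        v = f(mid)                # one value query
        if v == mid:
            return mid
        if v > mid:               # f maps [mid + 1, hi] into itself
            lo = mid + 1
        else:                     # f maps [lo, mid - 1] into itself
            hi = mid - 1

# Hypothetical monotone function with unique fixed point 6.
print(tarski_1d(lambda x: min(x + 2, 6), 10))   # 6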

Cite as

John Fearnley and Rahul Savani. A Faster Algorithm for Finding Tarski Fixed Points. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 29:1-29:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{fearnley_et_al:LIPIcs.STACS.2021.29,
  author =	{Fearnley, John and Savani, Rahul},
  title =	{{A Faster Algorithm for Finding Tarski Fixed Points}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{29:1--29:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.29},
  URN =		{urn:nbn:de:0030-drops-136741},
  doi =		{10.4230/LIPIcs.STACS.2021.29},
  annote =	{Keywords: query complexity, Tarski fixed points, total function problem}
}
Document
Solving One Variable Word Equations in the Free Group in Cubic Time

Authors: Robert Ferens and Artur Jeż


Abstract
A word equation with one variable in a free group is given as U = V, where both U and V are words over the alphabet of generators of the free group and X, X⁻¹, for a fixed variable X. An element of the free group is a solution when substituting it for X yields a true equality (interpreted in the free group) of left- and right-hand sides. It is known that the set of all solutions of a given word equation with one variable is a finite union of sets of the form {α wⁱ β : i ∈ ℤ}, where α, w, β are reduced words over the alphabet of generators, and a polynomial-time algorithm (of a high degree) computing this set is known. We provide a cubic time algorithm for this problem, which also shows that the set of solutions consists of at most a quadratic number of the above-mentioned sets. The algorithm uses only simple tools of word combinatorics and group theory and is simple to state. Its analysis is involved and focuses on the combinatorics of occurrences of powers of a word within a larger word.
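
As a small illustration of the objects involved (not of the cubic-time algorithm), the sketch below freely reduces words of a free group, writing a generator as a lowercase letter and its inverse as the corresponding uppercase letter, and checks whether a candidate value for the variable solves a given equation. The encoding of the variable as 'X' and of X⁻¹ as 'Y' is an assumption of this toy example.

def reduce_word(w):
    # Free reduction: repeatedly cancel adjacent inverse pairs such as "aA" or "Aa".
    stack = []
    for c in w:
        if stack and stack[-1] == c.swapcase():
            stack.pop()
        else:
            stack.append(c)
    return "".join(stack)

def invert(w):
    return "".join(c.swapcase() for c in reversed(w))

def is_solution(U, V, candidate):
    # Substitute the candidate for the variable 'X' (and its inverse for 'Y',
    # our toy notation for X^{-1}) and compare the freely reduced sides.
    subst = lambda w: "".join(candidate if c == "X"
                              else invert(candidate) if c == "Y"
                              else c for c in w)
    return reduce_word(subst(U)) == reduce_word(subst(V))

# Hypothetical equation a X = X a over generators {a, b}.
print(is_solution("aX", "Xa", "aa"))   # True: powers of a commute with a
print(is_solution("aX", "Xa", "b"))    # False

For this toy equation the solutions are exactly the powers of a (including negative ones), matching the {α wⁱ β} shape of the solution sets described above.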

Cite as

Robert Ferens and Artur Jeż. Solving One Variable Word Equations in the Free Group in Cubic Time. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 30:1-30:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{ferens_et_al:LIPIcs.STACS.2021.30,
  author =	{Ferens, Robert and Je\.{z}, Artur},
  title =	{{Solving One Variable Word Equations in the Free Group in Cubic Time}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{30:1--30:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.30},
  URN =		{urn:nbn:de:0030-drops-136751},
  doi =		{10.4230/LIPIcs.STACS.2021.30},
  annote =	{Keywords: Word equations, free group, one-variable equations}
}
Document
Diverse Collections in Matroids and Graphs

Authors: Fedor V. Fomin, Petr A. Golovach, Fahad Panolan, Geevarghese Philip, and Saket Saurabh


Abstract
We investigate the parameterized complexity of finding diverse sets of solutions to three fundamental combinatorial problems, two from the theory of matroids and the third from graph theory. The input to the Weighted Diverse Bases problem consists of a matroid M, a weight function ω:E(M)→N, and integers k ≥ 1, d ≥ 0. The task is to decide if there is a collection of k bases B_1, ..., B_k of M such that the weight of the symmetric difference of any pair of these bases is at least d. This is a diverse variant of the classical matroid base packing problem. The input to the Weighted Diverse Common Independent Sets problem consists of two matroids M₁,M₂ defined on the same ground set E, a weight function ω:E→N, and integers k ≥ 1, d ≥ 0. The task is to decide if there is a collection of k common independent sets I_1, ..., I_k of M₁ and M₂ such that the weight of the symmetric difference of any pair of these sets is at least d. This is motivated by the classical weighted matroid intersection problem. The input to the Diverse Perfect Matchings problem consists of a graph G and integers k ≥ 1, d ≥ 0. The task is to decide if G contains k perfect matchings M_1, ..., M_k such that the symmetric difference of any two of these matchings has size at least d. The underlying problem of finding one solution (basis, common independent set, or perfect matching) is known to be solvable in polynomial time for each of these problems, and Diverse Perfect Matchings is known to be NP-hard for k = 2. We show that Weighted Diverse Bases and Weighted Diverse Common Independent Sets are both NP-hard. We also show that Diverse Perfect Matchings cannot be solved in polynomial time (unless P=NP) even for the case d = 1. We derive fixed-parameter tractable (FPT) algorithms for all three problems with (k,d) as the parameter. The above results on matroids are derived under the assumption that the input matroids are given as independence oracles. For Weighted Diverse Bases we present a polynomial-time algorithm that takes a representation of the input matroid over a finite field and computes a poly(k,d)-sized kernel for the problem.

Cite as

Fedor V. Fomin, Petr A. Golovach, Fahad Panolan, Geevarghese Philip, and Saket Saurabh. Diverse Collections in Matroids and Graphs. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 31:1-31:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{fomin_et_al:LIPIcs.STACS.2021.31,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Panolan, Fahad and Philip, Geevarghese and Saurabh, Saket},
  title =	{{Diverse Collections in Matroids and Graphs}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{31:1--31:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.31},
  URN =		{urn:nbn:de:0030-drops-136769},
  doi =		{10.4230/LIPIcs.STACS.2021.31},
  annote =	{Keywords: Matroids, Diverse solutions, Fixed-parameter tractable algorithms}
}
Document
Rice-Like Theorems for Automata Networks

Authors: Guilhem Gamard, Pierre Guillon, Kevin Perrot, and Guillaume Theyssier


Abstract
We prove general complexity lower bounds on automata networks, in the style of Rice’s theorem, but in the computable world. Our main result is that testing any fixed first-order property on the dynamics of an automata network is either trivial, or NP-hard, or coNP-hard. Moreover, there exist such properties that are arbitrarily high in the polynomial-time hierarchy. We also prove that testing a first-order property given as input on an automata network (also part of the input) is PSPACE-hard. Besides, we show that, under a natural effectiveness condition, any nontrivial property of the limit set of a nondeterministic network is PSPACE-hard. We also show that it is PSPACE-hard to separate deterministic networks with a very high and a very low number of limit configurations; however, the problem of deciding whether the number of limit configurations is maximal up to a polynomial quantity belongs to the polynomial-time hierarchy.

Cite as

Guilhem Gamard, Pierre Guillon, Kevin Perrot, and Guillaume Theyssier. Rice-Like Theorems for Automata Networks. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 32:1-32:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{gamard_et_al:LIPIcs.STACS.2021.32,
  author =	{Gamard, Guilhem and Guillon, Pierre and Perrot, Kevin and Theyssier, Guillaume},
  title =	{{Rice-Like Theorems for Automata Networks}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{32:1--32:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.32},
  URN =		{urn:nbn:de:0030-drops-136770},
  doi =		{10.4230/LIPIcs.STACS.2021.32},
  annote =	{Keywords: Automata networks, Rice theorem, complexity classes, polynomial hierarchy, hardness}
}
Document
Auction Algorithms for Market Equilibrium with Weak Gross Substitute Demands and Their Applications

Authors: Jugal Garg, Edin Husić, and László A. Végh


Abstract
We consider the Arrow-Debreu exchange market model where agents' demands satisfy the weak gross substitutes (WGS) property. This is a well-studied property; in particular, it gives a sufficient condition for the convergence of the classical tâtonnement dynamics. In this paper, we present a simple auction algorithm that obtains an approximate market equilibrium for WGS demands. Such auction algorithms have been previously known for restricted classes of WGS demands only. As an application of our technique, we obtain an efficient algorithm to find an approximate spending-restricted market equilibrium for WGS demands, a model that has been recently introduced as a continuous relaxation of the Nash social welfare (NSW) problem. This leads to a polynomial-time constant-factor approximation algorithm for NSW with budget-separable piecewise linear utility functions; only a pseudopolynomial approximation algorithm was known for this setting previously.

Cite as

Jugal Garg, Edin Husić, and László A. Végh. Auction Algorithms for Market Equilibrium with Weak Gross Substitute Demands and Their Applications. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 33:1-33:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{garg_et_al:LIPIcs.STACS.2021.33,
  author =	{Garg, Jugal and Husi\'{c}, Edin and V\'{e}gh, L\'{a}szl\'{o} A.},
  title =	{{Auction Algorithms for Market Equilibrium with Weak Gross Substitute Demands and Their Applications}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{33:1--33:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.33},
  URN =		{urn:nbn:de:0030-drops-136786},
  doi =		{10.4230/LIPIcs.STACS.2021.33},
  annote =	{Keywords: auction algorithm, weak gross substitutes, Fisher equilibrium, Gale equilibrium, Nash social welfare}
}
Document
Efficiently Testing Simon’s Congruence

Authors: Paweł Gawrychowski, Maria Kosche, Tore Koß, Florin Manea, and Stefan Siemer


Abstract
Simon’s congruence ∼_k is a relation on words defined by Imre Simon in the 1970s and intensely studied since then. This congruence was initially used in connection with piecewise testable languages, but also found many applications in, e.g., learning theory, database theory, or linguistics. The ∼_k-relation is defined as follows: two words are ∼_k-congruent if they have the same set of subsequences of length at most k. A long-standing open problem, stated already by Simon in his initial works on this topic, was to design an algorithm which computes, given two words s and t, the largest k for which s ∼_k t. We propose the first algorithm solving this problem in linear time O(|s|+|t|) when the input words are over the integer alphabet {1,…,|s|+|t|} (or other alphabets which can be sorted in linear time). Our approach can be extended to an optimal algorithm in the case of general alphabets as well. To achieve these results, we introduce a novel data structure, called Simon-Tree, which allows us to construct a natural representation of the equivalence classes induced by ∼_k on the set of suffixes of a word, for all k ≥ 1. We show that such a tree can be constructed for an input word in linear time. Then, when working with two words s and t, we compute their respective Simon-Trees and efficiently build a correspondence between the nodes of these trees. This correspondence, which can also be constructed in linear time O(|s|+|t|), allows us to retrieve the largest k for which s ∼_k t.
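
For very small inputs one can compute the largest k with s ∼_k t directly from the definition, by comparing the sets of subsequences of each length; this exponential baseline is useful only for testing and is unrelated to the linear-time Simon-Tree algorithm:

from itertools import combinations

def subseqs(w, k):
    return {"".join(w[i] for i in idx) for idx in combinations(range(len(w)), k)}

def max_simon_k(s, t):
    # Largest k with s ~_k t, i.e., equal sets of subsequences up to length k.
    if s == t:
        return float("inf")
    k = 0
    while subseqs(s, k + 1) == subseqs(t, k + 1):
        k += 1
    return k

print(max_simon_k("abab", "abba"))   # 2: equal subsequence sets up to length 2 only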

Cite as

Paweł Gawrychowski, Maria Kosche, Tore Koß, Florin Manea, and Stefan Siemer. Efficiently Testing Simon’s Congruence. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 34:1-34:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{gawrychowski_et_al:LIPIcs.STACS.2021.34,
  author =	{Gawrychowski, Pawe{\l} and Kosche, Maria and Ko{\ss}, Tore and Manea, Florin and Siemer, Stefan},
  title =	{{Efficiently Testing Simon’s Congruence}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{34:1--34:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.34},
  URN =		{urn:nbn:de:0030-drops-136796},
  doi =		{10.4230/LIPIcs.STACS.2021.34},
  annote =	{Keywords: Simon’s congruence, Subsequence, Scattered factor, Efficient algorithms}
}
Document
Finding an Optimal Alphabet Ordering for Lyndon Factorization Is Hard

Authors: Daniel Gibney and Sharma V. Thankachan


Abstract
This work establishes several strong hardness results on the problem of finding an ordering on a string’s alphabet that either minimizes or maximizes the number of factors in that string’s Lyndon factorization. In doing so, we demonstrate that these ordering problems are sufficiently complex to model a wide variety of ordering constraint satisfaction problems (OCSPs). Based on this, we prove that (i) the decision versions of both the minimization and maximization problems are NP-complete, (ii) for both the minimization and maximization problems there does not exist a constant-factor approximation algorithm running in polynomial time under the Unique Games Conjecture, and (iii) there does not exist an algorithm to solve the minimization problem in time poly(|T|) ⋅ 2^o(σ log σ) for a string T over an alphabet of size σ under the Exponential Time Hypothesis (essentially, the brute-force approach of trying every alphabet order is hard to improve significantly).
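
For context: once an alphabet ordering is fixed, the Lyndon factorization itself is computable in linear time by Duval's algorithm; the hardness results above concern choosing the ordering. A short sketch of the standard algorithm, parameterized by an explicit order, shows how the number of factors varies with the order:

def lyndon_factorization(s, order):
    # Duval's algorithm; 'order' lists the alphabet from smallest to largest.
    rank = {c: i for i, c in enumerate(order)}
    factors, i, n = [], 0, len(s)
    while i < n:
        j, k = i + 1, i
        while j < n and rank[s[k]] <= rank[s[j]]:
            k = i if rank[s[k]] < rank[s[j]] else k + 1
            j += 1
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

print(lyndon_factorization("baca", "abc"))   # ['b', 'ac', 'a']   (3 factors)
print(lyndon_factorization("baca", "cba"))   # ['ba', 'ca']       (2 factors)

The same word splits into three factors under a < b < c but only two under c < b < a; this factor count is exactly the quantity the ordering problems above ask to optimize.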

Cite as

Daniel Gibney and Sharma V. Thankachan. Finding an Optimal Alphabet Ordering for Lyndon Factorization Is Hard. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 35:1-35:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{gibney_et_al:LIPIcs.STACS.2021.35,
  author =	{Gibney, Daniel and Thankachan, Sharma V.},
  title =	{{Finding an Optimal Alphabet Ordering for Lyndon Factorization Is Hard}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{35:1--35:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.35},
  URN =		{urn:nbn:de:0030-drops-136809},
  doi =		{10.4230/LIPIcs.STACS.2021.35},
  annote =	{Keywords: Lyndon Factorization, String Algorithms, Burrows-Wheeler Transform}
}
Document
Reachability in Two-Parametric Timed Automata with One Parameter Is EXPSPACE-Complete

Authors: Stefan Göller and Mathieu Hilaire


Abstract
Parametric timed automata (PTA) have been introduced by Alur, Henzinger, and Vardi as an extension of timed automata in which clocks can be compared against parameters. The reachability problem asks for the existence of an assignment of the parameters to the non-negative integers such that reachability holds in the underlying timed automaton. The reachability problem for PTA is long known to be undecidable, already over three parametric clocks. A few years ago, Bundala and Ouaknine proved that for PTA over two parametric clocks and one parameter the reachability problem is decidable and also showed a lower bound for the complexity class PSPACE^NEXP. Our main result is that the reachability problem for parametric timed automata over two parametric clocks and one parameter is EXPSPACE-complete. For the EXPSPACE lower bound we make use of deep results from complexity theory, namely a serializability characterization of EXPSPACE (in turn based on Barrington’s Theorem) and a logspace translation of numbers in Chinese Remainder Representation to binary representation due to Chiu, Davida, and Litow. It is shown that with small PTA over two parametric clocks and one parameter one can simulate serializability computations. For the EXPSPACE upper bound, we first give a careful exponential time reduction from PTA over two parametric clocks and one parameter to a (slight subclass of) parametric one-counter automata over one parameter based on a minor adjustment of a construction due to Bundala and Ouaknine. For solving the reachability problem for parametric one-counter automata with one parameter, we provide a series of techniques to partition a fictitious run into several carefully chosen subruns that allow us to prove that it is sufficient to consider a parameter value of exponential magnitude only. This allows us to show a doubly-exponential upper bound on the value of the only parameter of a PTA over two parametric clocks and one parameter. We hope that extensions of our techniques lead to finally establishing decidability of the long-standing open problem of reachability in parametric timed automata with two parametric clocks (and arbitrarily many parameters) and, if decidability holds, determining its precise computational complexity.

Cite as

Stefan Göller and Mathieu Hilaire. Reachability in Two-Parametric Timed Automata with One Parameter Is EXPSPACE-Complete. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 36:1-36:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{goller_et_al:LIPIcs.STACS.2021.36,
  author =	{G\"{o}ller, Stefan and Hilaire, Mathieu},
  title =	{{Reachability in Two-Parametric Timed Automata with One Parameter Is EXPSPACE-Complete}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{36:1--36:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.36},
  URN =		{urn:nbn:de:0030-drops-136817},
  doi =		{10.4230/LIPIcs.STACS.2021.36},
  annote =	{Keywords: Parametric Timed Automata, Computational Complexity, Reachability}
}
Document
Refined Notions of Parameterized Enumeration Kernels with Applications to Matching Cut Enumeration

Authors: Petr A. Golovach, Christian Komusiewicz, Dieter Kratsch, and Van Bang Le


Abstract
An enumeration kernel as defined by Creignou et al. [Theory Comput. Syst. 2017] for a parameterized enumeration problem consists of an algorithm that transforms each instance into one whose size is bounded by the parameter, plus a solution-lifting algorithm that efficiently enumerates all solutions from the set of the solutions of the kernel. We propose to consider two new versions of enumeration kernels by asking that the solutions of the original instance can be enumerated in polynomial time or with polynomial delay from the kernel solutions. Using the NP-hard Matching Cut problem parameterized by structural parameters such as the vertex cover number or the cyclomatic number of the input graph, we show that the new enumeration kernels present a useful notion of data reduction for enumeration problems which allows one to compactly represent the set of feasible solutions.
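
For reference, a matching cut is an edge cut whose edges form a matching, i.e., a bipartition of the vertices in which every vertex has at most one neighbour on the other side. The brute-force enumerator below (exponential, for tiny instances only, and not the kernelization of the paper) makes the enumeration flavour of the problem concrete:

from itertools import combinations

def matching_cuts(vertices, edges):
    # Enumerate bipartitions (A, B) whose crossing edges form a matching.
    vertices = list(vertices)
    first, rest = vertices[0], vertices[1:]
    for r in range(len(rest)):
        for extra in combinations(rest, r):
            A = {first, *extra}
            B = set(rest) - set(extra)
            cut = [e for e in edges if (e[0] in A) != (e[1] in A)]
            ends = [v for e in cut for v in e]
            if cut and len(ends) == len(set(ends)):   # no shared endpoint
                yield A, B

# Hypothetical example: the path 1-2-3-4 has four matching cuts.
for A, B in matching_cuts([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]):
    print(sorted(A), sorted(B))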

Cite as

Petr A. Golovach, Christian Komusiewicz, Dieter Kratsch, and Van Bang Le. Refined Notions of Parameterized Enumeration Kernels with Applications to Matching Cut Enumeration. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 37:1-37:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{golovach_et_al:LIPIcs.STACS.2021.37,
  author =	{Golovach, Petr A. and Komusiewicz, Christian and Kratsch, Dieter and Le, Van Bang},
  title =	{{Refined Notions of Parameterized Enumeration Kernels with Applications to Matching Cut Enumeration}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{37:1--37:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.37},
  URN =		{urn:nbn:de:0030-drops-136823},
  doi =		{10.4230/LIPIcs.STACS.2021.37},
  annote =	{Keywords: enumeration problems, polynomial delay, output-sensitive algorithms, kernelization, structural parameterizations, matching cuts}
}
Document
Average-Case Algorithms for Testing Isomorphism of Polynomials, Algebras, and Multilinear Forms

Authors: Joshua A. Grochow, Youming Qiao, and Gang Tang


Abstract
We study the problems of testing isomorphism of polynomials, algebras, and multilinear forms. Our first main results are average-case algorithms for these problems. For example, we develop an algorithm that takes two cubic forms f, g ∈ 𝔽_q[x_1, … , x_n] and decides whether f and g are isomorphic in time q^O(n) for most f. This average-case setting has direct practical implications, having been studied in multivariate cryptography since the 1990s. Our second result concerns the complexity of testing equivalence of alternating trilinear forms. This problem is of interest in both mathematics and cryptography. We show that this problem is polynomial-time equivalent to testing equivalence of symmetric trilinear forms, by showing that they are both Tensor Isomorphism-complete (Grochow & Qiao, ITCS 2021), and is therefore also equivalent to testing isomorphism of cubic forms over most fields.

Cite as

Joshua A. Grochow, Youming Qiao, and Gang Tang. Average-Case Algorithms for Testing Isomorphism of Polynomials, Algebras, and Multilinear Forms. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 38:1-38:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{grochow_et_al:LIPIcs.STACS.2021.38,
  author =	{Grochow, Joshua A. and Qiao, Youming and Tang, Gang},
  title =	{{Average-Case Algorithms for Testing Isomorphism of Polynomials, Algebras, and Multilinear Forms}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{38:1--38:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.38},
  URN =		{urn:nbn:de:0030-drops-136836},
  doi =		{10.4230/LIPIcs.STACS.2021.38},
  annote =	{Keywords: polynomial isomorphism, trilinear form equivalence, algebra isomorphism, average-case algorithms, tensor isomorphism complete, symmetric and alternating bilinear maps}
}
Document
Geometric Cover with Outliers Removal

Authors: Zhengyang Guo and Yi Li


Abstract
We study the problem of partial geometric cover, which asks for the minimum number of geometric objects (unit squares and unit disks in this work) that cover at least (n-t) of n given planar points, where 0 ≤ t ≤ n/2. When t = 0, the problem is the classical geometric cover problem, for which many existing works adopt a general framework called the shifting strategy. The shifting strategy is a divide-and-conquer paradigm which partitions the plane into equal-width strips, applies a local algorithm on each strip and then merges the local solutions with only a small loss on the overall approximation ratio. A challenge in extending the shifting strategy to the case of outliers is to determine the number of outliers in each strip. We develop a shifting strategy incorporating the outlier distribution, which runs in O(tn log n) time. We also develop local algorithms on strips for the outliers case, improving the running time over previous algorithms, and consequently obtain approximation algorithms for partial geometric cover.
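
As a one-dimensional analogue of the per-strip subproblem with t = 0 (an illustration only, not one of the paper's local algorithms), covering points on a line with the minimum number of unit intervals is solved exactly by a simple greedy sweep:

def unit_interval_cover(points):
    # Greedy sweep: start an interval at the leftmost uncovered point.
    # This is optimal for full covering of points on a line.
    pts = sorted(points)
    count, i = 0, 0
    while i < len(pts):
        right = pts[i] + 1
        count += 1
        while i < len(pts) and pts[i] <= right:
            i += 1
    return count

print(unit_interval_cover([0.1, 0.5, 0.9, 2.0, 2.6, 3.9]))   # 3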

Cite as

Zhengyang Guo and Yi Li. Geometric Cover with Outliers Removal. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 39:1-39:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{guo_et_al:LIPIcs.STACS.2021.39,
  author =	{Guo, Zhengyang and Li, Yi},
  title =	{{Geometric Cover with Outliers Removal}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{39:1--39:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.39},
  URN =		{urn:nbn:de:0030-drops-136849},
  doi =		{10.4230/LIPIcs.STACS.2021.39},
  annote =	{Keywords: Geometric Cover, Unit Square Cover, Unit Disk Cover, Shifting Strategy, Outliers Detection, Computational Geometry}
}
Document
Parameterised Counting in Logspace

Authors: Anselm Haak, Arne Meier, Om Prakash, and Raghavendra Rao B. V.


Abstract
Logarithmic space bounded complexity classes such as L and NL play a central role in space bounded computation. The study of counting versions of these complexity classes has led to several interesting insights into the structure of computational problems such as computing the determinant and counting paths in directed acyclic graphs. Though parameterised complexity theory was initiated roughly three decades ago by Downey and Fellows, a satisfactory study of parameterised logarithmic space bounded computation was developed only in the last decade by Elberfeld, Stockhusen and Tantau (IPEC 2013, Algorithmica 2015). In this paper, we introduce a new framework for parameterised counting in logspace, inspired by the parameterised space bounded models developed by Elberfeld, Stockhusen and Tantau (IPEC 2013, Algorithmica 2015). They defined the operators para_W and para_β for parameterised space complexity classes by allowing bounded nondeterminism with multiple-read and read-once access, respectively. Using these operators, they characterised the parameterised complexity of natural problems on graphs. In the spirit of the operators para_W and para_β by Stockhusen and Tantau, we introduce variants based on tail-nondeterminism, para_{W[1]} and para_{βtail}. Then, we consider counting versions of all four operators applied to logspace and obtain several natural complete problems for the resulting classes: counting of paths in digraphs, counting first-order models for formulas, and counting graph homomorphisms. Furthermore, we show that the complexity of a parameterised variant of the determinant function for (0,1)-matrices is #para_{βtail} L-hard and can be written as the difference of two functions in #para_{βtail} L. These problems exhibit the richness of the introduced counting classes. Our results further indicate interesting structural characteristics of these classes. For example, we show that the closure of #para_{βtail} L under parameterised logspace parsimonious reductions coincides with #para_β L, that is, modulo parameterised reductions, tail-nondeterminism with read-once access is the same as read-once nondeterminism. Initiating the study of closure properties of these parameterised logspace counting classes, we show that all introduced classes are closed under addition and multiplication, and those without tail-nondeterminism are closed under parameterised logspace parsimonious reductions. Also, we show that the counting classes defined can naturally be characterised by parameterised variants of classes based on branching programs in analogy to the classical counting classes. Finally, we underline the significance of this topic by providing a promising outlook showing several open problems and options for further directions of research.
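
One of the complete problems mentioned above, counting the paths between two vertices of a directed acyclic graph, has a simple sequential dynamic-programming illustration (this ignores the logspace restriction and is only meant to make the counting problem concrete):

from functools import lru_cache

def count_paths(dag, s, t):
    # dag: adjacency dict of a directed acyclic graph.
    @lru_cache(maxsize=None)
    def paths_from(v):
        if v == t:
            return 1
        return sum(paths_from(w) for w in dag.get(v, ()))
    return paths_from(s)

# Hypothetical DAG with exactly two paths from 1 to 5.
print(count_paths({1: [2, 3], 2: [4], 3: [4], 4: [5]}, 1, 5))   # 2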

Cite as

Anselm Haak, Arne Meier, Om Prakash, and Raghavendra Rao B. V.. Parameterised Counting in Logspace. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 40:1-40:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{haak_et_al:LIPIcs.STACS.2021.40,
  author =	{Haak, Anselm and Meier, Arne and Prakash, Om and Rao B. V., Raghavendra},
  title =	{{Parameterised Counting in Logspace}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{40:1--40:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.40},
  URN =		{urn:nbn:de:0030-drops-136859},
  doi =		{10.4230/LIPIcs.STACS.2021.40},
  annote =	{Keywords: Parameterized Complexity, Counting Complexity, Logspace}
}
Document
Digraph Coloring and Distance to Acyclicity

Authors: Ararat Harutyunyan, Michael Lampis, and Nikolaos Melissinos


Abstract
In k-Digraph Coloring we are given a digraph and are asked to partition its vertices into at most k sets, so that each set induces a DAG. This well-known problem is NP-hard, as it generalizes (undirected) k-Coloring, but becomes trivial if the input digraph is acyclic. This poses the natural parameterized complexity question of what happens when the input is "almost" acyclic. In this paper we study this question using parameters that measure the input’s distance to acyclicity in either the directed or the undirected sense. In the directed sense perhaps the most natural notion of distance to acyclicity is directed feedback vertex set (DFVS). It is already known that, for all k ≥ 2, k-Digraph Coloring is NP-hard on digraphs of DFVS at most k+4. We strengthen this result to show that, for all k ≥ 2, k-Digraph Coloring is already NP-hard for DFVS exactly k. This immediately provides a dichotomy, as k-Digraph Coloring is trivial if DFVS is at most k-1. Refining our reduction we obtain two further consequences: (i) for all k ≥ 2, k-Digraph Coloring is NP-hard for graphs of feedback arc set (FAS) at most k²; interestingly, this leads to a second dichotomy, as we show that the problem is FPT by k if FAS is at most k²-1; (ii) k-Digraph Coloring is NP-hard for graphs of DFVS k, even if the maximum degree Δ is at most 4k-1; we show that this is also almost tight, as the problem becomes FPT for DFVS k and Δ ≤ 4k-3. Since these results imply that the problem is also NP-hard on graphs of bounded directed treewidth, we then consider parameters that measure the distance from acyclicity of the underlying graph. On the positive side, we show that k-Digraph Coloring admits an FPT algorithm parameterized by treewidth, whose parameter dependence is (tw!)k^{tw}. Since this is considerably worse than the k^{tw} dependence of (undirected) k-Coloring, we pose the question of whether the tw! factor can be eliminated. Our main contribution in this part is to settle this question in the negative and show that our algorithm is essentially optimal, even for the much more restricted parameter treedepth and for k = 2. Specifically, we show that an FPT algorithm solving 2-Digraph Coloring with dependence td^o(td) would contradict the ETH.
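
A brute-force check of k-Digraph Coloring for tiny digraphs, directly from the definition (each colour class must induce a DAG); this is an illustration only and unrelated to the parameterized algorithms above:

from itertools import product

def induces_dag(part, edges):
    # The induced subdigraph is acyclic iff it can be fully peeled by
    # repeatedly removing vertices of in-degree zero.
    vs = set(part)
    es = [(u, v) for u, v in edges if u in vs and v in vs]
    while vs:
        sources = {v for v in vs if all(y != v for _, y in es)}
        if not sources:
            return False          # no source left: the remaining digraph has a cycle
        vs -= sources
        es = [(u, v) for u, v in es if u in vs and v in vs]
    return True

def k_digraph_colorable(vertices, edges, k):
    vertices = list(vertices)
    for colouring in product(range(k), repeat=len(vertices)):
        classes = [[v for v, c in zip(vertices, colouring) if c == colour]
                   for colour in range(k)]
        if all(induces_dag(cls, edges) for cls in classes):
            return True
    return False

# Hypothetical example: a directed triangle needs two colours.
triangle = [(1, 2), (2, 3), (3, 1)]
print(k_digraph_colorable([1, 2, 3], triangle, 1))   # False
print(k_digraph_colorable([1, 2, 3], triangle, 2))   # True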

Cite as

Ararat Harutyunyan, Michael Lampis, and Nikolaos Melissinos. Digraph Coloring and Distance to Acyclicity. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 41:1-41:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{harutyunyan_et_al:LIPIcs.STACS.2021.41,
  author =	{Harutyunyan, Ararat and Lampis, Michael and Melissinos, Nikolaos},
  title =	{{Digraph Coloring and Distance to Acyclicity}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{41:1--41:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.41},
  URN =		{urn:nbn:de:0030-drops-136865},
  doi =		{10.4230/LIPIcs.STACS.2021.41},
  annote =	{Keywords: Digraph Coloring, Dichromatic number, NP-completeness, Parameterized complexity, Feedback vertex and arc sets}
}
Document
Good r-Divisions Imply Optimal Amortized Decremental Biconnectivity

Authors: Jacob Holm and Eva Rotenberg


Abstract
We present a data structure that, given a graph G of n vertices and m edges, and a suitable pair of nested r-divisions of G, preprocesses G in O(m+n) time and handles any series of edge-deletions in O(m) total time while answering queries to pairwise biconnectivity in worst-case O(1) time. In case the vertices are not biconnected, the data structure can return a cutvertex separating them in worst-case O(1) time. As an immediate consequence, this gives optimal amortized decremental biconnectivity, 2-edge connectivity, and connectivity for large classes of graphs, including planar graphs and other minor-free graphs.

Cite as

Jacob Holm and Eva Rotenberg. Good r-Divisions Imply Optimal Amortized Decremental Biconnectivity. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 42:1-42:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{holm_et_al:LIPIcs.STACS.2021.42,
  author =	{Holm, Jacob and Rotenberg, Eva},
  title =	{{Good r-Divisions Imply Optimal Amortized Decremental Biconnectivity}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{42:1--42:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.42},
  URN =		{urn:nbn:de:0030-drops-136875},
  doi =		{10.4230/LIPIcs.STACS.2021.42},
  annote =	{Keywords: Dynamic graphs, 2-connectivity, graph minors, r-divisions, graph separators}
}
Document
b-Coloring Parameterized by Clique-Width

Authors: Lars Jaffke, Paloma T. Lima, and Daniel Lokshtanov


Abstract
We provide a polynomial-time algorithm for b-Coloring on graphs of constant clique-width. This unifies and extends nearly all previously known polynomial-time results on graph classes, and answers open questions posed by Campos and Silva [Algorithmica, 2018] and Bonomo et al. [Graphs Combin., 2009]. This constitutes the first result concerning structural parameterizations of this problem. We show that the problem is FPT when parameterized by the vertex cover number on general graphs, and on chordal graphs when parameterized by the number of colors. Additionally, we observe that our algorithm for graphs of bounded clique-width can be adapted to solve the Fall Coloring problem within the same runtime bound. The running times of the clique-width based algorithms for b-Coloring and Fall Coloring are tight under the Exponential Time Hypothesis.

Cite as

Lars Jaffke, Paloma T. Lima, and Daniel Lokshtanov. b-Coloring Parameterized by Clique-Width. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 43:1-43:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{jaffke_et_al:LIPIcs.STACS.2021.43,
  author =	{Jaffke, Lars and Lima, Paloma T. and Lokshtanov, Daniel},
  title =	{{b-Coloring Parameterized by Clique-Width}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{43:1--43:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.43},
  URN =		{urn:nbn:de:0030-drops-136881},
  doi =		{10.4230/LIPIcs.STACS.2021.43},
  annote =	{Keywords: b-Coloring, clique-width, vertex cover, structural parameterization}
}
Document
A Ramsey Theorem for Finite Monoids

Authors: Ismaël Jecker


Abstract
Repeated idempotent elements are commonly used to characterise iterable behaviours in abstract models of computation. Therefore, given a monoid M, it is natural to ask how long a sequence of elements of M needs to be to ensure the presence of consecutive idempotent factors. This question is formalised through the notion of the Ramsey function R_M associated to M, obtained by mapping every k ∈ ℕ to the minimal integer R_M(k) such that every word u ∈ M^* of length R_M(k) contains k consecutive non-empty factors that correspond to the same idempotent element of M. In this work, we study the behaviour of the Ramsey function R_M by investigating the regular 𝒟-length of M, defined as the largest size L(M) of a submonoid of M isomorphic to the set of natural numbers {1,2, …, L(M)} equipped with the max operation. We show that the regular 𝒟-length of M determines the degree of R_M, by proving that k^L(M) ≤ R_M(k) ≤ (k|M|⁴)^L(M). To allow applications of this result, we provide the value of the regular 𝒟-length of diverse monoids. In particular, we prove that the full monoid of n × n Boolean matrices, which is used to express transition monoids of non-deterministic automata, has a regular 𝒟-length of (n²+n+2)/2.
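
A quick way to get a feel for the idempotents that the Ramsey function R_M is about is to enumerate them in a small monoid. The sketch below brute-forces the idempotent elements of the monoid of n x n Boolean matrices under Boolean matrix multiplication, the monoid mentioned at the end of the abstract; it only illustrates the definition and is feasible for tiny n.

# Minimal sketch (not from the paper): brute-force the idempotent elements of
# the monoid of n x n Boolean matrices under Boolean matrix multiplication
# (the monoid of transition relations of nondeterministic automata).
from itertools import product

def bool_mul(a, b, n):
    return tuple(tuple(any(a[i][k] and b[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def idempotents(n):
    rows = list(product((False, True), repeat=n))
    return [m for m in product(rows, repeat=n) if bool_mul(m, m, n) == m]

# For n = 2 there are 16 Boolean matrices; print the idempotent ones as 0/1 rows.
for m in idempotents(2):
    print(tuple(tuple(int(x) for x in row) for row in m))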

Cite as

Ismaël Jecker. A Ramsey Theorem for Finite Monoids. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 44:1-44:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{jecker:LIPIcs.STACS.2021.44,
  author =	{Jecker, Isma\"{e}l},
  title =	{{A Ramsey Theorem for Finite Monoids}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{44:1--44:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.44},
  URN =		{urn:nbn:de:0030-drops-136890},
  doi =		{10.4230/LIPIcs.STACS.2021.44},
  annote =	{Keywords: Semigroup, monoid, idempotent, automaton}
}
Document
An Improved Sketching Algorithm for Edit Distance

Authors: Ce Jin, Jelani Nelson, and Kewen Wu


Abstract
We provide improved upper bounds for the simultaneous sketching complexity of edit distance. Consider two parties, Alice with input x ∈ Σⁿ and Bob with input y ∈ Σⁿ, that share public randomness and are given a promise that the edit distance ed(x,y) between their two strings is at most some given value k. Alice must send a message sx and Bob must send sy to a third party Charlie, who does not know the inputs but shares the same public randomness and also knows k. Charlie must output ed(x,y) precisely as well as a sequence of ed(x,y) edits required to transform x into y. The goal is to minimize the lengths |sx|, |sy| of the messages sent. The protocol of Belazzougui and Zhang (FOCS 2016), building upon the random walk method of Chakraborty, Goldenberg, and Koucký (STOC 2016), achieves a maximum message length of Õ(k⁸) bits, where Õ(⋅) hides poly(log n) factors. In this work we build upon Belazzougui and Zhang’s protocol and provide an improved analysis demonstrating that a slight modification of their construction achieves a bound of Õ(k³).
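
For intuition about the promise ed(x,y) ≤ k: classically, with both strings in hand, the promise already allows cheap verification by a banded dynamic program that touches only O(nk) table cells. The sketch below implements that standard baseline; it is not the Belazzougui-Zhang protocol nor the modification analysed in the paper.

# Minimal sketch (not the sketching protocol from the paper): verify the promise
# ed(x, y) <= k with a banded dynamic program that fills only the cells within
# distance k of the diagonal.

def edit_distance_at_most_k(x, y, k):
    """Return ed(x, y) if it is <= k, otherwise None."""
    n, m = len(x), len(y)
    if abs(n - m) > k:
        return None
    INF = k + 1
    prev = {j: j for j in range(0, min(m, k) + 1)}   # row 0 of the DP table
    for i in range(1, n + 1):
        cur = {}
        lo, hi = max(0, i - k), min(m, i + k)
        for j in range(lo, hi + 1):
            if j == 0:
                best = i
            else:
                diag = prev.get(j - 1, INF) + (0 if x[i - 1] == y[j - 1] else 1)
                best = min(diag,
                           prev.get(j, INF) + 1,     # delete x[i-1]
                           cur.get(j - 1, INF) + 1)  # insert y[j-1]
            cur[j] = min(best, INF)
        prev = cur
    d = prev.get(m, INF)
    return d if d <= k else None

print(edit_distance_at_most_k("kitten", "sitting", 3))  # 3
print(edit_distance_at_most_k("kitten", "sitting", 2))  # None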

Cite as

Ce Jin, Jelani Nelson, and Kewen Wu. An Improved Sketching Algorithm for Edit Distance. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 45:1-45:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{jin_et_al:LIPIcs.STACS.2021.45,
  author =	{Jin, Ce and Nelson, Jelani and Wu, Kewen},
  title =	{{An Improved Sketching Algorithm for Edit Distance}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{45:1--45:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.45},
  URN =		{urn:nbn:de:0030-drops-136905},
  doi =		{10.4230/LIPIcs.STACS.2021.45},
  annote =	{Keywords: edit distance, sketching}
}
Document
Locality Sensitive Hashing for Efficient Similar Polygon Retrieval

Authors: Haim Kaplan and Jay Tenenbaum


Abstract
Locality Sensitive Hashing (LSH) is an effective method of indexing a set of items to support efficient nearest neighbors queries in high-dimensional spaces. The basic idea of LSH is that similar items should produce hash collisions with higher probability than dissimilar items. We study LSH for (not necessarily convex) polygons, and use it to give efficient data structures for similar shape retrieval. Arkin et al. [Arkin et al., 1991] represent polygons by their "turning function" - a function which follows the angle between the polygon’s tangent and the x-axis while traversing the perimeter of the polygon. They define the distance between polygons to be variations of the L_p (for p = 1,2) distance between their turning functions. This metric is invariant under translation, rotation and scaling (and the selection of the initial point on the perimeter) and therefore models well the intuitive notion of shape resemblance. We develop and analyze LSH near neighbor data structures for several variations of the L_p distance for functions (for p = 1,2). By applying our schemes to the turning functions of a collection of polygons we obtain efficient near neighbor LSH-based structures for polygons. To tune our structures to turning functions of polygons, we prove some new properties of these turning functions that may be of independent interest. As part of our analysis, we address the following problem which is of independent interest. Find the vertical translation of a function f that is closest in L₁ distance to a function g. We prove tight bounds on the approximation guarantee obtained by the translation which is equal to the difference between the averages of g and f.
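
The sketch below illustrates the turning-function representation the abstract builds on: the direction of the boundary tangent as a step function of normalised arc length, compared here by a crudely sampled L₁ distance. The two polygons are made-up examples, and the full metric of Arkin et al. additionally minimises over rotations and starting points, which this sketch omits.

# Minimal sketch (not from the paper): the "turning function" of a polygon and a
# sampled L1 distance between two such step functions.
import math

def turning_function(polygon):
    """Return breakpoints (s_i, theta_i): on arc-length interval [s_i, s_{i+1})
    the tangent direction is theta_i (radians, accumulated turning)."""
    n = len(polygon)
    lengths, angles = [], []
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        lengths.append(math.hypot(x1 - x0, y1 - y0))
        angles.append(math.atan2(y1 - y0, x1 - x0))
    theta = [angles[0]]
    for i in range(1, n):
        turn = angles[i] - angles[i - 1]
        while turn <= -math.pi:          # normalise the turn to (-pi, pi]
            turn += 2 * math.pi
        while turn > math.pi:
            turn -= 2 * math.pi
        theta.append(theta[-1] + turn)
    per = sum(lengths)
    s, breakpoints = 0.0, []
    for i in range(n):
        breakpoints.append((s, theta[i]))
        s += lengths[i] / per
    return breakpoints

def sampled_l1(f, g, samples=10000):
    """Approximate the L1 distance between two turning (step) functions."""
    def value(bp, s):
        v = bp[0][1]
        for si, th in bp:
            if si <= s:
                v = th
        return v
    return sum(abs(value(f, (i + 0.5) / samples) - value(g, (i + 0.5) / samples))
               for i in range(samples)) / samples

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rectangle = [(0, 0), (2, 0), (2, 1), (0, 1)]
print(sampled_l1(turning_function(square), turning_function(rectangle)))  # ~ pi/12 ~ 0.26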

Cite as

Haim Kaplan and Jay Tenenbaum. Locality Sensitive Hashing for Efficient Similar Polygon Retrieval. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 46:1-46:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kaplan_et_al:LIPIcs.STACS.2021.46,
  author =	{Kaplan, Haim and Tenenbaum, Jay},
  title =	{{Locality Sensitive Hashing for Efficient Similar Polygon Retrieval}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{46:1--46:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.46},
  URN =		{urn:nbn:de:0030-drops-136910},
  doi =		{10.4230/LIPIcs.STACS.2021.46},
  annote =	{Keywords: Locality sensitive hashing, polygons, turning function, L\_p distance, nearest neighbors, similarity search}
}
Document
Binary Matrix Completion Under Diameter Constraints

Authors: Tomohiro Koana, Vincent Froese, and Rolf Niedermeier


Abstract
We thoroughly study a novel but basic combinatorial matrix completion problem: Given a binary incomplete matrix, fill in the missing entries so that the resulting matrix has a specified maximum diameter (that is, upper-bounding the maximum Hamming distance between any two rows of the completed matrix) as well as a specified minimum Hamming distance between any two of the matrix rows. This scenario is closely related to consensus string problems as well as to recently studied clustering problems on incomplete data. We obtain an almost complete picture concerning the complexity landscape (P vs NP) regarding the diameter constraints and regarding the number of missing entries per row of the incomplete matrix. We develop polynomial-time algorithms for maximum diameter three, which are based on Deza’s theorem [Discret. Math. 1973, J. Comb. Theory, Ser. B 1974] from extremal set theory. In this way, we also provide one of the rare links between sunflower techniques and stringology. On the negative side, we prove NP-hardness for diameter at least four. For the number of missing entries per row, we show polynomial-time solvability when there is only one missing entry and NP-hardness when there can be at least two missing entries. In general, our algorithms heavily rely on Deza’s theorem and the correspondingly identified sunflower structures pave the way towards solutions based on computing graph factors and solving 2-SAT instances.
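
As a concrete, exponential-time illustration of the problem statement, the sketch below tries all 0/1 fillings of the missing entries of a tiny binary matrix and checks the two row-distance constraints: Hamming diameter at most a given bound and pairwise Hamming distance at least a given bound. The instance is hypothetical and the brute force is, of course, not one of the paper's algorithms.

# Minimal sketch (not from the paper): brute-force completion of a tiny binary
# matrix with missing entries (None), subject to a maximum Hamming diameter and
# a minimum pairwise Hamming distance between rows.
from itertools import product

def hamming(r1, r2):
    return sum(a != b for a, b in zip(r1, r2))

def complete(matrix, max_diameter, min_distance):
    holes = [(i, j) for i, row in enumerate(matrix)
             for j, x in enumerate(row) if x is None]
    for bits in product((0, 1), repeat=len(holes)):
        m = [row[:] for row in matrix]
        for (i, j), b in zip(holes, bits):
            m[i][j] = b
        dists = [hamming(m[a], m[b])
                 for a in range(len(m)) for b in range(a + 1, len(m))]
        if max(dists) <= max_diameter and min(dists) >= min_distance:
            return m
    return None

# Hypothetical instance: one missing entry, diameter at most 2, min distance 1.
matrix = [[0, 0, 1, 1],
          [0, 1, 1, None],
          [0, 1, 0, 1]]
print(complete(matrix, max_diameter=2, min_distance=1))  # prints one valid completion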

Cite as

Tomohiro Koana, Vincent Froese, and Rolf Niedermeier. Binary Matrix Completion Under Diameter Constraints. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 47:1-47:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{koana_et_al:LIPIcs.STACS.2021.47,
  author =	{Koana, Tomohiro and Froese, Vincent and Niedermeier, Rolf},
  title =	{{Binary Matrix Completion Under Diameter Constraints}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{47:1--47:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.47},
  URN =		{urn:nbn:de:0030-drops-136925},
  doi =		{10.4230/LIPIcs.STACS.2021.47},
  annote =	{Keywords: sunflowers, binary matrices, Hamming distance, stringology, consensus problems, complexity dichotomy, combinatorial algorithms, graph factors, 2-Sat, Hamming radius}
}
Document
Absorbing Patterns in BST-Like Expression-Trees

Authors: Florent Koechlin and Pablo Rotondo


Abstract
In this article we study the effect of simple semantic reductions on random BST-like expression-trees. Such random unary-binary expression-trees are often used in benchmarks for model-checking tools. We consider the reduction induced by an absorbing pattern for some given operator ⊛, which we apply bottom-up, producing an equivalent (and smaller) tree-expression. Our main result concerns the expected size of a random tree, of given input size n → ∞, after reduction. We show that there are two different thresholds, leading to a total of five regimes, ranging from no significant reduction at all, to almost complete reduction. These regimes are completely characterized according to the probability of the absorbing operator. Our results prove that random BST-like trees have to be considered with care, and that they offer a richer range of behaviours than uniform random trees.
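
The reduction itself is simple to state in code: traverse the expression tree bottom-up and collapse any node labelled by the absorbing operator as soon as one of its children has reduced to the absorbing leaf. The sketch below does this for a made-up example with '*' as the operator and '0' as its absorbing pattern; the paper's contribution is the probabilistic analysis of the resulting size, not this procedure.

# Minimal sketch (not from the paper): bottom-up reduction of an expression tree
# by an absorbing pattern.  Here '*' has the absorbing leaf '0' (0 * e = e * 0 = 0),
# so any '*'-node with a child reduced to '0' collapses to '0'.
# Trees are nested tuples: a leaf is a string, an internal node is
# (operator, child, ...) with one or two children.

ABSORBING_OP, ABSORBING_LEAF = '*', '0'

def reduce_tree(t):
    if isinstance(t, str):            # leaf
        return t
    op, *children = t
    reduced = [reduce_tree(c) for c in children]
    if op == ABSORBING_OP and ABSORBING_LEAF in reduced:
        return ABSORBING_LEAF         # the whole subtree is absorbed
    return (op, *reduced)

def size(t):
    return 1 if isinstance(t, str) else 1 + sum(size(c) for c in t[1:])

expr = ('+', ('*', '0', ('+', 'x', 'y')), ('-', 'x'))
small = reduce_tree(expr)
print(small, size(expr), '->', size(small))   # ('+', '0', ('-', 'x')) 8 -> 4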

Cite as

Florent Koechlin and Pablo Rotondo. Absorbing Patterns in BST-Like Expression-Trees. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 48:1-48:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{koechlin_et_al:LIPIcs.STACS.2021.48,
  author =	{Koechlin, Florent and Rotondo, Pablo},
  title =	{{Absorbing Patterns in BST-Like Expression-Trees}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{48:1--48:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.48},
  URN =		{urn:nbn:de:0030-drops-136933},
  doi =		{10.4230/LIPIcs.STACS.2021.48},
  annote =	{Keywords: BST trees, absorbing pattern, reduction, analytic combinatorics}
}
Document
Cluster Editing Parameterized Above Modification-Disjoint P₃-Packings

Authors: Shaohua Li, Marcin Pilipczuk, and Manuel Sorge


Abstract
Given a graph G = (V,E) and an integer k, the Cluster Editing problem asks whether we can transform G into a union of vertex-disjoint cliques by at most k modifications (edge deletions or insertions). In this paper, we study the following variant of Cluster Editing. We are given a graph G = (V,E), a packing ℋ of modification-disjoint induced P₃s (no pair of P₃s in ℋ share an edge or non-edge) and an integer 𝓁. The task is to decide whether G can be transformed into a union of vertex-disjoint cliques by at most 𝓁+|ℋ| modifications (edge deletions or insertions). We show that this problem is NP-hard even when 𝓁 = 0 (in which case the problem asks to turn G into a disjoint union of cliques by performing exactly one edge deletion or insertion per element of ℋ) and when each vertex is in at most 23 P₃s of the packing. This answers negatively a question of van Bevern, Froese, and Komusiewicz (CSR 2016, ToCS 2018), repeated by C. Komusiewicz at Shonan meeting no. 144 in March 2019. We then initiate the study to find the largest integer c such that the problem remains tractable when restricting to packings such that each vertex is in at most c packed P₃s. Van Bevern et al. showed that the case c = 1 is fixed-parameter tractable with respect to 𝓁 and we show that the case c = 2 is solvable in |V|^{2𝓁 + O(1)} time.
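
To pin down the target property, recall that a graph is a union of vertex-disjoint cliques exactly when each of its connected components is a clique. The sketch below applies a set of edge modifications (each one toggles a vertex pair) and checks that property on a hypothetical instance; it only illustrates the Cluster Editing objective, not the packing-based parameterization studied in the paper.

# Minimal sketch (not from the paper): apply edge modifications (toggling
# presence/absence of a pair) and check the Cluster Editing target property,
# i.e. that every connected component of the result is a clique.
from itertools import combinations

def apply_modifications(edges, modifications):
    e = {frozenset(p) for p in edges}
    for p in modifications:          # each modification toggles one pair
        e ^= {frozenset(p)}
    return e

def is_cluster_graph(n, edge_set):
    adj = {v: set() for v in range(n)}
    for e in edge_set:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    seen = set()
    for s in range(n):               # components via DFS, then clique check
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        if any(v not in adj[u] for u, v in combinations(comp, 2)):
            return False
    return True

# Hypothetical instance: a path on 4 vertices becomes a cluster graph after
# deleting its middle edge.
edges = [(0, 1), (1, 2), (2, 3)]
print(is_cluster_graph(4, apply_modifications(edges, [])))          # False
print(is_cluster_graph(4, apply_modifications(edges, [(1, 2)])))    # True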

Cite as

Shaohua Li, Marcin Pilipczuk, and Manuel Sorge. Cluster Editing Parameterized Above Modification-Disjoint P₃-Packings. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 49:1-49:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.STACS.2021.49,
  author =	{Li, Shaohua and Pilipczuk, Marcin and Sorge, Manuel},
  title =	{{Cluster Editing Parameterized Above Modification-Disjoint P₃-Packings}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{49:1--49:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.49},
  URN =		{urn:nbn:de:0030-drops-136945},
  doi =		{10.4230/LIPIcs.STACS.2021.49},
  annote =	{Keywords: Graph algorithms, fixed-parameter tractability, parameterized complexity}
}
Document
Exploiting Dense Structures in Parameterized Complexity

Authors: William Lochet, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi


Abstract
Over the past few decades, the study of dense structures from the perspective of approximation algorithms has become a wide area of research. However, from the viewpoint of parameterized algorithms, this area is largely unexplored. In particular, properties of random samples have been successfully deployed to design approximation schemes for a number of fundamental problems on dense structures [Arora et al. FOCS 1995, Goldreich et al. FOCS 1996, Giotis and Guruswami SODA 2006, Karpinski and Schudy STOC 2009]. In this paper, we fill this gap and harness the power of random samples as well as structure theory to design kernelization algorithms as well as parameterized algorithms on dense structures. In particular, we obtain linear vertex kernels for Edge-Disjoint Paths, Edge Odd Cycle Transversal, Minimum Bisection, d-Way Cut, Multiway Cut and Multicut on everywhere dense graphs. In fact, these kernels are obtained by designing a polynomial-time algorithm when the corresponding parameter is at most Ω(n). Additionally, we obtain a cubic kernel for Vertex-Disjoint Paths on everywhere dense graphs. In addition to kernelization results, we obtain randomized subexponential-time parameterized algorithms for Edge Odd Cycle Transversal, Minimum Bisection, and d-Way Cut. Finally, we show how all of our results (as well as EPASes for these problems) can be de-randomized.

Cite as

William Lochet, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi. Exploiting Dense Structures in Parameterized Complexity. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 50:1-50:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{lochet_et_al:LIPIcs.STACS.2021.50,
  author =	{Lochet, William and Lokshtanov, Daniel and Saurabh, Saket and Zehavi, Meirav},
  title =	{{Exploiting Dense Structures in Parameterized Complexity}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{50:1--50:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.50},
  URN =		{urn:nbn:de:0030-drops-136950},
  doi =		{10.4230/LIPIcs.STACS.2021.50},
  annote =	{Keywords: Dense graphs, disjoint paths, odd cycle transversal, kernels}
}
Document
Subgroup Membership in GL(2,Z)

Authors: Markus Lohrey


Abstract
It is shown that the subgroup membership problem for a virtually free group can be decided in polynomial time where all group elements are represented by so-called power words, i.e., words of the form p_1^{z_1} p_2^{z_2} ⋯ p_k^{z_k}. Here the p_i are explicit words over the generating set of the group and all z_i are binary encoded integers. As a corollary, it follows that the subgroup membership problem for the matrix group GL(2,ℤ) can be decided in polynomial time when all matrix entries are given in binary notation.
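
The power-word representation is easy to make concrete: each factor p_i^{z_i} with a binary-encoded exponent can be expanded by exponentiation by squaring, using O(log |z_i|) matrix multiplications. The sketch below does this for 2 x 2 integer matrices (the example matrices are standard elements of SL(2,ℤ)); entry sizes may still grow exponentially in the exponents' bit length, which is one reason the membership problem needs more than naive evaluation. This only illustrates the input format, not the paper's algorithm.

# Minimal sketch (not the membership algorithm from the paper): evaluating a
# power word p_1^{z_1} ... p_k^{z_k} over 2 x 2 integer matrices, handling the
# binary-encoded exponents by exponentiation by squaring.

def mat_mul(a, b):
    return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
            (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

def mat_inv_unimodular(a):
    det = a[0][0]*a[1][1] - a[0][1]*a[1][0]
    assert det in (1, -1), "only matrices in GL(2,Z) are invertible over Z"
    return ((det*a[1][1], -det*a[0][1]), (-det*a[1][0], det*a[0][0]))

def mat_pow(a, z):
    if z < 0:
        a, z = mat_inv_unimodular(a), -z
    result, base = ((1, 0), (0, 1)), a
    while z:                      # square-and-multiply over the bits of z
        if z & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        z >>= 1
    return result

def evaluate_power_word(word):
    """word is a list of (matrix, exponent) pairs."""
    result = ((1, 0), (0, 1))
    for p, z in word:
        result = mat_mul(result, mat_pow(p, z))
    return result

# Example elements of SL(2,Z): T = [[1,1],[0,1]], S = [[0,-1],[1,0]].
T = ((1, 1), (0, 1))
S = ((0, -1), (1, 0))
print(evaluate_power_word([(T, 2**20), (S, -3), (T, -5)]))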

Cite as

Markus Lohrey. Subgroup Membership in GL(2,Z). In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 51:1-51:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{lohrey:LIPIcs.STACS.2021.51,
  author =	{Lohrey, Markus},
  title =	{{Subgroup Membership in GL(2,Z)}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{51:1--51:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.51},
  URN =		{urn:nbn:de:0030-drops-136961},
  doi =		{10.4230/LIPIcs.STACS.2021.51},
  annote =	{Keywords: free groups, virtually free groups, subgroup membership, matrix groups}
}
Document
Lower Bounds for Graph-Walking Automata

Authors: Olga Martynova and Alexander Okhotin


Abstract
Graph-walking automata (GWA) traverse graphs by moving between the nodes following the edges, using a finite-state control to decide where to go next. It is known that every GWA can be transformed to a GWA that halts on every input, to a GWA returning to the initial node in order to accept, as well as to a reversible GWA. This paper establishes lower bounds on the state blow-up of these transformations: it is shown that making an n-state GWA traversing k-ary graphs return to the initial node requires at least 2(n-1)(k-3) states in the worst case; the same lower bound holds for the transformation to halting automata. Automata satisfying both properties at once must have at least 4(n-1)(k-3) states. A reversible automaton must have at least 4(n-1)(k-3)-1 states. These bounds are asymptotically tight to the upper bounds proved using the methods from the literature.

Cite as

Olga Martynova and Alexander Okhotin. Lower Bounds for Graph-Walking Automata. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 52:1-52:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{martynova_et_al:LIPIcs.STACS.2021.52,
  author =	{Martynova, Olga and Okhotin, Alexander},
  title =	{{Lower Bounds for Graph-Walking Automata}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{52:1--52:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.52},
  URN =		{urn:nbn:de:0030-drops-136974},
  doi =		{10.4230/LIPIcs.STACS.2021.52},
  annote =	{Keywords: Finite automata, graph-walking automata, halting, reversibility}
}
Document
An Improved Approximation Algorithm for the Maximum Weight Independent Set Problem in d-Claw Free Graphs

Authors: Meike Neuwohner


Abstract
In this paper, we consider the task of computing an independent set of maximum weight in a given d-claw free graph G = (V,E) equipped with a positive weight function w:V → ℝ^+. Here, d ≥ 2 is considered a constant. The previously best known approximation algorithm for this problem is the local improvement algorithm SquareImp proposed by Berman [Berman, 2000]. It achieves a performance ratio of d/2+ε in time 𝒪(|V(G)|^(d+1)⋅(|V(G)|+|E(G)|)⋅(d-1)²⋅(d/(2ε)+1)²) for any ε > 0, which has remained unimproved for the last twenty years. By considering a broader class of local improvements, we obtain an approximation ratio of d/2 - 1/63,700,992 + ε for any ε > 0 at the cost of an additional factor of 𝒪(|V(G)|^(d-1)²) in the running time. In particular, our result implies a polynomial-time d/2-approximation algorithm. Furthermore, the well-known reduction from the weighted k-Set Packing Problem to the Maximum Weight Independent Set Problem in (k+1)-claw free graphs provides a ((k+1)/2 - 1/63,700,992 + ε)-approximation algorithm for the weighted k-Set Packing Problem for any ε > 0. This improves on the previously best known approximation guarantee of (k+1)/2 + ε originating from the result of Berman [Berman, 2000].

Cite as

Meike Neuwohner. An Improved Approximation Algorithm for the Maximum Weight Independent Set Problem in d-Claw Free Graphs. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 53:1-53:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{neuwohner:LIPIcs.STACS.2021.53,
  author =	{Neuwohner, Meike},
  title =	{{An Improved Approximation Algorithm for the Maximum Weight Independent Set Problem in d-Claw Free Graphs}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{53:1--53:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.53},
  URN =		{urn:nbn:de:0030-drops-136982},
  doi =		{10.4230/LIPIcs.STACS.2021.53},
  annote =	{Keywords: d-Claw free Graphs, independent Set, local Improvement, k-Set Packing, weighted}
}
Document
Complexity of the List Homomorphism Problem in Hereditary Graph Classes

Authors: Karolina Okrasa and Paweł Rzążewski


Abstract
A homomorphism from a graph G to a graph H is an edge-preserving mapping from V(G) to V(H). For a fixed graph H, in the list homomorphism problem, denoted by LHom(H), we are given a graph G, whose every vertex v is equipped with a list L(v) ⊆ V(H). We ask if there exists a homomorphism f from G to H, in which f(v) ∈ L(v) for every v ∈ V(G). Feder, Hell, and Huang [JGT 2003] proved that LHom(H) is polynomial time-solvable if H is a so-called bi-arc-graph, and NP-complete otherwise. We are interested in the complexity of the LHom(H) problem in F-free graphs, i.e., graphs excluding a copy of some fixed graph F as an induced subgraph. It is known that if F is connected and is not a path nor a subdivided claw, then for every non-bi-arc graph the LHom(H) problem is NP-complete and cannot be solved in subexponential time, unless the ETH fails. We consider the remaining cases for connected graphs F. If F is a path, we exhibit a full dichotomy. We define a class called predacious graphs and show that if H is not predacious, then for every fixed t the LHom(H) problem can be solved in quasi-polynomial time in P_t-free graphs. On the other hand, if H is predacious, then there exists t, such that the existence of a subexponential-time algorithm for LHom(H) in P_t-free graphs would violate the ETH. If F is a subdivided claw, we show a full dichotomy in two important cases: for H being irreflexive (i.e., with no loops), and for H being reflexive (i.e., where every vertex has a loop). Unless the ETH fails, for irreflexive H the LHom(H) problem can be solved in subexponential time in graphs excluding a fixed subdivided claw if and only if H is non-predacious and triangle-free. On the other hand, if H is reflexive, then LHom(H) cannot be solved in subexponential time whenever H is not a bi-arc graph.
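
To fix the definitions, the sketch below is a brute-force backtracking decision procedure for LHom(H): given G, a graph H that may have loops, and a list L(v) ⊆ V(H) for every vertex of G, it searches for an edge-preserving mapping that respects the lists. It runs in exponential time in general and is unrelated to the dichotomy results of the paper; the instance is a toy example with H = K₂, i.e., list 2-colouring.

# Minimal sketch (not an algorithm from the paper): brute-force backtracking for
# LHom(H).  H is undirected and may have loops; a loop at u is given as (u, u).

def list_homomorphism_exists(g_vertices, g_edges, h_edges, lists):
    h_adj = set()
    for u, v in h_edges:
        h_adj.add((u, v))
        h_adj.add((v, u))
    g_adj = {v: set() for v in g_vertices}
    for u, v in g_edges:
        g_adj[u].add(v)
        g_adj[v].add(u)
    order = list(g_vertices)
    assignment = {}

    def extend(i):
        if i == len(order):
            return True
        v = order[i]
        for image in lists[v]:
            # every already-mapped neighbour must map to an H-neighbour of 'image'
            if all((image, assignment[u]) in h_adj
                   for u in g_adj[v] if u in assignment):
                assignment[v] = image
                if extend(i + 1):
                    return True
                del assignment[v]
        return False

    return extend(0)

# Hypothetical instance: H = K_2 (so LHom(H) is list 2-colouring), G = C_3.
g_vertices = [0, 1, 2]
g_edges = [(0, 1), (1, 2), (2, 0)]
h_edges = [('a', 'b')]
lists = {0: ['a', 'b'], 1: ['a', 'b'], 2: ['a', 'b']}
print(list_homomorphism_exists(g_vertices, g_edges, h_edges, lists))  # False: odd cycle
lists[2] = ['a']
g_edges_path = [(0, 1), (1, 2)]
print(list_homomorphism_exists(g_vertices, g_edges_path, h_edges, lists))  # True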

Cite as

Karolina Okrasa and Paweł Rzążewski. Complexity of the List Homomorphism Problem in Hereditary Graph Classes. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 54:1-54:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{okrasa_et_al:LIPIcs.STACS.2021.54,
  author =	{Okrasa, Karolina and Rz\k{a}\.{z}ewski, Pawe{\l}},
  title =	{{Complexity of the List Homomorphism Problem in Hereditary Graph Classes}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{54:1--54:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.54},
  URN =		{urn:nbn:de:0030-drops-136990},
  doi =		{10.4230/LIPIcs.STACS.2021.54},
  annote =	{Keywords: list homomorphism, fine-grained complexity, hereditary graph classes}
}
Document
Spectrum Preserving Short Cycle Removal on Regular Graphs

Authors: Pedro Paredes


Abstract
We describe a new method to remove short cycles on regular graphs while maintaining spectral bounds (the nontrivial eigenvalues of the adjacency matrix), as long as the graphs have certain combinatorial properties. These combinatorial properties are related to the number of short cycles and the distances between them, and are known to hold with high probability in uniformly random regular graphs. Using this method we can show two results involving high-girth spectral expander graphs. First, we show that given d ⩾ 3 and n, there exists an explicit distribution of d-regular Θ(n)-vertex graphs whose samples, with high probability, have girth Ω(log_{d-1} n) and are ε-near-Ramanujan, i.e., their eigenvalues are bounded in magnitude by 2√{d-1} + ε (excluding the single trivial eigenvalue d). Then, for every constant d ⩾ 3 and ε > 0, we give a deterministic poly(n)-time algorithm that outputs a d-regular graph on Θ(n) vertices that is ε-near-Ramanujan and has girth Ω(√{log n}), based on the work of [Mohanty et al., 2020].
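
The two properties combined in the abstract are straightforward to verify on a concrete small graph: the girth, and the ε-near-Ramanujan condition that every eigenvalue other than the trivial one d has magnitude at most 2√(d-1) + ε. The sketch below (using numpy for the spectrum) checks both on the Petersen graph, which is only a hand-made example and not one of the paper's constructions.

# Minimal sketch (not the construction from the paper): check girth and the
# near-Ramanujan eigenvalue bound for a small d-regular graph.
import math
import numpy as np

def girth(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = math.inf
    for (a, b) in edges:            # shortest cycle through edge (a, b):
        dist = {a: 0}               # BFS from a in the graph minus that edge
        queue = [a]
        while queue:
            u = queue.pop(0)
            for w in adj[u]:
                if (u, w) in ((a, b), (b, a)) or w in dist:
                    continue
                dist[w] = dist[u] + 1
                queue.append(w)
        if b in dist:
            best = min(best, dist[b] + 1)
    return best

def near_ramanujan(n, edges, d, eps):
    a = np.zeros((n, n))
    for u, v in edges:
        a[u, v] = a[v, u] = 1
    lam = np.sort(np.linalg.eigvalsh(a))
    nontrivial = lam[:-1]           # drop the single trivial eigenvalue d
    return np.max(np.abs(nontrivial)) <= 2 * math.sqrt(d - 1) + eps

# Petersen graph: 3-regular, girth 5, eigenvalues 3, 1, -2 (so Ramanujan).
petersen = ([(i, (i + 1) % 5) for i in range(5)] +
            [(i, i + 5) for i in range(5)] +
            [(5 + i, 5 + (i + 2) % 5) for i in range(5)])
print(girth(10, petersen), near_ramanujan(10, petersen, d=3, eps=0.1))  # 5 True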

Cite as

Pedro Paredes. Spectrum Preserving Short Cycle Removal on Regular Graphs. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 55:1-55:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{paredes:LIPIcs.STACS.2021.55,
  author =	{Paredes, Pedro},
  title =	{{Spectrum Preserving Short Cycle Removal on Regular Graphs}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{55:1--55:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.55},
  URN =		{urn:nbn:de:0030-drops-137000},
  doi =		{10.4230/LIPIcs.STACS.2021.55},
  annote =	{Keywords: Ramanujan Graphs, High Girth Regular Graphs}
}
Document
Fine-Grained Complexity of the List Homomorphism Problem: Feedback Vertex Set and Cutwidth

Authors: Marta Piecyk and Paweł Rzążewski


Abstract
For graphs G,H, a homomorphism from G to H is an edge-preserving mapping from V(G) to V(H). In the list homomorphism problem, denoted by LHom(H), we are given a graph G, whose every vertex v is equipped with a list L(v) ⊆ V(H), and we need to determine whether there exists a homomorphism from G to H which additionally respects the lists L. List homomorphisms are a natural generalization of (list) colorings. Very recently Okrasa, Piecyk, and Rzążewski [ESA 2020] studied the fine-grained complexity of the problem, parameterized by the treewidth of the instance graph G. They defined a new invariant i^*(H), and proved that for every relevant graph H, i.e., such that LHom(H) is NP-hard, this invariant is the correct base of the exponent in the running time of any algorithm solving the LHom(H) problem. In this paper we continue this direction and study the complexity of the problem under different parameterizations. As the first result, we show that i^*(H) is also the right complexity base if the parameter is the size of a minimum feedback vertex set of G, denoted by fvs(G). In particular, for every relevant graph H, the LHom(H) problem
- can be solved in time i^*(H)^fvs(G) ⋅ |V(G)|^𝒪(1), if a minimum feedback vertex set of G is given,
- cannot be solved in time (i^*(H) - ε)^fvs(G) ⋅ |V(G)|^𝒪(1), for any ε > 0, unless the SETH fails.
Then we turn our attention to a parameterization by the cutwidth ctw(G) of G. Jansen and Nederlof [TCS 2019] showed that List k-Coloring (i.e., LHom(K_k)) can be solved in time c^ctw(G) ⋅ |V(G)|^𝒪(1) for an absolute constant c, i.e., the base of the exponential function does not depend on the number of colors. Jansen asked whether this behavior extends to graph homomorphisms. As the main result of the paper, we answer the question in the negative. We define a new graph invariant mim^*(H), closely related to the size of a maximum induced matching in H, and prove that for all relevant graphs H, the LHom(H) problem cannot be solved in time (mim^*(H) - ε)^{ctw(G)} ⋅ |V(G)|^𝒪(1) for any ε > 0, unless the SETH fails. In particular, this implies that, assuming the SETH, there is no constant c such that for every odd cycle the non-list version of the problem can be solved in time c^ctw(G) ⋅ |V(G)|^𝒪(1).

Cite as

Marta Piecyk and Paweł Rzążewski. Fine-Grained Complexity of the List Homomorphism Problem: Feedback Vertex Set and Cutwidth. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 56:1-56:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{piecyk_et_al:LIPIcs.STACS.2021.56,
  author =	{Piecyk, Marta and Rz\k{a}\.{z}ewski, Pawe{\l}},
  title =	{{Fine-Grained Complexity of the List Homomorphism Problem: Feedback Vertex Set and Cutwidth}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{56:1--56:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.56},
  URN =		{urn:nbn:de:0030-drops-137012},
  doi =		{10.4230/LIPIcs.STACS.2021.56},
  annote =	{Keywords: list homomorphisms, fine-grained complexity, SETH, feedback vertex set, cutwidth}
}
Document
6-Uniform Maker-Breaker Game Is PSPACE-Complete

Authors: Md Lutfar Rahman and Thomas Watson


Abstract
In a STOC 1976 paper, Schaefer proved that it is PSPACE-complete to determine the winner of the so-called Maker-Breaker game on a given set system, even when every set has size at most 11. Since then, there has been no improvement on this result. We prove that the game remains PSPACE-complete even when every set has size 6.
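
Operationally, the game is easy to state: Maker and Breaker alternately claim unclaimed elements of the ground set, and Maker wins if she eventually owns every element of some set of the system. The memoized brute-force solver below decides small instances; it takes exponential time in general (the paper's point is PSPACE-hardness already for sets of size 6), and the tic-tac-toe instance is a classical toy example whose Maker-Breaker version is a first-player win.

# Minimal sketch (not from the paper): brute-force solver for the Maker-Breaker
# game on a small set system.  Maker moves first; Maker wins iff she can claim
# all elements of some set.
from functools import lru_cache

def maker_wins(ground_set, sets):
    ground = tuple(sorted(ground_set))
    family = tuple(frozenset(s) for s in sets)

    @lru_cache(maxsize=None)
    def win(maker_set, breaker_set, makers_turn):
        if any(s <= maker_set for s in family):
            return True                       # Maker already owns a full set
        if all(s & breaker_set for s in family):
            return False                      # every set is blocked by Breaker
        free = [x for x in ground if x not in maker_set and x not in breaker_set]
        if not free:
            return False
        if makers_turn:
            return any(win(maker_set | {x}, breaker_set, False) for x in free)
        return all(win(maker_set, breaker_set | {x}, True) for x in free)

    return win(frozenset(), frozenset(), True)

# Rows, columns and diagonals of 3x3 tic-tac-toe as the set system.
lines = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}, {0, 3, 6}, {1, 4, 7}, {2, 5, 8},
         {0, 4, 8}, {2, 4, 6}]
print(maker_wins(range(9), lines))  # True: Maker wins the Maker-Breaker version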

Cite as

Md Lutfar Rahman and Thomas Watson. 6-Uniform Maker-Breaker Game Is PSPACE-Complete. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 57:1-57:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{rahman_et_al:LIPIcs.STACS.2021.57,
  author =	{Rahman, Md Lutfar and Watson, Thomas},
  title =	{{6-Uniform Maker-Breaker Game Is PSPACE-Complete}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{57:1--57:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.57},
  URN =		{urn:nbn:de:0030-drops-137020},
  doi =		{10.4230/LIPIcs.STACS.2021.57},
  annote =	{Keywords: Game, Maker-Breaker, Complexity, Reduction, PSPACE-complete, NL-hard}
}
Document
Resolution with Symmetry Rule Applied to Linear Equations

Authors: Pascal Schweitzer and Constantin Seebach


Abstract
This paper considers the length of resolution proofs when using Krishnamurthy’s classic symmetry rules. We show that inconsistent linear equation systems of bounded width over a fixed finite field 𝔽_p with p a prime have, in their standard encoding as CNFs, polynomial-length resolution proofs when using the local symmetry rule (SRC-II). As a consequence it follows that the multipede instances for the graph isomorphism problem, encoded as CNF formulas, have polynomial-length resolution proofs. This contrasts with exponential lower bounds for individualization-refinement algorithms on these graphs. For the Cai-Fürer-Immerman graphs, for which Torán showed exponential lower bounds for resolution proofs (SAT 2013), we also show that already the global symmetry rule (SRC-I) suffices to allow for polynomial-length proofs.

Cite as

Pascal Schweitzer and Constantin Seebach. Resolution with Symmetry Rule Applied to Linear Equations. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 58:1-58:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{schweitzer_et_al:LIPIcs.STACS.2021.58,
  author =	{Schweitzer, Pascal and Seebach, Constantin},
  title =	{{Resolution with Symmetry Rule Applied to Linear Equations}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{58:1--58:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.58},
  URN =		{urn:nbn:de:0030-drops-137038},
  doi =		{10.4230/LIPIcs.STACS.2021.58},
  annote =	{Keywords: Logical Resolution, Symmetry Rule, Linear Equation System}
}
Document
Quantum Approximate Counting with Nonadaptive Grover Iterations

Authors: Ramgopal Venkateswaran and Ryan O'Donnell


Abstract
Approximate Counting refers to the problem where we are given query access to a function f : [N] → {0,1}, and we wish to estimate K = #{x : f(x) = 1} to within a factor of 1+ε (with high probability), while minimizing the number of queries. In the quantum setting, Approximate Counting can be done with O(min(√{N/ε}, √{N/K}/ε)) queries. It has recently been shown that this can be achieved by a simple algorithm that only uses "Grover iterations"; however, the algorithm performs these iterations adaptively. Motivated by concerns of computational simplicity, we consider algorithms that use Grover iterations with limited adaptivity. We show that algorithms using only nonadaptive Grover iterations can achieve O(√{N/ε}) query complexity, which is tight.
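
The statistic all of these algorithms work with is the Grover success probability: starting from the uniform superposition over [N], t Grover iterations make a measurement return a marked item with probability sin²((2t+1)θ), where sin θ = √(K/N), so a nonadaptive algorithm must fix its iteration counts in advance and infer K from the observed acceptance frequencies. The sketch below only simulates those frequencies classically for a hypothetical N and K; it is not the paper's estimator.

# Minimal sketch (not the paper's algorithm): classically simulate the
# acceptance frequencies a nonadaptive Grover-based counter would observe.
import math
import random

def grover_success_probability(N, K, t):
    theta = math.asin(math.sqrt(K / N))
    return math.sin((2 * t + 1) * theta) ** 2

def simulate_nonadaptive_samples(N, K, iteration_counts, shots, rng):
    """For each pre-chosen t, report the fraction of 'marked' measurement
    outcomes over the given number of shots (each shot costs t queries)."""
    freqs = []
    for t in iteration_counts:
        p = grover_success_probability(N, K, t)
        hits = sum(rng.random() < p for _ in range(shots))
        freqs.append(hits / shots)
    return freqs

rng = random.Random(0)
N, K = 2 ** 20, 37                      # hypothetical instance
ts = [1, 2, 4, 8, 16, 32, 64, 128]      # fixed (nonadaptive) iteration counts
for t, f in zip(ts, simulate_nonadaptive_samples(N, K, ts, 200, rng)):
    print(t, round(grover_success_probability(N, K, t), 4), round(f, 4))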

Cite as

Ramgopal Venkateswaran and Ryan O'Donnell. Quantum Approximate Counting with Nonadaptive Grover Iterations. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 59:1-59:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{venkateswaran_et_al:LIPIcs.STACS.2021.59,
  author =	{Venkateswaran, Ramgopal and O'Donnell, Ryan},
  title =	{{Quantum Approximate Counting with Nonadaptive Grover Iterations}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{59:1--59:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.59},
  URN =		{urn:nbn:de:0030-drops-137048},
  doi =		{10.4230/LIPIcs.STACS.2021.59},
  annote =	{Keywords: quantum approximate counting, Grover search}
}
