LIPIcs, Volume 274

31st Annual European Symposium on Algorithms (ESA 2023)




Event

ESA 2023, September 4-6, 2023, Amsterdam, The Netherlands

Editors

Inge Li Gørtz
  • Technical University of Denmark, Lyngby, Denmark
Martin Farach-Colton
  • Rutgers University, New Brunswick, NJ, USA
Simon J. Puglisi
  • University of Helsinki, Finland
Grzegorz Herman
  • Jagiellonian University, Kraków, Poland

Publication Details

  • Published: 2023-08-30
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-295-2
  • DBLP: db/conf/esa/esa2023

Documents

Complete Volume
LIPIcs, Volume 274, ESA 2023, Complete Volume

Authors: Inge Li Gørtz, Martin Farach-Colton, Simon J. Puglisi, and Grzegorz Herman


Abstract
LIPIcs, Volume 274, ESA 2023, Complete Volume

Cite as

31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 1-1700, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@Proceedings{grtz_et_al:LIPIcs.ESA.2023,
  title =	{{LIPIcs, Volume 274, ESA 2023, Complete Volume}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{1--1700},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023},
  URN =		{urn:nbn:de:0030-drops-186529},
  doi =		{10.4230/LIPIcs.ESA.2023},
  annote =	{Keywords: LIPIcs, Volume 274, ESA 2023, Complete Volume}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Inge Li Gørtz, Martin Farach-Colton, Simon J. Puglisi, and Grzegorz Herman


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 0:i-0:xxii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{grtz_et_al:LIPIcs.ESA.2023.0,
  author =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{0:i--0:xxii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.0},
  URN =		{urn:nbn:de:0030-drops-186535},
  doi =		{10.4230/LIPIcs.ESA.2023.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Invited Talk
On Hashing by (Random) Equations (Invited Talk)

Authors: Martin Dietzfelbinger


Abstract
The talk will consider aspects of the following setup: Assume for each (key) x from a set 𝒰 (the universe) a vector a_x = (a_{x,0},… ,a_{x,{m-1}}) has been chosen. Given a list Z = (z_i)_{i ∈ [m]} of vectors in {0,1}^r we obtain a mapping φ_Z: 𝒰 → {0,1}^r, x ↦ ⟨a_x,Z⟩ := ⨁_{i ∈ [m]} a_{x,i} ⋅ z_i, where ⨁ is bitwise XOR. The simplest way to create a data structure for calculating φ_Z is to store Z in an array Z[0..m-1] and answer a query for x by returning ⟨a_x,Z⟩. The length m of the array should be (1+ε)n for some small ε, and calculating this inner product should be fast. The focus of the talk is the case where for all or for most of the sets S ⊆ 𝒰 of a certain size n the vectors a_x, x ∈ S, are linearly independent. Choosing Z at random will lead to hash families of various degrees of independence. We will be mostly interested in the case where a_x, x ∈ 𝒰 are chosen independently at random from {0,1}^m, according to some distribution 𝒟. We wish to construct (static) retrieval data structures, which means that S ⊂ 𝒰 and some mapping f: S → {0,1}^r are given, and the task is to find Z such that the restriction of φ_Z to S is f. For creating such a data structure it is necessary to solve the linear system (a_x)_{x ∈ S} ⋅ Z = (f(x))_{x ∈ S} for Z. Two problems are central: Under what conditions on m and 𝒟 can we expect the vectors a_x, x ∈ S to be linearly independent, and how can we arrange things so that in this case the system can be solved fast, in particular in time close to linear (in n, assuming a reasonable machine model)? Solutions to these problems, some classical and others that have emerged only in recent years, will be discussed.
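
To make the setup concrete, the following small Python sketch (not from the talk) builds a toy retrieval data structure: it draws each a_x uniformly at random as an m-bit mask and solves the linear system over GF(2) by dense Gaussian elimination, standing in for the structured distributions 𝒟 and near-linear-time solvers that the talk discusses.

import random

def build_retrieval(keys, f, m, seed=0):
    """Toy construction: draw a_x uniformly from {0,1}^m (as an m-bit mask),
    then solve (a_x)_{x in S} * Z = (f(x))_{x in S} over GF(2) by Gaussian
    elimination. Returns (a, Z), or None if the chosen rows happen to be
    linearly dependent (retry with another seed in that case)."""
    rng = random.Random(seed)
    a = {x: rng.getrandbits(m) for x in keys}
    rows = [[a[x], f[x]] for x in keys]           # [coefficient mask, r-bit RHS]
    pivots = []                                   # (pivot column, row index)
    for idx, row in enumerate(rows):
        for col, ridx in pivots:                  # eliminate earlier pivot columns
            if row[0] >> col & 1:
                row[0] ^= rows[ridx][0]
                row[1] ^= rows[ridx][1]
        if row[0] == 0:
            return None                           # rows not linearly independent
        pivots.append((row[0].bit_length() - 1, idx))
    z = [0] * m                                   # the array Z[0..m-1]; free columns stay 0
    for col, ridx in reversed(pivots):            # back-substitution
        coeff, rhs = rows[ridx]
        z[col] = rhs
        for j in range(m):
            if j != col and (coeff >> j & 1):
                z[col] ^= z[j]
    return a, z

def query(a, z, x):
    """Evaluate phi_Z(x) = XOR of z_i over all i with a_{x,i} = 1."""
    out = 0
    for i, zi in enumerate(z):
        if a[x] >> i & 1:
            out ^= zi
    return out

# Example: store a 2-bit value per key with m = 8 equation slots.
keys = ["ant", "bee", "cow", "dog"]
f = {"ant": 3, "bee": 1, "cow": 2, "dog": 0}
for seed in range(20):                            # retry until the rows are independent
    built = build_retrieval(keys, f, m=8, seed=seed)
    if built:
        a, z = built
        assert all(query(a, z, k) == f[k] for k in keys)
        break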

Cite as

Martin Dietzfelbinger. On Hashing by (Random) Equations (Invited Talk). In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, p. 1:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{dietzfelbinger:LIPIcs.ESA.2023.1,
  author =	{Dietzfelbinger, Martin},
  title =	{{On Hashing by (Random) Equations}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{1:1--1:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.1},
  URN =		{urn:nbn:de:0030-drops-186545},
  doi =		{10.4230/LIPIcs.ESA.2023.1},
  annote =	{Keywords: Hashing, Retrieval, Linear equations, Randomness}
}
On Diameter Approximation in Directed Graphs

Authors: Amir Abboud, Mina Dalirrooyfard, Ray Li, and Virginia Vassilevska Williams


Abstract
Computing the diameter of a graph, i.e. the largest distance, is a fundamental problem that is central in fine-grained complexity. In undirected graphs, the Strong Exponential Time Hypothesis (SETH) yields a lower bound on the time vs. approximation trade-off that is quite close to the upper bounds. In directed graphs, however, where only some of the upper bounds apply, much larger gaps remain. Since d(u,v) may not be the same as d(v,u), there are multiple ways to define the problem, the two most natural being the (one-way) diameter (max_(u,v) d(u,v)) and the roundtrip diameter (max_{u,v} d(u,v)+d(v,u)). In this paper we make progress on the outstanding open question for each of them. - We design the first algorithm for diameter in sparse directed graphs to achieve n^{1.5-ε} time with an approximation factor better than 2. The new upper bound trade-off makes the directed case appear more similar to the undirected case. Notably, this is the first algorithm for diameter in sparse graphs that benefits from fast matrix multiplication. - We design new hardness reductions separating roundtrip diameter from directed and undirected diameter. In particular, a 1.5-approximation in subquadratic time would refute the All-Nodes k-Cycle hypothesis, and any (2-ε)-approximation would imply a breakthrough algorithm for approximate 𝓁_∞-Closest-Pair. Notably, these are the first conditional lower bounds for diameter that are not based on SETH.
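
For reference, both objectives can be computed exactly in an unweighted digraph by running one BFS per vertex; this n-BFS baseline is what the approximation algorithms above aim to beat. A minimal sketch, not taken from the paper:

from collections import deque

def bfs_dist(adj, s):
    """Single-source shortest path lengths in an unweighted digraph."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameters(adj):
    """Exact one-way diameter max_{u,v} d(u,v) and roundtrip diameter
    max_{u,v} d(u,v)+d(v,u), assuming the graph is strongly connected."""
    nodes = list(adj)
    d = {u: bfs_dist(adj, u) for u in nodes}      # all-pairs via n BFS runs
    one_way = max(d[u][v] for u in nodes for v in nodes)
    roundtrip = max(d[u][v] + d[v][u] for u in nodes for v in nodes)
    return one_way, roundtrip

# diameters({0: [1], 1: [2], 2: [0]})  # directed triangle: (2, 3)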

Cite as

Amir Abboud, Mina Dalirrooyfard, Ray Li, and Virginia Vassilevska Williams. On Diameter Approximation in Directed Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 2:1-2:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{abboud_et_al:LIPIcs.ESA.2023.2,
  author =	{Abboud, Amir and Dalirrooyfard, Mina and Li, Ray and Vassilevska Williams, Virginia},
  title =	{{On Diameter Approximation in Directed Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{2:1--2:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.2},
  URN =		{urn:nbn:de:0030-drops-186552},
  doi =		{10.4230/LIPIcs.ESA.2023.2},
  annote =	{Keywords: Diameter, Directed Graphs, Approximation Algorithms, Fine-grained complexity}
}
Can You Solve Closest String Faster Than Exhaustive Search?

Authors: Amir Abboud, Nick Fischer, Elazar Goldenberg, Karthik C. S., and Ron Safier


Abstract
We study the fundamental problem of finding the best string to represent a given set, in the form of the Closest String problem: Given a set X ⊆ Σ^d of n strings, find the string x^* minimizing the radius of the smallest Hamming ball around x^* that encloses all the strings in X. In this paper, we investigate whether the Closest String problem admits algorithms that are faster than the trivial exhaustive search algorithm. We obtain the following results for the two natural versions of the problem: - In the continuous Closest String problem, the goal is to find the solution string x^* anywhere in Σ^d. For binary strings, the exhaustive search algorithm runs in time O(2^d poly(nd)) and we prove that it cannot be improved to time O(2^{(1-ε) d} poly(nd)), for any ε > 0, unless the Strong Exponential Time Hypothesis fails. - In the discrete Closest String problem, x^* is required to be in the input set X. While this problem is clearly in polynomial time, its fine-grained complexity has been pinpointed to be quadratic time n^{2 ± o(1)} whenever the dimension is ω(log n) < d < n^o(1). We complement this known hardness result with new algorithms, proving essentially that whenever d falls out of this hard range, the discrete Closest String problem can be solved faster than exhaustive search. In the small-d regime, our algorithm is based on a novel application of the inclusion-exclusion principle. Interestingly, all of our results apply (and some are even stronger) to the natural dual of the Closest String problem, called the Remotest String problem, where the task is to find a string maximizing the Hamming distance to all the strings in X.
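
For intuition, the trivial exhaustive search for the continuous binary case, which the first result above shows cannot be improved to O(2^{(1-ε)d} poly(nd)) under SETH, looks as follows (a sketch, not code from the paper):

from itertools import product

def closest_string_binary(X):
    """Continuous binary Closest String by exhaustive search:
    X is a list of equal-length strings over {'0','1'}.
    Returns (center, radius) minimizing the maximum Hamming distance."""
    d = len(X[0])
    best_center, best_radius = None, d + 1
    for bits in product("01", repeat=d):          # all 2^d candidate centers
        center = "".join(bits)
        radius = max(sum(a != b for a, b in zip(center, x)) for x in X)
        if radius < best_radius:
            best_center, best_radius = center, radius
    return best_center, best_radius

# closest_string_binary(["0011", "0101", "0110"])  # returns ("0111", 1)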

Cite as

Amir Abboud, Nick Fischer, Elazar Goldenberg, Karthik C. S., and Ron Safier. Can You Solve Closest String Faster Than Exhaustive Search?. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 3:1-3:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{abboud_et_al:LIPIcs.ESA.2023.3,
  author =	{Abboud, Amir and Fischer, Nick and Goldenberg, Elazar and Karthik C. S. and Safier, Ron},
  title =	{{Can You Solve Closest String Faster Than Exhaustive Search?}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{3:1--3:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.3},
  URN =		{urn:nbn:de:0030-drops-186566},
  doi =		{10.4230/LIPIcs.ESA.2023.3},
  annote =	{Keywords: Closest string, fine-grained complexity, SETH, inclusion-exclusion}
}
What Else Can Voronoi Diagrams Do for Diameter in Planar Graphs?

Authors: Amir Abboud, Shay Mozes, and Oren Weimann


Abstract
The Voronoi diagrams technique, introduced by Cabello [SODA'17] to compute the diameter of planar graphs in subquadratic time, has revolutionized the field of distance computations in planar graphs. We present novel applications of this technique in static, fault-tolerant, and partially-dynamic undirected unweighted planar graphs, as well as some new limitations. - In the static case, we give n^{3+o(1)}/D² and Õ(n⋅D²) time algorithms for computing the diameter of a planar graph G with diameter D. These are faster than the state-of-the-art Õ(n^{5/3}) [SODA'18] when D < n^{1/3} or D > n^{2/3}. - In the fault-tolerant setting, we give an n^{7/3+o(1)} time algorithm for computing the diameter of G⧵ {e} for every edge e in G (the replacement diameter problem). This should be compared with the naive Õ(n^{8/3}) time algorithm that runs the static algorithm for every edge. - In the incremental setting, where we wish to maintain the diameter while adding edges, we present an algorithm with total running time n^{7/3+o(1)}. This should be compared with the naive Õ(n^{8/3}) time algorithm that runs the static algorithm after every update. - We give a lower bound (conditioned on the SETH) ruling out an amortized O(n^{1-ε}) update time for maintaining the diameter in weighted planar graphs. The lower bound holds even for incremental or decremental updates. Our upper bounds are obtained by novel uses and manipulations of Voronoi diagrams. These include maintaining the Voronoi diagram when edges of the graph are deleted, allowing the sites of the Voronoi diagram to lie on a BFS tree level (rather than on the boundaries of an r-division), and a new reduction from incremental diameter to incremental distance oracles that could be of interest beyond planar graphs. Our lower bound is the first lower bound for a dynamic planar graph problem that is conditioned on the SETH.

Cite as

Amir Abboud, Shay Mozes, and Oren Weimann. What Else Can Voronoi Diagrams Do for Diameter in Planar Graphs? In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{abboud_et_al:LIPIcs.ESA.2023.4,
  author =	{Abboud, Amir and Mozes, Shay and Weimann, Oren},
  title =	{{What Else Can Voronoi Diagrams Do for Diameter in Planar Graphs?}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.4},
  URN =		{urn:nbn:de:0030-drops-186575},
  doi =		{10.4230/LIPIcs.ESA.2023.4},
  annote =	{Keywords: Planar graphs, diameter, dynamic graphs, fault tolerance}
}
Smooth Distance Approximation

Authors: Ahmed Abdelkader and David M. Mount


Abstract
Traditional problems in computational geometry involve aspects that are both discrete and continuous. One such example is nearest-neighbor searching, where the input is discrete, but the result depends on distances, which vary continuously. In many real-world applications of geometric data structures, it is assumed that query results are continuous, free of jump discontinuities. This is at odds with many modern data structures in computational geometry, which employ approximations to achieve efficiency, but these approximations often suffer from discontinuities. In this paper, we present a general method for transforming an approximate but discontinuous data structure into one that produces a smooth approximation, while matching the asymptotic space efficiencies of the original. We achieve this by adapting an approach called the partition-of-unity method, which smoothly blends multiple local approximations into a single smooth global approximation. We illustrate the use of this technique in a specific application of approximating the distance to the boundary of a convex polytope in ℝ^d from any point in its interior. We begin by developing a novel data structure that efficiently computes an absolute ε-approximation to this query in time O(log (1/ε)) using O(1/ε^{d/2}) storage space. Then, we proceed to apply the proposed partition-of-unity blending to guarantee the smoothness of the approximate distance field, establishing optimal asymptotic bounds on the norms of its gradient and Hessian.
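
The partition-of-unity idea itself is generic and easy to illustrate in one dimension: normalize smooth bump functions so that the weights sum to 1 and blend the local approximations with them. The sketch below only conveys that general idea; the paper's contribution is applying it to a space-efficient distance-to-boundary structure for convex polytopes.

import math

def bump(x, center, radius):
    """C-infinity bump supported on (center - radius, center + radius)."""
    t = (x - center) / radius
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

def pou_blend(x, pieces):
    """Partition-of-unity blend of local approximations.
    pieces: list of (center, radius, local_function); the bump supports must
    cover x. The blend is as smooth as the bumps and the local functions,
    even where the local functions disagree."""
    weights = [bump(x, c, r) for c, r, _ in pieces]
    total = sum(weights)
    return sum((w / total) * f(x) for w, (_, _, f) in zip(weights, pieces))

# Two local linearizations of sqrt, blended into one smooth approximation:
pieces = [(0.25, 0.6, lambda x: 0.5 + 1.0 * (x - 0.25)),   # tangent at x = 0.25
          (1.00, 0.9, lambda x: 1.0 + 0.5 * (x - 1.00))]   # tangent at x = 1.0
# pou_blend(0.6, pieces) lies between the two local values at x = 0.6.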

Cite as

Ahmed Abdelkader and David M. Mount. Smooth Distance Approximation. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 5:1-5:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{abdelkader_et_al:LIPIcs.ESA.2023.5,
  author =	{Abdelkader, Ahmed and Mount, David M.},
  title =	{{Smooth Distance Approximation}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{5:1--5:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.5},
  URN =		{urn:nbn:de:0030-drops-186589},
  doi =		{10.4230/LIPIcs.ESA.2023.5},
  annote =	{Keywords: Approximation algorithms, convexity, continuity, partition of unity}
}
Reconfiguration of Polygonal Subdivisions via Recombination

Authors: Hugo A. Akitaya, Andrei Gonczi, Diane L. Souvaine, Csaba D. Tóth, and Thomas Weighill


Abstract
Motivated by the problem of redistricting, we study area-preserving reconfigurations of connected subdivisions of a simple polygon. A connected subdivision of a polygon ℛ, called a district map, is a set of interior-disjoint connected polygons called districts whose union equals ℛ. We consider recombination as the reconfiguration move: it takes a subdivision and produces another by merging two adjacent districts and splitting them into two connected polygons of the same areas as the original districts. The complexity of a map is the number of vertices in the boundaries of its districts. Given two maps with k districts, with complexity O(n), and a perfect matching between districts of the same area in the two maps, we show constructively that (log n)^O(log k) recombination moves are sufficient to reconfigure one into the other. We also show that Ω(log n) recombination moves are sometimes necessary even when k = 3, thus providing a tight bound when k = 3.

Cite as

Hugo A. Akitaya, Andrei Gonczi, Diane L. Souvaine, Csaba D. Tóth, and Thomas Weighill. Reconfiguration of Polygonal Subdivisions via Recombination. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 6:1-6:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{a.akitaya_et_al:LIPIcs.ESA.2023.6,
  author =	{A. Akitaya, Hugo and Gonczi, Andrei and Souvaine, Diane L. and T\'{o}th, Csaba D. and Weighill, Thomas},
  title =	{{Reconfiguration of Polygonal Subdivisions via Recombination}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{6:1--6:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.6},
  URN =		{urn:nbn:de:0030-drops-186598},
  doi =		{10.4230/LIPIcs.ESA.2023.6},
  annote =	{Keywords: configuration space, gerrymandering, polygonal subdivision, recombination}
}
Faster Detours in Undirected Graphs

Authors: Shyan Akmal, Virginia Vassilevska Williams, Ryan Williams, and Zixuan Xu


Abstract
The k-Detour problem is a basic path-finding problem: given a graph G on n vertices, with specified nodes s and t, and a positive integer k, the goal is to determine if G has an st-path of length exactly dist(s,t) + k, where dist(s,t) is the length of a shortest path from s to t. The k-Detour problem is NP-hard when k is part of the input, so researchers have sought efficient parameterized algorithms for this task, running in f(k)poly(n) time, for f(⋅) as slow-growing as possible. We present faster algorithms for k-Detour in undirected graphs, running in 1.853^k poly(n) randomized and 4.082^kpoly(n) deterministic time. The previous fastest algorithms for this problem took 2.746^k poly(n) randomized and 6.523^k poly(n) deterministic time [Bezáková-Curticapean-Dell-Fomin, ICALP 2017]. Our algorithms use the fact that detecting a path of a given length in an undirected graph is easier if we are promised that the path belongs to what we call a "bipartitioned" subgraph, where the nodes are split into two parts and the path must satisfy constraints on those parts. Previously, this idea was used to obtain the fastest known algorithm for finding paths of length k in undirected graphs [Björklund-Husfeldt-Kaski-Koivisto, JCSS 2017], intuitively by looking for paths of length k in randomly bipartitioned subgraphs. Our algorithms for k-Detour stem from a new application of this idea, which does not involve choosing the bipartitioned subgraphs randomly. Our work has direct implications for the k-Longest Detour problem, another related path-finding problem. In this problem, we are given the same input as in k-Detour, but are now tasked with determining if G has an st-path of length at least dist(s,t)+k. Our results for k-Detour imply that we can solve k-Longest Detour in 3.432^k poly(n) randomized and 16.661^k poly(n) deterministic time. The previous fastest algorithms for this problem took 7.539^k poly(n) randomized and 42.549^k poly(n) deterministic time [Fomin et al., STACS 2022].
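
To make the problem statement concrete, here is a brute-force checker (exponential time, tiny graphs only; unrelated to the bipartitioning techniques used in the paper):

from collections import deque

def has_k_detour(adj, s, t, k):
    """Does the undirected graph (adjacency lists) have a simple s-t path of
    length exactly dist(s,t) + k? Brute force over all simple paths."""
    dist = {s: 0}                         # BFS for dist(s, t)
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    if t not in dist:
        return False
    target = dist[t] + k

    def dfs(u, length, visited):
        if u == t:
            return length == target
        if length >= target:
            return False
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                if dfs(v, length + 1, visited):
                    return True
                visited.discard(v)
        return False

    return dfs(s, 0, {s})

# Triangle plus a pendant vertex: edges 0-1, 1-2, 2-0, 2-3.
# dist(0, 3) = 2 via 0-2-3; the detour 0-1-2-3 has length 3 = dist + 1.
# has_k_detour({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}, 0, 3, 1)  # True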

Cite as

Shyan Akmal, Virginia Vassilevska Williams, Ryan Williams, and Zixuan Xu. Faster Detours in Undirected Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{akmal_et_al:LIPIcs.ESA.2023.7,
  author =	{Akmal, Shyan and Vassilevska Williams, Virginia and Williams, Ryan and Xu, Zixuan},
  title =	{{Faster Detours in Undirected Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{7:1--7:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.7},
  URN =		{urn:nbn:de:0030-drops-186601},
  doi =		{10.4230/LIPIcs.ESA.2023.7},
  annote =	{Keywords: path finding, detours, parameterized complexity, exact algorithms}
}
A Local-To-Global Theorem for Congested Shortest Paths

Authors: Shyan Akmal and Nicole Wein


Abstract
Amiri and Wargalla proved the following local-to-global theorem about shortest paths in directed acyclic graphs (DAGs): if G is a weighted DAG with the property that for each subset S of 3 nodes there is a shortest path containing every node in S, then there exists a pair (s,t) of nodes such that there is a shortest st-path containing every node in G. We extend this theorem to general graphs. For undirected graphs, we prove that the same theorem holds (up to a difference in the constant 3). For directed graphs, we provide a counterexample to the theorem (for any constant). However, we prove a roundtrip analogue of the theorem which guarantees there exists a pair (s,t) of nodes such that every node in G is contained in the union of a shortest st-path and a shortest ts-path. The original local-to-global theorem for DAGs has an application to the k-Shortest Paths with Congestion c ((k,c)-SPC) problem. In this problem, we are given a weighted graph G, together with k node pairs (s_1,t_1),… ,(s_k,t_k), and a positive integer c ≤ k, and tasked with finding a collection of paths P_1,… , P_k such that each P_i is a shortest path from s_i to t_i, and every node in the graph is on at most c paths P_i, or reporting that no such collection of paths exists. When c = k, there are no congestion constraints, and the problem can be solved easily by running a shortest path algorithm for each pair (s_i,t_i) independently. At the other extreme, when c = 1, the (k,c)-SPC problem is equivalent to the k-Disjoint Shortest Paths (k-DSP) problem, where the collection of shortest paths must be node-disjoint. For fixed k, k-DSP is polynomial-time solvable on DAGs and undirected graphs. Amiri and Wargalla interpolated between these two extreme values of c to obtain an algorithm for (k,c)-SPC on DAGs that runs in polynomial time when k-c is constant. In the same way, we prove that (k,c)-SPC can be solved in polynomial time on undirected graphs whenever k-c is constant. For directed graphs, because of our counterexample to the original theorem statement, our roundtrip local-to-global result does not imply such an algorithm for (k,c)-SPC. Even without an algorithmic application, our proof for directed graphs may be of broader interest because it characterizes intriguing structural properties of shortest paths in directed graphs.

Cite as

Shyan Akmal and Nicole Wein. A Local-To-Global Theorem for Congested Shortest Paths. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 8:1-8:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{akmal_et_al:LIPIcs.ESA.2023.8,
  author =	{Akmal, Shyan and Wein, Nicole},
  title =	{{A Local-To-Global Theorem for Congested Shortest Paths}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{8:1--8:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.8},
  URN =		{urn:nbn:de:0030-drops-186618},
  doi =		{10.4230/LIPIcs.ESA.2023.8},
  annote =	{Keywords: disjoint paths, shortest paths, congestion, parameterized complexity}
}
Axis-Parallel Right Angle Crossing Graphs

Authors: Patrizio Angelini, Michael A. Bekos, Julia Katheder, Michael Kaufmann, Maximilian Pfister, and Torsten Ueckerdt


Abstract
A RAC graph is one admitting a RAC drawing, that is, a polyline drawing in which each crossing occurs at a right angle. Originally motivated by psychological studies on readability of graph layouts, RAC graphs form one of the most prominent graph classes in beyond planarity. In this work, we study a subclass of RAC graphs, called axis-parallel RAC (or apRAC, for short), that restricts the crossings to pairs of axis-parallel edge-segments. apRAC drawings combine the readability of planar drawings with the clarity of (non-planar) orthogonal drawings. We consider these graphs both with and without bends. Our contribution is as follows: (i) We study inclusion relationships between apRAC and traditional RAC graphs. (ii) We establish bounds on the edge density of apRAC graphs. (iii) We show that every graph with maximum degree 8 is 2-bend apRAC and give a linear time drawing algorithm. Some of our results on apRAC graphs also improve the state of the art for general RAC graphs. We conclude our work with a list of open questions and a discussion of a natural generalization of the apRAC model.

Cite as

Patrizio Angelini, Michael A. Bekos, Julia Katheder, Michael Kaufmann, Maximilian Pfister, and Torsten Ueckerdt. Axis-Parallel Right Angle Crossing Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 9:1-9:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{angelini_et_al:LIPIcs.ESA.2023.9,
  author =	{Angelini, Patrizio and Bekos, Michael A. and Katheder, Julia and Kaufmann, Michael and Pfister, Maximilian and Ueckerdt, Torsten},
  title =	{{Axis-Parallel Right Angle Crossing Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{9:1--9:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.9},
  URN =		{urn:nbn:de:0030-drops-186623},
  doi =		{10.4230/LIPIcs.ESA.2023.9},
  annote =	{Keywords: Graph drawing, RAC graphs, Graph drawing algorithms}
}
(No) Quantum Space-Time Tradeoff for USTCON

Authors: Simon Apers, Stacey Jeffery, Galina Pass, and Michael Walter


Abstract
Undirected st-connectivity is important both for its applications in network problems, and for its theoretical connections with logspace complexity. Classically, a long line of work led to a time-space tradeoff of T = Õ(n²/S) for any S such that S = Ω(log(n)) and S = O(n²/m). Surprisingly, we show that quantumly there is no nontrivial time-space tradeoff: there is a quantum algorithm that achieves both optimal time Õ(n) and space O(log(n)) simultaneously. This improves on previous results, which required either O(log(n)) space and Õ(n^{1.5}) time, or Õ(n) space and time. To complement this, we show that there is a nontrivial time-space tradeoff when given a lower bound on the spectral gap of a corresponding random walk.

Cite as

Simon Apers, Stacey Jeffery, Galina Pass, and Michael Walter. (No) Quantum Space-Time Tradeoff for USTCON. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 10:1-10:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{apers_et_al:LIPIcs.ESA.2023.10,
  author =	{Apers, Simon and Jeffery, Stacey and Pass, Galina and Walter, Michael},
  title =	{{(No) Quantum Space-Time Tradeoff for USTCON}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{10:1--10:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.10},
  URN =		{urn:nbn:de:0030-drops-186636},
  doi =		{10.4230/LIPIcs.ESA.2023.10},
  annote =	{Keywords: Undirected st-connectivity, quantum walks, time-space tradeoff}
}
Exploration of Graphs with Excluded Minors

Authors: Júlia Baligács, Yann Disser, Irene Heinrich, and Pascal Schweitzer


Abstract
We study the online graph exploration problem proposed by Kalyanasundaram and Pruhs (1994) and prove a constant competitive ratio on minor-free graphs. This result encompasses and significantly extends the graph classes that were previously known to admit a constant competitive ratio. The main ingredient of our proof is that we find a connection between the performance of the particular exploration algorithm Blocking and the existence of light spanners. Conversely, we exploit this connection to construct light spanners of bounded genus graphs. In particular, we achieve a lightness that improves on the best known upper bound for genus g ≥ 1 and recovers the known tight bound for the planar case (g = 0).

Cite as

Júlia Baligács, Yann Disser, Irene Heinrich, and Pascal Schweitzer. Exploration of Graphs with Excluded Minors. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 11:1-11:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{baligacs_et_al:LIPIcs.ESA.2023.11,
  author =	{Balig\'{a}cs, J\'{u}lia and Disser, Yann and Heinrich, Irene and Schweitzer, Pascal},
  title =	{{Exploration of Graphs with Excluded Minors}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{11:1--11:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.11},
  URN =		{urn:nbn:de:0030-drops-186644},
  doi =		{10.4230/LIPIcs.ESA.2023.11},
  annote =	{Keywords: online algorithms, competitive analysis, graph exploration, graph spanners, minor-free graphs, bounded genus graphs}
}
Learning-Augmented Online TSP on Rings, Trees, Flowers and (Almost) Everywhere Else

Authors: Evripidis Bampis, Bruno Escoffier, Themis Gouleakis, Niklas Hahn, Kostas Lakis, Golnoosh Shahkarami, and Michalis Xefteris


Abstract
We study the Online Traveling Salesperson Problem (OLTSP) with predictions. In OLTSP, a sequence of initially unknown requests arrive over time at points (locations) of a metric space. The goal is, starting from a particular point of the metric space (the origin), to serve all these requests while minimizing the total time spent. The server moves with unit speed or is "waiting" (zero speed) at some location. We consider two variants: in the open variant, the goal is achieved when the last request is served. In the closed one, the server additionally has to return to the origin. We adopt a prediction model, introduced for OLTSP on the line [Gouleakis et al., 2023], in which the predictions correspond to the locations of the requests and extend it to more general metric spaces. We first propose an oracle-based algorithmic framework, inspired by previous work [Bampis et al., 2023]. This framework allows us to design online algorithms for general metric spaces that provide competitive ratio guarantees which, given perfect predictions, beat the best possible classical guarantee (consistency). Moreover, they degrade gracefully along with the increase in error (smoothness), but always within a constant factor of the best known competitive ratio in the classical case (robustness). Having reduced the problem to designing suitable efficient oracles, we describe how to achieve this for general metric spaces as well as specific metric spaces (rings, trees and flowers), the resulting algorithms being tractable in the latter case. The consistency guarantees of our algorithms are tight in almost all cases, and their smoothness guarantees only suffer a linear dependency on the error, which we show is necessary. Finally, we provide robustness guarantees improving previous results.

Cite as

Evripidis Bampis, Bruno Escoffier, Themis Gouleakis, Niklas Hahn, Kostas Lakis, Golnoosh Shahkarami, and Michalis Xefteris. Learning-Augmented Online TSP on Rings, Trees, Flowers and (Almost) Everywhere Else. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 12:1-12:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{bampis_et_al:LIPIcs.ESA.2023.12,
  author =	{Bampis, Evripidis and Escoffier, Bruno and Gouleakis, Themis and Hahn, Niklas and Lakis, Kostas and Shahkarami, Golnoosh and Xefteris, Michalis},
  title =	{{Learning-Augmented Online TSP on Rings, Trees, Flowers and (Almost) Everywhere Else}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{12:1--12:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.12},
  URN =		{urn:nbn:de:0030-drops-186659},
  doi =		{10.4230/LIPIcs.ESA.2023.12},
  annote =	{Keywords: TSP, Online algorithms, Learning-augmented algorithms, Algorithms with predictions, Competitive analysis}
}
A Parameterized Algorithm for Vertex Connectivity Survivable Network Design Problem with Uniform Demands

Authors: Jørgen Bang-Jensen, Kristine Vitting Klinkby, Pranabendu Misra, and Saket Saurabh


Abstract
In the Vertex Connectivity Survivable Network Design (VC-SNDP) problem, the input is a graph G and a function d: V(G) × V(G) → ℕ that encodes the vertex-connectivity demands between pairs of vertices. The objective is to find the smallest subgraph H of G that satisfies all these demands. It is a well-studied NP-complete problem that generalizes several network design problems. We consider the case of uniform demands, where for every vertex pair (u,v) the connectivity demand d(u,v) is a fixed integer κ. It is an important problem with wide applications. We study this problem in the realm of Parameterized Complexity. In this setting, in addition to G and d we are given an integer 𝓁 as the parameter and the objective is to determine if we can remove at least 𝓁 edges from G without violating any connectivity constraints. This was posed as an open problem by Bang-Jensen et al. [SODA 2018], who studied the edge-connectivity variant of the problem under the same settings. Using a powerful classification result of Lokshtanov et al. [ICALP 2018], Gutin et al. [JCSS 2019] recently showed that this problem admits a (non-uniform) FPT algorithm with an unspecified running time. Further, they also gave a (uniform) FPT algorithm for the case of κ = 2. In this paper we present a (uniform) FPT algorithm for any κ that runs in time 2^{O(κ² 𝓁⁴ log 𝓁)} ⋅ |V(G)|^O(1). Our algorithm is built upon new insights on vertex connectivity in graphs. Our main conceptual contribution is a novel graph decomposition called the Wheel decomposition. Informally, it is a partition of the edge set of a graph G, E(G) = X₁ ∪ X₂ ∪ … ∪ X_r, with the parts arranged in a cyclic order, such that each vertex v ∈ V(G) either has edges in at most two consecutive parts, or has edges in every part of this partition. The first kind of vertices can be thought of as the rim of the wheel, while the second kind form the hub. Additionally, the vertex cuts induced by these edge-sets in G have highly symmetric properties. Our main technical result, informally speaking, establishes that "nearly edge-minimal" κ-vertex connected graphs admit a wheel decomposition - a fact that can be exploited for designing algorithms. We believe that this decomposition is of independent interest and it could be a useful tool in resolving other open problems.

Cite as

Jørgen Bang-Jensen, Kristine Vitting Klinkby, Pranabendu Misra, and Saket Saurabh. A Parameterized Algorithm for Vertex Connectivity Survivable Network Design Problem with Uniform Demands. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 13:1-13:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{bangjensen_et_al:LIPIcs.ESA.2023.13,
  author =	{Bang-Jensen, J{\o}rgen and Klinkby, Kristine Vitting and Misra, Pranabendu and Saurabh, Saket},
  title =	{{A Parameterized Algorithm for Vertex Connectivity Survivable Network Design Problem with Uniform Demands}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{13:1--13:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.13},
  URN =		{urn:nbn:de:0030-drops-186663},
  doi =		{10.4230/LIPIcs.ESA.2023.13},
  annote =	{Keywords: Parameterized Complexity, Vertex Connectivity, Network Design}
}
Lyndon Arrays in Sublinear Time

Authors: Hideo Bannai and Jonas Ellert


Abstract
A Lyndon word is a string that is lexicographically smaller than all of its non-trivial suffixes. For example, airbus is a Lyndon word, but amtrak is not a Lyndon word due to its suffix ak. The Lyndon array stores the length of the longest Lyndon prefix of each suffix of a string. For a length-n string over a general ordered alphabet, the array can be computed in O(n) time (Bille et al., ICALP 2020). However, on a word-RAM of word-width w ≥ log₂ n, linear time is not optimal if the string is over integer alphabet {0, … , σ} with σ ≪ n. In this case, the string can be stored in O(n log σ) bits (or O(n / log_σ n) words) of memory, and reading it takes only O(n / log_σ n) time. We show that O(n / log_σ n) time and words of space suffice to compute the succinct 2n-bit version of the Lyndon array. The time is optimal for w = O(log n). The algorithm uses precomputed lookup tables to perform significant parts of the computation in constant time. This is possible due to properties of periodic substrings, which we carefully analyze to achieve the desired result. We envision that the algorithm has applications in the computation of runs (maximal periodic substrings), where the Lyndon array plays a central role in both theoretically and practically fast algorithms.
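
The definitions translate directly into a naive reference implementation (nowhere near the sublinear-time bound of the paper, but handy for checking small examples):

def is_lyndon(t):
    """A string is a Lyndon word iff it is strictly smaller than every
    non-trivial suffix of itself."""
    return all(t < t[j:] for j in range(1, len(t)))

def lyndon_array(s):
    """Entry i is the length of the longest Lyndon prefix of the suffix s[i:]."""
    n = len(s)
    return [max(l for l in range(1, n - i + 1) if is_lyndon(s[i:i + l]))
            for i in range(n)]

# lyndon_array("airbus") == [6, 2, 1, 3, 1, 1]   ("airbus" itself is Lyndon)
# lyndon_array("amtrak") == [4, 3, 1, 1, 2, 1]   ("amtr" is its longest Lyndon prefix)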

Cite as

Hideo Bannai and Jonas Ellert. Lyndon Arrays in Sublinear Time. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 14:1-14:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{bannai_et_al:LIPIcs.ESA.2023.14,
  author =	{Bannai, Hideo and Ellert, Jonas},
  title =	{{Lyndon Arrays in Sublinear Time}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{14:1--14:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.14},
  URN =		{urn:nbn:de:0030-drops-186670},
  doi =		{10.4230/LIPIcs.ESA.2023.14},
  annote =	{Keywords: Lyndon forest, Lyndon table, Lyndon array, sublinear time algorithms, word RAM algorithms, word packing, tabulation, lookup tables, periodicity}
}
Sorting Finite Automata via Partition Refinement

Authors: Ruben Becker, Manuel Cáceres, Davide Cenzato, Sung-Hwan Kim, Bojana Kodric, Francisco Olivares, and Nicola Prezza


Abstract
Wheeler nondeterministic finite automata (WNFAs) were introduced in (Gagie et al., TCS 2017) as a powerful generalization of prefix sorting from strings to labeled graphs. WNFAs admit optimal solutions to classic hard problems on labeled graphs and languages such as compression and regular expression matching. The problem of deciding whether a given NFA is Wheeler is known to be NP-complete (Gibney and Thankachan, ESA 2019). Recently, however, Alanko et al. (Information and Computation 2021) showed how to side-step this complexity by switching to preorders: letting Q be the set of states and δ the set of transitions, they provided an O(|δ|⋅|Q|²)-time algorithm computing a totally-ordered partition (i.e. equivalence relation) of the WNFA’s states such that (1) equivalent states recognize the same regular language, and (2) the order of (the classes of) non-equivalent states is consistent with any Wheeler order, when one exists. As a result, the output is a preorder of the states as useful for pattern matching as standard Wheeler orders. Further extensions of this line of work (Cotumaccio et al., SODA 2021 and DCC 2022) generalized these concepts to arbitrary NFAs by introducing co-lex partial preorders: in general, any NFA admits a partial preorder of its states reflecting the co-lexicographic order of their accepted strings; the smaller the width of such a preorder is, the faster regular expression matching queries can be performed. To date, the fastest algorithm for computing the smallest-width partial preorder on NFAs runs in O(|δ|² + |Q|^{5/2}) time (Cotumaccio, DCC 2022), while on DFAs the same task can be accomplished in O(min(|Q|²log|Q|, |δ|⋅|Q|)) time (Kim et al., CPM 2023). In this paper, we provide much more efficient solutions to the co-lex order computation problem. Our results are achieved by extending a classic algorithm for the relational coarsest partition refinement problem of Paige and Tarjan to work with ordered partitions. More specifically, we provide an O(|δ|log|Q|)-time algorithm computing a co-lex total preorder when the input is a Wheeler NFA, and an algorithm with the same time complexity computing the smallest-width co-lex partial order of any DFA. In addition, we present implementations of our algorithms and show that they are also very efficient in practice.

Cite as

Ruben Becker, Manuel Cáceres, Davide Cenzato, Sung-Hwan Kim, Bojana Kodric, Francisco Olivares, and Nicola Prezza. Sorting Finite Automata via Partition Refinement. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 15:1-15:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{becker_et_al:LIPIcs.ESA.2023.15,
  author =	{Becker, Ruben and C\'{a}ceres, Manuel and Cenzato, Davide and Kim, Sung-Hwan and Kodric, Bojana and Olivares, Francisco and Prezza, Nicola},
  title =	{{Sorting Finite Automata via Partition Refinement}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{15:1--15:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.15},
  URN =		{urn:nbn:de:0030-drops-186684},
  doi =		{10.4230/LIPIcs.ESA.2023.15},
  annote =	{Keywords: Wheeler automata, prefix sorting, pattern matching, graph compression, sorting, partition refinement}
}
Fully Polynomial-Time Algorithms Parameterized by Vertex Integrity Using Fast Matrix Multiplication

Authors: Matthias Bentert, Klaus Heeger, and Tomohiro Koana


Abstract
We study the computational complexity of several polynomial-time-solvable graph problems parameterized by vertex integrity, a measure of a graph’s vulnerability to vertex removal in terms of connectivity. Vertex integrity is the smallest number ι such that there is a set S of ι' ≤ ι vertices such that every connected component of G-S contains at most ι-ι' vertices. It is known that the vertex integrity lies between the well-studied parameters vertex cover number and tree-depth. Our work follows similar studies for vertex cover number [Alon and Yuster, ESA 2007] and tree-depth [Iwata, Ogasawara, and Ohsaka, STACS 2018]. Alon and Yuster designed algorithms for graphs with small vertex cover number using fast matrix multiplications. We demonstrate that fast matrix multiplication can also be effectively used when parameterizing by vertex integrity ι by developing efficient algorithms for problems including an O(ι^{ω-1}n)-time algorithm for Maximum Matching and an O(ι^{(ω-1)/2}n²) ⊆ O(ι^{0.687} n²)-time algorithm for All-Pairs Shortest Paths. These algorithms can be faster than previous algorithms parameterized by tree-depth, for which fast matrix multiplication is not known to be effective.
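
The parameter itself can be stated as a brute-force computation (exponential, tiny graphs only), which may help when reading the bounds above:

from itertools import combinations

def largest_component(adj, removed):
    """Size of the largest connected component of G - removed (undirected)."""
    seen = set(removed)
    best = 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        stack, size = [start], 0
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def vertex_integrity(adj):
    """iota = min over vertex sets S of |S| + (largest component of G - S)."""
    nodes = list(adj)
    best = len(nodes)                     # S = V(G) gives |S| + 0
    for k in range(len(nodes)):
        for S in combinations(nodes, k):
            best = min(best, k + largest_component(adj, S))
    return best

# Path on five vertices: removing the middle vertex leaves two components of
# size 2, so the vertex integrity is 1 + 2 = 3.
# vertex_integrity({0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]})  # == 3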

Cite as

Matthias Bentert, Klaus Heeger, and Tomohiro Koana. Fully Polynomial-Time Algorithms Parameterized by Vertex Integrity Using Fast Matrix Multiplication. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 16:1-16:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{bentert_et_al:LIPIcs.ESA.2023.16,
  author =	{Bentert, Matthias and Heeger, Klaus and Koana, Tomohiro},
  title =	{{Fully Polynomial-Time Algorithms Parameterized by Vertex Integrity Using Fast Matrix Multiplication}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{16:1--16:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.16},
  URN =		{urn:nbn:de:0030-drops-186692},
  doi =		{10.4230/LIPIcs.ESA.2023.16},
  annote =	{Keywords: FPT in P, Algebraic Algorithms, Adaptive Algorithms, Subgraph Detection, Matching, APSP}
}
Approximating Min-Diameter: Standard and Bichromatic

Authors: Aaron Berger, Jenny Kaufmann, and Virginia Vassilevska Williams


Abstract
The min-diameter of a directed graph G is a measure of the largest distance between nodes. It is equal to the maximum min-distance d_{min}(u,v) across all pairs u,v ∈ V(G), where d_{min}(u,v) = min(d(u,v), d(v,u)). Min-diameter approximation in directed graphs has attracted attention recently as an offshoot of the classical and well-studied diameter approximation problem. Our work provides a 3/2-approximation algorithm for min-diameter in DAGs running in time O(m^{1.426} n^{0.288}), and a faster almost-3/2-approximation variant which runs in time O(m^{0.713} n). (An almost-α-approximation algorithm determines the min-diameter to within a multiplicative factor of α plus constant additive error.) This is the first known algorithm to solve 3/2-approximation for min-diameter in sparse DAGs in truly subquadratic time O(m^{2-ε}) for ε > 0; previously only a 2-approximation was known. By a conditional lower bound result of [Abboud et al, SODA 2016], a better than 3/2-approximation can't be achieved in truly subquadratic time under the Strong Exponential Time Hypothesis (SETH), so our result is conditionally tight. We additionally obtain a new conditional lower bound for min-diameter approximation in general directed graphs, showing that under SETH, one cannot achieve an approximation factor below 2 in truly subquadratic time. Our work also presents the first study of approximating bichromatic min-diameter, which is the maximum min-distance between oppositely colored vertices in a 2-colored graph. We show that SETH implies that in DAGs, a better than 2 approximation cannot be achieved in truly subquadratic time, and that in general graphs, an approximation within a factor below 5/2 is similarly out of reach. We then obtain an O(m)-time algorithm which determines if bichromatic min-diameter is finite, and an almost-2-approximation algorithm for bichromatic min-diameter with runtime Õ(min(m^{4/3} n^{1/3}, m^{1/2} n^{3/2})).

Cite as

Aaron Berger, Jenny Kaufmann, and Virginia Vassilevska Williams. Approximating Min-Diameter: Standard and Bichromatic. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 17:1-17:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{berger_et_al:LIPIcs.ESA.2023.17,
  author =	{Berger, Aaron and Kaufmann, Jenny and Vassilevska Williams, Virginia},
  title =	{{Approximating Min-Diameter: Standard and Bichromatic}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{17:1--17:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.17},
  URN =		{urn:nbn:de:0030-drops-186705},
  doi =		{10.4230/LIPIcs.ESA.2023.17},
  annote =	{Keywords: diameter, min distances, fine-grained, approximation algorithm}
}
Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth

Authors: Benjamin Bergougnoux, Vera Chekan, Robert Ganian, Mamadou Moustapha Kanté, Matthias Mnich, Sang-il Oum, Michał Pilipczuk, and Erik Jan van Leeuwen


Abstract
Dynamic programming on various graph decompositions is one of the most fundamental techniques used in parameterized complexity. Unfortunately, even if we consider concepts as simple as path or tree decompositions, such dynamic programming uses space that is exponential in the decomposition’s width, and there are good reasons to believe that this is necessary. However, it has been shown that in graphs of low treedepth it is possible to design algorithms which achieve polynomial space complexity without requiring worse time complexity than their counterparts working on tree decompositions of bounded width. Here, treedepth is a graph parameter that, intuitively speaking, takes into account both the depth and the width of a tree decomposition of the graph, rather than the width alone. Motivated by the above, we consider graphs that admit clique expressions with bounded depth and label count, or equivalently, graphs of low shrubdepth. Here, shrubdepth is a bounded-depth analogue of cliquewidth, in the same way as treedepth is a bounded-depth analogue of treewidth. We show that also in this setting, bounding the depth of the decomposition is a deciding factor for improving the space complexity. More precisely, we prove that on n-vertex graphs equipped with a tree-model (a decomposition notion underlying shrubdepth) of depth d and using k labels, - Independent Set can be solved in time 2^𝒪(dk) ⋅ n^𝒪(1) using 𝒪(dk²log n) space; - Max Cut can be solved in time n^𝒪(dk) using 𝒪(dk log n) space; and - Dominating Set can be solved in time 2^𝒪(dk) ⋅ n^𝒪(1) using n^𝒪(1) space via a randomized algorithm. We also establish a lower bound, conditional on a certain assumption about the complexity of Longest Common Subsequence, which shows that at least in the case of Independent Set the exponent of the parametric factor in the time complexity has to grow with d if one wishes to keep the space complexity polynomial.

Cite as

Benjamin Bergougnoux, Vera Chekan, Robert Ganian, Mamadou Moustapha Kanté, Matthias Mnich, Sang-il Oum, Michał Pilipczuk, and Erik Jan van Leeuwen. Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{bergougnoux_et_al:LIPIcs.ESA.2023.18,
  author =	{Bergougnoux, Benjamin and Chekan, Vera and Ganian, Robert and Kant\'{e}, Mamadou Moustapha and Mnich, Matthias and Oum, Sang-il and Pilipczuk, Micha{\l} and van Leeuwen, Erik Jan},
  title =	{{Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.18},
  URN =		{urn:nbn:de:0030-drops-186710},
  doi =		{10.4230/LIPIcs.ESA.2023.18},
  annote =	{Keywords: Parameterized complexity, shrubdepth, space complexity, algebraic methods}
}
Document
High Performance Construction of RecSplit Based Minimal Perfect Hash Functions

Authors: Dominik Bez, Florian Kurpicz, Hans-Peter Lehmann, and Peter Sanders


Abstract
A minimal perfect hash function (MPHF) bijectively maps a set S of objects to the first |S| integers. It can be used as a building block in databases and data compression. RecSplit [Esposito et al., ALENEX'20] is currently the most space-efficient practical minimal perfect hash function. It heavily relies on trying out hash functions in a brute-force way. We introduce rotation fitting, a new technique that makes the search more efficient by drastically reducing the number of tried hash functions. Additionally, we greatly improve the construction time of RecSplit by harnessing parallelism on the level of bits, vectors, cores, and GPUs. In combination, the resulting improvements yield speedups up to 239 on an 8-core CPU and up to 5438 using a GPU. The original single-threaded RecSplit implementation needs 1.5 hours to construct an MPHF for 5 million objects with 1.56 bits per object. On the GPU, we achieve the same space usage in just 5 seconds. Given that the speedups are larger than the increase in energy consumption, our implementation is more energy efficient than the original implementation.
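
To make the notion concrete, the short Python check below verifies that a given function is a minimal perfect hash function over a set S, i.e., that it maps S bijectively onto {0, ..., |S|-1}. It only illustrates the definition; the lookup table and inputs are hypothetical and unrelated to RecSplit.

def is_mphf(h, S):
    """True iff h maps S bijectively onto {0, ..., |S|-1}."""
    image = {h(x) for x in S}
    return len(image) == len(S) and image == set(range(len(S)))

# Hypothetical precomputed table standing in for a real MPHF construction.
S = ["apple", "banana", "cherry"]
table = {"apple": 2, "banana": 0, "cherry": 1}
print(is_mphf(lambda x: table[x], S))  # True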

Cite as

Dominik Bez, Florian Kurpicz, Hans-Peter Lehmann, and Peter Sanders. High Performance Construction of RecSplit Based Minimal Perfect Hash Functions. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 19:1-19:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{bez_et_al:LIPIcs.ESA.2023.19,
  author =	{Bez, Dominik and Kurpicz, Florian and Lehmann, Hans-Peter and Sanders, Peter},
  title =	{{High Performance Construction of RecSplit Based Minimal Perfect Hash Functions}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{19:1--19:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.19},
  URN =		{urn:nbn:de:0030-drops-186728},
  doi =		{10.4230/LIPIcs.ESA.2023.19},
  annote =	{Keywords: compressed data structure, parallel perfect hashing, bit parallelism, GPU, SIMD, parallel computing, vector instructions}
}
Document
On the Giant Component of Geometric Inhomogeneous Random Graphs

Authors: Thomas Bläsius, Tobias Friedrich, Maximilian Katzmann, Janosch Ruff, and Ziena Zeif


Abstract
In this paper we study the threshold model of geometric inhomogeneous random graphs (GIRGs), a generative random graph model that is closely related to hyperbolic random graphs (HRGs). These models have been observed to capture complex real-world networks well with respect to their structural and algorithmic properties. Following comprehensive studies regarding their connectivity, i.e., which parts of the graphs are connected, we have a good understanding of the circumstances under which a giant component (containing a constant fraction of the graph) emerges. While previous results are rather technical and challenging to work with, the goal of this paper is to provide more accessible proofs. At the same time we significantly improve the previously known probabilistic guarantees, showing that GIRGs contain a giant component with probability 1 - exp(-Ω(n^{(3-τ)/2})) for graph size n and a degree distribution with power-law exponent τ ∈ (2, 3). Based on this, we additionally derive insights about the connectivity of certain induced subgraphs of GIRGs.

Cite as

Thomas Bläsius, Tobias Friedrich, Maximilian Katzmann, Janosch Ruff, and Ziena Zeif. On the Giant Component of Geometric Inhomogeneous Random Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 20:1-20:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{blasius_et_al:LIPIcs.ESA.2023.20,
  author =	{Bl\"{a}sius, Thomas and Friedrich, Tobias and Katzmann, Maximilian and Ruff, Janosch and Zeif, Ziena},
  title =	{{On the Giant Component of Geometric Inhomogeneous Random Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{20:1--20:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.20},
  URN =		{urn:nbn:de:0030-drops-186737},
  doi =		{10.4230/LIPIcs.ESA.2023.20},
  annote =	{Keywords: geometric inhomogeneous random graphs, connectivity, giant component}
}
Document
An Efficient Algorithm for Power Dominating Set

Authors: Thomas Bläsius and Max Göttlicher


Abstract
The problem Power Dominating Set (PDS) is motivated by the placement of phasor measurement units to monitor electrical networks. It asks for a minimum set of vertices in a graph that observes all remaining vertices by exhaustively applying two observation rules. Our contribution is twofold. First, we determine the parameterized complexity of PDS by proving it is W[P]-complete when parameterized with respect to the solution size. We note that it was only known to be W[2]-hard before. Our second and main contribution is a new algorithm for PDS that efficiently solves practical instances. Our algorithm consists of two complementary parts. The first is a set of reduction rules for PDS that can also be used in conjunction with previously existing algorithms. The second is an algorithm for solving the remaining kernel based on the implicit hitting set approach. Our evaluation on a set of power grid instances from the literature shows that our solver outperforms previous state-of-the-art solvers for PDS by more than one order of magnitude on average. Furthermore, our algorithm can solve previously unsolved instances of continental scale within a few minutes.
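
For readers unfamiliar with the model, the sketch below checks whether a vertex set is a power dominating set by exhaustively applying the two observation rules in their commonly used formulation (a PMU vertex observes itself and its neighbours; an observed vertex with exactly one unobserved neighbour observes that neighbour). This is only an illustration of the problem definition, not the paper's solver, and the exact rule phrasing is our assumption.

def is_power_dominating(adj, pmu_set):
    """Check whether pmu_set observes every vertex under the two usual rules:
    (R1) a PMU vertex observes itself and all its neighbours;
    (R2) if an observed vertex has exactly one unobserved neighbour,
         that neighbour becomes observed (applied exhaustively)."""
    observed = set()
    for v in pmu_set:                      # rule R1
        observed.add(v)
        observed.update(adj[v])
    changed = True
    while changed:                         # rule R2, applied to a fixed point
        changed = False
        for u in list(observed):
            unobs = [w for w in adj[u] if w not in observed]
            if len(unobs) == 1:
                observed.add(unobs[0])
                changed = True
    return len(observed) == len(adj)

# Path 0-1-2-3: a single PMU at vertex 1 observes everything.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_power_dominating(path, {1}))  # True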

Cite as

Thomas Bläsius and Max Göttlicher. An Efficient Algorithm for Power Dominating Set. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 21:1-21:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{blasius_et_al:LIPIcs.ESA.2023.21,
  author =	{Bl\"{a}sius, Thomas and G\"{o}ttlicher, Max},
  title =	{{An Efficient Algorithm for Power Dominating Set}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{21:1--21:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.21},
  URN =		{urn:nbn:de:0030-drops-186743},
  doi =		{10.4230/LIPIcs.ESA.2023.21},
  annote =	{Keywords: Power Dominating Set, Implicit Hitting Set, Parameterized Complexity, Reduction Rules}
}
Document
Incremental (1-ε)-Approximate Dynamic Matching in O(poly(1/ε)) Update Time

Authors: Joakim Blikstad and Peter Kiss


Abstract
In the dynamic approximate maximum bipartite matching problem we are given a bipartite graph G undergoing updates and our goal is to maintain a matching of G which is large compared to the maximum matching size μ(G). We define a dynamic matching algorithm to be α (respectively (α, β))-approximate if it maintains a matching M such that at all times |M| ≥ μ(G) ⋅ α (respectively |M| ≥ μ(G) ⋅ α - β). We present the first deterministic (1-ε)-approximate dynamic matching algorithm with O(poly(ε^{-1})) amortized update time for graphs undergoing edge insertions. Previous solutions either required super-constant [Gupta FSTTCS'14, Bhattacharya-Kiss-Saranurak SODA'23] or exponential in 1/ε [Grandoni-Leonardi-Sankowski-Schwiegelshohn-Solomon SODA'19] update time. Our implementation is arguably simpler than the mentioned algorithms and its description is self-contained. Moreover, we show that if we allow for additive (1, ε⋅n)-approximation our algorithm seamlessly extends to also handle vertex deletions, on top of edge insertions. This makes our algorithm one of the few small-update-time algorithms for (1-ε)-approximate dynamic matching allowing for updates both increasing and decreasing the maximum matching size of G in a fully dynamic manner. Our algorithm relies on the weighted variant of the celebrated Edge-Degree-Constrained-Subgraph (EDCS) data structure introduced by [Bernstein-Stein ICALP'15]. As far as we are aware, we introduce the first application of the weighted EDCS for arbitrarily dense graphs. We also present a significantly simplified proof for the approximation ratio of the weighted EDCS as a matching sparsifier compared to [Bernstein-Stein], as well as simple descriptions of a fractional matching and fractional vertex cover defined on top of the EDCS. Considering the wide range of applications the EDCS has found in settings such as streaming, sub-linear, and stochastic algorithms, we hope our techniques will be of independent research interest outside of the dynamic setting.
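
For context, the following sketch checks one common (unweighted) formulation of the EDCS constraints of Bernstein and Stein: every edge inside H has bounded H-degree sum, and every edge outside H has large H-degree sum. The paper works with a weighted variant, so this checker and its parameters (beta, beta_minus) are only an assumed illustration of the underlying object.

def is_edcs(G_edges, H_edges, beta, beta_minus):
    """Check the usual (unweighted) EDCS constraints for a subgraph H of G:
    (1) every edge (u, v) of H has deg_H(u) + deg_H(v) <= beta;
    (2) every edge (u, v) of G but not H has deg_H(u) + deg_H(v) >= beta_minus."""
    H = {frozenset(e) for e in H_edges}
    deg = {}
    for e in H:
        for v in e:
            deg[v] = deg.get(v, 0) + 1
    d = lambda v: deg.get(v, 0)
    if any(d(u) + d(v) > beta for u, v in H_edges):
        return False
    return all(d(u) + d(v) >= beta_minus
               for u, v in G_edges if frozenset((u, v)) not in H)

# A 4-cycle with two opposite edges kept forms a (2, 2)-EDCS of the cycle.
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_edcs(C4, [(0, 1), (2, 3)], beta=2, beta_minus=2))  # True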

Cite as

Joakim Blikstad and Peter Kiss. Incremental (1-ε)-Approximate Dynamic Matching in O(poly(1/ε)) Update Time. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 22:1-22:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{blikstad_et_al:LIPIcs.ESA.2023.22,
  author =	{Blikstad, Joakim and Kiss, Peter},
  title =	{{Incremental (1-\epsilon)-Approximate Dynamic Matching in O(poly(1/\epsilon)) Update Time}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{22:1--22:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.22},
  URN =		{urn:nbn:de:0030-drops-186756},
  doi =		{10.4230/LIPIcs.ESA.2023.22},
  annote =	{Keywords: Bipartite Matching, Incremental Matching, Dynamic Algorithms, Approximation Algorithms, EDCS}
}
Document
Maximum Independent Set When Excluding an Induced Minor: K₁ + tK₂ and tC₃ ⊎ C₄

Authors: Édouard Bonnet, Julien Duron, Colin Geniet, Stéphan Thomassé, and Alexandra Wesolek


Abstract
Dallard, Milanič, and Štorgel [arXiv '22] ask if, for every class excluding a fixed planar graph H as an induced minor, Maximum Independent Set can be solved in polynomial time, and show that this is indeed the case when H is any planar complete bipartite graph, or the 5-vertex clique minus one edge, or minus two disjoint edges. A positive answer would constitute a far-reaching generalization of the state of the art, given that we currently do not know if a polynomial-time algorithm exists when H is the 7-vertex path. Relaxing tractability to the existence of a quasipolynomial-time algorithm, we know substantially more. Indeed, quasipolynomial-time algorithms were recently obtained for the t-vertex cycle, C_t [Gartland et al., STOC '21], and the disjoint union of t triangles, tC₃ [Bonamy et al., SODA '23]. We give, for every integer t, a polynomial-time algorithm running in n^O(t⁵) when H is the friendship graph K₁ + tK₂ (t disjoint edges plus a vertex fully adjacent to them), and a quasipolynomial-time algorithm running in n^{O(t² log n) + f(t)}, with f a single-exponential function, when H is tC₃ ⊎ C₄ (the disjoint union of t triangles and a 4-vertex cycle). The former generalizes the algorithm readily obtained from Alekseev’s structural result on graphs excluding tK₂ as an induced subgraph [Alekseev, DAM '07], while the latter extends Bonamy et al.’s result.

Cite as

Édouard Bonnet, Julien Duron, Colin Geniet, Stéphan Thomassé, and Alexandra Wesolek. Maximum Independent Set When Excluding an Induced Minor: K₁ + tK₂ and tC₃ ⊎ C₄. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 23:1-23:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{bonnet_et_al:LIPIcs.ESA.2023.23,
  author =	{Bonnet, \'{E}douard and Duron, Julien and Geniet, Colin and Thomass\'{e}, St\'{e}phan and Wesolek, Alexandra},
  title =	{{Maximum Independent Set When Excluding an Induced Minor: K₁ + tK₂ and tC₃ ⊎ C₄}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{23:1--23:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.23},
  URN =		{urn:nbn:de:0030-drops-186769},
  doi =		{10.4230/LIPIcs.ESA.2023.23},
  annote =	{Keywords: Maximum Independent Set, forbidden induced minors, quasipolynomial-time algorithms}
}
Document
Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution

Authors: Karl Bringmann and Alejandro Cassis


Abstract
We revisit the classic 0-1-Knapsack problem, in which we are given n items with their weights and profits as well as a weight budget W, and the goal is to find a subset of items of total weight at most W that maximizes the total profit. We study pseudopolynomial-time algorithms parameterized by the largest profit of any item, p_max, and the largest weight of any item, w_max. Our main results are algorithms for 0-1-Knapsack running in time Õ(n w_max p_max^{2/3}) and Õ(n p_max w_max^{2/3}), improving upon an algorithm in time O(n p_max w_max) by Pisinger [J. Algorithms '99]. In the regime p_max ≈ w_max ≈ n (and W ≈ OPT ≈ n²) our algorithms are the first to break the cubic barrier n³. To obtain our result, we give an efficient algorithm to compute the min-plus convolution of near-convex functions. More precisely, we say that a function f : [n] ↦ ℤ is Δ-near convex with Δ ≥ 1, if there is a convex function f̆ such that f̆(i) ≤ f(i) ≤ f̆(i) + Δ for every i. We design an algorithm computing the min-plus convolution of two Δ-near convex functions in time Õ(nΔ). This tool can replace the usage of the prediction technique of Bateni, Hajiaghayi, Seddighin and Stein [STOC '18] in all applications we are aware of, and we believe it has wider applicability.
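
The operation sped up by the paper is the min-plus ((min,+)) convolution. As a reference point, here is the naive quadratic computation in Python; the paper's contribution is an Õ(nΔ)-time algorithm for the Δ-near-convex case, which this sketch does not implement.

def minplus_convolution(f, g):
    """Naive O(|f|*|g|) min-plus convolution: h[k] = min over i+j=k of f[i] + g[j]."""
    h = [float("inf")] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = min(h[i + j], fi + gj)
    return h

# Two convex sequences (0-near convex); their min-plus convolution is again convex.
print(minplus_convolution([0, 1, 3, 6], [0, 2, 5]))  # [0, 1, 3, 5, 8, 11]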

Cite as

Karl Bringmann and Alejandro Cassis. Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{bringmann_et_al:LIPIcs.ESA.2023.24,
  author =	{Bringmann, Karl and Cassis, Alejandro},
  title =	{{Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.24},
  URN =		{urn:nbn:de:0030-drops-186776},
  doi =		{10.4230/LIPIcs.ESA.2023.24},
  annote =	{Keywords: Knapsack, Fine-Grained Complexity, Min-Plus Convolution}
}
Document
Funnelselect: Cache-Oblivious Multiple Selection

Authors: Gerth Stølting Brodal and Sebastian Wild


Abstract
We present the algorithm funnelselect, the first optimal randomized cache-oblivious algorithm for the multiple-selection problem. The algorithm takes as input an unsorted array of N elements and q query ranks r_1 < ⋯ < r_q, and returns in sorted order the q input elements of rank r_1, …, r_q, respectively. The algorithm uses, both in expectation and with high probability, O(∑_{i = 1}^{q+1} Δ_i/B ⋅ log_{M/B} N/(Δ_i) + N/B) I/Os, where B is the external memory block size, M ≥ B^{1+ε} is the internal memory size, for some constant ε > 0, and Δ_i = r_i - r_{i-1} (assuming r_0 = 0 and r_{q+1} = N + 1). This is the best possible I/O bound in the cache-oblivious and external memory models. The result is achieved by reversing the computation of the cache-oblivious sorting algorithm funnelsort by Frigo, Leiserson, Prokop and Ramachandran [FOCS 1999], using randomly selected pivots for distributing elements, and pruning computations that with high probability are not expected to contain any query ranks.
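
As a plain in-memory baseline for the problem being solved (not the cache-oblivious funnelselect), the sketch below answers multiple rank queries with a randomized pivot recursion that only fully sorts subranges actually containing query ranks; names and the example are illustrative.

import random

def multiselect(arr, ranks):
    """Return the elements of rank r (1-based) for each r in ranks, in rank order.
    Simple randomized divide-and-conquer, not the cache-oblivious algorithm."""
    def rec(items, ranks, offset, out):
        if not ranks:
            return
        if len(items) <= 1 or len(ranks) == len(items):
            items = sorted(items)
            for r in ranks:
                out[r] = items[r - offset - 1]
            return
        pivot = random.choice(items)
        left = [x for x in items if x < pivot]
        mid = [x for x in items if x == pivot]
        right = [x for x in items if x > pivot]
        lo, hi = offset + len(left), offset + len(left) + len(mid)
        rec(left, [r for r in ranks if r <= lo], offset, out)
        for r in ranks:
            if lo < r <= hi:
                out[r] = pivot
        rec(right, [r for r in ranks if r > hi], hi, out)
    out = {}
    rec(list(arr), sorted(ranks), 0, out)
    return [out[r] for r in sorted(ranks)]

print(multiselect([9, 1, 7, 3, 5, 8, 2], [1, 4, 7]))  # [1, 5, 9]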

Cite as

Gerth Stølting Brodal and Sebastian Wild. Funnelselect: Cache-Oblivious Multiple Selection. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{brodal_et_al:LIPIcs.ESA.2023.25,
  author =	{Brodal, Gerth St{\o}lting and Wild, Sebastian},
  title =	{{Funnelselect: Cache-Oblivious Multiple Selection}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.25},
  URN =		{urn:nbn:de:0030-drops-186789},
  doi =		{10.4230/LIPIcs.ESA.2023.25},
  annote =	{Keywords: Multiple selection, cache-oblivious algorithm, randomized algorithm, entropy bounds}
}
Document
Oriented Spanners

Authors: Kevin Buchin, Joachim Gudmundsson, Antonia Kalb, Aleksandr Popov, Carolin Rehs, André van Renssen, and Sampson Wong


Abstract
Given a point set P in the Euclidean plane and a parameter t, we define an oriented t-spanner as an oriented subgraph G of the complete bi-directed graph such that for every pair of points, the shortest cycle in G through those points is at most a factor t longer than the shortest oriented cycle in the complete bi-directed graph. We investigate the problem of computing sparse graphs with small oriented dilation. As we can show that minimising oriented dilation for a given number of edges is NP-hard in the plane, we first consider one-dimensional point sets. While obtaining a 1-spanner in this setting is straightforward, already for five points such a spanner has no plane embedding with the leftmost and rightmost point on the outer face. This leads us to restrict to oriented graphs with a one-page book embedding on the one-dimensional point set. For this case we present a dynamic program to compute the graph of minimum oriented dilation that runs in 𝒪(n⁸) time for n points, and a greedy algorithm that computes a 5-spanner in 𝒪(n log n) time. Expanding these results finally gives us a result for two-dimensional point sets: we prove that for convex point sets the greedy triangulation results in an oriented 𝒪(1)-spanner.

Cite as

Kevin Buchin, Joachim Gudmundsson, Antonia Kalb, Aleksandr Popov, Carolin Rehs, André van Renssen, and Sampson Wong. Oriented Spanners. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 26:1-26:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{buchin_et_al:LIPIcs.ESA.2023.26,
  author =	{Buchin, Kevin and Gudmundsson, Joachim and Kalb, Antonia and Popov, Aleksandr and Rehs, Carolin and van Renssen, Andr\'{e} and Wong, Sampson},
  title =	{{Oriented Spanners}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{26:1--26:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.26},
  URN =		{urn:nbn:de:0030-drops-186796},
  doi =		{10.4230/LIPIcs.ESA.2023.26},
  annote =	{Keywords: computational geometry, spanner, oriented graph, greedy triangulation}
}
Document
Online Coalition Formation Under Random Arrival or Coalition Dissolution

Authors: Martin Bullinger and René Romen


Abstract
Coalition formation considers the question of how to partition a set of n agents into disjoint coalitions according to their preferences. We consider a cardinal utility model with additively separable aggregation of preferences and study the online variant of coalition formation, where the agents arrive in sequence and whenever an agent arrives, they have to be assigned to a coalition immediately. The goal is to maximize social welfare. In a purely deterministic model, the greedy algorithm, where an agent is assigned to the coalition with the largest gain, is known to achieve an optimal competitive ratio, which heavily relies on the range of utilities. We complement this result by considering two related models. First, we study a model where agents arrive in a random order. We find that the competitive ratio of the greedy algorithm is Θ(1/(n²)), whereas an alternative algorithm, which is based on alternating between waiting and greedy phases, can achieve a competitive ratio of Θ(1/n). Second, we relax the irrevocability of decisions by allowing coalitions to be dissolved into singleton coalitions, and present a matching-based algorithm that once again achieves a competitive ratio of Θ(1/n). Hence, compared to the base model, we present two ways to achieve a competitive ratio that no longer depends on the range of utilities. Our results also give novel insights into weighted online matching.
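
A minimal sketch of the greedy rule discussed above, under additively separable utilities: each arriving agent joins the existing coalition with the largest marginal welfare gain, and opens a singleton when no coalition offers a positive gain. The tie-breaking, the singleton rule, and the utility values here are our own illustrative assumptions.

def greedy_online_coalitions(agents, u):
    """Greedy rule: each arriving agent x joins the coalition maximizing the
    marginal gain in additively separable welfare, i.e. the sum of
    u[(x, y)] + u[(y, x)] over current members y; otherwise x stays alone."""
    coalitions = []
    for x in agents:
        gains = [sum(u.get((x, y), 0) + u.get((y, x), 0) for y in C)
                 for C in coalitions]
        best = max(range(len(coalitions)), key=lambda i: gains[i], default=None)
        if best is not None and gains[best] > 0:
            coalitions[best].append(x)
        else:
            coalitions.append([x])          # open a new singleton coalition
    return coalitions

# Hypothetical symmetric utilities on agents a, b, c.
u = {("a", "b"): 3, ("b", "a"): 3, ("a", "c"): -2, ("c", "a"): -2}
print(greedy_online_coalitions(["a", "b", "c"], u))  # [['a', 'b'], ['c']]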

Cite as

Martin Bullinger and René Romen. Online Coalition Formation Under Random Arrival or Coalition Dissolution. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 27:1-27:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{bullinger_et_al:LIPIcs.ESA.2023.27,
  author =	{Bullinger, Martin and Romen, Ren\'{e}},
  title =	{{Online Coalition Formation Under Random Arrival or Coalition Dissolution}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{27:1--27:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.27},
  URN =		{urn:nbn:de:0030-drops-186809},
  doi =		{10.4230/LIPIcs.ESA.2023.27},
  annote =	{Keywords: Online Algorithms, Coalition Formation, Online Matching}
}
Document
On k-Means for Segments and Polylines

Authors: Sergio Cabello and Panos Giannopoulos


Abstract
We study the problem of k-means clustering in the space of straight-line segments in ℝ² under the Hausdorff distance. For this problem, we give a (1+ε)-approximation algorithm that, for an input of n segments, for any fixed k, and with constant success probability, runs in time O(n + ε^{-O(k)} + ε^{-O(k)} ⋅ log^O(k) (ε^{-1})). The algorithm has two main ingredients. Firstly, we express the k-means objective in our metric space as a sum of algebraic functions and use the optimization technique of Vigneron [Antoine Vigneron, 2014] to approximate its minimum. Secondly, we reduce the input size by computing a small size coreset using the sensitivity-based sampling framework by Feldman and Langberg [Dan Feldman and Michael Langberg, 2011; Feldman et al., 2020]. Our results can be extended to polylines of constant complexity with a running time of O(n + ε^{-O(k)}).
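
To make the metric concrete: for two segments, the directed Hausdorff distance is attained at an endpoint (the distance to a convex set is convex along a segment), so the Hausdorff distance reduces to four point-to-segment distances. The small sketch below only evaluates this distance and is independent of the paper's clustering algorithm.

import math

def point_seg_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def hausdorff_segments(s, t):
    """Hausdorff distance between segments s=(a,b) and t=(c,d); since the
    directed distance is maximized at an endpoint, four point-to-segment
    distances suffice."""
    (a, b), (c, d) = s, t
    return max(point_seg_dist(a, c, d), point_seg_dist(b, c, d),
               point_seg_dist(c, a, b), point_seg_dist(d, a, b))

s = ((0, 0), (1, 0))
t = ((0, 1), (1, 2))
print(hausdorff_segments(s, t))  # 2.0, attained by endpoint (1, 2) against s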

Cite as

Sergio Cabello and Panos Giannopoulos. On k-Means for Segments and Polylines. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 28:1-28:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{cabello_et_al:LIPIcs.ESA.2023.28,
  author =	{Cabello, Sergio and Giannopoulos, Panos},
  title =	{{On k-Means for Segments and Polylines}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{28:1--28:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.28},
  URN =		{urn:nbn:de:0030-drops-186812},
  doi =		{10.4230/LIPIcs.ESA.2023.28},
  annote =	{Keywords: k-means clustering, segments, polylines, Hausdorff distance, Fr\'{e}chet mean}
}
Document
Effective Resistances in Non-Expander Graphs

Authors: Dongrun Cai, Xue Chen, and Pan Peng


Abstract
Effective resistances are ubiquitous in graph algorithms and network analysis. For an undirected graph G, its effective resistance R_G(s,t) between two vertices s and t is defined as the equivalent resistance between s and t if G is thought of as an electrical network with unit resistance on each edge. If we use L_G to denote the Laplacian matrix of G and L_G^† to denote its pseudo-inverse, we have R_G(s,t) = (𝟏_s-𝟏_t)^⊤ L_G^† (𝟏_s-𝟏_t), so that classical Laplacian solvers [Daniel A. Spielman and Shang-Hua Teng, 2014] provide almost-linear time algorithms to approximate R_G(s,t). In this work, we study sublinear time algorithms to approximate the effective resistance of an adjacent pair s and t. We consider the classical adjacency list model [Ron, 2019] for local algorithms. While recent works [Andoni et al., 2018; Peng et al., 2021; Li and Sachdeva, 2023] have provided sublinear time algorithms for expander graphs, we prove several lower bounds for general graphs of n vertices and m edges:
1) It needs Ω(n) queries to obtain 1.01-approximations of the effective resistance of an adjacent pair s and t, even for graphs of degree at most 3 except s and t.
2) For graphs of degree at most d and any parameter 𝓁, it needs Ω(m/𝓁) queries to obtain c ⋅ min{d,𝓁}-approximations, where c > 0 is a universal constant.
Moreover, we supplement the first lower bound by providing a sublinear time (1+ε)-approximation algorithm for graphs of degree 2 except the pair s and t. One of our technical ingredients is to bound the expansion of a graph in terms of the smallest non-trivial eigenvalue of its Laplacian matrix after removing edges. We discover a new lower bound on the eigenvalues of perturbed graphs (resp. perturbed matrices) by incorporating the effective resistance of the removed edge (resp. the leverage scores of the removed rows), which may be of independent interest.
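
The formula above can be evaluated directly with a dense pseudo-inverse, which is a useful (if slow) reference against which faster estimates can be checked; this is of course not the sublinear query model studied in the paper, and the example graph is arbitrary.

import numpy as np

def effective_resistance(n, edges, s, t):
    """R_G(s,t) = (1_s - 1_t)^T L_G^dagger (1_s - 1_t) for an unweighted graph."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    chi = np.zeros(n)
    chi[s], chi[t] = 1.0, -1.0
    return float(chi @ np.linalg.pinv(L) @ chi)

# Triangle: between adjacent vertices, a unit resistor in parallel with two in
# series gives 1*2/(1+2) = 2/3.
print(round(effective_resistance(3, [(0, 1), (1, 2), (2, 0)], 0, 1), 4))  # 0.6667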

Cite as

Dongrun Cai, Xue Chen, and Pan Peng. Effective Resistances in Non-Expander Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 29:1-29:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{cai_et_al:LIPIcs.ESA.2023.29,
  author =	{Cai, Dongrun and Chen, Xue and Peng, Pan},
  title =	{{Effective Resistances in Non-Expander Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{29:1--29:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.29},
  URN =		{urn:nbn:de:0030-drops-186823},
  doi =		{10.4230/LIPIcs.ESA.2023.29},
  annote =	{Keywords: Sublinear Time Algorithm, Effective Resistance, Leverage Scores, Matrix Perturbation}
}
Document
New Menger-Like Dualities in Digraphs and Applications to Half-Integral Linkages

Authors: Victor Campos, Jonas Costa, Raul Lopes, and Ignasi Sau


Abstract
We present new min-max relations in digraphs between the number of paths satisfying certain conditions and the order of the corresponding cuts. We define these objects in order to capture, in the context of solving the half-integral linkage problem, the essential properties needed for reaching a large bramble of congestion two (or any other constant) from the terminal set. This strategy has been used ad hoc in several articles, usually with lengthy technical proofs, and our objective is to abstract it to make it applicable in a simpler and unified way. We provide two proofs of the min-max relations, one obtained by applying Menger’s Theorem on appropriately defined auxiliary digraphs, and an alternative, simpler one using matroids, albeit with a worse polynomial running time. As an application, we manage to simplify and improve several results of Edwards et al. [ESA 2017] and of Giannopoulou et al. [SODA 2022] about finding half-integral linkages in digraphs. Concerning the former, besides being simpler, our proof provides an almost optimal bound on the strong connectivity of a digraph for it to be half-integrally feasible under the presence of a large bramble of congestion two (or equivalently, if the directed tree-width is large, which is the hard case). Concerning the latter, our proof uses brambles as rerouting objects instead of cylindrical grids, hence yielding much better bounds and being somehow independent of a particular topology. We hope that our min-max relations will find further applications as, in our opinion, they are simple, robust, and versatile enough to be easily applicable to different types of routing problems in digraphs.

Cite as

Victor Campos, Jonas Costa, Raul Lopes, and Ignasi Sau. New Menger-Like Dualities in Digraphs and Applications to Half-Integral Linkages. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 30:1-30:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{campos_et_al:LIPIcs.ESA.2023.30,
  author =	{Campos, Victor and Costa, Jonas and Lopes, Raul and Sau, Ignasi},
  title =	{{New Menger-Like Dualities in Digraphs and Applications to Half-Integral Linkages}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{30:1--30:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.30},
  URN =		{urn:nbn:de:0030-drops-186838},
  doi =		{10.4230/LIPIcs.ESA.2023.30},
  annote =	{Keywords: directed graphs, min-max relation, half-integral linkage, directed disjoint paths, bramble, parameterized complexity, matroids}
}
Document
Enumerating Maximal Induced Subgraphs

Authors: Yixin Cao


Abstract
Given a graph G, the maximal induced subgraphs problem asks to enumerate all maximal induced subgraphs of G that belong to a certain hereditary graph class. While its optimization version, known as the minimum vertex deletion problem in literature, has been intensively studied, enumeration algorithms were only known for a few simple graph classes, e.g., independent sets, cliques, and forests, until very recently [Conte and Uno, STOC 2019]. There is also a connected variation of this problem, where one is concerned with only those induced subgraphs that are connected. We introduce two new approaches, which enable us to develop algorithms that solve both variations for a number of important graph classes. A general technique that has been proven very powerful in enumeration algorithms is to build a solution map, i.e., a multiple digraph on all the solutions of the problem, and the key of this approach is to make the solution map strongly connected, so that a simple traversal of the solution map solves the problem. First, we introduce retaliation-free paths to certify strong connectedness of the solution map we build. Second, generalizing the idea of Cohen, Kimelfeld, and Sagiv [JCSS 2008], we introduce an apparently very restricted version of the maximal (connected) induced subgraphs problem, and show that it is equivalent to the original problem in terms of solvability in incremental polynomial time. Moreover, we give reductions between the two variations, so that it suffices to solve one of the variations for each class we study. Our work also leads to direct and simpler proofs of several important known results.

Cite as

Yixin Cao. Enumerating Maximal Induced Subgraphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 31:1-31:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{cao:LIPIcs.ESA.2023.31,
  author =	{Cao, Yixin},
  title =	{{Enumerating Maximal Induced Subgraphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{31:1--31:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.31},
  URN =		{urn:nbn:de:0030-drops-186841},
  doi =		{10.4230/LIPIcs.ESA.2023.31},
  annote =	{Keywords: enumeration algorithm, hereditary graph class, maximal induced subgraph}
}
Document
Approximation Algorithm for Norm Multiway Cut

Authors: Charlie Carlson, Jafar Jafarov, Konstantin Makarychev, Yury Makarychev, and Liren Shan


Abstract
We consider variants of the classic Multiway Cut problem. Multiway Cut asks to partition a graph G into k parts so as to separate k given terminals. Recently, Chandrasekaran and Wang (ESA 2021) introduced 𝓁_p-norm Multiway Cut, a generalization of the problem, in which the goal is to minimize the 𝓁_p norm of the edge boundaries of the k parts. We provide an O(log^{1/2} n log^{1/2+1/p} k) approximation algorithm for this problem, improving upon the approximation guarantee of O(log^{3/2} n log^{1/2} k) due to Chandrasekaran and Wang. We also introduce and study Norm Multiway Cut, a further generalization of Multiway Cut. We assume that we are given access to an oracle, which answers certain queries about the norm. We present an O(log^{1/2} n log^{7/2} k) approximation algorithm with a weaker oracle and an O(log^{1/2} n log^{5/2} k) approximation algorithm with a stronger oracle. Additionally, we show that, assuming the Hypergraph Dense-vs-Random Conjecture, without any oracle access there is no n^{1/4-ε}-approximation algorithm for any ε > 0.
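
To fix the objective in mind, the helper below evaluates the ℓ_p norm of the edge-boundary sizes of a given k-partition. It does not implement the approximation algorithms; the example graph and partition are arbitrary.

def lp_multiway_objective(edges, parts, p):
    """ℓ_p norm of the edge-boundary sizes of the parts.
    parts: list of disjoint vertex sets covering all endpoints."""
    part_of = {v: i for i, P in enumerate(parts) for v in P}
    boundary = [0] * len(parts)
    for u, v in edges:
        if part_of[u] != part_of[v]:           # edge crosses the partition
            boundary[part_of[u]] += 1
            boundary[part_of[v]] += 1
    return sum(b ** p for b in boundary) ** (1.0 / p)

# 4-cycle split into two parts: each part has boundary 2, so the
# ℓ_2 objective is sqrt(2^2 + 2^2) ≈ 2.83.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(lp_multiway_objective(edges, [{0, 1}, {2, 3}], 2))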

Cite as

Charlie Carlson, Jafar Jafarov, Konstantin Makarychev, Yury Makarychev, and Liren Shan. Approximation Algorithm for Norm Multiway Cut. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 32:1-32:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{carlson_et_al:LIPIcs.ESA.2023.32,
  author =	{Carlson, Charlie and Jafarov, Jafar and Makarychev, Konstantin and Makarychev, Yury and Shan, Liren},
  title =	{{Approximation Algorithm for Norm Multiway Cut}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{32:1--32:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.32},
  URN =		{urn:nbn:de:0030-drops-186854},
  doi =		{10.4230/LIPIcs.ESA.2023.32},
  annote =	{Keywords: Multiway cut, Approximation algorithms}
}
Document
Polynomial-Time Approximation of Independent Set Parameterized by Treewidth

Authors: Parinya Chalermsook, Fedor Fomin, Thekla Hamm, Tuukka Korhonen, Jesper Nederlof, and Ly Orgo


Abstract
We prove the following result about approximating the maximum independent set in a graph. Informally, we show that any approximation algorithm with a "non-trivial" approximation ratio (as a function of the number of vertices of the input graph G) can be turned into an approximation algorithm achieving almost the same ratio, albeit as a function of the treewidth of G. More formally, we prove that for any function f, the existence of a polynomial time (n/f(n))-approximation algorithm yields the existence of a polynomial time O(tw ⋅ log f(tw)/f(tw))-approximation algorithm, where n and tw denote the number of vertices and the width of a given tree decomposition of the input graph. By pipelining our result with the state-of-the-art O(n ⋅ (log log n)²/log³ n)-approximation algorithm by Feige (2004), this implies an O(tw ⋅ (log log tw)³/log³ tw)-approximation algorithm.

Cite as

Parinya Chalermsook, Fedor Fomin, Thekla Hamm, Tuukka Korhonen, Jesper Nederlof, and Ly Orgo. Polynomial-Time Approximation of Independent Set Parameterized by Treewidth. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 33:1-33:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{chalermsook_et_al:LIPIcs.ESA.2023.33,
  author =	{Chalermsook, Parinya and Fomin, Fedor and Hamm, Thekla and Korhonen, Tuukka and Nederlof, Jesper and Orgo, Ly},
  title =	{{Polynomial-Time Approximation of Independent Set Parameterized by Treewidth}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{33:1--33:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.33},
  URN =		{urn:nbn:de:0030-drops-186865},
  doi =		{10.4230/LIPIcs.ESA.2023.33},
  annote =	{Keywords: Maximum Independent Set, Treewidth, Approximation Algorithms, Parameterized Approximation}
}
Document
Faster Local Motif Clustering via Maximum Flows

Authors: Adil Chhabra, Marcelo Fonseca Faraj, and Christian Schulz


Abstract
Local clustering aims to identify a cluster within a given graph that includes a designated seed node or a significant portion of a group of seed nodes. This cluster should be well-characterized, i.e., it has a high number of internal edges and a low number of external edges. In this work, we propose SOCIAL, a novel algorithm for local motif clustering which optimizes for motif conductance based on a local hypergraph model representation of the problem and an adapted version of the max-flow quotient-cut improvement algorithm (MQI). In our experiments with the triangle motif, SOCIAL produces local clusters with an average motif conductance 1.7% lower than the state-of-the-art, while being up to multiple orders of magnitude faster.

Cite as

Adil Chhabra, Marcelo Fonseca Faraj, and Christian Schulz. Faster Local Motif Clustering via Maximum Flows. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 34:1-34:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{chhabra_et_al:LIPIcs.ESA.2023.34,
  author =	{Chhabra, Adil and Fonseca Faraj, Marcelo and Schulz, Christian},
  title =	{{Faster Local Motif Clustering via Maximum Flows}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{34:1--34:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.34},
  URN =		{urn:nbn:de:0030-drops-186871},
  doi =		{10.4230/LIPIcs.ESA.2023.34},
  annote =	{Keywords: local motif clustering, motif conductance, maximum flows, max-flow quotient-cut improvement}
}
Document
Primal-Dual Schemes for Online Matching in Bounded Degree Graphs

Authors: Ilan Reuven Cohen and Binghui Peng


Abstract
We explore various generalizations of the online matching problem in a bipartite graph G, such as the b-matching problem [Kalyanasundaram and Pruhs, 2000], the allocation problem [Buchbinder et al., 2007], and the AdWords problem [Mehta et al., 2007], in a beyond-worst-case setting. Specifically, we assume that G is a (k, d)-bounded degree graph, introduced by Naor and Wajc [Naor and Wajc, 2018]. Such graphs model natural properties on the degrees of advertisers and queries in the allocation and AdWords problems. While previous work only considers the scenario where k ≥ d, we consider the interesting intermediate regime of k ≤ d and prove a tight competitive ratio, as a function of k and d (under the small-bid assumption), of τ(k,d) = 1 - (1-k/d)⋅(1-1/d)^{d - k} for the b-matching and allocation problems. We exploit primal-dual schemes [Buchbinder et al., 2009; Azar et al., 2017] to design and analyze the corresponding tight upper and lower bounds. Finally, we show a separation between the allocation and AdWords problems. We demonstrate that τ(k,d)-competitiveness is impossible for the AdWords problem even in (k,d)-bounded degree graphs.
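
The closed-form ratio τ(k,d) from the abstract is easy to sanity-check numerically; for fixed k and growing d it approaches 1 - 1/e, and k = d gives ratio 1. The sample parameter values below are arbitrary.

def tau(k, d):
    """Tight competitive ratio 1 - (1 - k/d) * (1 - 1/d)^(d - k) for (k, d)-bounded degree graphs."""
    return 1 - (1 - k / d) * (1 - 1 / d) ** (d - k)

print(round(tau(1, 10), 4))   # 0.6513, close to 1 - 1/e ~ 0.6321 for small k/d
print(tau(10, 10))            # 1.0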

Cite as

Ilan Reuven Cohen and Binghui Peng. Primal-Dual Schemes for Online Matching in Bounded Degree Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 35:1-35:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{cohen_et_al:LIPIcs.ESA.2023.35,
  author =	{Cohen, Ilan Reuven and Peng, Binghui},
  title =	{{Primal-Dual Schemes for Online Matching in Bounded Degree Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{35:1--35:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.35},
  URN =		{urn:nbn:de:0030-drops-186884},
  doi =		{10.4230/LIPIcs.ESA.2023.35},
  annote =	{Keywords: Online Matching, Primal-dual analysis, bounded-degree graph, the AdWords problem}
}
Document
Robust and Space-Efficient Dual Adversary Quantum Query Algorithms

Authors: Michael Czekanski, Shelby Kimmel, and R. Teal Witter


Abstract
The general adversary dual is a powerful tool in quantum computing because it gives a query-optimal bounded-error quantum algorithm for deciding any Boolean function. Unfortunately, the algorithm uses linear qubits in the worst case, and only works if the constraints of the general adversary dual are exactly satisfied. The challenge of improving the algorithm is that it is brittle to arbitrarily small errors since it relies on a reflection over a span of vectors. We overcome this challenge and build a robust dual adversary algorithm that can handle approximately satisfied constraints. As one application of our robust algorithm, we prove that for any Boolean function with polynomially many 1-valued inputs (or in fact a slightly weaker condition) there is a query-optimal algorithm that uses logarithmic qubits. As another application, we prove that numerically derived, approximate solutions to the general adversary dual give a bounded-error quantum algorithm under certain conditions. Further, we show that these conditions empirically hold with reasonable iterations for Boolean functions with small domains. We also develop several tools that may be of independent interest, including a robust approximate spectral gap lemma, a method to compress a general adversary dual solution using the Johnson-Lindenstrauss lemma, and open-source code to find solutions to the general adversary dual.

Cite as

Michael Czekanski, Shelby Kimmel, and R. Teal Witter. Robust and Space-Efficient Dual Adversary Quantum Query Algorithms. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 36:1-36:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{czekanski_et_al:LIPIcs.ESA.2023.36,
  author =	{Czekanski, Michael and Kimmel, Shelby and Witter, R. Teal},
  title =	{{Robust and Space-Efficient Dual Adversary Quantum Query Algorithms}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{36:1--36:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.36},
  URN =		{urn:nbn:de:0030-drops-186890},
  doi =		{10.4230/LIPIcs.ESA.2023.36},
  annote =	{Keywords: Quantum Computing, Robust Quantum Algorithms, Johnson-Lindenstrauss Lemma, Span Programs, Query Complexity, Space Complexity}
}
Document
Revisiting the Random Subset Sum Problem

Authors: Arthur Carvalho Walraven Da Cunha, Francesco d'Amore, Frédéric Giroire, Hicham Lesfari, Emanuele Natale, and Laurent Viennot


Abstract
The average properties of the well-known Subset Sum Problem can be studied by means of its randomised version, where we are given a target value z, random variables X_1, …, X_n, and an error parameter ε > 0, and we seek a subset of the X_i's whose sum approximates z up to error ε. In this setup, it has been shown that, under mild assumptions on the distribution of the random variables, a sample of size 𝒪(log(1/ε)) suffices to obtain, with high probability, approximations for all values in [-1/2, 1/2]. Recently, this result has been rediscovered outside the algorithms community, enabling meaningful progress in other fields. In this work, we present an alternative proof for this theorem, with a more direct approach that resorts to more elementary tools.
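
A brute-force illustration of the statement (exponential in n, so only for intuition): sample n uniform variables on [-1, 1] (one distribution satisfying the mild assumptions), enumerate all subset sums, and check that every target on a grid in [-1/2, 1/2] is approximated within ε. The constant behind n ≈ C log(1/ε), the grid, and the seed are arbitrary choices.

import bisect
import random

def all_subset_sums(xs):
    """All 2^n subset sums (only feasible for small n)."""
    sums = [0.0]
    for x in xs:
        sums += [s + x for s in sums]
    return sorted(sums)

def best_error(sorted_sums, z):
    """Distance from z to the closest subset sum."""
    i = bisect.bisect_left(sorted_sums, z)
    neighbours = sorted_sums[max(0, i - 1): i + 1]
    return min(abs(s - z) for s in neighbours)

random.seed(0)
eps = 0.05
n = 18                                   # on the order of C * log(1/eps)
xs = [random.uniform(-1, 1) for _ in range(n)]
sums = all_subset_sums(xs)
worst = max(best_error(sums, z) for z in (i / 100 for i in range(-50, 51)))
print(worst < eps)                       # typically True: every grid target in
                                         # [-1/2, 1/2] is eps-close to a subset sum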

Cite as

Arthur Carvalho Walraven Da Cunha, Francesco d'Amore, Frédéric Giroire, Hicham Lesfari, Emanuele Natale, and Laurent Viennot. Revisiting the Random Subset Sum Problem. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 37:1-37:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{dacunha_et_al:LIPIcs.ESA.2023.37,
  author =	{Da Cunha, Arthur Carvalho Walraven and d'Amore, Francesco and Giroire, Fr\'{e}d\'{e}ric and Lesfari, Hicham and Natale, Emanuele and Viennot, Laurent},
  title =	{{Revisiting the Random Subset Sum Problem}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{37:1--37:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.37},
  URN =		{urn:nbn:de:0030-drops-186905},
  doi =		{10.4230/LIPIcs.ESA.2023.37},
  annote =	{Keywords: Random subset sum, Randomised method, Subset-sum, Combinatorics}
}
Document
Scheduling with a Limited Testing Budget: Tight Results for the Offline and Oblivious Settings

Authors: Christoph Damerius, Peter Kling, Minming Li, Chenyang Xu, and Ruilong Zhang


Abstract
Scheduling with testing falls under the umbrella of the research on optimization with explorable uncertainty. In this model, each job has an upper limit on its processing time that can be decreased to a lower limit (possibly unknown) by some preliminary action (testing). Recently, [Christoph Dürr et al., 2020] studied a setting where testing a job takes unit time, and the goal is to minimize total completion time or makespan on a single machine. In this paper, we extend their problem to the budget setting in which each test consumes a job-specific cost, and we require that the total testing cost cannot exceed a given budget. We consider the offline variant (the lower processing time is known) and the oblivious variant (the lower processing time is unknown) and aim to minimize the total completion time or makespan on a single machine. For the total completion time objective, we show NP-hardness and derive a PTAS for the offline variant based on a novel LP rounding scheme. We give a (4+ε)-competitive algorithm for the oblivious variant based on a framework inspired by the worst-case lower-bound instance. For the makespan objective, we give an FPTAS for the offline variant and a (2+ε)-competitive algorithm for the oblivious variant. Our algorithms for the oblivious variants under both objectives run in time 𝒪(poly(n/ε)). Lastly, we show that our results are essentially optimal by providing matching lower bounds.

Cite as

Christoph Damerius, Peter Kling, Minming Li, Chenyang Xu, and Ruilong Zhang. Scheduling with a Limited Testing Budget: Tight Results for the Offline and Oblivious Settings. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 38:1-38:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


@InProceedings{damerius_et_al:LIPIcs.ESA.2023.38,
  author =	{Damerius, Christoph and Kling, Peter and Li, Minming and Xu, Chenyang and Zhang, Ruilong},
  title =	{{Scheduling with a Limited Testing Budget: Tight Results for the Offline and Oblivious Settings}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{38:1--38:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.38},
  URN =		{urn:nbn:de:0030-drops-186915},
  doi =		{10.4230/LIPIcs.ESA.2023.38},
  annote =	{Keywords: scheduling, total completion time, makespan, LP rounding, competitive analysis, approximation algorithm, NP hardness, PTAS}
}
Document
A (3/2 + ε)-Approximation for Multiple TSP with a Variable Number of Depots

Authors: Max Deppert, Matthias Kaul, and Matthias Mnich


Abstract
One of the most studied extensions of the famous Traveling Salesperson Problem (TSP) is the Multiple TSP: a set of m ≥ 1 salespersons collectively traverses a set of n cities by m non-trivial tours, to minimize the total length of their tours. This problem can also be considered to be a variant of Uncapacitated Vehicle Routing, where the objective is to minimize the sum of all tour lengths. When all m tours start from and end at a single common depot v₀, then the metric Multiple TSP can be approximated as well as the standard metric TSP, as shown by Frieze (1983). The metric Multiple TSP becomes significantly harder to approximate when there is a set D of d ≥ 1 depots that form the starting and end points of the m tours. For this case, only a (2-1/d)-approximation in polynomial time is known, as well as a 3/2-approximation for constant d, which requires a prohibitive run time of n^Θ(d) (Xu and Rodrigues, INFORMS J. Comput., 2015). A recent work of Traub, Vygen and Zenklusen (STOC 2020) gives another approximation algorithm for metric Multiple TSP with run time n^Θ(d), which reduces the problem to approximating metric TSP. In this paper we overcome the n^Θ(d) time barrier: we give the first efficient approximation algorithm for Multiple TSP with a variable number d of depots that yields a better-than-2 approximation. Our algorithm runs in time (1/ε)^O(d log d) ⋅ n^O(1), and produces a (3/2+ε)-approximation with constant probability. For the graphic case, we obtain a deterministic 3/2-approximation in time 2^d ⋅ n^O(1).

Cite as

Max Deppert, Matthias Kaul, and Matthias Mnich. A (3/2 + ε)-Approximation for Multiple TSP with a Variable Number of Depots. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 39:1-39:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{deppert_et_al:LIPIcs.ESA.2023.39,
  author =	{Deppert, Max and Kaul, Matthias and Mnich, Matthias},
  title =	{{A (3/2 + \epsilon)-Approximation for Multiple TSP with a Variable Number of Depots}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{39:1--39:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.39},
  URN =		{urn:nbn:de:0030-drops-186925},
  doi =		{10.4230/LIPIcs.ESA.2023.39},
  annote =	{Keywords: Traveling salesperson problem, rural postperson problem, multiple TSP, vehicle routing}
}
Document
Efficient Parallel Output-Sensitive Edit Distance

Authors: Xiangyun Ding, Xiaojun Dong, Yan Gu, Youzhe Liu, and Yihan Sun


Abstract
In this paper, we study efficient parallel edit distance algorithms, both in theory and in practice. Given two strings A[1..n] and B[1..m], and a set of operations allowed to edit the strings, the edit distance between A and B is the minimum number of operations required to transform A into B. In this paper, we use edit distance to refer to the Levenshtein distance, which allows for unit-cost single-character edits (insertions, deletions, substitutions). Sequentially, a standard Dynamic Programming (DP) algorithm solves edit distance with Θ(nm) cost. In many real-world applications, the strings to be compared are similar to each other and have small edit distances. To achieve highly practical implementations, we focus on output-sensitive parallel edit-distance algorithms, i.e., to achieve asymptotically better cost bounds than the standard Θ(nm) algorithm when the edit distance is small. We study four algorithms in the paper, including three algorithms based on Breadth-First Search (BFS), and one algorithm based on Divide-and-Conquer (DaC). Our BFS-based solution is based on the Landau-Vishkin algorithm. We implement three different data structures for the longest common prefix (LCP) queries needed in the algorithm: the classic solution using parallel suffix array, and two hash-based solutions proposed in this paper. Our DaC-based solution is inspired by the output-insensitive solution proposed by Apostolico et al., and we propose a non-trivial adaption to make it output-sensitive. All of the algorithms studied in this paper have good theoretical guarantees, and they achieve different tradeoffs between work (total number of operations), span (longest dependence chain in the computation), and space. We test and compare our algorithms on both synthetic data and real-world data, including DNA sequences, Wikipedia texts, GitHub repositories, etc. Our BFS-based algorithms outperform the existing parallel edit-distance implementation in ParlayLib in all test cases. On cases with fewer than 10⁵ edits, our algorithm can process input sequences of size 10⁹ in about ten seconds, while ParlayLib can only process sequences of sizes up to 10⁶ in the same amount of time. By comparing our algorithms, we also provide a better understanding of the choice of algorithms for different input patterns. We believe that our paper is the first systematic study in the theory and practice of parallel edit distance.

Cite as

Xiangyun Ding, Xiaojun Dong, Yan Gu, Youzhe Liu, and Yihan Sun. Efficient Parallel Output-Sensitive Edit Distance. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 40:1-40:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{ding_et_al:LIPIcs.ESA.2023.40,
  author =	{Ding, Xiangyun and Dong, Xiaojun and Gu, Yan and Liu, Youzhe and Sun, Yihan},
  title =	{{Efficient Parallel Output-Sensitive Edit Distance}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{40:1--40:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.40},
  URN =		{urn:nbn:de:0030-drops-186935},
  doi =		{10.4230/LIPIcs.ESA.2023.40},
  annote =	{Keywords: Edit Distance, Parallel Algorithms, String Algorithms, Dynamic Programming, Pattern Matching}
}
Document
Efficient 1-Laplacian Solvers for Well-Shaped Simplicial Complexes: Beyond Betti Numbers and Collapsing Sequences

Authors: Ming Ding and Peng Zhang


Abstract
We present efficient algorithms for solving systems of linear equations in 1-Laplacians of well-shaped simplicial complexes. 1-Laplacians, or higher-dimensional Laplacians, generalize graph Laplacians to higher-dimensional simplicial complexes and play a key role in computational topology and topological data analysis. Previously, nearly-linear time solvers were developed for simplicial complexes with known collapsing sequences and bounded Betti numbers, such as those triangulating a three-ball in ℝ³ (Cohen, Fasy, Miller, Nayyeri, Peng, and Walkington [SODA'2014], Black, Maxwell, Nayyeri, and Winkelman [SODA'2022], Black and Nayyeri [ICALP'2022]). Furthermore, Nested Dissection provides quadratic time solvers for more general systems with nonzero structures representing well-shaped simplicial complexes embedded in ℝ³. We generalize the specialized solvers for 1-Laplacians to simplicial complexes with additional geometric structures but without collapsing sequences and bounded Betti numbers, and we improve the runtime of Nested Dissection. We focus on simplicial complexes that meet two conditions: (1) each individual simplex has a bounded aspect ratio, and (2) they can be divided into "disjoint" and balanced regions with well-shaped interiors and boundaries. Our solvers draw inspiration from the Incomplete Nested Dissection for stiffness matrices of well-shaped trusses (Kyng, Peng, Schwieterman, and Zhang [STOC'2018]).

Cite as

Ming Ding and Peng Zhang. Efficient 1-Laplacian Solvers for Well-Shaped Simplicial Complexes: Beyond Betti Numbers and Collapsing Sequences. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 41:1-41:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{ding_et_al:LIPIcs.ESA.2023.41,
  author =	{Ding, Ming and Zhang, Peng},
  title =	{{Efficient 1-Laplacian Solvers for Well-Shaped Simplicial Complexes: Beyond Betti Numbers and Collapsing Sequences}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{41:1--41:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.41},
  URN =		{urn:nbn:de:0030-drops-186947},
  doi =		{10.4230/LIPIcs.ESA.2023.41},
  annote =	{Keywords: 1-Laplacian Solvers, Simplicial Complexes, Incomplete Nested Dissection}
}
Document
Optimal Energetic Paths for Electric Cars

Authors: Dani Dorfman, Haim Kaplan, Robert E. Tarjan, and Uri Zwick


Abstract
A weighted directed graph G = (V,A,c), where A ⊆ V× V and c:A → ℝ, naturally describes a road network in which an electric car, or vehicle (EV), can roam. An arc uv ∈ A models a road segment connecting the two vertices (junctions) u and v. The cost c(uv) of the arc uv is the amount of energy the car needs to travel from u to v. This amount can be positive, zero or negative. We consider both the more realistic scenario where there are no negative cycles in the graph, as well as the more challenging scenario, which can also be motivated, where negative cycles may be present. The electric car has a battery that can store up to B units of energy. The car can traverse an arc uv ∈ A only if it is at u and the charge b in its battery satisfies b ≥ c(uv). If the car traverses the arc uv then it reaches v with a charge of min{b-c(uv),B} in its battery. Arcs with a positive cost deplete the battery while arcs with negative costs may charge the battery, but not above its capacity of B. If the car is at a vertex u and cannot traverse any outgoing arcs of u, then it is stuck and cannot continue traveling. We consider the following natural problem: Given two vertices s,t ∈ V, can the car travel from s to t, starting at s with an initial charge b, where 0 ≤ b ≤ B? If so, what is the maximum charge with which the car can reach t? Equivalently, what is the smallest depletion δ_{B,b}(s,t) such that the car can reach t with a charge of b-δ_{B,b}(s,t) in its battery, and which path should the car follow to achieve this? We also refer to δ_{B,b}(s,t) as the energetic cost of traveling from s to t. We let δ_{B,b}(s,t) = ∞ if the car cannot travel from s to t starting with an initial charge of b. The problem of computing energetic costs is a strict generalization of the standard shortest paths problem. When there are no negative cycles, the single-source version of the problem can be solved using simple adaptations of the classical Bellman-Ford and Dijkstra algorithms. More involved algorithms are required when the graph may contain negative cycles.

Cite as

Dani Dorfman, Haim Kaplan, Robert E. Tarjan, and Uri Zwick. Optimal Energetic Paths for Electric Cars. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 42:1-42:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{dorfman_et_al:LIPIcs.ESA.2023.42,
  author =	{Dorfman, Dani and Kaplan, Haim and Tarjan, Robert E. and Zwick, Uri},
  title =	{{Optimal Energetic Paths for Electric Cars}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{42:1--42:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.42},
  URN =		{urn:nbn:de:0030-drops-186955},
  doi =		{10.4230/LIPIcs.ESA.2023.42},
  annote =	{Keywords: Electric cars, Optimal Paths, Battery depletion}
}
Document
Evaluating Restricted First-Order Counting Properties on Nowhere Dense Classes and Beyond

Authors: Jan Dreier, Daniel Mock, and Peter Rossmanith


Abstract
It is known that first-order logic with some counting extensions can be efficiently evaluated on graph classes with bounded expansion, where depth-r minors have constant density. More precisely, the formulas are ∃ x₁… x_k#y φ(x_1,…,x_k, y) > N, where φ is an FO-formula. If φ is quantifier-free, we can extend this result to nowhere dense graph classes with an almost linear FPT run time. Lifting this result further to slightly more general graph classes, namely almost nowhere dense classes, where the size of depth-r clique minors is subpolynomial, is impossible unless FPT = W[1]. On the other hand, in almost nowhere dense classes we can approximate such counting formulas with a small additive error. Note those counting formulas are contained in FOC({>}) but not FOC₁(𝐏). In particular, it follows that partial covering problems, such as partial dominating set, have fixed parameter algorithms on nowhere dense graph classes with almost linear running time.

Cite as

Jan Dreier, Daniel Mock, and Peter Rossmanith. Evaluating Restricted First-Order Counting Properties on Nowhere Dense Classes and Beyond. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 43:1-43:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{dreier_et_al:LIPIcs.ESA.2023.43,
  author =	{Dreier, Jan and Mock, Daniel and Rossmanith, Peter},
  title =	{{Evaluating Restricted First-Order Counting Properties on Nowhere Dense Classes and Beyond}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{43:1--43:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.43},
  URN =		{urn:nbn:de:0030-drops-186961},
  doi =		{10.4230/LIPIcs.ESA.2023.43},
  annote =	{Keywords: nowhere dense, sparsity, counting logic, dominating set, FPT}
}
Document
Online Algorithms with Randomly Infused Advice

Authors: Yuval Emek, Yuval Gil, Maciej Pacut, and Stefan Schmid


Abstract
We introduce a novel method for the rigorous quantitative evaluation of online algorithms that relaxes the "radical worst-case" perspective of classic competitive analysis. In contrast to prior work, our method, referred to as randomly infused advice (RIA), does not make any assumptions about the input sequence and does not rely on the development of designated online algorithms. Rather, it can be applied to existing online randomized algorithms, introducing a means to evaluate their performance in scenarios that lie outside the radical worst-case regime. More concretely, an online algorithm ALG with RIA benefits from pieces of advice generated by an omniscient but not entirely reliable oracle. The crux of the new method is that the advice is provided to ALG by writing it into the buffer ℬ from which ALG normally reads its random bits, hence allowing us to augment it through a very simple and non-intrusive interface. The (un)reliability of the oracle is captured via a parameter 0 ≤ α ≤ 1 that determines the probability (per round) that the advice is successfully infused by the oracle; if the advice is not infused, which occurs with probability 1 - α, then the buffer ℬ contains fresh random bits (as in the classic online setting). The applicability of the new RIA method is demonstrated by applying it to three extensively studied online problems: paging, uniform metrical task systems, and online set cover. For these problems, we establish new upper bounds on the competitive ratio of classic online algorithms that improve as the infusion parameter α increases. These are complemented with (often tight) lower bounds on the competitive ratio of online algorithms with RIA for the three problems.

Cite as

Yuval Emek, Yuval Gil, Maciej Pacut, and Stefan Schmid. Online Algorithms with Randomly Infused Advice. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 44:1-44:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{emek_et_al:LIPIcs.ESA.2023.44,
  author =	{Emek, Yuval and Gil, Yuval and Pacut, Maciej and Schmid, Stefan},
  title =	{{Online Algorithms with Randomly Infused Advice}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{44:1--44:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.44},
  URN =		{urn:nbn:de:0030-drops-186970},
  doi =		{10.4230/LIPIcs.ESA.2023.44},
  annote =	{Keywords: Online algorithms, competitive analysis, advice}
}
Document
The Lawn Mowing Problem: From Algebra to Algorithms

Authors: Sándor P. Fekete, Dominik Krupke, Michael Perk, Christian Rieck, and Christian Scheffer


Abstract
For a given polygonal region P, the Lawn Mowing Problem (LMP) asks for a shortest tour T that gets within Euclidean distance 1/2 of every point in P; this is equivalent to computing a shortest tour for a unit-diameter cutter C that covers all of P. As a generalization of the Traveling Salesman Problem, the LMP is NP-hard; unlike the discrete TSP, however, the LMP has defied efforts to achieve exact solutions, due to its combination of combinatorial complexity with continuous geometry. We provide a number of new contributions that provide insights into the involved difficulties, as well as positive results that enable both theoretical and practical progress. (1) We show that the LMP is algebraically hard: it is not solvable by radicals over the field of rationals, even for the simple case in which P is a 2×2 square. This implies that it is impossible to compute exact optimal solutions under models of computation that rely on elementary arithmetic operations and the extraction of kth roots, and explains the perceived practical difficulty. (2) We exploit this algebraic analysis for the natural class of polygons with axis-parallel edges and integer vertices (i.e., polyominoes), highlighting the relevance of turn-cost minimization for Lawn Mowing tours, and leading to a general construction method for feasible tours. (3) We show that this construction method achieves theoretical worst-case guarantees that improve previous approximation factors for polyominoes. (4) We demonstrate the practical usefulness beyond polyominoes by performing an extensive practical study on a spectrum of more general benchmark polygons: We obtain solutions that are better than the previous best values by Fekete et al., for instance sizes up to 20 times larger.

Cite as

Sándor P. Fekete, Dominik Krupke, Michael Perk, Christian Rieck, and Christian Scheffer. The Lawn Mowing Problem: From Algebra to Algorithms. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 45:1-45:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{fekete_et_al:LIPIcs.ESA.2023.45,
  author =	{Fekete, S\'{a}ndor P. and Krupke, Dominik and Perk, Michael and Rieck, Christian and Scheffer, Christian},
  title =	{{The Lawn Mowing Problem: From Algebra to Algorithms}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{45:1--45:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.45},
  URN =		{urn:nbn:de:0030-drops-186985},
  doi =		{10.4230/LIPIcs.ESA.2023.45},
  annote =	{Keywords: Geometric optimization, covering problems, tour problems, lawn mowing, algebraic hardness, approximation algorithms, algorithm engineering}
}
Document
Learned Monotone Minimal Perfect Hashing

Authors: Paolo Ferragina, Hans-Peter Lehmann, Peter Sanders, and Giorgio Vinciguerra


Abstract
A Monotone Minimal Perfect Hash Function (MMPHF) constructed on a set S of keys is a function that maps each key in S to its rank. On keys not in S, the function returns an arbitrary value. Applications range from databases, search engines, data encryption, to pattern-matching algorithms. In this paper, we describe LeMonHash, a new technique for constructing MMPHFs for integers. The core idea of LeMonHash is surprisingly simple and effective: we learn a monotone mapping from keys to their rank via an error-bounded piecewise linear model (the PGM-index), and then we solve the collisions that might arise among keys mapping to the same rank estimate by associating small integers with them in a retrieval data structure (BuRR). On synthetic random datasets, LeMonHash needs 34% less space than the next larger competitor, while achieving about 16 times faster queries. On real-world datasets, the space usage is very close to or much better than the best competitors, while achieving up to 19 times faster queries than the next larger competitor. As far as the construction of LeMonHash is concerned, we get an improvement by a factor of up to 2, compared to the competitor with the next best space usage. We also investigate the case of keys being variable-length strings, introducing the so-called LeMonHash-VL: it needs space within 13% of the best competitors while achieving up to 3 times faster queries than the next larger competitor.

Cite as

Paolo Ferragina, Hans-Peter Lehmann, Peter Sanders, and Giorgio Vinciguerra. Learned Monotone Minimal Perfect Hashing. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 46:1-46:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{ferragina_et_al:LIPIcs.ESA.2023.46,
  author =	{Ferragina, Paolo and Lehmann, Hans-Peter and Sanders, Peter and Vinciguerra, Giorgio},
  title =	{{Learned Monotone Minimal Perfect Hashing}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{46:1--46:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.46},
  URN =		{urn:nbn:de:0030-drops-186990},
  doi =		{10.4230/LIPIcs.ESA.2023.46},
  annote =	{Keywords: compressed data structure, monotone minimal perfect hashing, retrieval}
}
Document
Correlating Theory and Practice in Finding Clubs and Plexes

Authors: Aleksander Figiel, Tomohiro Koana, André Nichterlein, and Niklas Wünsche


Abstract
For solving NP-hard problems there is often a huge gap between theoretical guarantees and observed running times on real-world instances. As a first step towards tackling this issue, we propose an approach to quantify the correlation between theoretical and observed running times. We use two NP-hard problems related to finding large "cliquish" subgraphs in a given graph as demonstration of this measure. More precisely, we focus on finding maximum s-clubs and s-plexes, i. e., graphs of diameter s and graphs where each vertex is adjacent to all but s vertices. Preprocessing based on Turing kernelization is a standard tool to tackle these problems, especially on sparse graphs. We provide a parameterized analysis for the Turing kernelization and demonstrate their usefulness in practice. Moreover, we demonstrate that our measure indeed captures the correlation between these new theoretical and the observed running times.

Cite as

Aleksander Figiel, Tomohiro Koana, André Nichterlein, and Niklas Wünsche. Correlating Theory and Practice in Finding Clubs and Plexes. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 47:1-47:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{figiel_et_al:LIPIcs.ESA.2023.47,
  author =	{Figiel, Aleksander and Koana, Tomohiro and Nichterlein, Andr\'{e} and W\"{u}nsche, Niklas},
  title =	{{Correlating Theory and Practice in Finding Clubs and Plexes}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{47:1--47:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.47},
  URN =		{urn:nbn:de:0030-drops-187000},
  doi =		{10.4230/LIPIcs.ESA.2023.47},
  annote =	{Keywords: Preprocessing, Turing kernelization, Pearson correlation coefficient}
}
Document
Kernelization for Spreading Points

Authors: Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, Saket Saurabh, and Meirav Zehavi


Abstract
We consider the following problem about dispersing points. Given a set of points in the plane, the task is to identify whether by moving a small number of points by small distance, we can obtain an arrangement of points such that no pair of points is "close" to each other. More precisely, for a family of n points, an integer k, and a real number d > 0, we ask whether at most k points could be relocated, each point at distance at most d from its original location, such that the distance between each pair of points is at least a fixed constant, say 1. A number of approximation algorithms for variants of this problem, under different names like distant representatives, disk dispersing, or point spreading, are known in the literature. However, to the best of our knowledge, the parameterized complexity of this problem remains widely unexplored. We make the first step in this direction by providing a kernelization algorithm that, in polynomial time, produces an equivalent instance with 𝒪(d²k³) points. As a byproduct of this result, we also design a non-trivial fixed-parameter tractable (FPT) algorithm for the problem, parameterized by k and d. Finally, we complement the result about polynomial kernelization by showing a lower bound that rules out the existence of a kernel whose size is polynomial in k alone, unless NP ⊆ coNP/poly.

Cite as

Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, Saket Saurabh, and Meirav Zehavi. Kernelization for Spreading Points. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 48:1-48:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.ESA.2023.48,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Inamdar, Tanmay and Saurabh, Saket and Zehavi, Meirav},
  title =	{{Kernelization for Spreading Points}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{48:1--48:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.48},
  URN =		{urn:nbn:de:0030-drops-187017},
  doi =		{10.4230/LIPIcs.ESA.2023.48},
  annote =	{Keywords: parameterized algorithms, kernelization, spreading points, distant representatives, unit disk packing}
}
Document
Lossy Kernelization for (Implicit) Hitting Set Problems

Authors: Fedor V. Fomin, Tien-Nam Le, Daniel Lokshtanov, Saket Saurabh, Stéphan Thomassé, and Meirav Zehavi


Abstract
We re-visit the complexity of polynomial time pre-processing (kernelization) for the d-Hitting Set problem. This is one of the most classic problems in Parameterized Complexity by itself, and, furthermore, it encompasses several other of the most well-studied problems in this field, such as Vertex Cover, Feedback Vertex Set in Tournaments (FVST) and Cluster Vertex Deletion (CVD). In fact, d-Hitting Set encompasses any deletion problem to a hereditary property that can be characterized by a finite set of forbidden induced subgraphs. With respect to bit size, the kernelization complexity of d-Hitting Set is essentially settled: there exists a kernel with 𝒪(k^d) bits (𝒪(k^d) sets and 𝒪(k^{d-1}) elements) and this it tight by the result of Dell and van Melkebeek [STOC 2010, JACM 2014]. Still, the question of whether there exists a kernel for d-Hitting Set with fewer elements has remained one of the most major open problems in Kernelization. In this paper, we first show that if we allow the kernelization to be lossy with a qualitatively better loss than the best possible approximation ratio of polynomial time approximation algorithms, then one can obtain kernels where the number of elements is linear for every fixed d. Further, based on this, we present our main result: we show that there exist approximate Turing kernelizations for d-Hitting Set that even beat the established bit-size lower bounds for exact kernelizations - in fact, we use a constant number of oracle calls, each with "near linear" (𝒪(k^{1+ε})) bit size, that is, almost the best one could hope for. Lastly, for two special cases of implicit 3-Hitting set, namely, FVST and CVD, we obtain the "best of both worlds" type of results - (1+ε)-approximate kernelizations with a linear number of vertices. In terms of size, this substantially improves the exact kernels of Fomin et al. [SODA 2018, TALG 2019], with simpler arguments.

Cite as

Fedor V. Fomin, Tien-Nam Le, Daniel Lokshtanov, Saket Saurabh, Stéphan Thomassé, and Meirav Zehavi. Lossy Kernelization for (Implicit) Hitting Set Problems. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.ESA.2023.49,
  author =	{Fomin, Fedor V. and Le, Tien-Nam and Lokshtanov, Daniel and Saurabh, Saket and Thomass\'{e}, St\'{e}phan and Zehavi, Meirav},
  title =	{{Lossy Kernelization for (Implicit) Hitting Set Problems}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.49},
  URN =		{urn:nbn:de:0030-drops-187020},
  doi =		{10.4230/LIPIcs.ESA.2023.49},
  annote =	{Keywords: Hitting Set, Lossy Kernelization}
}
Document
Bootstrapping Dynamic Distance Oracles

Authors: Sebastian Forster, Gramoz Goranci, Yasamin Nazari, and Antonis Skarlatos


Abstract
Designing approximate all-pairs distance oracles in the fully dynamic setting is one of the central problems in dynamic graph algorithms. Despite extensive research on this topic, the first result breaking the O(√n) barrier on the update time for any non-trivial approximation was introduced only recently by Forster, Goranci and Henzinger [SODA'21] who achieved m^{1/ρ+o(1)} amortized update time with a O(log n)^{3ρ-2} factor in the approximation ratio, for any parameter ρ ≥ 1. In this paper, we give the first constant-stretch fully dynamic distance oracle with small polynomial update and query time. Prior work required either at least a poly-logarithmic approximation or much larger update time. Our result gives a more fine-grained trade-off between stretch and update time, for instance we can achieve constant stretch of O(1/(ρ²))^{4/ρ} in amortized update time Õ(n^{ρ}), and query time Õ(n^{ρ/8}) for any constant parameter 0 < ρ < 1. Our algorithm is randomized and assumes an oblivious adversary. A core technical idea underlying our construction is to design a black-box reduction from decremental approximate hub-labeling schemes to fully dynamic distance oracles, which may be of independent interest. We then apply this reduction repeatedly to an existing decremental algorithm to bootstrap our fully dynamic solution.

Cite as

Sebastian Forster, Gramoz Goranci, Yasamin Nazari, and Antonis Skarlatos. Bootstrapping Dynamic Distance Oracles. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 50:1-50:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{forster_et_al:LIPIcs.ESA.2023.50,
  author =	{Forster, Sebastian and Goranci, Gramoz and Nazari, Yasamin and Skarlatos, Antonis},
  title =	{{Bootstrapping Dynamic Distance Oracles}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{50:1--50:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.50},
  URN =		{urn:nbn:de:0030-drops-187031},
  doi =		{10.4230/LIPIcs.ESA.2023.50},
  annote =	{Keywords: Dynamic graph algorithms, Distance Oracles, Shortest Paths}
}
Document
A Sweep-Plane Algorithm for Calculating the Isolation of Mountains

Authors: Daniel Funke, Nicolai Hüning, and Peter Sanders


Abstract
One established metric to classify the significance of a mountain peak is its isolation. It specifies the distance between a peak and the closest point of higher elevation. Peaks with high isolation dominate their surroundings and provide a nice view from the top. With the availability of worldwide Digital Elevation Models (DEMs), the isolation of all mountain peaks can be computed automatically. Previous algorithms run in worst case time that is quadratic in the input size. We present a novel sweep-plane algorithm that runs in time 𝒪(nlog n+pT_NN) where n is the input size, p the number of considered peaks and T_NN the time for a 2D nearest-neighbor query in an appropriate geometric search tree. We refine this to a two-level approach that has high locality and good parallel scalability. Our implementation reduces the time for calculating the isolation of every peak on Earth from hours to minutes while improving precision.

Cite as

Daniel Funke, Nicolai Hüning, and Peter Sanders. A Sweep-Plane Algorithm for Calculating the Isolation of Mountains. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 51:1-51:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{funke_et_al:LIPIcs.ESA.2023.51,
  author =	{Funke, Daniel and H\"{u}ning, Nicolai and Sanders, Peter},
  title =	{{A Sweep-Plane Algorithm for Calculating the Isolation of Mountains}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{51:1--51:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.51},
  URN =		{urn:nbn:de:0030-drops-187040},
  doi =		{10.4230/LIPIcs.ESA.2023.51},
  annote =	{Keywords: computational geometry, Geo-information systems, sweepline algorithms}
}
Document
A Tight Competitive Ratio for Online Submodular Welfare Maximization

Authors: Amit Ganz, Pranav Nuti, and Roy Schwartz


Abstract
In this paper we consider the online Submodular Welfare (SW) problem. In this problem we are given n bidders each equipped with a general non-negative (not necessarily monotone) submodular utility and m items that arrive online. The goal is to assign each item, once it arrives, to a bidder or discard it, while maximizing the sum of utilities. When an adversary determines the items' arrival order we present a simple randomized algorithm that achieves a tight competitive ratio of 1/4. The algorithm is a specialization of an algorithm due to [Harshaw-Kazemi-Feldman-Karbasi MOR`22], who presented the previously best known competitive ratio of 3-2√2≈ 0.171573 to the problem. When the items' arrival order is uniformly random, we present a competitive ratio of ≈ 0.27493, improving the previously known 1/4 guarantee. Our approach for the latter result is based on a better analysis of the (offline) Residual Random Greedy (RRG) algorithm of [Buchbinder-Feldman-Naor-Schwartz SODA`14], which we believe might be of independent interest.

Cite as

Amit Ganz, Pranav Nuti, and Roy Schwartz. A Tight Competitive Ratio for Online Submodular Welfare Maximization. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 52:1-52:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{ganz_et_al:LIPIcs.ESA.2023.52,
  author =	{Ganz, Amit and Nuti, Pranav and Schwartz, Roy},
  title =	{{A Tight Competitive Ratio for Online Submodular Welfare Maximization}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{52:1--52:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.52},
  URN =		{urn:nbn:de:0030-drops-187052},
  doi =		{10.4230/LIPIcs.ESA.2023.52},
  annote =	{Keywords: Online Algorithms, Submodular Maximization, Welfare Maximization, Approximation Algorithms}
}
Document
First Order Logic and Twin-Width in Tournaments

Authors: Colin Geniet and Stéphan Thomassé


Abstract
We characterise the classes of tournaments with tractable first-order model checking. For every hereditary class of tournaments T, first-order model checking either is fixed parameter tractable, or is AW[*]-hard. This dichotomy coincides with the fact that T has either bounded or unbounded twin-width, and that the growth of T is either at most exponential or at least factorial. From the model-theoretic point of view, we show that NIP classes of tournaments coincide with bounded twin-width. Twin-width is also characterised by three infinite families of obstructions: T has bounded twin-width if and only if it excludes at least one tournament from each family. This generalises results of Bonnet et al. on ordered graphs. The key for these results is a polynomial time algorithm which takes as input a tournament T and computes a linear order < on V(T) such that the twin-width of the birelation (T, <) is at most some function of the twin-width of T. Since approximating twin-width can be done in FPT time for an ordered structure (T, <), this provides a FPT approximation of twin-width for tournaments.

Cite as

Colin Geniet and Stéphan Thomassé. First Order Logic and Twin-Width in Tournaments. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 53:1-53:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{geniet_et_al:LIPIcs.ESA.2023.53,
  author =	{Geniet, Colin and Thomass\'{e}, St\'{e}phan},
  title =	{{First Order Logic and Twin-Width in Tournaments}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{53:1--53:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.53},
  URN =		{urn:nbn:de:0030-drops-187061},
  doi =		{10.4230/LIPIcs.ESA.2023.53},
  annote =	{Keywords: Tournaments, twin-width, first-order logic, model checking, NIP, small classes}
}
Document
Improved Approximation Algorithms for the Expanding Search Problem

Authors: Svenja M. Griesbach, Felix Hommelsheim, Max Klimm, and Kevin Schewior


Abstract
A searcher faces a graph with edge lengths and vertex weights, initially having explored only a given starting vertex. In each step, the searcher adds an edge to the solution that connects an unexplored vertex to an explored vertex. This requires an amount of time equal to the edge length. The goal is to minimize the weighted sum of the exploration times over all vertices. We show that this problem is hard to approximate and provide algorithms with improved approximation guarantees. For the general case, we give a (2e+ε)-approximation for any ε > 0. For the case that all vertices have unit weight, we provide a 2e-approximation. Finally, we provide a PTAS for the case of a Euclidean graph. Previously, for all cases only an 8-approximation was known.

Cite as

Svenja M. Griesbach, Felix Hommelsheim, Max Klimm, and Kevin Schewior. Improved Approximation Algorithms for the Expanding Search Problem. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 54:1-54:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{griesbach_et_al:LIPIcs.ESA.2023.54,
  author =	{Griesbach, Svenja M. and Hommelsheim, Felix and Klimm, Max and Schewior, Kevin},
  title =	{{Improved Approximation Algorithms for the Expanding Search Problem}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{54:1--54:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.54},
  URN =		{urn:nbn:de:0030-drops-187073},
  doi =		{10.4230/LIPIcs.ESA.2023.54},
  annote =	{Keywords: Approximation Algorithm, Expanding Search, Search Problem, Graph Exploration, Traveling Repairperson Problem}
}
Document
Noisy k-Means++ Revisited

Authors: Christoph Grunau, Ahmet Alper Özüdoğru, and Václav Rozhoň


Abstract
The k-means++ algorithm by Arthur and Vassilvitskii [SODA 2007] is a classical and time-tested algorithm for the k-means problem. While being very practical, the algorithm also has good theoretical guarantees: its solution is O(log k)-approximate, in expectation. In a recent work, Bhattacharya, Eube, Roglin, and Schmidt [ESA 2020] considered the following question: does the algorithm retain its guarantees if we allow for a slight adversarial noise in the sampling probability distributions used by the algorithm? This is motivated e.g. by the fact that computations with real numbers in k-means++ implementations are inexact. Surprisingly, the analysis under this scenario gets substantially more difficult and the authors were able to prove only a weaker approximation guarantee of O(log² k). In this paper, we close the gap by providing a tight, O(log k)-approximate guarantee for the k-means++ algorithm with noise.

Cite as

Christoph Grunau, Ahmet Alper Özüdoğru, and Václav Rozhoň. Noisy k-Means++ Revisited. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 55:1-55:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{grunau_et_al:LIPIcs.ESA.2023.55,
  author =	{Grunau, Christoph and \"{O}z\"{u}do\u{g}ru, Ahmet Alper and Rozho\v{n}, V\'{a}clav},
  title =	{{Noisy k-Means++ Revisited}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{55:1--55:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.55},
  URN =		{urn:nbn:de:0030-drops-187080},
  doi =		{10.4230/LIPIcs.ESA.2023.55},
  annote =	{Keywords: clustering, k-means, k-means++, adversarial noise}
}
Document
Convergence to Lexicographically Optimal Base in a (Contra)Polymatroid and Applications to Densest Subgraph and Tree Packing

Authors: Elfarouk Harb, Kent Quanrud, and Chandra Chekuri


Abstract
Boob et al. [Boob et al., 2020] described an iterative peeling algorithm called Greedy++ for the Densest Subgraph Problem (DSG) and conjectured that it converges to an optimum solution. Chekuri, Qaunrud and Torres [Chandra Chekuri et al., 2022] extended the algorithm to supermodular density problems (of which DSG is a special case) and proved that the resulting algorithm Super-Greedy++ (and hence also Greedy++) converges. In this paper we revisit the convergence proof and provide a different perspective. This is done via a connection to Fujishige’s quadratic program for finding a lexicographically optimal base in a (contra) polymatroid [Satoru Fujishige, 1980], and a noisy version of the Frank-Wolfe method from convex optimization [Frank and Wolfe, 1956; Jaggi, 2013]. This yields a simpler convergence proof, and also shows a stronger property that Super-Greedy++ converges to the optimal dense decomposition vector, answering a question raised in Harb et al. [Harb et al., 2022]. A second contribution of the paper is to understand Thorup’s work on ideal tree packing and greedy tree packing [Thorup, 2007; Thorup, 2008] via the Frank-Wolfe algorithm applied to find a lexicographically optimum base in the graphic matroid. This yields a simpler and transparent proof. The two results appear disparate but are unified via Fujishige’s result and convex optimization.

Cite as

Elfarouk Harb, Kent Quanrud, and Chandra Chekuri. Convergence to Lexicographically Optimal Base in a (Contra)Polymatroid and Applications to Densest Subgraph and Tree Packing. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 56:1-56:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{harb_et_al:LIPIcs.ESA.2023.56,
  author =	{Harb, Elfarouk and Quanrud, Kent and Chekuri, Chandra},
  title =	{{Convergence to Lexicographically Optimal Base in a (Contra)Polymatroid and Applications to Densest Subgraph and Tree Packing}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{56:1--56:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.56},
  URN =		{urn:nbn:de:0030-drops-187091},
  doi =		{10.4230/LIPIcs.ESA.2023.56},
  annote =	{Keywords: Polymatroid, lexicographically optimum base, densest subgraph, tree packing}
}
Document
Algorithms for Matrix Multiplication via Sampling and Opportunistic Matrix Multiplication

Authors: David G. Harris


Abstract
Karppa & Kaski (2019) proposed a novel type of "broken" or "opportunistic" multiplication algorithm, based on a variant of Strassen’s algorithm, and used this to develop new algorithms for Boolean matrix multiplication, among other tasks. For instance, their algorithm can compute Boolean matrix multiplication in O(n^log₂(6+6/7) log n) = O(n^2.778) time. While faster matrix multiplication algorithms exist asymptotically, in practice most such algorithms are infeasible for practical problems. We describe an alternative way to use the broken matrix multiplication algorithm to approximately compute matrix multiplication, either for real-valued matrices or Boolean matrices. In brief, instead of running multiple iterations of the broken algorithm on the original input matrix, we form a new larger matrix by sampling and run a single iteration of the broken algorithm. Asymptotically, the resulting algorithm has runtime O(n^{(3 log6)/log7} log n) ≤ O(n^2.763), a slight improvement of Karppa-Kaski’s algorithm. Since the goal is to obtain new practical matrix-multiplication algorithms, these asymptotic runtime bounds are not directly useful. We estimate the runtime for our algorithm for some sample problems which are at the upper limits of practical algorithms; it appears that for these parameters, further optimizations are still needed to make our algorithm competitive.

Cite as

David G. Harris. Algorithms for Matrix Multiplication via Sampling and Opportunistic Matrix Multiplication. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 57:1-57:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{harris:LIPIcs.ESA.2023.57,
  author =	{Harris, David G.},
  title =	{{Algorithms for Matrix Multiplication via Sampling and Opportunistic Matrix Multiplication}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{57:1--57:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.57},
  URN =		{urn:nbn:de:0030-drops-187104},
  doi =		{10.4230/LIPIcs.ESA.2023.57},
  annote =	{Keywords: Matrix multiplication, Boolean matrix multiplication, Strassen’s algorithmkeyword}
}
Document
Counting and Sampling Labeled Chordal Graphs in Polynomial Time

Authors: Úrsula Hébert-Johnson, Daniel Lokshtanov, and Eric Vigoda


Abstract
We present the first polynomial-time algorithm to exactly compute the number of labeled chordal graphs on n vertices. Our algorithm solves a more general problem: given n and ω as input, it computes the number of ω-colorable labeled chordal graphs on n vertices, using O(n⁷) arithmetic operations. A standard sampling-to-counting reduction then yields a polynomial-time exact sampler that generates an ω-colorable labeled chordal graph on n vertices uniformly at random. Our counting algorithm improves upon the previous best result by Wormald (1985), which computes the number of labeled chordal graphs on n vertices in time exponential in n. An implementation of the polynomial-time counting algorithm gives the number of labeled chordal graphs on up to 30 vertices in less than three minutes on a standard desktop computer. Previously, the number of labeled chordal graphs was only known for graphs on up to 15 vertices.

Cite as

Úrsula Hébert-Johnson, Daniel Lokshtanov, and Eric Vigoda. Counting and Sampling Labeled Chordal Graphs in Polynomial Time. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 58:1-58:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{hebertjohnson_et_al:LIPIcs.ESA.2023.58,
  author =	{H\'{e}bert-Johnson, \'{U}rsula and Lokshtanov, Daniel and Vigoda, Eric},
  title =	{{Counting and Sampling Labeled Chordal Graphs in Polynomial Time}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{58:1--58:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.58},
  URN =		{urn:nbn:de:0030-drops-187119},
  doi =		{10.4230/LIPIcs.ESA.2023.58},
  annote =	{Keywords: Counting algorithms, graph sampling, chordal graphs}
}
Document
Tight Algorithms for Connectivity Problems Parameterized by Clique-Width

Authors: Falko Hegerfeld and Stefan Kratsch


Abstract
The complexity of problems involving global constraints is usually much more difficult to understand than the complexity of problems only involving local constraints. In the realm of graph problems, connectivity constraints are a natural form of global constraints. We study connectivity problems from a fine-grained parameterized perspective. In a breakthrough result, Cygan et al. (TALG 2022) first obtained Monte-Carlo algorithms with single-exponential running time α^{tw} n^𝒪(1) for connectivity problems parameterized by treewidth by introducing the cut-and-count-technique, which reduces many connectivity problems to locally checkable counting problems. Furthermore, the obtained bases α were shown to be optimal under the Strong Exponential-Time Hypothesis (SETH). However, since only sparse graphs may admit small treewidth, we lack knowledge of the fine-grained complexity of connectivity problems with respect to dense structure. The most popular graph parameter to measure dense structure is arguably clique-width, which intuitively measures how easily a graph can be constructed by repeatedly adding bicliques. Bergougnoux and Kanté (TCS 2019) have shown, using the rank-based approach, that also parameterized by clique-width many connectivity problems admit single-exponential algorithms. Unfortunately, the obtained running times are far from optimal under SETH. We show how to obtain optimal running times parameterized by clique-width for two benchmark connectivity problems, namely Connected Vertex Cover and Connected Dominating Set. These are the first tight results for connectivity problems with respect to clique-width and these results are obtained by developing new algorithms based on the cut-and-count-technique and novel lower bound constructions. Precisely, we show that there exist one-sided error Monte-Carlo algorithms that given a k-clique-expression solve - Connected Vertex Cover in time 6^k n^𝒪(1), and - Connected Dominating Set in time 5^k n^𝒪(1). Both results are shown to be tight under SETH.

Cite as

Falko Hegerfeld and Stefan Kratsch. Tight Algorithms for Connectivity Problems Parameterized by Clique-Width. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 59:1-59:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{hegerfeld_et_al:LIPIcs.ESA.2023.59,
  author =	{Hegerfeld, Falko and Kratsch, Stefan},
  title =	{{Tight Algorithms for Connectivity Problems Parameterized by Clique-Width}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{59:1--59:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.59},
  URN =		{urn:nbn:de:0030-drops-187124},
  doi =		{10.4230/LIPIcs.ESA.2023.59},
  annote =	{Keywords: Parameterized Complexity, Connectivity, Clique-width, Cut\&Count, Lower Bound}
}
Document
Pareto Sums of Pareto Sets

Authors: Demian Hespe, Peter Sanders, Sabine Storandt, and Carina Truschel


Abstract
In bi-criteria optimization problems, the goal is typically to compute the set of Pareto-optimal solutions. Many algorithms for these types of problems rely on efficient merging or combining of partial solutions and filtering of dominated solutions in the resulting sets. In this paper, we consider the task of computing the Pareto sum of two given Pareto sets A, B of size n. The Pareto sum contains all non-dominated points of the Minkowski sum M = {a+b|a ∈ A, b ∈ B}. Since the Minkowski sum has a size of n², but the Pareto sum C can be much smaller, the goal is to compute C without having to compute and store all of M. We present several new algorithms for efficient Pareto sum computation, including an output-sensitive one with a running time of 𝒪(n log n + nk) and a space consumption of 𝒪(n+k) for k = |C|. We also describe suitable engineering techniques to improve the practical running times of our algorithms and provide a comparative experimental study. As one showcase application, we consider preprocessing-based methods for bi-criteria route planning in road networks. Pareto sum computation is a frequent task in the preprocessing phase. We show that using our algorithms with an output-sensitive space consumption allows us to tackle larger instances and reduces the preprocessing time compared to algorithms that fully store M.
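For concreteness, here is a minimal baseline (not the paper's output-sensitive algorithm) that materializes the full Minkowski sum M and then filters dominated points with a sort-and-sweep, using Θ(n²) time and space; avoiding exactly this blow-up is the point of the paper. Minimization in both criteria is assumed.

def pareto_sum_naive(A, B):
    # A, B: lists of (cost1, cost2) pairs; smaller is better in both criteria.
    M = [(a0 + b0, a1 + b1) for a0, a1 in A for b0, b1 in B]
    M.sort()                      # sort by first criterion, ties by second
    pareto = []
    best_second = float("inf")
    for x, y in M:
        if y < best_second:       # not dominated by any point kept so far
            pareto.append((x, y))
            best_second = y
    return pareto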

Cite as

Demian Hespe, Peter Sanders, Sabine Storandt, and Carina Truschel. Pareto Sums of Pareto Sets. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 60:1-60:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{hespe_et_al:LIPIcs.ESA.2023.60,
  author =	{Hespe, Demian and Sanders, Peter and Storandt, Sabine and Truschel, Carina},
  title =	{{Pareto Sums of Pareto Sets}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{60:1--60:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.60},
  URN =		{urn:nbn:de:0030-drops-187132},
  doi =		{10.4230/LIPIcs.ESA.2023.60},
  annote =	{Keywords: Minkowski sum, Skyline, Successive Algorithm}
}
Document
Solving Edge Clique Cover Exactly via Synergistic Data Reduction

Authors: Anthony Hevia, Benjamin Kallus, Summer McClintic, Samantha Reisner, Darren Strash, and Johnathan Wilson


Abstract
The edge clique cover (ECC) problem - where the goal is to find a minimum cardinality set of cliques that cover all the edges of a graph - is a classic NP-hard problem that has received much attention from both the theoretical and experimental algorithms communities. While small sparse graphs can be solved exactly via the branch-and-reduce algorithm of Gramm et al. [JEA 2009], larger instances can currently only be solved inexactly using heuristics with unknown overall solution quality. We revisit computing minimum ECCs exactly in practice by combining data reduction for both the ECC and vertex clique cover (VCC) problems. We do so by modifying the polynomial-time reduction of Kou et al. [Commun. ACM 1978] to transform a reduced ECC instance to a VCC instance; alternatively, we show it is possible to "lift" some VCC reductions to the ECC problem. Our experiments show that combining data reduction for both problems (which we call synergistic data reduction) enables finding exact minimum ECCs orders of magnitude faster than the technique of Gramm et al., and allows solving large sparse graphs on up to millions of vertices and edges that have never before been solved. With these new exact solutions, we evaluate the quality of recent heuristic algorithms on large instances for the first time. The most recent of these, EO-ECC by Abdullah et al. [ICCS 2022], solves 8 of the 27 instances for which we have exact solutions. It is our hope that our strategy rallies researchers to seek improved algorithms for the ECC problem.

Cite as

Anthony Hevia, Benjamin Kallus, Summer McClintic, Samantha Reisner, Darren Strash, and Johnathan Wilson. Solving Edge Clique Cover Exactly via Synergistic Data Reduction. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 61:1-61:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{hevia_et_al:LIPIcs.ESA.2023.61,
  author =	{Hevia, Anthony and Kallus, Benjamin and McClintic, Summer and Reisner, Samantha and Strash, Darren and Wilson, Johnathan},
  title =	{{Solving Edge Clique Cover Exactly via Synergistic Data Reduction}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{61:1--61:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.61},
  URN =		{urn:nbn:de:0030-drops-187148},
  doi =		{10.4230/LIPIcs.ESA.2023.61},
  annote =	{Keywords: Edge clique cover, Vertex clique cover, Data reduction, Degeneracy}
}
Document
Threshold Testing and Semi-Online Prophet Inequalities

Authors: Martin Hoefer and Kevin Schewior


Abstract
We study threshold testing, an elementary probing model with the goal of choosing a large value out of n i.i.d. random variables. An algorithm can test each variable X_i once for some threshold t_i, and the test returns binary feedback whether X_i ≥ t_i or not. Thresholds can be chosen adaptively or non-adaptively by the algorithm. Given the results for the tests of each variable, we then select the variable with the highest conditional expectation. We compare the expected value obtained by the testing algorithm with the expected maximum of the variables. Threshold testing is a semi-online variant of the gambler’s problem and prophet inequalities. Indeed, the optimal performance of non-adaptive algorithms for threshold testing is governed by the standard i.i.d. prophet inequality of approximately 0.745 + o(1) as n → ∞. We show how adaptive algorithms can significantly improve upon this ratio. Our adaptive testing strategy guarantees a competitive ratio of at least 0.869 - o(1). Moreover, we show that there are distributions that admit only a constant ratio c < 1, even when n → ∞. Finally, when each box can be tested multiple times (with n tests in total), we design an algorithm that achieves a ratio of 1 - o(1).
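The following Monte Carlo sketch illustrates the non-adaptive setting for i.i.d. Uniform(0,1) variables: every variable is tested once against one common threshold, and the algorithm selects a variable with the highest conditional expectation (any variable that passed, if one exists). The uniform distribution and the particular threshold are illustrative assumptions of ours; the paper's adaptive strategies and lower bounds concern general distributions.

import random

def threshold_test_ratio(n=20, trials=50_000, t=0.8):
    # Ratio between the expected value obtained by non-adaptive threshold
    # testing and the prophet's expected maximum, estimated by simulation.
    alg_total, prophet_total = 0.0, 0.0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        passed = [x for x in xs if x >= t]
        # A passing Uniform(0,1) has conditional expectation (1+t)/2, a failing
        # one only t/2, so any passing variable beats any failing one.
        chosen = random.choice(passed) if passed else random.choice(xs)
        alg_total += chosen
        prophet_total += max(xs)
    return alg_total / prophet_total

if __name__ == "__main__":
    print(threshold_test_ratio())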

Cite as

Martin Hoefer and Kevin Schewior. Threshold Testing and Semi-Online Prophet Inequalities. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 62:1-62:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{hoefer_et_al:LIPIcs.ESA.2023.62,
  author =	{Hoefer, Martin and Schewior, Kevin},
  title =	{{Threshold Testing and Semi-Online Prophet Inequalities}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{62:1--62:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.62},
  URN =		{urn:nbn:de:0030-drops-187159},
  doi =		{10.4230/LIPIcs.ESA.2023.62},
  annote =	{Keywords: Prophet Inequalities, Testing, Stochastic Probing}
}
Document
Parameterized Complexity of Fair Bisection: (FPT-Approximation meets Unbreakability)

Authors: Tanmay Inamdar, Daniel Lokshtanov, Saket Saurabh, and Vaishali Surianarayanan


Abstract
In the Minimum Bisection problem, the input is a graph G and the goal is to partition the vertex set into two parts A and B, such that ||A|-|B|| ≤ 1 and the number k of edges between A and B is minimized. The problem is known to be NP-hard, and assuming the Unique Games Conjecture even NP-hard to approximate within a constant factor [Khot and Vishnoi, J.ACM'15]. On the other hand, an 𝒪(log n)-approximation algorithm [Räcke, STOC'08] and a parameterized algorithm [Cygan et al., ACM Transactions on Algorithms'20] running in time k^𝒪(k) n^𝒪(1) are known. The Minimum Bisection problem can be viewed as a clustering problem where edges represent similarity and the task is to partition the vertices into two equally sized clusters while minimizing the number of pairs of similar objects that end up in different clusters. Motivated by a number of egregious examples of unfair bias in AI systems, many fundamental clustering problems have been revisited and re-formulated to incorporate fairness constraints. In this paper we initiate the study of the Minimum Bisection problem with fairness constraints. Here the input is a graph G, positive integers c and k, a function χ:V(G) → {1, …, c} that assigns a color χ(v) to each vertex v in G, and c integers r_1,r_2,⋯,r_c. The goal is to partition the vertex set of G into two almost-equal sized parts A and B with at most k edges between them, such that for each color i ∈ {1, …, c}, A has exactly r_i vertices of color i. Each color class corresponds to a group which we require the partition (A, B) to treat fairly, and the constraints that A has exactly r_i vertices of color i can be used to encode that no group is over- or under-represented in either of the two clusters. We first show that introducing fairness constraints appears to make the Minimum Bisection problem qualitatively harder. Specifically, we show that unless FPT=W[1] the problem admits no f(c)n^𝒪(1) time algorithm even when k = 0. On the other hand, our main technical contribution shows that this hardness result is simply a consequence of the very strict requirement that each color class i has exactly r_i vertices in A. In particular, we give an f(k,c,ε)n^𝒪(1) time algorithm that finds a balanced partition (A, B) with at most k edges between them, such that for each color i ∈ [c], there are at most (1±ε)r_i vertices of color i in A. Our approximation algorithm is best viewed as a proof of concept that the technique introduced by [Lampis, ICALP'18] for obtaining FPT-approximation algorithms for problems of bounded tree-width or clique-width can be efficiently exploited even on graphs of unbounded width. The key insight is that the technique of Lampis is applicable on tree decompositions with unbreakable bags (as introduced in [Cygan et al., SIAM Journal on Computing'14]). An important ingredient of our approximation scheme is a combinatorial result that may be of independent interest, namely that for every k, every graph G admits a tree decomposition with adhesions of size at most 𝒪(k), unbreakable bags, and logarithmic depth.
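As a reference for the problem definition only (not for the paper's FPT approximation scheme), the brute-force check below enumerates almost-balanced parts A, keeps those meeting the exact color counts r_i, and accepts if at most k edges are cut. Colors are assumed to be 0-indexed here, whereas the abstract uses 1, …, c.

from itertools import combinations

def fair_bisection_brute_force(n, edges, colors, r, k):
    # n vertices 0..n-1; edges: list of (u, v); colors[v] in 0..c-1;
    # r: target count of each color in A; k: cut-size budget.
    c = len(r)
    for size in {n // 2, (n + 1) // 2}:        # ||A| - |B|| <= 1
        for A in combinations(range(n), size):
            A_set = set(A)
            counts = [0] * c
            for v in A:
                counts[colors[v]] += 1
            if counts != list(r):              # exact fairness constraint
                continue
            cut = sum(1 for u, v in edges if (u in A_set) != (v in A_set))
            if cut <= k:
                return A_set
    return None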

Cite as

Tanmay Inamdar, Daniel Lokshtanov, Saket Saurabh, and Vaishali Surianarayanan. Parameterized Complexity of Fair Bisection: (FPT-Approximation meets Unbreakability). In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 63:1-63:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{inamdar_et_al:LIPIcs.ESA.2023.63,
  author =	{Inamdar, Tanmay and Lokshtanov, Daniel and Saurabh, Saket and Surianarayanan, Vaishali},
  title =	{{Parameterized Complexity of Fair Bisection: (FPT-Approximation meets Unbreakability)}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{63:1--63:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.63},
  URN =		{urn:nbn:de:0030-drops-187167},
  doi =		{10.4230/LIPIcs.ESA.2023.63},
  annote =	{Keywords: FPT Approximation, Minimum Bisection, Unbreakable Tree Decomposition, Treewidth}
}
Document
Improved Quantum Boosting

Authors: Adam Izdebski and Ronald de Wolf


Abstract
Boosting is a general method to convert a weak learner (which generates hypotheses that are just slightly better than random) into a strong learner (which generates hypotheses that are much better than random). Recently, Arunachalam and Maity [Srinivasan Arunachalam and Reevu Maity, 2020] gave the first quantum improvement for boosting, by combining Freund and Schapire’s AdaBoost algorithm with a quantum algorithm for approximate counting. Their booster is faster than classical boosting as a function of the VC-dimension of the weak learner’s hypothesis class, but worse as a function of the quality of the weak learner. In this paper we give a substantially faster and simpler quantum boosting algorithm, based on Servedio’s SmoothBoost algorithm [Servedio, 2003].

Cite as

Adam Izdebski and Ronald de Wolf. Improved Quantum Boosting. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 64:1-64:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{izdebski_et_al:LIPIcs.ESA.2023.64,
  author =	{Izdebski, Adam and de Wolf, Ronald},
  title =	{{Improved Quantum Boosting}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{64:1--64:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.64},
  URN =		{urn:nbn:de:0030-drops-187178},
  doi =		{10.4230/LIPIcs.ESA.2023.64},
  annote =	{Keywords: Learning theory, Boosting algorithms, Quantum computing}
}
Document
Finding Long Directed Cycles Is Hard Even When DFVS Is Small or Girth Is Large

Authors: Ashwin Jacob, Michał Włodarczyk, and Meirav Zehavi


Abstract
We study the parameterized complexity of two classic problems on directed graphs: Hamiltonian Cycle and its generalization Longest Cycle. It has been known since 2008 that Hamiltonian Cycle is W[1]-hard when parameterized by directed treewidth [Lampis et al., ISSAC'08]. By now, the question of whether it is FPT parameterized by the directed feedback vertex set (DFVS) number has become a longstanding open problem. In particular, the DFVS number is the largest natural directed width measure studied in the literature. In this paper, we provide a negative answer to the question, showing that even for the DFVS number, the problem remains W[1]-hard. As a consequence, we also obtain that Longest Cycle is W[1]-hard on directed graphs when parameterized multiplicatively above girth, in contrast to the undirected case. This resolves an open question posed by Fomin et al. [ACM ToCT'21] and Gutin and Mnich [arXiv:2207.12278]. Our hardness results apply to the path versions of the problems as well. On the positive side, we show that Longest Path parameterized multiplicatively above girth belongs to the class XP.

Cite as

Ashwin Jacob, Michał Włodarczyk, and Meirav Zehavi. Finding Long Directed Cycles Is Hard Even When DFVS Is Small or Girth Is Large. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 65:1-65:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{jacob_et_al:LIPIcs.ESA.2023.65,
  author =	{Jacob, Ashwin and W{\l}odarczyk, Micha{\l} and Zehavi, Meirav},
  title =	{{Finding Long Directed Cycles Is Hard Even When DFVS Is Small or Girth Is Large}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{65:1--65:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.65},
  URN =		{urn:nbn:de:0030-drops-187184},
  doi =		{10.4230/LIPIcs.ESA.2023.65},
  annote =	{Keywords: Hamiltonian cycle, longest path, directed feedback vertex set, directed graphs, parameterized complexity}
}
Document
5-Approximation for ℋ-Treewidth Essentially as Fast as ℋ-Deletion Parameterized by Solution Size

Authors: Bart M. P. Jansen, Jari J. H. de Kroon, and Michał Włodarczyk


Abstract
The notion of ℋ-treewidth, where ℋ is a hereditary graph class, was recently introduced as a generalization of the treewidth of an undirected graph. Roughly speaking, a graph of ℋ-treewidth at most k can be decomposed into (arbitrarily large) ℋ-subgraphs which interact only through vertex sets of size 𝒪(k) which can be organized in a tree-like fashion. ℋ-treewidth can be used as a hybrid parameterization to develop fixed-parameter tractable algorithms for ℋ-deletion problems, which ask to find a minimum vertex set whose removal from a given graph G turns it into a member of ℋ. The bottleneck in the current parameterized algorithms lies in the computation of suitable tree ℋ-decompositions. We present FPT-approximation algorithms to compute tree ℋ-decompositions for hereditary and union-closed graph classes ℋ. Given a graph of ℋ-treewidth k, we can compute a 5-approximate tree ℋ-decomposition in time f(𝒪(k)) ⋅ n^𝒪(1) whenever ℋ-deletion parameterized by solution size can be solved in time f(k) ⋅ n^𝒪(1) for some function f(k) ≥ 2^k. The current-best algorithms either achieve an approximation factor of k^𝒪(1) or construct optimal decompositions while suffering from non-uniformity with unknown parameter dependence. Using these decompositions, we obtain algorithms solving Odd Cycle Transversal in time 2^𝒪(k) ⋅ n^𝒪(1) parameterized by bipartite-treewidth and Vertex Planarization in time 2^𝒪(k log k) ⋅ n^𝒪(1) parameterized by planar-treewidth, showing that these can be as fast as the solution-size parameterizations and giving the first ETH-tight algorithms for parameterizations by hybrid width measures.

Cite as

Bart M. P. Jansen, Jari J. H. de Kroon, and Michał Włodarczyk. 5-Approximation for ℋ-Treewidth Essentially as Fast as ℋ-Deletion Parameterized by Solution Size. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 66:1-66:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{jansen_et_al:LIPIcs.ESA.2023.66,
  author =	{Jansen, Bart M. P. and de Kroon, Jari J. H. and W{\l}odarczyk, Micha{\l}},
  title =	{{5-Approximation for ℋ-Treewidth Essentially as Fast as ℋ-Deletion Parameterized by Solution Size}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{66:1--66:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.66},
  URN =		{urn:nbn:de:0030-drops-187195},
  doi =		{10.4230/LIPIcs.ESA.2023.66},
  annote =	{Keywords: fixed-parameter tractability, treewidth, graph decompositions}
}
Document
The Unweighted and Weighted Reverse Shortest Path Problem for Disk Graphs

Authors: Haim Kaplan, Matthew J. Katz, Rachel Saban, and Micha Sharir


Abstract
We study the reverse shortest path problem on disk graphs in the plane. In this problem we consider the proximity graph of a set of n disks in the plane of arbitrary radii: In this graph two disks are connected if the distance between them is at most some threshold parameter r. The case of intersection graphs is a special case with r = 0. We give an algorithm that, given a target length k, computes the smallest value of r for which there is a path of length at most k between some given pair of disks in the proximity graph. Our algorithm runs in O^*(n^{5/4}) randomized expected time, which improves to O^*(n^{6/5}) for unit disk graphs, where all the disks have the same radius. Our technique is robust and can be applied to many variants of the problem. One significant variant is the case of weighted proximity graphs, where edges are assigned real weights equal to the distance between the disks or between their centers, and k is replaced by a target weight w. In other variants, we want to optimize a parameter different from r, such as a scale factor of the radii of the disks. The main technique for the decision version of the problem (determining whether the graph with a given r has the desired property) is based on efficient implementations of BFS (for the unweighted case) and of Dijkstra’s algorithm (for the weighted case), using efficient data structures for maintaining the bichromatic closest pair for certain bicliques and several distance functions. The optimization problem is then solved by combining the resulting decision procedure with enhanced variants of the interval shrinking and bifurcation technique of [R. Ben Avraham et al., 2015].
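A simple quadratic-time baseline conveys the overall structure the paper refines: the optimal threshold r is one of the pairwise disk distances, so one can binary-search over those candidate values and decide each one with a plain BFS on the explicitly built proximity graph. The paper's contribution is precisely to avoid building the graph, obtaining O*(n^{5/4}) time; the sketch below is our own naive reference, not the paper's method.

import math
from collections import deque

def reverse_shortest_path_naive(disks, s, t, k):
    # disks: list of (x, y, radius); s, t: indices; k: hop budget.
    n = len(disks)

    def disk_dist(i, j):
        (xi, yi, ri), (xj, yj, rj) = disks[i], disks[j]
        return max(0.0, math.hypot(xi - xj, yi - yj) - ri - rj)

    def reachable_within_k(r):
        # Build the proximity graph for threshold r and BFS up to depth k.
        adj = [[j for j in range(n) if j != i and disk_dist(i, j) <= r] for i in range(n)]
        depth = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if depth[u] == k:
                continue
            for v in adj[u]:
                if v not in depth:
                    depth[v] = depth[u] + 1
                    queue.append(v)
        return t in depth

    candidates = sorted({0.0} | {disk_dist(i, j) for i in range(n) for j in range(i + 1, n)})
    lo, hi = 0, len(candidates) - 1
    while lo < hi:                      # smallest candidate r that works
        mid = (lo + hi) // 2
        if reachable_within_k(candidates[mid]):
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo] if reachable_within_k(candidates[lo]) else None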

Cite as

Haim Kaplan, Matthew J. Katz, Rachel Saban, and Micha Sharir. The Unweighted and Weighted Reverse Shortest Path Problem for Disk Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 67:1-67:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{kaplan_et_al:LIPIcs.ESA.2023.67,
  author =	{Kaplan, Haim and Katz, Matthew J. and Saban, Rachel and Sharir, Micha},
  title =	{{The Unweighted and Weighted Reverse Shortest Path Problem for Disk Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{67:1--67:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.67},
  URN =		{urn:nbn:de:0030-drops-187208},
  doi =		{10.4230/LIPIcs.ESA.2023.67},
  annote =	{Keywords: Computational geometry, geometric optimization, disk graphs, BFS, Dijkstra’s algorithm, reverse shortest path}
}
Document
On Fully Dynamic Strongly Connected Components

Authors: Adam Karczmarz and Marcin Smulewicz


Abstract
We consider maintaining strongly connected components (SCCs) of a directed graph subject to edge insertions and deletions. For this problem, we show a randomized algebraic data structure with conditionally tight O(n^1.529) worst-case update time. The only previously described subquadratic update bound for this problem [Karczmarz, Mukherjee, and Sankowski, STOC'22] holds exclusively in the amortized sense. For the less general dynamic strong connectivity problem, where one is only interested in maintaining whether the graph is strongly connected, we give an efficient deterministic black-box reduction to (arbitrary-pair) dynamic reachability. Consequently, for dynamic strong connectivity we match the best-known O(n^1.407) worst-case upper bound for dynamic reachability [van den Brand, Nanongkai, and Saranurak FOCS'19]. This is also conditionally optimal and improves upon the previous O(n^1.529) bound. Our reduction also yields the first fully dynamic algorithms for maintaining the minimum strong connectivity augmentation of a digraph.

Cite as

Adam Karczmarz and Marcin Smulewicz. On Fully Dynamic Strongly Connected Components. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 68:1-68:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{karczmarz_et_al:LIPIcs.ESA.2023.68,
  author =	{Karczmarz, Adam and Smulewicz, Marcin},
  title =	{{On Fully Dynamic Strongly Connected Components}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{68:1--68:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.68},
  URN =		{urn:nbn:de:0030-drops-187211},
  doi =		{10.4230/LIPIcs.ESA.2023.68},
  annote =	{Keywords: dynamic strongly connected components, dynamic strong connectivity, dynamic reachability}
}
Document
An Improved Approximation Algorithm for the Max-3-Section Problem

Authors: Dor Katzelnick, Aditya Pillai, Roy Schwartz, and Mohit Singh


Abstract
We consider the Max-3-Section problem, where we are given an undirected graph G=(V,E) equipped with non-negative edge weights w: E → R_+ and the goal is to find a partition of V into three equisized parts while maximizing the total weight of edges crossing between different parts. Max-3-Section is closely related to other well-studied graph partitioning problems, e.g., Max-Cut, Max-3-Cut, and Max-Bisection. We present a polynomial-time algorithm achieving an approximation of 0.795, which improves upon the previous best known approximation of 0.673. The requirement of multiple parts that have equal sizes renders Max-3-Section much harder to cope with compared to, e.g., Max-Bisection. We show a new algorithm that combines the existing approach of the Lasserre hierarchy along with a random cut strategy that suffices to give our result.

Cite as

Dor Katzelnick, Aditya Pillai, Roy Schwartz, and Mohit Singh. An Improved Approximation Algorithm for the Max-3-Section Problem. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 69:1-69:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{katzelnick_et_al:LIPIcs.ESA.2023.69,
  author =	{Katzelnick, Dor and Pillai, Aditya and Schwartz, Roy and Singh, Mohit},
  title =	{{An Improved Approximation Algorithm for the Max-3-Section Problem}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{69:1--69:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.69},
  URN =		{urn:nbn:de:0030-drops-187229},
  doi =		{10.4230/LIPIcs.ESA.2023.69},
  annote =	{Keywords: Approximation Algorithms, Semidefinite Programming, Max-Cut, Max-Bisection}
}
Document
Fitting Tree Metrics with Minimum Disagreements

Authors: Evangelos Kipouridis


Abstract
In the L₀ Fitting Tree Metrics problem, we are given all pairwise distances among the elements of a set V and our output is a tree metric on V. The goal is to minimize the number of pairwise distance disagreements between the input and the output. We provide an O(1) approximation for L₀ Fitting Tree Metrics, which is asymptotically optimal as the problem is APX-Hard. For p ≥ 1, solutions to the related L_p Fitting Tree Metrics have typically used a reduction to L_p Fitting Constrained Ultrametrics. Even though in FOCS '22 Cohen-Addad et al. solved L₀ Fitting (unconstrained) Ultrametrics within a constant approximation factor, their results did not extend to tree metrics. We identify two possible reasons, and provide simple techniques to circumvent them. Our framework does not modify the algorithm from Cohen-Addad et al. It rather extends any ρ approximation for L₀ Fitting Ultrametrics to a 6ρ approximation for L₀ Fitting Tree Metrics in a blackbox fashion.

Cite as

Evangelos Kipouridis. Fitting Tree Metrics with Minimum Disagreements. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 70:1-70:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{kipouridis:LIPIcs.ESA.2023.70,
  author =	{Kipouridis, Evangelos},
  title =	{{Fitting Tree Metrics with Minimum Disagreements}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{70:1--70:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.70},
  URN =		{urn:nbn:de:0030-drops-187233},
  doi =		{10.4230/LIPIcs.ESA.2023.70},
  annote =	{Keywords: Hierarchical Clustering, Tree Metrics, Minimum Disagreements}
}
Document
Coloring Tournaments with Few Colors: Algorithms and Complexity

Authors: Felix Klingelhoefer and Alantha Newman


Abstract
A k-coloring of a tournament is a partition of its vertices into k acyclic sets. Deciding if a tournament is 2-colorable is NP-hard. A natural problem, akin to that of coloring a 3-colorable graph with few colors, is to color a 2-colorable tournament with few colors. This problem does not seem to have been addressed before, although it is a special case of coloring a 2-colorable 3-uniform hypergraph with few colors, which is a well-studied problem with super-constant lower bounds. We present an efficient decomposition lemma for tournaments and show that it can be used to design polynomial-time algorithms to color various classes of tournaments with few colors, including an algorithm to color a 2-colorable tournament with ten colors. For the classes of tournaments considered, we complement our upper bounds with strengthened lower bounds, painting a comprehensive picture of the algorithmic and complexity aspects of coloring tournaments.
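The notion of coloring used here is easy to state operationally: a k-coloring assigns each vertex one of k colors so that every color class induces an acyclic subtournament. The checker below (a plain Kahn-style topological-sort test per color class, written for this summary rather than taken from the paper) makes that definition concrete.

from collections import defaultdict

def is_valid_tournament_coloring(n, arcs, coloring):
    # coloring[v] is the color of vertex v; arcs contains (u, v) when u beats v.
    # Valid iff every color class induces an acyclic subtournament.
    classes = defaultdict(list)
    for v in range(n):
        classes[coloring[v]].append(v)
    for verts in classes.values():
        vs = set(verts)
        indeg = {v: 0 for v in verts}
        out = defaultdict(list)
        for u, v in arcs:
            if u in vs and v in vs:
                indeg[v] += 1
                out[u].append(v)
        # Kahn's algorithm: the class is acyclic iff every vertex gets processed.
        queue = [v for v in verts if indeg[v] == 0]
        processed = 0
        while queue:
            u = queue.pop()
            processed += 1
            for w in out[u]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
        if processed != len(verts):
            return False
    return True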

Cite as

Felix Klingelhoefer and Alantha Newman. Coloring Tournaments with Few Colors: Algorithms and Complexity. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 71:1-71:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{klingelhoefer_et_al:LIPIcs.ESA.2023.71,
  author =	{Klingelhoefer, Felix and Newman, Alantha},
  title =	{{Coloring Tournaments with Few Colors: Algorithms and Complexity}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{71:1--71:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.71},
  URN =		{urn:nbn:de:0030-drops-187247},
  doi =		{10.4230/LIPIcs.ESA.2023.71},
  annote =	{Keywords: Tournaments, Graph Coloring, Algorithms, Complexity}
}
Document
Bellman-Ford Is Optimal for Shortest Hop-Bounded Paths

Authors: Tomasz Kociumaka and Adam Polak


Abstract
This paper is about the problem of finding a shortest s-t path using at most h edges in edge-weighted graphs. The Bellman-Ford algorithm solves this problem in O(hm) time, where m is the number of edges. We show that this running time is optimal, up to subpolynomial factors, under popular fine-grained complexity assumptions. More specifically, we show that under the APSP Hypothesis the problem cannot be solved faster already in undirected graphs with nonnegative edge weights. This lower bound holds even restricted to graphs of arbitrary density and for arbitrary h ∈ O(√m). Moreover, under a stronger assumption, namely the Min-Plus Convolution Hypothesis, we can eliminate the restriction h ∈ O(√m). In other words, the O(hm) bound is tight for the entire space of parameters h, m, and n, where n is the number of nodes. Our lower bounds can be contrasted with the recent near-linear time algorithm for the negative-weight Single-Source Shortest Paths problem, which is the textbook application of the Bellman-Ford algorithm.
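For reference, the algorithm whose optimality is being established is the textbook hop-bounded Bellman-Ford relaxation, sketched below for undirected graphs with nonnegative weights (the setting of the lower bound): after round i, dist[v] is the weight of a shortest s-v path using at most i edges.

def hop_bounded_shortest_path(n, edges, s, t, h):
    # n vertices 0..n-1; edges: list of (u, v, w) for an undirected graph.
    # Runs h relaxation rounds, i.e., O(h * m) time.
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(h):
        new_dist = dist[:]                      # allow one more hop
        for u, v, w in edges:
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
            if dist[v] + w < new_dist[u]:       # undirected: relax both ways
                new_dist[u] = dist[v] + w
        dist = new_dist
    return dist[t]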

Cite as

Tomasz Kociumaka and Adam Polak. Bellman-Ford Is Optimal for Shortest Hop-Bounded Paths. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 72:1-72:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{kociumaka_et_al:LIPIcs.ESA.2023.72,
  author =	{Kociumaka, Tomasz and Polak, Adam},
  title =	{{Bellman-Ford Is Optimal for Shortest Hop-Bounded Paths}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{72:1--72:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.72},
  URN =		{urn:nbn:de:0030-drops-187257},
  doi =		{10.4230/LIPIcs.ESA.2023.72},
  annote =	{Keywords: Fine-grained complexity, graph algorithms, lower bounds, shortest paths}
}
Document
Towards Bypassing Lower Bounds for Graph Shortcuts

Authors: Shimon Kogan and Merav Parter


Abstract
For a given (possibly directed) graph G, a hopset (a.k.a. shortcut set) is a (small) set of edges whose addition reduces the graph diameter while preserving desired properties from the given graph G, such as reachability and shortest-path distances. The key objective is to optimize the tradeoff between the achieved diameter and the size of the shortcut set (possibly also, the distance distortion). Despite the centrality of these objects and their thorough study over the years, there are still significant gaps between the known upper and lower bound results. A common property shared by almost all known shortcut lower bounds is that they hold for the seemingly simpler task of reducing the diameter of the given graph, D_G, by a constant additive term, in fact, even by just one! We denote such restricted structures by (D_G-1)-diameter hopsets. In this paper we show that this relaxation can be leveraged to narrow the current gaps, and in certain cases to also bypass the known lower bound results, when restricting to sparse graphs (with O(n) edges): - Hopsets for Directed Weighted Sparse Graphs. For every n-vertex directed and weighted sparse graph G with D_G ≥ n^{1/4}, one can compute an exact (D_G-1)-diameter hopset of linear size. Combining this with known lower bound results for dense graphs, we get a separation between dense and sparse graphs, hence shortcutting sparse graphs is provably easier. For reachability hopsets, we can provide (D_G-1)-diameter hopsets of linear size, for sparse DAGs, already for D_G ≥ n^{1/5}. This should be compared with the diameter bound of Õ(n^{1/3}) [Kogan and Parter, SODA 2022], and the lower bound of D_G = n^{1/6} by [Huang and Pettie, SIAM J. Discret. Math. 2018]. - Additive Hopsets for Undirected and Unweighted Graphs. We show a construction of +24 additive (D_G-1)-diameter hopsets with a linear number of edges for D_G ≥ n^{1/12} for sparse graphs. This bypasses the current lower bound of D_G = n^{1/6} obtained for exact (D_G-1)-diameter hopsets by [HP'18]. For general graphs, the bound becomes D_G ≥ n^{1/6}, which matches the lower bound of exact (D_G-1) hopsets implied by [HP'18]. We also provide new additive D-diameter hopsets with linear size, for any given diameter D. Altogether, we show that the current lower bounds can be bypassed by restricting to sparse graphs (with O(n) edges). Moreover, the gaps are narrowed significantly for any graph by allowing for a constant additive stretch.

Cite as

Shimon Kogan and Merav Parter. Towards Bypassing Lower Bounds for Graph Shortcuts. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 73:1-73:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{kogan_et_al:LIPIcs.ESA.2023.73,
  author =	{Kogan, Shimon and Parter, Merav},
  title =	{{Towards Bypassing Lower Bounds for Graph Shortcuts}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{73:1--73:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.73},
  URN =		{urn:nbn:de:0030-drops-187264},
  doi =		{10.4230/LIPIcs.ESA.2023.73},
  annote =	{Keywords: Directed Shortcuts, Hopsets, Emulators}
}
Document
Faster Block Tree Construction

Authors: Dominik Köppl, Florian Kurpicz, and Daniel Meyer


Abstract
The block tree [Belazzougui et al. J. Comput. Syst. Sci. '21] is a compressed text index that can answer access (extract a character at a position), rank (number of occurrences of a specified character in a prefix of the text), and select (size of smallest prefix such that a specified character has a specified rank) queries. It requires O(z log(n/z)) words of space, where z is the number of Lempel-Ziv factors of the text. For some highly repetitive inputs, a block tree can require as little as 0.015 bits per character of the text. Small values of z make the block tree a space-efficient alternative to the wavelet tree, which is another index for these three types of queries. While wavelet trees can be constructed fast in practice, so far compressed versions of the wavelet tree only leverage statistical compression, meaning that they are blind to spaced repetitions. To make block trees usable in practice, a first step is to find ways of constructing them efficiently. We address this problem by presenting a practically efficient construction algorithm for block trees, which is up to an order of magnitude faster than previous implementations. Additionally, we parallelize our implementation, making it the first block tree construction implementation that works in parallel in shared memory.
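To make the three query types concrete, here are their semantics spelled out on an uncompressed Python string; a block tree answers the same queries in compressed space, whereas this reference version uses O(n) space and linear scans.

def access(text, i):
    # character at position i (0-indexed)
    return text[i]

def rank(text, c, i):
    # number of occurrences of character c in the prefix text[:i]
    return text[:i].count(c)

def select(text, c, j):
    # size of the smallest prefix in which c occurs j times, or -1 if none exists
    seen = 0
    for pos, ch in enumerate(text):
        if ch == c:
            seen += 1
            if seen == j:
                return pos + 1
    return -1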

Cite as

Dominik Köppl, Florian Kurpicz, and Daniel Meyer. Faster Block Tree Construction. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 74:1-74:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{koppl_et_al:LIPIcs.ESA.2023.74,
  author =	{K\"{o}ppl, Dominik and Kurpicz, Florian and Meyer, Daniel},
  title =	{{Faster Block Tree Construction}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{74:1--74:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.74},
  URN =		{urn:nbn:de:0030-drops-187274},
  doi =		{10.4230/LIPIcs.ESA.2023.74},
  annote =	{Keywords: compressed data structure, block tree, Lempel-Ziv compression, longest previous factor array, rank and select}
}
Document
Connectivity Queries Under Vertex Failures: Not Optimal, but Practical

Authors: Evangelos Kosinas


Abstract
We revisit once more the problem of designing an oracle for answering connectivity queries in undirected graphs in the presence of vertex failures. Specifically, given an undirected graph G with n vertices and m edges and an integer d_⋆ ≪ n, the goal is to preprocess the graph in order to construct a data structure 𝒟 such that, given a set of vertices F with |F| = d ≤ d_⋆, we can derive an oracle from 𝒟 that can efficiently answer queries of the form "is x connected with y in G⧵F?". Very recently, Long and Saranurak (FOCS 2022) provided a solution to this problem that is almost optimal with respect to the preprocessing time, the space usage, the update time, and the query time. However, their solution is highly complicated, and it seems very difficult to be implemented efficiently. Furthermore, it does not settle the complexity of the problem in the regime where d_⋆ is a constant. Here, we provide a much simpler solution to this problem, that uses only textbook data structures. Our algorithm is deterministic, it has preprocessing time and space complexity O(d_⋆ m log n), update time O(d⁴ log n), and query time O(d). These bounds compare very well with the previous best, especially considering the simplicity of our approach. In fact, if we assume that d_⋆ is a constant (d_⋆ ≥ 4), then our algorithm provides some trade-offs that improve the state of the art in some respects. Finally, the data structure that we provide is flexible with respect to d_⋆: it can be adapted to increases and decreases, in time and space that are almost proportional to the change in d_⋆ and the size of the graph.
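The interface in question can be pictured with the trivial baseline below, which simply recomputes the components of G⧵F on every update: like the paper's structure it answers queries in O(1) after an update, but it pays O(n+m) per update instead of O(d⁴ log n) and does no preprocessing at all. It is meant only to fix the semantics of update and query, not to reflect the paper's data structure.

from collections import defaultdict

class NaiveFailureOracle:
    def __init__(self, n, edges):
        # Store the graph; the real oracle would preprocess it here.
        self.n = n
        self.adj = defaultdict(list)
        for u, v in edges:
            self.adj[u].append(v)
            self.adj[v].append(u)
        self.comp = {}

    def update(self, F):
        # Recompute the connected components of G \ F by DFS.
        failed = set(F)
        self.comp = {}
        label = 0
        for s in range(self.n):
            if s in failed or s in self.comp:
                continue
            stack = [s]
            self.comp[s] = label
            while stack:
                u = stack.pop()
                for v in self.adj[u]:
                    if v not in failed and v not in self.comp:
                        self.comp[v] = label
                        stack.append(v)
            label += 1

    def query(self, x, y):
        # "Is x connected with y in G \ F?"
        return x in self.comp and y in self.comp and self.comp[x] == self.comp[y]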

Cite as

Evangelos Kosinas. Connectivity Queries Under Vertex Failures: Not Optimal, but Practical. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 75:1-75:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{kosinas:LIPIcs.ESA.2023.75,
  author =	{Kosinas, Evangelos},
  title =	{{Connectivity Queries Under Vertex Failures: Not Optimal, but Practical}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{75:1--75:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.75},
  URN =		{urn:nbn:de:0030-drops-187289},
  doi =		{10.4230/LIPIcs.ESA.2023.75},
  annote =	{Keywords: Graphs, Connectivity, Fault-Tolerant, Oracles}
}
Document
Improved Approximations for Translational Packing of Convex Polygons

Authors: Adam Kurpisz and Silvan Suter


Abstract
Optimal packing of objects in containers is a critical problem in various real-life and industrial applications. This paper investigates the two-dimensional packing of convex polygons without rotations, where only translations are allowed. We study different settings depending on the type of containers used, including minimizing the number of containers or the size of the container based on an objective function. Building on prior research in the field, we develop polynomial-time algorithms with improved approximation guarantees upon the best-known results by Alt, de Berg and Knauer, as well as Aamand, Abrahamsen, Beretta and Kleist, for problems such as Polygon Area Minimization, Polygon Perimeter Minimization, Polygon Strip Packing, and Polygon Bin Packing. Our approach utilizes a sequence of object transformations that allows sorting by height and orientation, thus enhancing the effectiveness of shelf packing algorithms for polygon packing problems. In addition, we present efficient approximation algorithms for special cases of the Polygon Bin Packing problem, progressing toward solving an open question concerning an 𝒪(1)-approximation algorithm for arbitrary polygons.

Cite as

Adam Kurpisz and Silvan Suter. Improved Approximations for Translational Packing of Convex Polygons. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 76:1-76:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{kurpisz_et_al:LIPIcs.ESA.2023.76,
  author =	{Kurpisz, Adam and Suter, Silvan},
  title =	{{Improved Approximations for Translational Packing of Convex Polygons}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{76:1--76:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.76},
  URN =		{urn:nbn:de:0030-drops-187299},
  doi =		{10.4230/LIPIcs.ESA.2023.76},
  annote =	{Keywords: Approximation algorithms, Packing problems, Convex polygons, Bin packing, Strip packing, Area minimization}
}
Document
Structural Parameterizations for Two Bounded Degree Problems Revisited

Authors: Michael Lampis and Manolis Vasilakis


Abstract
We revisit two well-studied problems, Bounded Degree Vertex Deletion and Defective Coloring, where the input is a graph G and a target degree Δ and we are asked either to edit or partition the graph so that the maximum degree becomes bounded by Δ. Both problems are known to be parameterized intractable for the most well-known structural parameters, such as treewidth. We revisit the parameterization by treewidth, as well as several related parameters and present a more fine-grained picture of the complexity of both problems. In particular: - Both problems admit straightforward DP algorithms with table sizes (Δ+2)^tw and (χ_d(Δ+1))^{tw} respectively, where tw is the input graph’s treewidth and χ_d the number of available colors. We show that, under the SETH, both algorithms are essentially optimal, for any non-trivial fixed values of Δ, χ_d, even if we replace treewidth by pathwidth. Along the way, we obtain an algorithm for Defective Coloring with complexity quasi-linear in the table size, thus settling the complexity of both problems for treewidth and pathwidth. - Given that the standard DP algorithm is optimal for treewidth and pathwidth, we then go on to consider the more restricted parameter tree-depth. Here, previously known lower bounds imply that, under the ETH, Bounded Vertex Degree Deletion and Defective Coloring cannot be solved in time n^o(∜{td}) and n^o(√{td}) respectively, leaving some hope that a qualitatively faster algorithm than the one for treewidth may be possible. We close this gap by showing that neither problem can be solved in time n^o(td), under the ETH, by employing a recursive low tree-depth construction that may be of independent interest. - Finally, we consider a structural parameter that is known to be restrictive enough to render both problems FPT: vertex cover. For both problems the best known algorithm in this setting has a super-exponential dependence of the form vc^𝒪(vc). We show that this is optimal, as an algorithm with dependence of the form vc^o(vc) would violate the ETH. Our proof relies on a new application of the technique of d-detecting families introduced by Bonamy et al. [ToCT 2019]. Our results, although mostly negative in nature, paint a clear picture regarding the complexity of both problems in the landscape of parameterized complexity, since in all cases we provide essentially matching upper and lower bounds.

Cite as

Michael Lampis and Manolis Vasilakis. Structural Parameterizations for Two Bounded Degree Problems Revisited. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 77:1-77:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{lampis_et_al:LIPIcs.ESA.2023.77,
  author =	{Lampis, Michael and Vasilakis, Manolis},
  title =	{{Structural Parameterizations for Two Bounded Degree Problems Revisited}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{77:1--77:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.77},
  URN =		{urn:nbn:de:0030-drops-187302},
  doi =		{10.4230/LIPIcs.ESA.2023.77},
  annote =	{Keywords: ETH, Parameterized Complexity, SETH}
}
Document
Massively Parallel Algorithms for the Stochastic Block Model

Authors: Zelin Li, Pan Peng, and Xianbin Zhu


Abstract
Learning the community structure of a large-scale graph is a fundamental problem in machine learning, computer science and statistics. Among others, the Stochastic Block Model (SBM) serves as a canonical model for community detection and clustering, and the Massively Parallel Computation (MPC) model is a mathematical abstraction of real-world parallel computing systems, which provides a powerful computational framework for handling large-scale datasets. We study the problem of exactly recovering the communities in a graph generated from the SBM in the MPC model. Specifically, given kn vertices that are partitioned into k equal-sized clusters (i.e., each has size n), a graph on these kn vertices is randomly generated such that each pair of vertices is connected with probability p if they are in the same cluster and with probability q if not, where p > q > 0. We give MPC algorithms for the SBM in the (very general) s-space MPC model, where each machine is guaranteed to have memory s = Ω(log n). Under the condition that (p-q)/√p ≥ Ω̃(k^{1/2} n^{-1/2+1/(2(r-1))}) for any integer r ∈ [3,O(log n)], our first algorithm exactly recovers all the k clusters in O(kr log_s n) rounds using Õ(m) total space, or in O(r log_s n) rounds using Õ(km) total space. If (p-q)/√p ≥ Ω̃(k^{3/4} n^{-1/4}), our second algorithm achieves O(log_s n) rounds and Õ(m) total space complexity. Both algorithms significantly improve upon a recent result of Cohen-Addad et al. [PODC'22], who gave algorithms that only work in the sublinear space MPC model, where each machine has local memory s = O(n^δ) for some constant δ > 0, with a much stronger condition on p,q,k. Our algorithms are based on collecting the r-step neighborhood of each vertex and comparing the difference of some statistical information generated from the local neighborhoods for each pair of vertices. To implement the clustering algorithms in parallel, we present efficient approaches for implementing some basic graph operations in the s-space MPC model.
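For clarity about the input model, the sketch below samples a graph from the symmetric SBM exactly as described (k clusters of n vertices each, edge probability p inside a cluster and q across clusters). It is a plain sequential generator for experimentation, not part of the paper's MPC algorithms.

import random

def sample_sbm(k, n, p, q, seed=0):
    # Returns (edges, cluster): the edge list and the ground-truth cluster of
    # each of the k*n vertices (vertex v belongs to cluster v // n).
    rng = random.Random(seed)
    N = k * n
    cluster = [v // n for v in range(N)]
    edges = []
    for u in range(N):
        for v in range(u + 1, N):
            prob = p if cluster[u] == cluster[v] else q
            if rng.random() < prob:
                edges.append((u, v))
    return edges, cluster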

Cite as

Zelin Li, Pan Peng, and Xianbin Zhu. Massively Parallel Algorithms for the Stochastic Block Model. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 78:1-78:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ESA.2023.78,
  author =	{Li, Zelin and Peng, Pan and Zhu, Xianbin},
  title =	{{Massively Parallel Algorithms for the Stochastic Block Model}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{78:1--78:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.78},
  URN =		{urn:nbn:de:0030-drops-187313},
  doi =		{10.4230/LIPIcs.ESA.2023.78},
  annote =	{Keywords: Massively Parallel Computation, Stochastic Block Model, Graph Algorithms}
}
Document
Connectivity in the Presence of an Opponent

Authors: Zihui Liang, Bakh Khoussainov, Toru Takisaka, and Mingyu Xiao


Abstract
The paper introduces two-player connectivity games played on finite bipartite graphs. Algorithms that solve these connectivity games can be used as subroutines for solving Müller games. Müller games constitute a well-established class of games in model checking and verification. In connectivity games, the objective of one of the players is to visit every node of the game graph infinitely often. The first contribution of this paper is our proof that solving connectivity games can be reduced to the incremental strongly connected component maintenance (ISCCM) problem, an important problem in graph algorithms and data structures. The second contribution is that we non-trivially adapt two known algorithms for the ISCCM problem to provide two efficient algorithms that solve the connectivity games problem. Finally, based on the techniques developed, we recast Horn’s polynomial-time algorithm that solves explicitly given Müller games and provide the first correctness proof of the algorithm. Our algorithms are more efficient than Horn’s algorithm. Our solution for connectivity games is used as a subroutine in the algorithm.

Cite as

Zihui Liang, Bakh Khoussainov, Toru Takisaka, and Mingyu Xiao. Connectivity in the Presence of an Opponent. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 79:1-79:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{liang_et_al:LIPIcs.ESA.2023.79,
  author =	{Liang, Zihui and Khoussainov, Bakh and Takisaka, Toru and Xiao, Mingyu},
  title =	{{Connectivity in the Presence of an Opponent}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{79:1--79:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.79},
  URN =		{urn:nbn:de:0030-drops-187324},
  doi =		{10.4230/LIPIcs.ESA.2023.79},
  annote =	{Keywords: Explicit M\"{u}ller games, games played on finite graphs, winning strategies, synthesis and analysis of games}
}
Document
On the Perturbation Function of Ranking and Balance for Weighted Online Bipartite Matching

Authors: Jingxun Liang, Zhihao Gavin Tang, Yixuan Even Xu, Yuhao Zhang, and Renfei Zhou


Abstract
Ranking and Balance are arguably the two most important algorithms in the online matching literature. They achieve the same optimal competitive ratio of 1-1/e for the integral version and fractional version of online bipartite matching by Karp, Vazirani, and Vazirani (STOC 1990) respectively. The two algorithms have been generalized to weighted online bipartite matching problems, including vertex-weighted online bipartite matching and AdWords, by utilizing a perturbation function. The canonical choice of the perturbation function is f(x) = 1-e^{x-1} as it leads to the optimal competitive ratio of 1-1/e in both settings. We advance the understanding of the weighted generalizations of Ranking and Balance in this paper, with a focus on studying the effect of different perturbation functions. First, we prove that the canonical perturbation function is the unique optimal perturbation function for vertex-weighted online bipartite matching. In stark contrast, all perturbation functions achieve the optimal competitive ratio of 1-1/e in the unweighted setting. Second, we prove that the generalization of Ranking to AdWords with unknown budgets using the canonical perturbation function is at most 0.624 competitive, refuting a conjecture of Vazirani (2021). More generally, as an application of the first result, we prove that no perturbation function leads to the prominent competitive ratio of 1-1/e by establishing an upper bound of 1-1/e-0.0003. Finally, we propose the online budget-additive welfare maximization problem that is intermediate between AdWords and AdWords with unknown budgets, and we design an optimal 1-1/e competitive algorithm by generalizing Balance.
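
For concreteness, here is a minimal Python sketch (random toy instance; the weights, ranks and neighbor lists are assumptions) of the perturbed-greedy form of Ranking for vertex-weighted online bipartite matching with the canonical perturbation f(x) = 1 - e^{x-1}: every offline vertex draws a rank once, and each arriving online vertex is matched to its free neighbor maximizing w_u ⋅ f(y_u).

import math
import random

random.seed(1)
f = lambda x: 1 - math.exp(x - 1)                        # canonical perturbation function

offline = {u: random.uniform(1, 10) for u in range(8)}   # offline vertex weights w_u
rank = {u: random.random() for u in offline}             # ranks y_u, drawn once up front
online_neighbors = [random.sample(list(offline), 3) for _ in range(8)]

matched = {}                                             # online vertex -> offline vertex
for v, nbrs in enumerate(online_neighbors):
    free = [u for u in nbrs if u not in matched.values()]
    if free:
        # perturbed-greedy choice: maximize weight times perturbed rank
        matched[v] = max(free, key=lambda u: offline[u] * f(rank[u]))

print("matched weight:", sum(offline[u] for u in matched.values()))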

Cite as

Jingxun Liang, Zhihao Gavin Tang, Yixuan Even Xu, Yuhao Zhang, and Renfei Zhou. On the Perturbation Function of Ranking and Balance for Weighted Online Bipartite Matching. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 80:1-80:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{liang_et_al:LIPIcs.ESA.2023.80,
  author =	{Liang, Jingxun and Tang, Zhihao Gavin and Xu, Yixuan Even and Zhang, Yuhao and Zhou, Renfei},
  title =	{{On the Perturbation Function of Ranking and Balance for Weighted Online Bipartite Matching}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{80:1--80:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.80},
  URN =		{urn:nbn:de:0030-drops-187334},
  doi =		{10.4230/LIPIcs.ESA.2023.80},
  annote =	{Keywords: Online Matching, AdWords, Ranking, Water-Filling}
}
Document
Tight Bounds for Quantum Phase Estimation and Related Problems

Authors: Nikhil S. Mande and Ronald de Wolf


Abstract
Phase estimation, due to Kitaev [arXiv'95], is one of the most fundamental subroutines in quantum computing, used in Shor’s factoring algorithm, optimization algorithms, quantum chemistry algorithms, and many others. In the basic scenario, one is given black-box access to a unitary U, and an eigenstate |ψ⟩ of U with unknown eigenvalue e^{iθ}, and the task is to estimate the eigenphase θ within ±δ, with high probability. The repeated application of U and U^{-1} is typically the most expensive part of phase estimation, so for us the cost of an algorithm will be that number of applications. Motivated by the "guided Hamiltonian problem" in quantum chemistry, we tightly characterize the cost of several variants of phase estimation where we are no longer given an arbitrary eigenstate, but are required to estimate the maximum eigenphase of U, aided by advice in the form of states (or a unitary preparing those states) which are promised to have at least a certain overlap γ with the top eigenspace. We give algorithms and matching lower bounds (up to logarithmic factors) for all ranges of parameters. We show a crossover point below which advice is not helpful: o(1/γ²) copies of the advice state (or o(1/γ) applications of an advice-preparing unitary) are not significantly better than having no advice at all. We also show that having knowledge of the eigenbasis of U does not significantly reduce cost. Our upper bounds use the subroutine of generalized maximum-finding of van Apeldoorn, Gilyén, Gribling, and de Wolf [Quantum'20], the state-based Hamiltonian simulation of Lloyd, Mohseni, and Rebentrost [Nature Physics'13], and several other techniques. Our lower bounds follow by reductions from a fractional version of the Boolean OR function with advice, which we lower bound by a simple modification of the adversary method of Ambainis [JCSS'02]. As an immediate consequence we also obtain a lower bound on the complexity of the Unitary recurrence time problem, matching an upper bound of She and Yuen [ITCS'23] and resolving an open question posed by them. Lastly, we study how efficiently one can reduce the error probability in the basic phase-estimation scenario. We show that an algorithm solving phase estimation to precision δ with error probability at most ε must have cost Ω(1/δ log(1/ε)), matching the obvious way to error-reduce the basic constant-error-probability phase estimation algorithm. This contrasts with some other scenarios in quantum computing (e.g. search) where error-reduction costs only a factor O(√{log(1/ε)}). Our lower bound technique uses a variant of the polynomial method with trigonometric polynomials.

Cite as

Nikhil S. Mande and Ronald de Wolf. Tight Bounds for Quantum Phase Estimation and Related Problems. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 81:1-81:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{mande_et_al:LIPIcs.ESA.2023.81,
  author =	{Mande, Nikhil S. and de Wolf, Ronald},
  title =	{{Tight Bounds for Quantum Phase Estimation and Related Problems}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{81:1--81:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.81},
  URN =		{urn:nbn:de:0030-drops-187346},
  doi =		{10.4230/LIPIcs.ESA.2023.81},
  annote =	{Keywords: Phase estimation, quantum computing}
}
Document
A Fine-Grained Classification of the Complexity of Evaluating the Tutte Polynomial on Integer Points Parameterized by Treewidth and Cutwidth

Authors: Isja Mannens and Jesper Nederlof


Abstract
We give a fine-grained classification of evaluating the Tutte polynomial T(G;x,y) on all integer points on graphs with small treewidth and cutwidth. Specifically, we show for any point (x,y) ∈ ℤ² that either - T(G; x, y) can be computed in polynomial time, - T(G; x, y) can be computed in 2^O(tw) n^O(1) time, but not in 2^o(ctw) n^O(1) time assuming the Exponential Time Hypothesis (ETH), - T(G; x, y) can be computed in 2^O(tw log tw) n^O(1) time, but not in 2^o(ctw log ctw) n^O(1) time assuming the ETH, where we assume tree decompositions of treewidth tw and cutwidth decompositions of cutwidth ctw are given as input along with the input graph on n vertices and point (x,y). To obtain these results, we refine the existing reductions that were instrumental for the seminal dichotomy by Jaeger, Welsh and Vertigan [Math. Proc. Cambridge Philos. Soc'90]. One of our technical contributions is a new rank bound of a matrix that indicates whether the union of two forests is a forest itself, which we use to show that the number of forests of a graph can be counted in 2^O(tw) n^O(1) time.
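
As a reference point for what is being computed, the following exponential-time deletion-contraction evaluator (the standard textbook recursion on a small multigraph, not the paper's treewidth/cutwidth-parameterized algorithms) evaluates T(G; x, y) for a graph given as an edge list; the triangle check at the end is an assumed toy instance.

def tutte(edges, x, y):
    """Evaluate T(G; x, y); `edges` is a list of (u, v) pairs (multigraph allowed)."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                   # loop
        return y * tutte(rest, x, y)
    # Bridge test: is v still reachable from u once this edge copy is removed?
    adj = {}
    for a, b in rest:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        for z in adj.get(w, ()):
            if z not in seen:
                seen.add(z)
                stack.append(z)
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if v not in seen:                            # bridge: only the contraction term
        return x * tutte(contracted, x, y)
    return tutte(rest, x, y) + tutte(contracted, x, y)

# Sanity check: for the triangle, T(G; x, y) = x^2 + x + y.
triangle = [(0, 1), (1, 2), (0, 2)]
assert tutte(triangle, 2, 3) == 2**2 + 2 + 3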

Cite as

Isja Mannens and Jesper Nederlof. A Fine-Grained Classification of the Complexity of Evaluating the Tutte Polynomial on Integer Points Parameterized by Treewidth and Cutwidth. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 82:1-82:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{mannens_et_al:LIPIcs.ESA.2023.82,
  author =	{Mannens, Isja and Nederlof, Jesper},
  title =	{{A Fine-Grained Classification of the Complexity of Evaluating the Tutte Polynomial on Integer Points Parameterized by Treewidth and Cutwidth}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{82:1--82:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.82},
  URN =		{urn:nbn:de:0030-drops-187354},
  doi =		{10.4230/LIPIcs.ESA.2023.82},
  annote =	{Keywords: Width Parameters, Parameterized Complexity, Tutte Polynomial}
}
Document
Matching Statistics Speed up BWT Construction

Authors: Francesco Masillo


Abstract
Due to the exponential growth of genomic data, constructing dedicated data structures has become the principal bottleneck in common bioinformatics applications. In particular, the Burrows-Wheeler Transform (BWT) is the basis of some of the most popular self-indexes for genomic data, due to its known favourable behaviour on repetitive data. Some tools that exploit the intrinsic repetitiveness of biological data have risen in popularity, due to their speed and low space consumption. We introduce a new algorithm for computing the BWT, which takes advantage of the redundancy of the data through a compressed version of matching statistics, the CMS of [Lipták et al., WABI 2022]. We show that it suffices to sort a small subset of suffixes, lowering both computation time and space. Our result is due to a new insight which links the so-called insert-heads of [Lipták et al., WABI 2022] to the well-known run boundaries of the BWT. We give two implementations of our algorithm, called CMS-BWT, both competitive in our experimental validation on highly repetitive real-life datasets. In most cases, they outperform other tools w.r.t. running time, trading off a higher memory footprint, which, however, is still considerably smaller than the total size of the input data.
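
For readers unfamiliar with the transform, a naive baseline (sorting all rotations of T$) fixes the definition of the BWT that CMS-BWT computes; the example string is an assumption, and this sketch has nothing to do with the compressed-matching-statistics construction.

def bwt_naive(text, sentinel="$"):
    """Burrows-Wheeler Transform via sorting all rotations of text + sentinel."""
    s = text + sentinel                          # unique end-marker, smaller than all symbols
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt_naive("banana"))   # -> "annb$aa"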

Cite as

Francesco Masillo. Matching Statistics Speed up BWT Construction. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 83:1-83:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{masillo:LIPIcs.ESA.2023.83,
  author =	{Masillo, Francesco},
  title =	{{Matching Statistics Speed up BWT Construction}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{83:1--83:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.83},
  URN =		{urn:nbn:de:0030-drops-187360},
  doi =		{10.4230/LIPIcs.ESA.2023.83},
  annote =	{Keywords: Burrows-Wheeler Transform, matching statistics, string collections, compressed representation, data structures, efficient algorithms}
}
Document
Approximation Schemes for Min-Sum k-Clustering

Authors: Ismail Naderi, Mohsen Rezapour, and Mohammad R. Salavatipour


Abstract
We consider the Min-Sum k-Clustering (k-MSC) problem. Given a set of points in a metric which is represented by an edge-weighted graph G = (V, E) and a parameter k, the goal is to partition the points V into k clusters such that the sum of distances between all pairs of the points within the same cluster is minimized. The k-MSC problem is known to be APX-hard on general metrics. The best known approximation algorithms for the problem, obtained by Behsaz, Friggstad, Salavatipour and Sivakumar [Algorithmica 2019], achieve an approximation ratio of O(log |V|) in polynomial time for general metrics and an approximation ratio of 2+ε in quasi-polynomial time for metrics with bounded doubling dimension. No approximation scheme for k-MSC (when k is part of the input) was known for any non-trivial metric prior to our work. In fact, most of the previous works rely on the simple fact that there is a 2-approximate reduction from k-MSC to the balanced k-median problem and design approximation algorithms for the latter to obtain an approximation for k-MSC. In this paper, we obtain the first Quasi-Polynomial Time Approximation Schemes (QPTAS) for the problem on metrics induced by graphs of bounded treewidth, graphs of bounded highway dimension, graphs of bounded doubling dimension (including fixed-dimensional Euclidean metrics), and planar and minor-free graphs. We bypass the barrier of 2 for k-MSC by introducing a new clustering problem, which we call min-hub clustering, which is a generalization of balanced k-median and a trade-off between center-based clustering problems (such as balanced k-median) and pair-wise clustering (such as Min-Sum k-clustering). We then show how one can find approximation schemes for min-hub clustering on certain classes of metrics.

Cite as

Ismail Naderi, Mohsen Rezapour, and Mohammad R. Salavatipour. Approximation Schemes for Min-Sum k-Clustering. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 84:1-84:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{naderi_et_al:LIPIcs.ESA.2023.84,
  author =	{Naderi, Ismail and Rezapour, Mohsen and Salavatipour, Mohammad R.},
  title =	{{Approximation Schemes for Min-Sum k-Clustering}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{84:1--84:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.84},
  URN =		{urn:nbn:de:0030-drops-187379},
  doi =		{10.4230/LIPIcs.ESA.2023.84},
  annote =	{Keywords: Approximation Algorithms, Clustering, Dynamic Programming}
}
Document
Algorithms for Computing Maximum Cliques in Hyperbolic Random Graphs

Authors: Eunjin Oh and Seunghyeok Oh


Abstract
In this paper, we study the maximum clique problem on hyperbolic random graphs. A hyperbolic random graph is a mathematical model for analyzing scale-free networks since it effectively explains the power-law degree distribution of scale-free networks. We propose a simple algorithm for finding a maximum clique in a hyperbolic random graph. We first analyze the running time of our algorithm theoretically. We can compute a maximum clique on a hyperbolic random graph G in O(m + n^{4.5(1-α)}) expected time if a geometric representation is given or in O(m + n^{6(1-α)}) expected time if a geometric representation is not given, where n and m denote the numbers of vertices and edges of G, respectively, and α denotes a parameter controlling the power-law exponent of the degree distribution of G. We also implemented and evaluated our algorithm empirically. Our algorithm outperforms the previous algorithm [BFK18] both practically and theoretically. Beyond hyperbolic random graphs, we also experiment on real-world networks. For most instances, we efficiently obtain large cliques close to the optimal solutions.

Cite as

Eunjin Oh and Seunghyeok Oh. Algorithms for Computing Maximum Cliques in Hyperbolic Random Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 85:1-85:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{oh_et_al:LIPIcs.ESA.2023.85,
  author =	{Oh, Eunjin and Oh, Seunghyeok},
  title =	{{Algorithms for Computing Maximum Cliques in Hyperbolic Random Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{85:1--85:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.85},
  URN =		{urn:nbn:de:0030-drops-187384},
  doi =		{10.4230/LIPIcs.ESA.2023.85},
  annote =	{Keywords: Maximum cliques, hyperbolic random graphs}
}
Document
Parameterized Complexity of Equality MinCSP

Authors: George Osipov and Magnus Wahlström


Abstract
We study the parameterized complexity of MinCSP for so-called equality languages, i.e., for finite languages over an infinite domain such as ℕ, where the relations are defined via first-order formulas whose only predicate is =. This is an important class of languages that forms the starting point of all study of infinite-domain CSPs under the commonly used approach pioneered by Bodirsky, i.e., languages defined as reducts of finitely bounded homogeneous structures. Moreover, MinCSP over equality languages forms a natural class of optimisation problems in its own right, covering such problems as Edge Multicut, Steiner Multicut and (under singleton expansion) Edge Multiway Cut. We classify MinCSP(Γ) for every finite equality language Γ, under the natural parameter, as either FPT, W[1]-hard but admitting a constant-factor FPT-approximation, or not admitting a constant-factor FPT-approximation unless FPT=W[2]. In particular, we describe an FPT case that slightly generalises Multicut, and show a constant-factor FPT-approximation for Disjunctive Multicut, the generalisation of Multicut where the "cut requests" come as disjunctions over O(1) individual cut requests s_i ≠ t_i. We also consider singleton expansions of equality languages, enriching an equality language with the capability for assignment constraints (x = i) for either a finite or infinitely many constants i, and fully characterize the complexity of the resulting MinCSP.

Cite as

George Osipov and Magnus Wahlström. Parameterized Complexity of Equality MinCSP. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 86:1-86:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{osipov_et_al:LIPIcs.ESA.2023.86,
  author =	{Osipov, George and Wahlstr\"{o}m, Magnus},
  title =	{{Parameterized Complexity of Equality MinCSP}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{86:1--86:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.86},
  URN =		{urn:nbn:de:0030-drops-187393},
  doi =		{10.4230/LIPIcs.ESA.2023.86},
  annote =	{Keywords: parameterized complexity, constraint satisfaction, parameterized approximation}
}
Document
Engineering Fast Algorithms for the Bottleneck Matching Problem

Authors: Ioannis Panagiotas, Grégoire Pichon, Somesh Singh, and Bora Uçar


Abstract
We investigate the maximum bottleneck matching problem in bipartite graphs. Given a bipartite graph with nonnegative edge weights, the problem is to find a maximum cardinality matching in which the minimum weight of an edge is the maximum. To the best of our knowledge, there are two widely used solvers for this problem based on two different approaches. There exists a third known approach in the literature, which seems inferior to those two; this is presumably why no implementation of it exists. We take this third approach, make theoretical observations to improve its behavior, and implement the improved method. Experiments with the two existing solvers show that their run time can be too high to be useful in many interesting cases. Furthermore, their performance is not predictable, and slight perturbations of the input graph lead to considerable changes in the run time. On the other hand, the proposed solver’s performance is much more stable; it is almost always faster than or comparable to the two existing solvers, and its run time always remains low.
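
As background only, a straightforward threshold-and-test baseline (binary search over the distinct edge weights plus Kuhn's augmenting-path matching) computes the optimal bottleneck value; it is neither of the two existing solvers nor the third approach engineered in the paper, and the small instance at the end is an assumption.

def max_matching_size(adj, n_right):
    """Kuhn's augmenting-path algorithm; adj[u] = iterable of right-side neighbors of u."""
    match_right = [-1] * n_right

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

def bottleneck_matching_value(edges, n_left, n_right):
    """edges: list of (u, v, weight). Returns the optimal bottleneck weight."""
    full_adj = [[] for _ in range(n_left)]
    for u, v, _ in edges:
        full_adj[u].append(v)
    target = max_matching_size(full_adj, n_right)    # maximum matching cardinality

    weights = sorted({w for _, _, w in edges})
    lo, hi, best = 0, len(weights) - 1, weights[0]
    while lo <= hi:                                  # binary search on the threshold
        mid = (lo + hi) // 2
        adj = [[] for _ in range(n_left)]
        for u, v, w in edges:
            if w >= weights[mid]:
                adj[u].append(v)
        if max_matching_size(adj, n_right) == target:
            best, lo = weights[mid], mid + 1         # still a maximum matching: raise threshold
        else:
            hi = mid - 1
    return best

edges = [(0, 0, 5), (0, 1, 1), (1, 0, 2), (1, 1, 4), (2, 2, 3)]
print(bottleneck_matching_value(edges, 3, 3))        # -> 3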

Cite as

Ioannis Panagiotas, Grégoire Pichon, Somesh Singh, and Bora Uçar. Engineering Fast Algorithms for the Bottleneck Matching Problem. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 87:1-87:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{panagiotas_et_al:LIPIcs.ESA.2023.87,
  author =	{Panagiotas, Ioannis and Pichon, Gr\'{e}goire and Singh, Somesh and U\c{c}ar, Bora},
  title =	{{Engineering Fast Algorithms for the Bottleneck Matching Problem}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{87:1--87:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.87},
  URN =		{urn:nbn:de:0030-drops-187406},
  doi =		{10.4230/LIPIcs.ESA.2023.87},
  annote =	{Keywords: bipartite graphs, assignment problem, matching}
}
Document
Subcubic Algorithm for (Unweighted) Unrooted Tree Edit Distance

Authors: Krzysztof Pióro


Abstract
The tree edit distance problem is a natural generalization of the classic string edit distance problem. Given two ordered, edge-labeled trees T₁ and T₂, the edit distance between T₁ and T₂ is defined as the minimum total cost of operations that transform T₁ into T₂. In one operation, we can contract an edge, split a vertex into two or change the label of an edge. For the weighted version of the problem, where the cost of each operation depends on the type of the operation and the label on the edge involved, O(n³) time algorithms are known for both rooted and unrooted trees. The existence of a truly subcubic O(n^{3-ε}) time algorithm is unlikely, as it would imply a truly subcubic algorithm for the APSP problem. However, recently Mao (FOCS'21) showed that if we assume that each operation has a unit cost, then the tree edit distance between two rooted trees can be computed in truly subcubic time. In this paper, we show how to adapt Mao’s algorithm to make it work for unrooted trees and we show an Õ(n^{(7ω + 15)/(2ω + 6)}) ≤ O(n^2.9417) time algorithm for the unweighted tree edit distance between two unrooted trees, where ω ≤ 2.373 is the matrix multiplication exponent. It is the first known subcubic algorithm for unrooted trees. The main idea behind our algorithm is the fact that to compute the tree edit distance between two unrooted trees, it is enough to compute the tree edit distance between an arbitrary rooting of the first tree and every rooting of the second tree.

Cite as

Krzysztof Pióro. Subcubic Algorithm for (Unweighted) Unrooted Tree Edit Distance. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 88:1-88:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{pioro:LIPIcs.ESA.2023.88,
  author =	{Pi\'{o}ro, Krzysztof},
  title =	{{Subcubic Algorithm for (Unweighted) Unrooted Tree Edit Distance}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{88:1--88:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.88},
  URN =		{urn:nbn:de:0030-drops-187415},
  doi =		{10.4230/LIPIcs.ESA.2023.88},
  annote =	{Keywords: tree edit distance, dynamic programming, matrix multiplication}
}
Document
Linear Time Construction of Cover Suffix Tree and Applications

Authors: Jakub Radoszewski


Abstract
The Cover Suffix Tree (CST) of a string T is the suffix tree of T with additional explicit nodes corresponding to halves of square substrings of T. In the CST an explicit node corresponding to a substring C of T is annotated with two numbers: the number of non-overlapping consecutive occurrences of C and the total number of positions in T that are covered by occurrences of C in T. Kociumaka et al. (Algorithmica, 2015) have shown how to compute the CST of a length-n string in 𝒪(n log n) time. We give an algorithm that computes the same data structure in 𝒪(n) time assuming that T is over an integer alphabet and discuss its implications. A string C is a cover of text T if occurrences of C in T cover all positions of T; C is a seed of T if occurrences and overhangs (i.e., prefix-suffix occurrences) of C in T cover all positions of T. An α-partial cover (α-partial seed) of text T is a string C whose occurrences in T (occurrences and overhangs in T, respectively) cover at least α positions of T. Kociumaka et al. (Algorithmica, 2015; Theor. Comput. Sci., 2018) have shown that knowing the CST of a length-n string T, one can compute a linear-sized representation of all seeds of T as well as all shortest α-partial covers and seeds in T for a given α in 𝒪(n) time. Thus our result implies linear-time algorithms computing these notions of quasiperiodicity. The resulting algorithm computing seeds is substantially different from the previous one (Kociumaka et al., SODA 2012, ACM Trans. Algorithms, 2020); in particular, it is non-recursive. Kociumaka et al. (Algorithmica, 2015) proposed an 𝒪(n log n)-time algorithm for computing a shortest α-partial cover for each α = 1,…,n; we improve this complexity to 𝒪(n). Our results are based on a new combinatorial characterization of consecutive overlapping occurrences of a substring S of T in terms of the set of runs (see Kolpakov and Kucherov, FOCS 1999) in T. This new insight also leads to an 𝒪(n)-sized index for reporting overlapping consecutive occurrences of a given pattern P of length m in the optimal 𝒪(m+output) time, where output is the number of occurrences reported. In comparison, a general index for reporting bounded-gap consecutive occurrences of Navarro and Thankachan (Theor. Comput. Sci., 2016) uses 𝒪(n log n) space.
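
To fix the covering notions used here, the following few lines (the string T is an assumed toy input) count the positions of T covered by occurrences of a candidate C, from which both "C is a cover of T" and the α-partial-cover score can be read off; this is unrelated to the CST machinery above.

def covered_positions(T, C):
    """Number of positions of T covered by (possibly overlapping) occurrences of C."""
    covered = [False] * len(T)
    start = T.find(C)
    while start != -1:
        for i in range(start, start + len(C)):
            covered[i] = True
        start = T.find(C, start + 1)     # allow overlapping occurrences
    return sum(covered)

T = "abaababaabaab"
print(covered_positions(T, "abaab"), len(T))   # 13 13: "abaab" is a cover of T
print(covered_positions(T, "aba"))             # 11: "aba" covers only 11 of 13 positions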

Cite as

Jakub Radoszewski. Linear Time Construction of Cover Suffix Tree and Applications. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 89:1-89:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{radoszewski:LIPIcs.ESA.2023.89,
  author =	{Radoszewski, Jakub},
  title =	{{Linear Time Construction of Cover Suffix Tree and Applications}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{89:1--89:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.89},
  URN =		{urn:nbn:de:0030-drops-187428},
  doi =		{10.4230/LIPIcs.ESA.2023.89},
  annote =	{Keywords: cover (quasiperiod), seed, suffix tree, run (maximal repetition)}
}
Document
Simultaneous Representation of Interval Graphs in the Sunflower Case

Authors: Ignaz Rutter and Peter Stumpf


Abstract
A simultaneous representation of (vertex-labeled) graphs G_1,… ,G_k consists of a (geometric) intersection representation R_i for each graph G_i such that each vertex v is represented by the same geometric object in each R_i for which G_i contains v. While Jampani and Lubiw showed that the existence of simultaneous interval representations for k = 2 can be tested efficiently (2010), testing it for graphs where k is part of the input is NP-complete (Bok and Jedličková, 2018). An important special case of simultaneous representations is the sunflower case, where G_i ∩ G_j = (V(G_i)∩ V(G_j),E(G_i)∩ E(G_j)) is the same graph for each i ≠ j. We give an O(∑_{i=1}^k (|V(G_i)|+|E(G_i)|))-time algorithm for deciding the existence of a simultaneous interval representation for the sunflower case, even when k is part of the input. This answers an open question of Jampani and Lubiw.

Cite as

Ignaz Rutter and Peter Stumpf. Simultaneous Representation of Interval Graphs in the Sunflower Case. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 90:1-90:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{rutter_et_al:LIPIcs.ESA.2023.90,
  author =	{Rutter, Ignaz and Stumpf, Peter},
  title =	{{Simultaneous Representation of Interval Graphs in the Sunflower Case}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{90:1--90:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.90},
  URN =		{urn:nbn:de:0030-drops-187435},
  doi =		{10.4230/LIPIcs.ESA.2023.90},
  annote =	{Keywords: Interval Graphs, Sunflower Case, Simultaneous Representation, Recognition, Geometric Intersection Graphs}
}
Document
Relaxed Models for Adversarial Streaming: The Bounded Interruptions Model and the Advice Model

Authors: Menachem Sadigurschi, Moshe Shechner, and Uri Stemmer


Abstract
Streaming algorithms are typically analyzed in the oblivious setting, where we assume that the input stream is fixed in advance. Recently, there is a growing interest in designing adversarially robust streaming algorithms that must maintain utility even when the input stream is chosen adaptively and adversarially as the execution progresses. While several fascinating results are known for the adversarial setting, in general, it comes at a very high cost in terms of the required space. Motivated by this, in this work we set out to explore intermediate models that allow us to interpolate between the oblivious and the adversarial models. Specifically, we put forward the following two models: - The bounded interruptions model, in which we assume that the adversary is only partially adaptive. - The advice model, in which the streaming algorithm may occasionally ask for one bit of advice. We present both positive and negative results for each of these two models. In particular, we present generic reductions from each of these models to the oblivious model. This allows us to design robust algorithms with significantly improved space complexity compared to what is known in the plain adversarial model.

Cite as

Menachem Sadigurschi, Moshe Shechner, and Uri Stemmer. Relaxed Models for Adversarial Streaming: The Bounded Interruptions Model and the Advice Model. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 91:1-91:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{sadigurschi_et_al:LIPIcs.ESA.2023.91,
  author =	{Sadigurschi, Menachem and Shechner, Moshe and Stemmer, Uri},
  title =	{{Relaxed Models for Adversarial Streaming: The Bounded Interruptions Model and the Advice Model}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{91:1--91:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.91},
  URN =		{urn:nbn:de:0030-drops-187445},
  doi =		{10.4230/LIPIcs.ESA.2023.91},
  annote =	{Keywords: streaming, adversarial streaming}
}
Document
Maximal k-Edge-Connected Subgraphs in Almost-Linear Time for Small k

Authors: Thatchaphol Saranurak and Wuwei Yuan


Abstract
We give the first almost-linear time algorithm for computing the maximal k-edge-connected subgraphs of an undirected unweighted graph for any constant k. More specifically, given an n-vertex m-edge graph G = (V,E) and a number k = log^{o(1)} n, we can deterministically compute in O(m+n^{1+o(1)}) time the unique vertex partition {V_1,… ,V_z} such that, for every i, V_i induces a k-edge-connected subgraph while every superset V'_i ⊃ V_{i} does not. Previous algorithms with linear time work only when k ≤ 2 [Tarjan SICOMP'72]; otherwise they all require Ω(m+n√n) time even when k = 3 [Chechik et al. SODA'17; Forster et al. SODA'20]. Our algorithm also extends to the decremental graph setting; we can deterministically maintain the maximal k-edge-connected subgraphs of a graph undergoing edge deletions in m^{1+o(1)} total update time. Our key idea is a reduction to the dynamic algorithm supporting pairwise k-edge-connectivity queries [Jin and Sun FOCS'20].
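
The object being computed can be checked against a simple (and far from almost-linear) reference procedure: repeatedly split any component along a cut of fewer than k edges. The sketch below assumes networkx and a small example graph; it only illustrates the partition and uses none of the paper's techniques.

import networkx as nx

def maximal_k_edge_connected_subgraphs(G, k):
    """Vertex partition into maximal k-edge-connected subgraphs (slow reference version)."""
    H = G.copy()
    work = [set(c) for c in nx.connected_components(H)]
    parts = []
    while work:
        comp = work.pop()
        if len(comp) == 1:
            parts.append(comp)
            continue
        sub = H.subgraph(comp)
        if nx.edge_connectivity(sub) >= k:
            parts.append(comp)                            # already k-edge-connected
            continue
        H.remove_edges_from(nx.minimum_edge_cut(sub))     # this cut has fewer than k edges
        work.extend(set(c) for c in nx.connected_components(H.subgraph(comp)))
    return parts

# Two triangles joined by a single bridge: the maximal 2-edge-connected subgraphs
# are exactly the two triangles.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)])
print(sorted(maximal_k_edge_connected_subgraphs(G, 2), key=min))   # [{0, 1, 2}, {3, 4, 5}]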

Cite as

Thatchaphol Saranurak and Wuwei Yuan. Maximal k-Edge-Connected Subgraphs in Almost-Linear Time for Small k. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 92:1-92:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{saranurak_et_al:LIPIcs.ESA.2023.92,
  author =	{Saranurak, Thatchaphol and Yuan, Wuwei},
  title =	{{Maximal k-Edge-Connected Subgraphs in Almost-Linear Time for Small k}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{92:1--92:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.92},
  URN =		{urn:nbn:de:0030-drops-187451},
  doi =		{10.4230/LIPIcs.ESA.2023.92},
  annote =	{Keywords: Graph algorithms, Maximal k-edge-connected subgraphs, Dynamic k-edge connectivity}
}
Document
Approximating Connected Maximum Cuts via Local Search

Authors: Baruch Schieber and Soroush Vahidi


Abstract
The Connected Max Cut (CMC) problem takes as input an undirected graph G(V,E) and finds a subset S ⊆ V such that the induced subgraph G[S] is connected and the number of edges connecting vertices in S to vertices in V⧵S is maximized. This problem is closely related to the Max Leaf Degree (MLD) problem. The input to the MLD problem is an undirected graph G(V,E) and the goal is to find a subtree of G that maximizes the degree (in G) of its leaves. [Gandhi et al. 2018] observed that an α-approximation for the MLD problem induces an 𝒪(α)-approximation for the CMC problem. We present an 𝒪(log log |V|)-approximation algorithm for the MLD problem via local search. This implies an 𝒪(log log |V|)-approximation algorithm for the CMC problem, which improves (exponentially) upon the best known 𝒪(log |V|)-approximation for the Connected Max Cut problem [Hajiaghayi et al. 2015].

Cite as

Baruch Schieber and Soroush Vahidi. Approximating Connected Maximum Cuts via Local Search. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 93:1-93:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{schieber_et_al:LIPIcs.ESA.2023.93,
  author =	{Schieber, Baruch and Vahidi, Soroush},
  title =	{{Approximating Connected Maximum Cuts via Local Search}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{93:1--93:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.93},
  URN =		{urn:nbn:de:0030-drops-187466},
  doi =		{10.4230/LIPIcs.ESA.2023.93},
  annote =	{Keywords: approximation algorithms, graph theory, max-cut, local search}
}
Document
Parameterized Matroid-Constrained Maximum Coverage

Authors: François Sellier


Abstract
In this paper, we introduce the concept of Density-Balanced Subset in a matroid, in which independent sets can be sampled so as to guarantee that (i) each element has the same probability to be sampled, and (ii) those events are negatively correlated. These Density-Balanced Subsets are subsets in the ground set of a matroid in which the traditional notion of uniform random sampling can be extended. We then provide an application of this concept to the Matroid-Constrained Maximum Coverage problem. In this problem, given a matroid ℳ = (V, ℐ) of rank k on a ground set V and a coverage function f on V, the goal is to find an independent set S ∈ ℐ maximizing f(S). This problem is an important special case of the much-studied submodular function maximization problem subject to a matroid constraint; this is also a generalization of the maximum k-cover problem in a graph. In this paper, assuming that the coverage function has a bounded frequency μ (i.e., any element of the underlying universe of the coverage function appears in at most μ sets), we design a procedure, parameterized by some integer ρ, to extract in polynomial time an approximate kernel of size ρ ⋅ k that is guaranteed to contain a 1 - (μ - 1)/ρ approximation of the optimal solution. This procedure can then be used to get a Fixed-Parameter Tractable Approximation Scheme (FPT-AS) providing a 1 - ε approximation in time (μ/ε)^O(k) ⋅ |V|^O(1). This generalizes and improves the results of [Manurangsi, 2019] and [Huang and Sellier, 2022], providing the first FPT-AS working on an arbitrary matroid. Moreover, as the kernel has a very simple characterization, it can be constructed in the streaming setting.

Cite as

François Sellier. Parameterized Matroid-Constrained Maximum Coverage. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 94:1-94:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{sellier:LIPIcs.ESA.2023.94,
  author =	{Sellier, Fran\c{c}ois},
  title =	{{Parameterized Matroid-Constrained Maximum Coverage}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{94:1--94:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.94},
  URN =		{urn:nbn:de:0030-drops-187475},
  doi =		{10.4230/LIPIcs.ESA.2023.94},
  annote =	{Keywords: Matroids, approximate kernel, maximum coverage}
}
Document
Fault Tolerance in Euclidean Committee Selection

Authors: Chinmay Sonar, Subhash Suri, and Jie Xue


Abstract
In the committee selection problem, the goal is to choose a subset of size k from a set of candidates C that collectively gives the best representation to a set of voters. We consider this problem in Euclidean d-space where each voter/candidate is a point and voters' preferences are implicitly represented by Euclidean distances to candidates. We explore fault-tolerance in committee selection and study the following three variants: (1) given a committee and a set of f failing candidates, find their optimal replacement; (2) compute the worst-case replacement score for a given committee under failure of f candidates; and (3) design a committee with the best replacement score under worst-case failures. The score of a committee is determined using the well-known (min-max) Chamberlin-Courant rule: minimize the maximum distance between any voter and its closest candidate in the committee. Our main results include the following: (1) in one dimension, all three problems can be solved in polynomial time; (2) in dimension d ≥ 2, all three problems are NP-hard; and (3) all three problems admit a constant-factor approximation in any fixed dimension, and the optimal committee problem has an FPT bicriterion approximation.
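
A few lines of Python make the min-max Chamberlin-Courant rule concrete (the voter and candidate coordinates are illustrative assumptions); a brute-force search over all size-k committees then gives the optimal score for a tiny 2-D instance, which is of course not the paper's algorithms or hardness constructions.

from itertools import combinations
from math import dist

def cc_score(committee, voters):
    """Min-max Chamberlin-Courant score: the worst voter's distance to its closest member."""
    return max(min(dist(v, c) for c in committee) for v in voters)

voters = [(0, 0), (1, 0), (4, 0), (5, 1)]
candidates = [(0, 0), (2, 0), (4, 0), (5, 0)]
k = 2
best = min(combinations(candidates, k), key=lambda com: cc_score(com, voters))
print(best, cc_score(best, voters))   # ((0, 0), (5, 0)) 1.0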

Cite as

Chinmay Sonar, Subhash Suri, and Jie Xue. Fault Tolerance in Euclidean Committee Selection. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 95:1-95:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{sonar_et_al:LIPIcs.ESA.2023.95,
  author =	{Sonar, Chinmay and Suri, Subhash and Xue, Jie},
  title =	{{Fault Tolerance in Euclidean Committee Selection}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{95:1--95:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.95},
  URN =		{urn:nbn:de:0030-drops-187489},
  doi =		{10.4230/LIPIcs.ESA.2023.95},
  annote =	{Keywords: Multiwinner elections, Fault tolerance, Geometric Hitting Set, EPTAS}
}
Document
Aggregating over Dominated Points by Sorting, Scanning, Zip and Flat Maps

Authors: Jacek Sroka and Jerzy Tyszkiewicz


Abstract
The prefix aggregation operation (also called scan), and its particular case, prefix summation, is an important parallel primitive that enjoys a lot of attention in the research literature. It is also used in many algorithms as one of the steps. Aggregation over dominated points in ℝ^m is a multidimensional generalisation of prefix aggregation. It is also intensively researched, both as a parallel primitive and as a practical problem encountered in computational geometry, spatial databases and data warehouses. In this paper we show that, for a constant dimension m, aggregation over dominated points in ℝ^m can be computed by O(1) basic operations that include sorting the whole dataset, zipping sorted lists of elements, computing prefix aggregations of lists of elements and flat maps, which expand the data size from the initial n to n log^{m-1} n. Thereby we establish that prefix aggregation suffices to express aggregation over dominated points in more dimensions, even though the latter is a far-reaching generalisation of the former. Many problems known to be expressible by aggregation over dominated points become expressible by prefix aggregation, too. We rely on a small set of primitive operations which guarantee an easy transfer to various distributed architectures and some desired properties of the implementation.
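
The two notions can be stated in a few lines of Python (toy data; this only fixes the definitions and does not reproduce the paper's reduction to O(1) sorts, zips, scans and flat maps): a prefix aggregation is a scan over a list, and aggregation over dominated points assigns to each point an aggregate over all points that are coordinate-wise no larger.

from itertools import accumulate
import operator

values = [3, 1, 4, 1, 5]
print(list(accumulate(values, operator.add)))      # prefix sums: [3, 4, 8, 9, 14]

points = [(1, 2, 10), (2, 1, 20), (3, 3, 5), (0, 0, 7)]   # (x, y, weight)

def dominated_sum(p, pts):
    """Sum of weights of points dominated by p (coordinate-wise <=, excluding p itself)."""
    return sum(w for (x, y, w) in pts
               if (x, y, w) != p and x <= p[0] and y <= p[1])

for p in points:
    print(p, dominated_sum(p, points))             # naive reference computation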

Cite as

Jacek Sroka and Jerzy Tyszkiewicz. Aggregating over Dominated Points by Sorting, Scanning, Zip and Flat Maps. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 96:1-96:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{sroka_et_al:LIPIcs.ESA.2023.96,
  author =	{Sroka, Jacek and Tyszkiewicz, Jerzy},
  title =	{{Aggregating over Dominated Points by Sorting, Scanning, Zip and Flat Maps}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{96:1--96:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.96},
  URN =		{urn:nbn:de:0030-drops-187497},
  doi =		{10.4230/LIPIcs.ESA.2023.96},
  annote =	{Keywords: Aggregation over dominated points, prefix sums, sorting, flat map, range tree, parallel algorithms}
}
Document
Improved Algorithms for Online Rent Minimization Problem Under Unit-Size Jobs

Authors: Enze Sun, Zonghan Yang, and Yuhao Zhang


Abstract
We consider the Online Rent Minimization problem, where online jobs with release times, deadlines, and processing times must be scheduled on machines that can be rented for a fixed length period of T. The objective is to minimize the number of machine rents. This problem generalizes the Online Machine Minimization problem where machines can be rented for an infinite period, and both problems have an asymptotically optimal competitive ratio of O(log(p_max/p_min)) for general processing times, where p_max and p_min are the maximum and minimum processing times respectively. However, for small values of p_max/p_min, a better competitive ratio can be achieved by assuming unit-size jobs. Under this assumption, Devanur et al. (2014) gave an optimal e-competitive algorithm for Online Machine Minimization, and Chen and Zhang (2022) gave a (3e+7) ≈ 15.16-competitive algorithm for Online Rent Minimization. In this paper, we significantly improve the competitive ratio of the Online Rent Minimization problem under unit size to 6, by using a clean oracle-based online algorithm framework.

Cite as

Enze Sun, Zonghan Yang, and Yuhao Zhang. Improved Algorithms for Online Rent Minimization Problem Under Unit-Size Jobs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 97:1-97:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{sun_et_al:LIPIcs.ESA.2023.97,
  author =	{Sun, Enze and Yang, Zonghan and Zhang, Yuhao},
  title =	{{Improved Algorithms for Online Rent Minimization Problem Under Unit-Size Jobs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{97:1--97:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.97},
  URN =		{urn:nbn:de:0030-drops-187500},
  doi =		{10.4230/LIPIcs.ESA.2023.97},
  annote =	{Keywords: Online Algorithm, Scheduling, Machine Minimization, Rent Minimization}
}
Document
Simple Deterministic Approximation for Submodular Multiple Knapsack Problem

Authors: Xiaoming Sun, Jialin Zhang, and Zhijie Zhang


Abstract
Submodular maximization has been a central topic in theoretical computer science and combinatorial optimization over the last decades. Plenty of well-performing approximation algorithms have been designed for the problem over a variety of constraints. In this paper, we consider the submodular multiple knapsack problem (SMKP). In SMKP, the profits of each subset of elements are specified by a monotone submodular function. The goal is to find a feasible packing of elements over multiple bins (knapsacks) to maximize the profit. Recently, Fairstein et al. [ESA20] proposed a nearly optimal (1-e^{-1}-ε)-approximation algorithm for SMKP. Their algorithm is obtained by combining configuration LP, a grouping technique for bin packing, and the continuous greedy algorithm for submodular maximization. As a result, the algorithm is somewhat sophisticated and inherently randomized. In this paper, we present an arguably simple deterministic combinatorial algorithm for SMKP, which achieves a (1-e^{-1}-ε)-approximation ratio. Our algorithm is based on very different ideas compared with Fairstein et al. [ESA20].

Cite as

Xiaoming Sun, Jialin Zhang, and Zhijie Zhang. Simple Deterministic Approximation for Submodular Multiple Knapsack Problem. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 98:1-98:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{sun_et_al:LIPIcs.ESA.2023.98,
  author =	{Sun, Xiaoming and Zhang, Jialin and Zhang, Zhijie},
  title =	{{Simple Deterministic Approximation for Submodular Multiple Knapsack Problem}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{98:1--98:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.98},
  URN =		{urn:nbn:de:0030-drops-187517},
  doi =		{10.4230/LIPIcs.ESA.2023.98},
  annote =	{Keywords: Submodular maximization, knapsack problem, deterministic algorithm}
}
Document
The Tight Spanning Ratio of the Rectangle Delaunay Triangulation

Authors: André van Renssen, Yuan Sha, Yucheng Sun, and Sampson Wong


Abstract
Spanner construction is a well-studied problem and Delaunay triangulations are among the most popular spanners. Tight bounds are known if the Delaunay triangulation is constructed using an equilateral triangle, a square, or a regular hexagon. However, all other shapes have remained elusive. In this paper we extend the restricted class of spanners for which tight bounds are known. We prove that Delaunay triangulations constructed using rectangles with aspect ratio A have spanning ratio at most √2 √{1+A² + A √{A²+1}}, which matches the known lower bound.
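
As a quick numerical sanity check of the stated bound (A = 1 and A = 2 are arbitrary sample values chosen here), the expression specialises at A = 1 to √(4+2√2) ≈ 2.61, which matches the bound known for the square case mentioned above.

from math import sqrt, isclose

ratio = lambda A: sqrt(2) * sqrt(1 + A**2 + A * sqrt(A**2 + 1))
assert isclose(ratio(1), sqrt(4 + 2 * sqrt(2)))    # square: sqrt(4 + 2*sqrt(2)) ~ 2.6131
print(ratio(1), ratio(2))                          # elongated rectangles give larger ratios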

Cite as

André van Renssen, Yuan Sha, Yucheng Sun, and Sampson Wong. The Tight Spanning Ratio of the Rectangle Delaunay Triangulation. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 99:1-99:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{vanrenssen_et_al:LIPIcs.ESA.2023.99,
  author =	{van Renssen, Andr\'{e} and Sha, Yuan and Sun, Yucheng and Wong, Sampson},
  title =	{{The Tight Spanning Ratio of the Rectangle Delaunay Triangulation}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{99:1--99:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.99},
  URN =		{urn:nbn:de:0030-drops-187523},
  doi =		{10.4230/LIPIcs.ESA.2023.99},
  annote =	{Keywords: Spanners, Delaunay Triangulation, Spanning Ratio}
}
Document
Canonization of a Random Graph by Two Matrix-Vector Multiplications

Authors: Oleg Verbitsky and Maksim Zhukovskii


Abstract
We show that a canonical labeling of a random n-vertex graph can be obtained by assigning to each vertex x the triple (w₁(x),w₂(x),w₃(x)), where w_k(x) is the number of walks of length k starting from x. This takes time 𝒪(n²), where n² is the input size, by using just two matrix-vector multiplications. The linear-time canonization of a random graph is the classical result of Babai, Erdős, and Selkow. For this purpose they use the well-known combinatorial color refinement procedure, and we make a comparative analysis of the two algorithmic approaches.
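
The labeling itself is only a few lines; the sketch below (a random G(n, 1/2) instance with n = 200 chosen arbitrarily) computes the triples (w₁(x), w₂(x), w₃(x)) from the adjacency matrix via matrix-vector products, with w₂ = A·w₁ and w₃ = A·w₂, and checks whether the triples are pairwise distinct, which the result above guarantees with high probability.

import numpy as np

rng = np.random.default_rng(0)
n = 200
upper = np.triu(rng.random((n, n)) < 0.5, 1)     # sample G(n, 1/2)
A = (upper | upper.T).astype(np.int64)

w1 = A @ np.ones(n, dtype=np.int64)              # walks of length 1 (degrees)
w2 = A @ w1                                      # walks of length 2 starting at each vertex
w3 = A @ w2                                      # walks of length 3 starting at each vertex

triples = list(zip(w1.tolist(), w2.tolist(), w3.tolist()))
print("all triples distinct:", len(set(triples)) == n)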

Cite as

Oleg Verbitsky and Maksim Zhukovskii. Canonization of a Random Graph by Two Matrix-Vector Multiplications. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 100:1-100:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{verbitsky_et_al:LIPIcs.ESA.2023.100,
  author =	{Verbitsky, Oleg and Zhukovskii, Maksim},
  title =	{{Canonization of a Random Graph by Two Matrix-Vector Multiplications}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{100:1--100:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.100},
  URN =		{urn:nbn:de:0030-drops-187536},
  doi =		{10.4230/LIPIcs.ESA.2023.100},
  annote =	{Keywords: Graph Isomorphism, canonical labeling, random graphs, walk matrix, color refinement, linear time}
}
Document
Improved Algorithms for Distance Selection and Related Problems

Authors: Haitao Wang and Yiming Zhao


Abstract
In this paper, we propose new techniques for solving geometric optimization problems involving interpoint distances of a point set in the plane. Given a set P of n points in the plane and an integer 1 ≤ k ≤ binom(n,2), the distance selection problem is to find the k-th smallest interpoint distance among all pairs of points of P. The previously best deterministic algorithm solves the problem in O(n^{4/3} log² n) time [Katz and Sharir, 1997]. In this paper, we improve their algorithm to O(n^{4/3} log n) time. Using similar techniques, we also give improved algorithms for both the two-sided and the one-sided discrete Fréchet distance with shortcuts problem for two point sets in the plane. For the two-sided problem (resp., one-sided problem), we improve the previous work [Avraham, Filtser, Kaplan, Katz, and Sharir, 2015] by a factor of roughly log²(m+n) (resp., (m+n)^ε), where m and n are the sizes of the two input point sets, respectively. Other problems whose solutions can be improved by our techniques include the reverse shortest path problems for unit-disk graphs. Our techniques are quite general, and we believe they will find many other applications in the future.
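For reference, the problem itself is easy to pin down with a brute-force baseline. The sketch below is the naive O(n² log n) approach, not the paper's O(n^{4/3} log n) algorithm; the function name is ours.

import math
from itertools import combinations

def kth_smallest_distance(points, k):
    """Naive baseline for distance selection: sort all binom(n, 2) interpoint
    distances and return the k-th smallest (1-indexed)."""
    dists = sorted(math.dist(p, q) for p, q in combinations(points, 2))
    return dists[k - 1]

# Example: among the three pairwise distances of a unit right triangle
# (1, 1 and sqrt(2)), the 2nd smallest is 1.0.
print(kth_smallest_distance([(0, 0), (1, 0), (0, 1)], 2))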

Cite as

Haitao Wang and Yiming Zhao. Improved Algorithms for Distance Selection and Related Problems. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 101:1-101:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{wang_et_al:LIPIcs.ESA.2023.101,
  author =	{Wang, Haitao and Zhao, Yiming},
  title =	{{Improved Algorithms for Distance Selection and Related Problems}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{101:1--101:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.101},
  URN =		{urn:nbn:de:0030-drops-187544},
  doi =		{10.4230/LIPIcs.ESA.2023.101},
  annote =	{Keywords: Geometric optimization, distance selection, Fr\'{e}chet distance, range searching}
}
Document
Maximum Coverage in Random-Arrival Streams

Authors: Rowan Warneke, Farhana Choudhury, and Anthony Wirth


Abstract
Given a collection of m sets, each a subset of a universe {1, …, n}, maximum coverage is the problem of choosing k sets whose union has the largest cardinality. A simple greedy algorithm achieves an approximation factor of 1 - 1/e ≈ 0.632, which is the best possible polynomial-time approximation unless P = NP. In the streaming setting, information about the input is revealed gradually, in an online fashion. In the set-streaming model, each set is listed contiguously in the stream. In the more general edge-streaming model, the stream is composed of set-element pairs, denoting membership. The overall goal in the streaming setting is to design algorithms that use sublinear space in the size of the input. An interesting line of research is to design algorithms with space complexity polylogarithmic in the size of the input (i.e., polylogarithmic in both n and m); we call such algorithms low-space. In the set-streaming model, it is known that 1/2 is the best possible low-space approximation. In the edge-streaming model, no low-space algorithm can achieve a nontrivial approximation factor. We study the problem under the assumption that the order in which the stream arrives is chosen uniformly at random. Our main results are as follows.
- In the random-arrival set-streaming model, we give two new algorithms to show that low space is sufficient to break the 1/2 barrier. The first achieves an approximation factor of 1/2 + c₁ using Õ(k²) space, where c₁ > 0 is a small constant and the Õ(⋅) notation suppresses polylogarithmic factors; the second achieves a factor of 1 - 1/e - ε - o(1) using Õ(k² ε^{-3}) space, where the o(1) term is a function of k. This is essentially the optimal bound, as breaking the 1 - 1/e barrier is known to require high space.
- In the random-arrival edge-streaming model, we show that for every fixed α > 0 and δ > 0, any algorithm that α-approximates maximum coverage with probability at least 0.9 requires Ω(m^{1-δ}) space (i.e., high space), even for the special case of k = 1.
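The (1 - 1/e) greedy algorithm mentioned at the start of the abstract is the natural point of comparison for these streaming results. The following is a sketch of that classic offline greedy, not one of the paper's streaming algorithms; names are ours.

def greedy_max_coverage(sets, k):
    """Classic offline greedy: repeatedly pick the set covering the most
    not-yet-covered elements.  Gives a (1 - 1/e)-approximation to maximum
    coverage, but needs random access to all m sets, unlike a streaming
    algorithm."""
    covered, chosen = set(), []
    for _ in range(k):
        gains = [len(s - covered) for s in sets]
        best = max(range(len(sets)), key=gains.__getitem__)
        if gains[best] == 0:
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Example: with k = 2 the greedy picks {1,2,3} first, then {4,5}.
print(greedy_max_coverage([{1, 2, 3}, {3, 4}, {4, 5}], 2))  # ([0, 2], {1, 2, 3, 4, 5})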

Cite as

Rowan Warneke, Farhana Choudhury, and Anthony Wirth. Maximum Coverage in Random-Arrival Streams. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 102:1-102:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{warneke_et_al:LIPIcs.ESA.2023.102,
  author =	{Warneke, Rowan and Choudhury, Farhana and Wirth, Anthony},
  title =	{{Maximum Coverage in Random-Arrival Streams}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{102:1--102:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.102},
  URN =		{urn:nbn:de:0030-drops-187559},
  doi =		{10.4230/LIPIcs.ESA.2023.102},
  annote =	{Keywords: Maximum Coverage, Streaming Algorithm, Random Arrival, Greedy Algorithm, Communication Complexity}
}
Document
Efficient Block Approximate Matrix Multiplication

Authors: Chuhan Yang and Christopher Musco


Abstract
Randomized matrix algorithms have had significant recent impact on numerical linear algebra. One especially powerful class of methods is algorithms for approximate matrix multiplication based on sampling. Such methods typically sample individual matrix rows and columns using carefully chosen importance sampling probabilities. However, due to practical considerations like memory locality and the preservation of matrix structure, it is often preferable to sample contiguous blocks of rows and columns all together. Recently, Wu (2018) addressed this setting by developing an approximate matrix multiplication method based on block sampling. However, the method is inefficient, as it requires knowledge of optimal importance sampling probabilities that are expensive to compute. We address this issue by showing that the method of Wu can be accelerated through the use of a randomized implicit trace estimation method. Doing so allows us to provably reduce the cost of sampling to near-linear in the size of the matrices being multiplied, without impacting the accuracy of the final approximate matrix multiplication. Overall, this yields a fast practical algorithm, which we test on a number of synthetic and real-world data sets. We complement our algorithmic contribution with the first extensive empirical comparison of block algorithms for randomized matrix multiplication. Our method offers a significant runtime advantage over the method of Wu (2018) and also outperforms basic uniform sampling of blocks. However, we find that another recent method (Charalambides, 2021), which uses sub-optimal but efficiently computable sampling probabilities, often (but not always) offers the best trade-off between speed and accuracy.
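To make the sampling setup concrete, here is a sketch of block-sampled approximate matrix multiplication using the standard norm-based surrogate probabilities (proportional to the product of block Frobenius norms). It illustrates the general technique only; it is not the estimator of Wu (2018) or Charalambides (2021), and all names are ours.

import numpy as np

def block_sampled_matmul(A, B, block_size, num_samples, rng=None):
    """Approximate A @ B by sampling contiguous blocks of the inner dimension.
    Blocks are drawn with probability proportional to the product of their
    Frobenius norms in A and B, and each sampled outer product is rescaled by
    1 / (num_samples * p_i), which makes the estimator unbiased."""
    rng = np.random.default_rng() if rng is None else rng
    d = A.shape[1]
    starts = np.arange(0, d, block_size)
    weights = np.array([
        np.linalg.norm(A[:, s:s + block_size]) * np.linalg.norm(B[s:s + block_size, :])
        for s in starts
    ])
    probs = weights / weights.sum()
    approx = np.zeros((A.shape[0], B.shape[1]))
    for _ in range(num_samples):
        i = rng.choice(len(starts), p=probs)
        s = starts[i]
        approx += (A[:, s:s + block_size] @ B[s:s + block_size, :]) / (num_samples * probs[i])
    return approx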

Cite as

Chuhan Yang and Christopher Musco. Efficient Block Approximate Matrix Multiplication. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 103:1-103:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{yang_et_al:LIPIcs.ESA.2023.103,
  author =	{Yang, Chuhan and Musco, Christopher},
  title =	{{Efficient Block Approximate Matrix Multiplication}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{103:1--103:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.103},
  URN =		{urn:nbn:de:0030-drops-187562},
  doi =		{10.4230/LIPIcs.ESA.2023.103},
  annote =	{Keywords: Approximate matrix multiplication, randomized numerical linear algebra, trace estimation}
}
Document
A Simple Boosting Framework for Transshipment

Authors: Goran Zuzic


Abstract
Transshipment is an important generalization of both the shortest path problem and the optimal transport problem. The task asks to route a demand using a flow of minimum cost over (uncapacitated) edges. Transshipment has recently received extensive attention in theoretical computer science as it is the centerpiece of all modern theoretical breakthroughs in parallel and distributed (approximate) shortest-path computation, a classic and well-studied problem. The key advantage of transshipment over shortest paths is the so-called boosting property: one can often boost a crude approximate solution to a (near-optimal) (1+ε)-approximate solution. However, our understanding of this phenomenon is limited: it is not clear which approximators can be boosted. Moreover, all current boosting frameworks are built with a specific type of approximator in mind and are relatively complicated. The main takeaway of our paper is conceptual: any black-box oracle that computes an approximate dual solution can be boosted to a (1+ε)-approximator. This decouples and simplifies all known near-optimal (1+ε)-approximate transshipment and shortest-path results: they all (implicitly) construct approximate dual solutions and boost them. We provide a very simple analysis based on the multiplicative weights framework. Furthermore, to keep the paper completely self-contained, we provide a new (and arguably much simpler) analysis of multiplicative weights that leverages well-known optimization tools to bypass the ad-hoc calculations used in the standard analyses.
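The multiplicative weights framework referenced in the abstract is standard. The sketch below shows the textbook Hedge-style update only; it is not the paper's transshipment-specific boosting construction, and the loss oracle and parameter names are assumptions of ours.

import numpy as np

def multiplicative_weights(loss_oracle, num_experts, eta, num_rounds):
    """Textbook multiplicative-weights (Hedge) update.  Each round we play the
    normalized weight vector, observe per-expert losses in [-1, 1] from
    loss_oracle(round, distribution), and exponentially downweight lossy
    experts."""
    w = np.ones(num_experts)
    for t in range(num_rounds):
        p = w / w.sum()                   # distribution played this round
        losses = np.asarray(loss_oracle(t, p))
        w = w * np.exp(-eta * losses)     # exponential reweighting
    return w / w.sum()

# With eta ≈ sqrt(ln(num_experts) / num_rounds), the played distributions have
# regret O(sqrt(num_rounds * ln(num_experts))) against the best fixed expert.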

Cite as

Goran Zuzic. A Simple Boosting Framework for Transshipment. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 104:1-104:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{zuzic:LIPIcs.ESA.2023.104,
  author =	{Zuzic, Goran},
  title =	{{A Simple Boosting Framework for Transshipment}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{104:1--104:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.104},
  URN =		{urn:nbn:de:0030-drops-187570},
  doi =		{10.4230/LIPIcs.ESA.2023.104},
  annote =	{Keywords: mixed continuous-discrete optimization, boosting, multiplicative weights, theoretical computer science, shortest path, parallel algorithms}
}
