Volume

LIPIcs, Volume 112

26th Annual European Symposium on Algorithms (ESA 2018)



Event

ESA 2018, August 20-22, 2018, Helsinki, Finland

Editors

Yossi Azar
Hannah Bast
Grzegorz Herman

Publication Details

  • Published: 2018-08-14
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-081-1
  • DBLP: db/conf/esa/esa2018

Documents

Document
Complete Volume
LIPIcs, Volume 112, ESA'18, Complete Volume

Authors: Yossi Azar, Hannah Bast, and Grzegorz Herman


Abstract
LIPIcs, Volume 112, ESA'18, Complete Volume

Cite as

26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@Proceedings{azar_et_al:LIPIcs.ESA.2018,
  title =	{{LIPIcs, Volume 112, ESA'18, Complete Volume}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018},
  URN =		{urn:nbn:de:0030-drops-97239},
  doi =		{10.4230/LIPIcs.ESA.2018},
  annote =	{Keywords: Computer systems organization, Single instruction, multiple data, Computing methodologies, Graphics processors, Robotic planning, Hardware}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Yossi Azar, Hannah Bast, and Grzegorz Herman


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 0:i-0:xx, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{azar_et_al:LIPIcs.ESA.2018.0,
  author =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{0:i--0:xx},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.0},
  URN =		{urn:nbn:de:0030-drops-94631},
  doi =		{10.4230/LIPIcs.ESA.2018.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Algorithms for Inverse Optimization Problems

Authors: Sara Ahmadian, Umang Bhaskar, Laura Sanità, and Chaitanya Swamy


Abstract
We study inverse optimization problems, wherein the goal is to map given solutions to an underlying optimization problem to a cost vector for which the given solutions are the (unique) optimal solutions. Inverse optimization problems find diverse applications and have been widely studied. A prominent problem in this field is the inverse shortest path (ISP) problem [D. Burton and Ph.L. Toint, 1992; W. Ben-Ameur and E. Gourdin, 2004; A. Bley, 2007], which finds applications in shortest-path routing protocols used in telecommunications. Here we seek a cost vector that is positive, integral, induces a set of given paths as the unique shortest paths, and has minimum l_infty norm. Despite this extensive study, very few algorithmic results are known for inverse optimization problems involving integrality constraints on the desired cost vector whose norm has to be minimized. Motivated by ISP, we initiate a systematic study of such integral inverse optimization problems from the perspective of designing polynomial time approximation algorithms. For ISP, our main result is an additive 1-approximation algorithm for multicommodity ISP with node-disjoint commodities, which we show is tight assuming P != NP. We then consider the integral-cost inverse versions of various other fundamental combinatorial optimization problems, including min-cost flow, max/min-cost bipartite matching, and max/min-cost basis in a matroid, and obtain tight or nearly-tight approximation guarantees for these. Our guarantees for the first two problems are based on results for a broad generalization, namely integral inverse polyhedral optimization, for which we also give approximation guarantees. Our techniques also give similar results for variants, including l_p-norm minimization of the integral cost vector, and distance-minimization from an initial cost vector.
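
To make the ISP objective concrete, here is a minimal brute-force sketch (our illustration, not the paper's algorithm) that searches a hypothetical four-edge graph for a positive integral cost vector of minimum l_infty norm under which a chosen path becomes the unique shortest s-t path:

from itertools import product

# Hypothetical toy instance: two parallel s-t paths sharing no edges.
edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
paths = [[("s", "a"), ("a", "t")], [("s", "b"), ("b", "t")]]  # all s-t paths
target = paths[0]              # the path that must become the unique shortest

best = None
for costs in product(range(1, 4), repeat=len(edges)):  # positive and integral
    c = dict(zip(edges, costs))
    tlen = sum(c[e] for e in target)
    others = [sum(c[e] for e in p) for p in paths if p is not target]
    if all(l > tlen for l in others):                  # unique shortest path
        if best is None or max(costs) < max(best):     # minimise l_infty norm
            best = costs
print(dict(zip(edges, best)))  # cost 1 on the target path, 1 and 2 elsewhere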

Cite as

Sara Ahmadian, Umang Bhaskar, Laura Sanità, and Chaitanya Swamy. Algorithms for Inverse Optimization Problems. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 1:1-1:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{ahmadian_et_al:LIPIcs.ESA.2018.1,
  author =	{Ahmadian, Sara and Bhaskar, Umang and Sanit\`{a}, Laura and Swamy, Chaitanya},
  title =	{{Algorithms for Inverse Optimization Problems}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{1:1--1:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.1},
  URN =		{urn:nbn:de:0030-drops-94646},
  doi =		{10.4230/LIPIcs.ESA.2018.1},
  annote =	{Keywords: Inverse optimization, Shortest paths, Approximation algorithms, Linear programming, Polyhedral theory, Combinatorial optimization}
}
Document
Two-Dimensional Maximal Repetitions

Authors: Amihood Amir, Gad M. Landau, Shoshana Marcus, and Dina Sokol


Abstract
Maximal repetitions or runs in strings have a wide array of applications and thus have been extensively studied. In this paper, we extend this notion to two dimensions, precisely defining a maximal 2D repetition. We provide initial bounds on the number of maximal 2D repetitions that can occur in a matrix. The main contribution of this paper is the presentation of the first algorithm for locating all maximal 2D repetitions in a matrix. The algorithm is efficient and straightforward, with runtime O(n^2 log n log log n + rho log n), where n^2 is the size of the input, and rho is the number of 2D repetitions in the output.
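
For intuition, the following naive sketch enumerates the one-dimensional maximal repetitions (runs) that the paper generalizes to matrices; the example string and the cubic-time checks are purely illustrative:

def smallest_period(s):
    """Smallest p such that s[k] == s[k-p] for all k >= p."""
    return next(p for p in range(1, len(s) + 1)
                if all(s[k] == s[k - p] for k in range(p, len(s))))

s = "aabaab"
n = len(s)
runs = []
for i in range(n):
    for j in range(i + 2, n + 1):
        p = smallest_period(s[i:j])
        if 2 * p <= j - i:                # periodic: exponent at least 2
            # maximal: the period p breaks when extending left or right
            left_maximal = i == 0 or s[i - 1] != s[i - 1 + p]
            right_maximal = j == n or s[j] != s[j - p]
            if left_maximal and right_maximal:
                runs.append((i, j, p))
print(runs)  # [(0, 2, 1), (0, 6, 3), (3, 5, 1)]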

Cite as

Amihood Amir, Gad M. Landau, Shoshana Marcus, and Dina Sokol. Two-Dimensional Maximal Repetitions. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 2:1-2:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{amir_et_al:LIPIcs.ESA.2018.2,
  author =	{Amir, Amihood and Landau, Gad M. and Marcus, Shoshana and Sokol, Dina},
  title =	{{Two-Dimensional Maximal Repetitions}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{2:1--2:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.2},
  URN =		{urn:nbn:de:0030-drops-94652},
  doi =		{10.4230/LIPIcs.ESA.2018.2},
  annote =	{Keywords: pattern matching algorithms, repetitions, periodicity, two-dimensional}
}
Document
Approximate Convex Intersection Detection with Applications to Width and Minkowski Sums

Authors: Sunil Arya, Guilherme D. da Fonseca, and David M. Mount


Abstract
Approximation problems involving a single convex body in R^d have received a great deal of attention in the computational geometry community. In contrast, works involving multiple convex bodies are generally limited to dimensions d <= 3 and/or do not consider approximation. In this paper, we consider approximations to two natural problems involving multiple convex bodies: detecting whether two polytopes intersect and computing their Minkowski sum. Given an approximation parameter epsilon > 0, we show how to independently preprocess two polytopes A, B subset R^d into data structures of size O(1/epsilon^{(d-1)/2}) such that we can answer in polylogarithmic time whether A and B intersect approximately. More generally, we can answer this for the images of A and B under affine transformations. Next, we show how to epsilon-approximate the Minkowski sum of two given polytopes defined as the intersection of n halfspaces in O(n log(1/epsilon) + 1/epsilon^{(d-1)/2 + alpha}) time, for any constant alpha > 0. Finally, we present a surprising impact of these results on a well-studied problem that considers a single convex body. We show how to epsilon-approximate the width of a set of n points in O(n log(1/epsilon) + 1/epsilon^{(d-1)/2 + alpha}) time, for any constant alpha > 0, a major improvement over the previous bound of roughly O(n + 1/epsilon^{d-1}) time.
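
For context, the exact object being approximated is easy to compute naively: the Minkowski sum of two convex polygons is the convex hull of all pairwise vertex sums. A sketch of this exact (slow) planar baseline, with an illustrative triangle and square:

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in ccw order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

A = [(0, 0), (2, 0), (0, 2)]                 # a triangle
B = [(0, 0), (1, 0), (1, 1), (0, 1)]         # a unit square
print(convex_hull([(ax + bx, ay + by) for ax, ay in A for bx, by in B]))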

Cite as

Sunil Arya, Guilherme D. da Fonseca, and David M. Mount. Approximate Convex Intersection Detection with Applications to Width and Minkowski Sums. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{arya_et_al:LIPIcs.ESA.2018.3,
  author =	{Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.},
  title =	{{Approximate Convex Intersection Detection with Applications to Width and Minkowski Sums}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.3},
  URN =		{urn:nbn:de:0030-drops-94664},
  doi =		{10.4230/LIPIcs.ESA.2018.3},
  annote =	{Keywords: Minkowski sum, convex intersection, width, approximation}
}
Document
On the Worst-Case Complexity of TimSort

Authors: Nicolas Auger, Vincent Jugé, Cyril Nicaud, and Carine Pivoteau


Abstract
TimSort is an intriguing sorting algorithm designed in 2002 for Python, whose worst-case complexity was announced, but not proved until our recent preprint. In fact, there are two slightly different versions of TimSort that are currently implemented in Python and in Java respectively. We propose a pedagogical and insightful proof that the Python version runs in O(n log n). The approach we use in the analysis also applies to the Java version, although not without very involved technical details. As a byproduct of our study, we uncover a bug in the Java implementation that can cause the sorting method to fail during execution. We also give a proof that Python's TimSort running time is in O(n + n log rho), where rho is the number of runs (i.e., maximal monotonic sequences), which is quite a natural parameter here and part of the explanation for the good behavior of TimSort on partially sorted inputs.
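
For illustration, a small sketch (ours, not the authors') that counts rho, the number of maximal monotonic runs driving the O(n + n log rho) bound:

def count_runs(a):
    """rho: number of maximal monotonic segments of the input."""
    n = len(a)
    if n <= 1:
        return n
    runs, i, asc = 1, 1, a[1] >= a[0]
    while i < n:
        if (a[i] >= a[i - 1]) != asc:      # a[i] starts a new run
            runs += 1
            if i + 1 < n:
                asc = a[i + 1] >= a[i]     # direction of the new run
            i += 1                         # that pair defined the direction
        i += 1
    return runs

print(count_runs([5, 4, 3, 1, 2, 4, 6, 3]))  # 3 runs: [5,4,3,1], [2,4,6], [3]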

Cite as

Nicolas Auger, Vincent Jugé, Cyril Nicaud, and Carine Pivoteau. On the Worst-Case Complexity of TimSort. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 4:1-4:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{auger_et_al:LIPIcs.ESA.2018.4,
  author =	{Auger, Nicolas and Jug\'{e}, Vincent and Nicaud, Cyril and Pivoteau, Carine},
  title =	{{On the Worst-Case Complexity of TimSort}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{4:1--4:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.4},
  URN =		{urn:nbn:de:0030-drops-94678},
  doi =		{10.4230/LIPIcs.ESA.2018.4},
  annote =	{Keywords: Sorting algorithms, Merge sorting algorithms, TimSort, Analysis of algorithms}
}
Document
A New and Improved Algorithm for Online Bin Packing

Authors: János Balogh, József Békési, György Dósa, Leah Epstein, and Asaf Levin


Abstract
We revisit the classic online bin packing problem, studied for more than half a century. In this problem, items of positive sizes no larger than 1 are presented one by one to be packed into subsets called bins of total sizes no larger than 1, such that every item is assigned to a bin before the next item is presented. We use online partitioning of items into classes based on sizes, as in previous work, but we also apply a new method where items of one class can be packed into more than two types of bins, where a bin type is defined according to the number of such items grouped together. Additionally, we allow the smallest class of items to be packed in multiple kinds of bins, and not only into their own bins. We combine this with the approach of packing sufficiently big items according to their exact sizes. Finally, we simplify the analysis of such algorithms, allowing the analysis to be based on the most standard weight functions. This simplified analysis allows us to study the algorithm which we defined based on all these ideas. This leads us to the design and analysis of the first algorithm with asymptotic competitive ratio strictly below 1.58; specifically, we break this barrier by providing an algorithm AH (Advanced Harmonic) whose asymptotic competitive ratio does not exceed 1.57829. Our main contribution is the introduction of a simple weight-function-based analysis of the state-of-the-art online algorithms for the classic online bin packing problem. The previously used analytic tool, called a weight system, was too complicated for the community in this area to adapt to other problems and to the additional algorithmic tools needed to improve on the current best algorithms. We show that weight-system-based analysis is not needed for the analysis of the current algorithms for the classic online bin packing problem. The importance of a simple analysis is demonstrated by analyzing several new features together with all existing techniques, and by proving a better competitive ratio than the previously best one.
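
To illustrate the classification idea, though not the AH algorithm itself, here is a sketch of a textbook Harmonic-style online packing in which class k collects items of size in (1/(k+1), 1/k] and packs k of them per bin, while the smallest class is packed with Next Fit:

def harmonic_pack(items, K=6):
    """Textbook Harmonic_K: class k in {1..K-1} holds items of size in
    (1/(k+1), 1/k] and packs k of them per bin; the smallest class K
    (sizes <= 1/K) is packed with Next Fit. Returns the number of bins."""
    slots = {}        # class -> free slots in that class's currently open bin
    space = 0.0       # free space in the open Next Fit bin
    bins = 0
    for x in items:   # items arrive online, one by one
        k = min(int(1 // x), K)
        if k == K:                      # small items: Next Fit
            if x > space:
                bins += 1
                space = 1.0
            space -= x
        else:                           # class k: k items share a bin
            if slots.get(k, 0) == 0:
                bins += 1
                slots[k] = k
            slots[k] -= 1
    return bins

print(harmonic_pack([0.6, 0.4, 0.6, 0.3, 0.3, 0.15, 0.7]))  # 6 bins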

Cite as

János Balogh, József Békési, György Dósa, Leah Epstein, and Asaf Levin. A New and Improved Algorithm for Online Bin Packing. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{balogh_et_al:LIPIcs.ESA.2018.5,
  author =	{Balogh, J\'{a}nos and B\'{e}k\'{e}si, J\'{o}zsef and D\'{o}sa, Gy\"{o}rgy and Epstein, Leah and Levin, Asaf},
  title =	{{A New and Improved Algorithm for Online Bin Packing}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.5},
  URN =		{urn:nbn:de:0030-drops-94686},
  doi =		{10.4230/LIPIcs.ESA.2018.5},
  annote =	{Keywords: Bin packing, online algorithms, competitive analysis}
}
Document
Practical Access to Dynamic Programming on Tree Decompositions

Authors: Max Bannach and Sebastian Berndt


Abstract
Parameterized complexity theory has led to a wide range of algorithmic breakthroughs within the last few decades, but the practicability of these methods for real-world problems is still not well understood. We investigate the practicability of one of the fundamental approaches of this field: dynamic programming on tree decompositions. Indisputably, this is a key technique in parameterized algorithms and modern algorithm design. Despite the enormous impact of this approach in theory, it still has very little influence on practical implementations. The reasons for this phenomenon are manifold. One of them is the simple fact that such an implementation requires a long chain of non-trivial tasks (such as computing the decomposition, preparing it, ...). We provide an easy way to implement such dynamic programs that only requires the definition of the update rules. With this interface, dynamic programs for various problems, such as 3-coloring, can be implemented easily in about 100 lines of structured Java code. The theoretical foundation of the success of dynamic programming on tree decompositions is well understood due to Courcelle's celebrated theorem, which states that every MSO-definable problem can be efficiently solved if a tree decomposition of small width is given. We seek to provide practical access to this theorem as well, by presenting a lightweight model-checker for a small fragment of MSO. This fragment is powerful enough to describe many natural problems, and our model-checker turns out to be very competitive against similar state-of-the-art tools.
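
The paper's interface is Java; as a language-neutral illustration, the following sketch (with a hand-built, hypothetical tree decomposition of a 4-cycle) runs the generic dynamic program for 3-colorability, where the update rule keeps, for each bag, the proper colorings that extend to the bag's subtree:

from itertools import product

adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
bags = {0: {"a", "b", "c"}, 1: {"a", "c", "d"}}  # width-2 tree decomposition
children = {0: [1], 1: []}                       # decomposition tree, root 0

def colorings(bag):
    """All proper 3-colorings of the subgraph induced by one bag."""
    vs = sorted(bag)
    for cols in product(range(3), repeat=len(vs)):
        phi = dict(zip(vs, cols))
        if all(phi[u] != phi[v] for u in vs for v in adj[u] if v in phi):
            yield frozenset(phi.items())

def table(t):
    """Update rule: keep bag colorings that extend to the entire subtree of t,
    i.e. those agreeing with some surviving coloring of every child bag."""
    kids = [(bags[t] & bags[c], table(c)) for c in children[t]]
    return [phi for phi in colorings(bags[t])
            if all(any(all(p in psi for p in phi if p[0] in shared)
                       for psi in tab)
                   for shared, tab in kids)]

print("3-colorable:", bool(table(0)))  # True; a 4-cycle is even 2-colorable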

Cite as

Max Bannach and Sebastian Berndt. Practical Access to Dynamic Programming on Tree Decompositions. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bannach_et_al:LIPIcs.ESA.2018.6,
  author =	{Bannach, Max and Berndt, Sebastian},
  title =	{{Practical Access to Dynamic Programming on Tree Decompositions}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{6:1--6:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.6},
  URN =		{urn:nbn:de:0030-drops-94692},
  doi =		{10.4230/LIPIcs.ESA.2018.6},
  annote =	{Keywords: fixed-parameter tractability, treewidth, model-checking}
}
Document
Average Whenever You Meet: Opportunistic Protocols for Community Detection

Authors: Luca Becchetti, Andrea Clementi, Pasin Manurangsi, Emanuele Natale, Francesco Pasquale, Prasad Raghavendra, and Luca Trevisan


Abstract
Consider the following asynchronous, opportunistic communication model over a graph G: in each round, one edge is activated uniformly and independently at random and (only) its two endpoints can exchange messages and perform local computations. Under this model, we study the following random process: The first time a vertex is an endpoint of an active edge, it chooses a random number, say +/- 1 with probability 1/2; then, in each round, the two endpoints of the currently active edge update their values to their average. We provide a rigorous analysis of the above process showing that, if G exhibits a two-community structure (for example, two expanders connected by a sparse cut), the values held by the nodes will collectively reflect the underlying community structure over a suitable phase of the above process. Our analysis requires new concentration bounds on the product of certain random matrices that are technically challenging and possibly of independent interest. We then exploit our analysis to design the first opportunistic protocols that approximately recover community structure using only logarithmic (or polylogarithmic, depending on the sparsity of the cut) work per node.
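
A minimal simulation of the described process, on a hypothetical instance made of two cliques joined by a few cut edges, where nodes are finally labelled by comparing their value to the global mean; all parameters are illustrative:

import random

def simulate(n=100, rounds=20000, cut=5, seed=1):
    rng = random.Random(seed)
    half = n // 2
    A, B = range(half), range(half, n)
    edges = [(u, v) for side in (A, B) for u in side for v in side if u < v]
    edges += [(rng.choice(A), rng.choice(B)) for _ in range(cut)]  # sparse cut
    val = {}
    for _ in range(rounds):
        u, v = rng.choice(edges)                 # one random edge activates
        for w in (u, v):
            if w not in val:                     # first activation: pick +/- 1
                val[w] = rng.choice((-1.0, 1.0))
        val[u] = val[v] = (val[u] + val[v]) / 2  # endpoints average
    mean = sum(val.values()) / len(val)
    label = {w: x >= mean for w, x in val.items()}
    hits = max(sum(label[w] == s for w in A if w in label) +
               sum(label[w] != s for w in B if w in label)
               for s in (True, False))
    return hits / len(val)  # fraction of nodes labelled with their community

print(simulate())  # typically close to 1.0 during this phase of the process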

Cite as

Luca Becchetti, Andrea Clementi, Pasin Manurangsi, Emanuele Natale, Francesco Pasquale, Prasad Raghavendra, and Luca Trevisan. Average Whenever You Meet: Opportunistic Protocols for Community Detection. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 7:1-7:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{becchetti_et_al:LIPIcs.ESA.2018.7,
  author =	{Becchetti, Luca and Clementi, Andrea and Manurangsi, Pasin and Natale, Emanuele and Pasquale, Francesco and Raghavendra, Prasad and Trevisan, Luca},
  title =	{{Average Whenever You Meet: Opportunistic Protocols for Community Detection}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{7:1--7:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.7},
  URN =		{urn:nbn:de:0030-drops-94705},
  doi =		{10.4230/LIPIcs.ESA.2018.7},
  annote =	{Keywords: Community Detection, Random Processes, Spectral Analysis}
}
Document
Polynomial-Time Approximation Schemes for k-center, k-median, and Capacitated Vehicle Routing in Bounded Highway Dimension

Authors: Amariah Becker, Philip N. Klein, and David Saulpic


Abstract
The concept of bounded highway dimension was developed to capture observed properties of road networks. We show that a graph of bounded highway dimension with a distinguished root vertex can be embedded into a graph of bounded treewidth in such a way that u-to-v distance is preserved up to an additive error of epsilon times the u-to-root plus v-to-root distances. We show that this embedding yields a PTAS for Bounded-Capacity Vehicle Routing in graphs of bounded highway dimension. In this problem, the input specifies a depot and a set of clients, each with a location and demand; the output is a set of depot-to-depot tours, where each client is visited by some tour and each tour covers at most Q units of client demand. Our PTAS can be extended to handle penalties for unvisited clients. We extend this embedding result to handle a set S of root vertices. This result implies a PTAS for Multiple Depot Bounded-Capacity Vehicle Routing: the tours can go from one depot to another. The embedding result also implies that, for fixed k, there is a PTAS for k-Center in graphs of bounded highway dimension. In this problem, the goal is to minimize d so that there exist k vertices (the centers) such that every vertex is within distance d of some center. Similarly, for fixed k, there is a PTAS for k-Median in graphs of bounded highway dimension. In this problem, the goal is to minimize the sum of distances to the k centers.

Cite as

Amariah Becker, Philip N. Klein, and David Saulpic. Polynomial-Time Approximation Schemes for k-center, k-median, and Capacitated Vehicle Routing in Bounded Highway Dimension. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 8:1-8:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{becker_et_al:LIPIcs.ESA.2018.8,
  author =	{Becker, Amariah and Klein, Philip N. and Saulpic, David},
  title =	{{Polynomial-Time Approximation Schemes for k-center, k-median, and Capacitated Vehicle Routing in Bounded Highway Dimension}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{8:1--8:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.8},
  URN =		{urn:nbn:de:0030-drops-94710},
  doi =		{10.4230/LIPIcs.ESA.2018.8},
  annote =	{Keywords: Highway Dimension, Capacitated Vehicle Routing, Graph Embeddings}
}
Document
Fine-grained Lower Bounds on Cops and Robbers

Authors: Sebastian Brandt, Seth Pettie, and Jara Uitto


Abstract
Cops and Robbers is a classic pursuit-evasion game played between a group of g cops and one robber on an undirected N-vertex graph G. We prove that deciding the winner of the game under optimal play requires Omega(N^{g-o(1)}) time on instances with O(N log^2 N) edges, conditioned on the Strong Exponential Time Hypothesis. Moreover, calculating the minimum number of cops needed to win the game requires 2^{Omega(sqrt{N})} time, conditioned on the weaker Exponential Time Hypothesis. Our conditional lower bound comes very close to a conditional upper bound: if Meyniel's conjecture holds then the cop number can be decided in 2^{O(sqrt{N} log N)} time. In recent years, the Strong Exponential Time Hypothesis has been used to obtain many lower bounds on classic combinatorial problems, such as graph diameter, LCS, EDIT-DISTANCE, and REGEXP matching. To our knowledge, these are the first conditional (S)ETH-hard lower bounds on a strategic game.

Cite as

Sebastian Brandt, Seth Pettie, and Jara Uitto. Fine-grained Lower Bounds on Cops and Robbers. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 9:1-9:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{brandt_et_al:LIPIcs.ESA.2018.9,
  author =	{Brandt, Sebastian and Pettie, Seth and Uitto, Jara},
  title =	{{Fine-grained Lower Bounds on Cops and Robbers}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{9:1--9:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.9},
  URN =		{urn:nbn:de:0030-drops-94725},
  doi =		{10.4230/LIPIcs.ESA.2018.9},
  annote =	{Keywords: Cops and Robbers}
}
Document
A Polynomial Kernel for Diamond-Free Editing

Authors: Yixin Cao, Ashutosh Rai, R. B. Sandeep, and Junjie Ye


Abstract
Given a fixed graph H, the H-free editing problem asks whether we can edit at most k edges to make a graph contain no induced copy of H. We obtain a polynomial kernel for this problem when H is a diamond. The incompressibility dichotomy for H being a 3-connected graph and the classical complexity dichotomy suggest that except for H being a complete/empty graph, H-free editing problems admit polynomial kernels only for a few small graphs H. Therefore, we believe that our result is an essential step toward a complete dichotomy on the compressibility of H-free editing. Additionally, we give a cubic-vertex kernel for the diamond-free edge deletion problem, which is far simpler than the previous kernel of the same size for the problem.
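
For reference, a diamond is K_4 minus an edge, the only 4-vertex graph with exactly five edges, so a naive induced-diamond test (not the paper's kernelization) only has to count edges inside 4-sets:

from itertools import combinations

def has_induced_diamond(adj):
    """True iff some 4 vertices induce a diamond, i.e. span exactly 5 edges.
    Naive O(n^4) enumeration; adj maps each vertex to its neighbour set."""
    for quad in combinations(adj, 4):
        if sum(v in adj[u] for u, v in combinations(quad, 2)) == 5:
            return True
    return False

adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2}, 4: {1, 2}}
print(has_induced_diamond(adj))  # True; the missing edge is 3-4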

Cite as

Yixin Cao, Ashutosh Rai, R. B. Sandeep, and Junjie Ye. A Polynomial Kernel for Diamond-Free Editing. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 10:1-10:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{cao_et_al:LIPIcs.ESA.2018.10,
  author =	{Cao, Yixin and Rai, Ashutosh and Sandeep, R. B. and Ye, Junjie},
  title =	{{A Polynomial Kernel for Diamond-Free Editing}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{10:1--10:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.10},
  URN =		{urn:nbn:de:0030-drops-94732},
  doi =		{10.4230/LIPIcs.ESA.2018.10},
  annote =	{Keywords: Kernelization, Diamond-free, H-free editing, Graph modification problem}
}
Document
Parallel and I/O-efficient Randomisation of Massive Networks using Global Curveball Trades

Authors: Corrie Jacobien Carstens, Michael Hamann, Ulrich Meyer, Manuel Penschuck, Hung Tran, and Dorothea Wagner


Abstract
Graph randomisation is a crucial task in the analysis and synthesis of networks. It is typically implemented as an edge switching Markov chain (ESMC) repeatedly swapping the nodes of random edge pairs while maintaining the degrees involved [Mihail and Zegura, 2003]. Curveball is a novel approach that instead considers the whole neighbourhoods of randomly drawn node pairs. Its Markov chain converges to a uniform distribution, and experiments suggest that it requires fewer steps than the established ESMC [Carstens et al., 2016]. Since trades, however, are more expensive, we study Curveball's practical runtime by introducing the first efficient Curveball algorithms: the I/O-efficient EM-CB for simple undirected graphs and its internal memory pendant IM-CB. Further, we investigate global trades [Carstens et al., 2016] processing every node in a single super step, and show that undirected global trades converge to a uniform distribution and perform better in practice. We then discuss EM-GCB and EM-PGCB for global trades and give experimental evidence that EM-PGCB achieves the quality of the state-of-the-art ESMC algorithm EM-ES [M. Hamann et al., 2017] nearly one order of magnitude faster.
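
A sketch of a single Curveball trade on a simple undirected graph stored as symmetric adjacency sets (illustrative code, not EM-CB or IM-CB): the traded pair keeps its common neighbours and randomly repartitions the rest, preserving all degrees:

import random

def curveball_trade(adj, u, v, rng=random):
    """One trade: u and v keep their common neighbours and randomly
    repartition the remaining ones; the degree sequence is preserved."""
    common = adj[u] & adj[v]
    pool = list((adj[u] | adj[v]) - common - {u, v})   # tradeable neighbours
    rng.shuffle(pool)
    ku = len(adj[u] - common - {v})                    # u keeps this many
    new_u, new_v = set(pool[:ku]), set(pool[ku:])
    for w in pool:                                     # keep adjacency symmetric
        adj[w] -= {u, v}
        adj[w].add(u if w in new_u else v)
    uv = v in adj[u]                                   # a u-v edge never trades
    adj[u] = common | new_u | ({v} if uv else set())
    adj[v] = common | new_v | ({u} if uv else set())

adj = {0: {2}, 1: {3}, 2: {0, 3}, 3: {1, 2}}           # path 0-2-3-1
curveball_trade(adj, 0, 1)
print(adj)  # 0 and 1 may have exchanged their neighbours 2 and 3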

Cite as

Corrie Jacobien Carstens, Michael Hamann, Ulrich Meyer, Manuel Penschuck, Hung Tran, and Dorothea Wagner. Parallel and I/O-efficient Randomisation of Massive Networks using Global Curveball Trades. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 11:1-11:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{carstens_et_al:LIPIcs.ESA.2018.11,
  author =	{Carstens, Corrie Jacobien and Hamann, Michael and Meyer, Ulrich and Penschuck, Manuel and Tran, Hung and Wagner, Dorothea},
  title =	{{Parallel and I/O-efficient Randomisation of Massive Networks using Global Curveball Trades}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{11:1--11:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.11},
  URN =		{urn:nbn:de:0030-drops-94745},
  doi =		{10.4230/LIPIcs.ESA.2018.11},
  annote =	{Keywords: Graph randomisation, Curveball, I/O-efficiency, Parallelism}
}
Document
Space-Optimal Quasi-Gray Codes with Logarithmic Read Complexity

Authors: Diptarka Chakraborty, Debarati Das, Michal Koucký, and Nitin Saurabh


Abstract
A quasi-Gray code of dimension n and length l over an alphabet Sigma is a sequence of distinct words w_1, w_2, ..., w_l from Sigma^n such that any two consecutive words differ in at most c coordinates, for some fixed constant c>0. In this paper we are interested in the read and write complexity of quasi-Gray codes in the bit-probe model, where we measure the number of symbols read and written in order to transform any word w_i into its successor w_{i+1}. We present a construction of quasi-Gray codes of dimension n and length 3^n over the ternary alphabet {0,1,2} with worst-case read complexity O(log n) and write complexity 2. This generalizes to arbitrary odd-size alphabets. For the binary alphabet, we present quasi-Gray codes of dimension n and length at least 2^n - 20n with worst-case read complexity 6 + log n and write complexity 2. This complements a recent result by Raskin [Raskin '17] who shows that any quasi-Gray code over the binary alphabet of length 2^n has read complexity Omega(n). Our results significantly improve on previously known constructions, and for odd-size alphabets we break the Omega(n) worst-case barrier for space-optimal (non-redundant) quasi-Gray codes with a constant number of writes. We obtain our results via a novel application of algebraic tools together with the principles of catalytic computation [Buhrman et al. '14, Ben-Or and Cleve '92, Barrington '89, Coppersmith and Grossman '75].
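
For orientation, the classical binary reflected Gray code already achieves c = 1, changing exactly one coordinate per step; the sketch below, with illustrative helper names, verifies this defining property:

def gray(n):
    """Binary reflected Gray code: 2^n distinct words of n bits."""
    return [i ^ (i >> 1) for i in range(1 << n)]

def max_flips(words):
    """Largest number of coordinates changed between consecutive words."""
    return max(bin(words[i] ^ words[i - 1]).count("1")
               for i in range(1, len(words)))

code = gray(4)
print(len(code), max_flips(code))  # 16 words, each step flips exactly 1 bit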

Cite as

Diptarka Chakraborty, Debarati Das, Michal Koucký, and Nitin Saurabh. Space-Optimal Quasi-Gray Codes with Logarithmic Read Complexity. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 12:1-12:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chakraborty_et_al:LIPIcs.ESA.2018.12,
  author =	{Chakraborty, Diptarka and Das, Debarati and Kouck\'{y}, Michal and Saurabh, Nitin},
  title =	{{Space-Optimal Quasi-Gray Codes with Logarithmic Read Complexity}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{12:1--12:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.12},
  URN =		{urn:nbn:de:0030-drops-94750},
  doi =		{10.4230/LIPIcs.ESA.2018.12},
  annote =	{Keywords: Gray code, Space-optimal counter, Decision assignment tree, Cell probe model}
}
Document
A Framework for In-place Graph Algorithms

Authors: Sankardeep Chakraborty, Anish Mukherjee, Venkatesh Raman, and Srinivasa Rao Satti


Abstract
The read-only memory (ROM) model is a classical model of computation used to study time-space tradeoffs of algorithms. A classical result on the ROM model is that any algorithm to sort n numbers using O(s) words of extra space requires Omega(n^2/s) comparisons for lg n <= s <= n/lg n, and the bound has also been recently matched by an algorithm. However, if we relax the model, we do have sorting algorithms (say Heapsort) that can sort using O(n lg n) comparisons and O(lg n) bits of extra space, while keeping a permutation of the given input sequence at any time during the algorithm. We address similar relaxations for graph algorithms. We show that a simple natural relaxation of the ROM model allows us to implement fundamental graph search methods like BFS and DFS more space efficiently than in ROM. By simply allowing elements in the adjacency list of a vertex to be permuted, we show that, on an undirected or directed connected graph G having n vertices and m edges, the vertices of G can be output in a DFS or BFS order using O(lg n) bits of extra space and O(n^3 lg n) time. Thus we obtain similar bounds for reachability and shortest path distance (both for undirected and directed graphs). With a little more (but still polynomial) time, we can also output vertices in the lex-DFS order. As reachability in directed graphs (even in DAGs) and shortest path distance (even in undirected graphs) are NL-complete, and lex-DFS is P-complete, our results show that our model is more powerful than ROM if L != P. En route, we also introduce and develop algorithms for another relaxation of ROM where the adjacency lists of the vertices are circular lists and we can modify only the heads of the lists. Here we first show a linear time DFS implementation using n + O(lg n) bits of extra space. Improving the extra space exponentially to only O(lg n) bits, we also obtain BFS and DFS, albeit with a slightly slower running time. Both the models we propose maintain the graph structure throughout the algorithm; only the order of vertices in the adjacency lists changes. In sharp contrast, for BFS and DFS, to the best of our knowledge, there are no algorithms in ROM that use even O(n^{1-epsilon}) bits of extra space; in fact, implementing DFS using cn bits for c<1 has been mentioned as an open problem. Furthermore, DFS (respectively, BFS) algorithms using n + o(n) (respectively, o(n)) bits of extra space use Reingold's [JACM, 2008] or Barnes et al.'s reachability algorithm [SICOMP, 1998] and hence have high runtime. Our results can be contrasted with the recent result of Buhrman et al. [STOC, 2014], which gives an algorithm for directed st-reachability on catalytic Turing machines using O(lg n) bits with catalytic space O(n^2 lg n) and time O(n^9).

Cite as

Sankardeep Chakraborty, Anish Mukherjee, Venkatesh Raman, and Srinivasa Rao Satti. A Framework for In-place Graph Algorithms. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 13:1-13:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chakraborty_et_al:LIPIcs.ESA.2018.13,
  author =	{Chakraborty, Sankardeep and Mukherjee, Anish and Raman, Venkatesh and Satti, Srinivasa Rao},
  title =	{{A Framework for In-place Graph Algorithms}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{13:1--13:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.13},
  URN =		{urn:nbn:de:0030-drops-94760},
  doi =		{10.4230/LIPIcs.ESA.2018.13},
  annote =	{Keywords: DFS, BFS, in-place algorithm, space-efficient graph algorithms, logspace}
}
Document
Self-Assembly of Any Shape with Constant Tile Types using High Temperature

Authors: Cameron Chalk, Austin Luchsinger, Robert Schweller, and Tim Wylie


Abstract
Inspired by nature and motivated by a lack of top-down tools for precise nanoscale manufacture, self-assembly is a bottom-up process where simple, unorganized components autonomously combine to form larger, more complex structures. Such systems hide rich algorithmic properties - notably, Turing universality - and a self-assembly system can be seen as both the object to be manufactured and the machine controlling the manufacturing process. Thus, a benchmark problem in self-assembly is the unique assembly of shapes: to design a set of simple agents which, based on aggregation rules and random movement, self-assemble into a particular shape and nothing else. We use a popular model of self-assembly, the 2-handed or hierarchical tile assembly model, and allow the existence of repulsive forces, which is a well-studied variant. The technique utilizes a finely-tuned temperature (the minimum affinity required for aggregation of separate complexes). We show that, by calibrating the temperature and the strength of the aggregation between the tiles, one can encode the shape to be assembled without increasing the number of distinct tile types. Precisely, we show one tile set for which the following holds: for any finite connected shape S, there exists a setting of binding strengths between tiles and a temperature under which the system uniquely assembles S at some scale factor. Our tile system only uses one repulsive glue type and the system is growth-only (it produces no unstable assemblies). The best previous unique shape assembly results in tile assembly models use O(K(S)/(log K(S))) distinct tile types, where K(S) is the Kolmogorov (descriptional) complexity of the shape S.

Cite as

Cameron Chalk, Austin Luchsinger, Robert Schweller, and Tim Wylie. Self-Assembly of Any Shape with Constant Tile Types using High Temperature. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 14:1-14:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chalk_et_al:LIPIcs.ESA.2018.14,
  author =	{Chalk, Cameron and Luchsinger, Austin and Schweller, Robert and Wylie, Tim},
  title =	{{Self-Assembly of Any Shape with Constant Tile Types using High Temperature}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{14:1--14:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.14},
  URN =		{urn:nbn:de:0030-drops-94773},
  doi =		{10.4230/LIPIcs.ESA.2018.14},
  annote =	{Keywords: self-assembly, molecular computing, tiling, tile, shapes}
}
Document
A Unified PTAS for Prize Collecting TSP and Steiner Tree Problem in Doubling Metrics

Authors: T-H. Hubert Chan, Haotian Jiang, and Shaofeng H.-C. Jiang


Abstract
We present a unified (randomized) polynomial-time approximation scheme (PTAS) for the prize collecting traveling salesman problem (PCTSP) and the prize collecting Steiner tree problem (PCSTP) in doubling metrics. Given a metric space and a penalty function on a subset of points known as terminals, a solution is a subgraph on points in the metric space, whose cost is the weight of its edges plus the penalty due to terminals not covered by the subgraph. Under our unified framework, the solution subgraph needs to be Eulerian for PCTSP, while it needs to be a tree for PCSTP. Before our work, not even a QPTAS for these problems in doubling metrics was known. Our unified PTAS is based on the previous dynamic programming frameworks proposed in [Talwar STOC 2004] and [Bartal, Gottlieb, Krauthgamer STOC 2012]. However, since it is unknown which part of the optimal cost is due to edge lengths and which part is due to penalties of uncovered terminals, we need to develop new techniques to apply previous divide-and-conquer strategies and sparse instance decompositions.

Cite as

T-H. Hubert Chan, Haotian Jiang, and Shaofeng H.-C. Jiang. A Unified PTAS for Prize Collecting TSP and Steiner Tree Problem in Doubling Metrics. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 15:1-15:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chan_et_al:LIPIcs.ESA.2018.15,
  author =	{Chan, T-H. Hubert and Jiang, Haotian and Jiang, Shaofeng H.-C.},
  title =	{{A Unified PTAS for Prize Collecting TSP and Steiner Tree Problem in Doubling Metrics}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{15:1--15:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.15},
  URN =		{urn:nbn:de:0030-drops-94781},
  doi =		{10.4230/LIPIcs.ESA.2018.15},
  annote =	{Keywords: Doubling Dimension, Traveling Salesman Problem, Polynomial Time Approximation Scheme, Steiner Tree Problem, Prize Collecting}
}
Document
Near-Optimal Distance Emulator for Planar Graphs

Authors: Hsien-Chih Chang, Pawel Gawrychowski, Shay Mozes, and Oren Weimann


Abstract
Given a graph G and a set of terminals T, a distance emulator of G is another graph H (not necessarily a subgraph of G) containing T, such that all the pairwise distances in G between vertices of T are preserved in H. An important open question is to find the smallest possible distance emulator. We prove that, given any subset of k terminals in an n-vertex undirected unweighted planar graph, we can construct in O~(n) time a distance emulator of size O~(min(k^2, sqrt{k * n})). This is optimal up to logarithmic factors. The existence of such a distance emulator provides a straightforward framework to solve distance-related problems on planar graphs: replace the input graph with the distance emulator, and apply whatever algorithm is available to the resulting emulator. In particular, our result implies that, on any unweighted undirected planar graph, one can compute all-pairs shortest path distances among k terminals in O~(n) time when k = O(n^{1/3}).
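
The baseline that the paper improves upon is simple to state in code: the complete graph on the terminals, weighted by exact distances, is always a distance emulator of size O(k^2). A sketch for unweighted graphs, using BFS from each terminal:

from collections import deque

def bfs(adj, s):
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def trivial_emulator(adj, terminals):
    """The O(k^2) baseline: a complete weighted graph on the terminals whose
    weights are exact distances in G; the paper shrinks this to
    O~(min(k^2, sqrt(k*n))) for planar G."""
    H = {}
    for s in terminals:
        d = bfs(adj, s)
        for t in terminals:
            if s < t:
                H[(s, t)] = d[t]
    return H

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path on four vertices
print(trivial_emulator(adj, [0, 2, 3]))        # {(0, 2): 2, (0, 3): 3, (2, 3): 1}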

Cite as

Hsien-Chih Chang, Pawel Gawrychowski, Shay Mozes, and Oren Weimann. Near-Optimal Distance Emulator for Planar Graphs. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 16:1-16:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chang_et_al:LIPIcs.ESA.2018.16,
  author =	{Chang, Hsien-Chih and Gawrychowski, Pawel and Mozes, Shay and Weimann, Oren},
  title =	{{Near-Optimal Distance Emulator for Planar Graphs}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{16:1--16:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.16},
  URN =		{urn:nbn:de:0030-drops-94796},
  doi =		{10.4230/LIPIcs.ESA.2018.16},
  annote =	{Keywords: planar graphs, shortest paths, metric compression, distance preservers, distance emulators, distance oracles}
}
Document
Approximation Schemes for Geometric Coverage Problems

Authors: Steven Chaplick, Minati De, Alexander Ravsky, and Joachim Spoerhase


Abstract
In their seminal work, Mustafa and Ray [Nabil H. Mustafa and Saurabh Ray, 2010] showed that a wide class of geometric set cover (SC) problems admit a PTAS via local search - this is one of the most general approaches known for such problems. Their result applies if a naturally defined "exchange graph" for two feasible solutions is planar and is based on subdividing this graph via a planar separator theorem due to Frederickson [Greg N. Frederickson, 1987]. Obtaining similar results for the related maximum coverage problem (MC) seems non-trivial due to the hard cardinality constraint. In fact, while Badanidiyuru, Kleinberg, and Lee [Ashwinkumar Badanidiyuru et al., 2012] have shown (via a different analysis) that local search yields a PTAS for two-dimensional real halfspaces, they only conjectured that the same holds true for dimension three. Interestingly, at this point it was already known that local search provides a PTAS for the corresponding set cover case and this followed directly from the approach of Mustafa and Ray. In this work we provide a way to address the above-mentioned issue. First, we propose a color-balanced version of the planar separator theorem. The resulting subdivision approximates locally in each part the global distribution of the colors. Second, we show how this roughly balanced subdivision can be employed in a more careful analysis to strictly obey the hard cardinality constraint. More specifically, we obtain a PTAS for any "planarizable" instance of MC and thus essentially for all cases where the corresponding SC instance can be tackled via the approach of Mustafa and Ray. As a corollary, we confirm the conjecture of Badanidiyuru, Kleinberg, and Lee [Ashwinkumar Badanidiyuru et al., 2012] regarding real halfspaces in dimension three. We feel that our ideas could also be helpful in other geometric settings involving a cardinality constraint.

Cite as

Steven Chaplick, Minati De, Alexander Ravsky, and Joachim Spoerhase. Approximation Schemes for Geometric Coverage Problems. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 17:1-17:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chaplick_et_al:LIPIcs.ESA.2018.17,
  author =	{Chaplick, Steven and De, Minati and Ravsky, Alexander and Spoerhase, Joachim},
  title =	{{Approximation Schemes for Geometric Coverage Problems}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{17:1--17:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.17},
  URN =		{urn:nbn:de:0030-drops-94809},
  doi =		{10.4230/LIPIcs.ESA.2018.17},
  annote =	{Keywords: balanced separators, maximum coverage, local search, approximation scheme, geometric approximation algorithms}
}
Document
Amortized Analysis of Asynchronous Price Dynamics

Authors: Yun Kuen Cheung and Richard Cole


Abstract
We extend a recently developed framework for analyzing asynchronous coordinate descent algorithms to show that an asynchronous version of tatonnement, a fundamental price dynamic widely studied in general equilibrium theory, converges toward a market equilibrium for Fisher markets with CES utilities or Leontief utilities, for which tatonnement is equivalent to coordinate descent.
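
A synchronous sketch of tatonnement for a toy Fisher market with CES buyers (the paper analyzes the asynchronous version, which is not modelled here); each price moves proportionally to its excess demand, and all names and parameters are illustrative:

def ces_demand(p, a, budget, rho=0.5):
    """One CES buyer: maximise (sum_j a_j x_j^rho)^(1/rho) s.t. p.x = budget."""
    sigma = 1 / (1 - rho)                      # elasticity of substitution
    w = [aj ** sigma * pj ** (1 - sigma) for aj, pj in zip(a, p)]
    return [budget * wj / (pj * sum(w)) for wj, pj in zip(w, p)]

def tatonnement(buyers, m, steps=2000, lam=0.1):
    """Each good's price rises or falls with its excess demand (unit supply)."""
    p = [1.0] * m
    for _ in range(steps):
        z = [-1.0] * m                         # excess demand; supply 1 per good
        for a, budget in buyers:
            for j, x in enumerate(ces_demand(p, a, budget)):
                z[j] += x
        p = [pj * (1 + lam * zj) for pj, zj in zip(p, z)]
    return p

buyers = [([2.0, 1.0], 1.0), ([1.0, 3.0], 2.0)]  # (CES weights, budget) pairs
print(tatonnement(buyers, 2))                    # near-equilibrium prices, sum ~ 3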

Cite as

Yun Kuen Cheung and Richard Cole. Amortized Analysis of Asynchronous Price Dynamics. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 18:1-18:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{cheung_et_al:LIPIcs.ESA.2018.18,
  author =	{Cheung, Yun Kuen and Cole, Richard},
  title =	{{Amortized Analysis of Asynchronous Price Dynamics}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{18:1--18:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.18},
  URN =		{urn:nbn:de:0030-drops-94812},
  doi =		{10.4230/LIPIcs.ESA.2018.18},
  annote =	{Keywords: Asynchronous Tatonnement, Fisher Market, Market Equilibrium, Amortized Analysis}
}
Document
Cycles to the Rescue! Novel Constraints to Compute Maximum Planar Subgraphs Fast

Authors: Markus Chimani and Tilo Wiedera


Abstract
The NP-hard Maximum Planar Subgraph problem asks for a planar subgraph H of a given graph G such that H has maximum edge cardinality. For more than two decades, the only known non-trivial exact algorithm was based on integer linear programming and Kuratowski's famous planarity criterion. We build upon this approach and present new constraint classes - together with a lifting of the polyhedron - to obtain provably stronger LP-relaxations, and in turn faster algorithms in practice. The new constraints take Euler's polyhedron formula as a starting point and combine it with considering cycles in G. This paper discusses both the theoretical as well as the practical sides of this strengthening.

Cite as

Markus Chimani and Tilo Wiedera. Cycles to the Rescue! Novel Constraints to Compute Maximum Planar Subgraphs Fast. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 19:1-19:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chimani_et_al:LIPIcs.ESA.2018.19,
  author =	{Chimani, Markus and Wiedera, Tilo},
  title =	{{Cycles to the Rescue! Novel Constraints to Compute Maximum Planar Subgraphs Fast}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{19:1--19:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.19},
  URN =		{urn:nbn:de:0030-drops-94829},
  doi =		{10.4230/LIPIcs.ESA.2018.19},
  annote =	{Keywords: algorithm engineering, graph algorithms, integer linear programming, maximum planar subgraph}
}
Document
Parameterized Approximation Algorithms for Bidirected Steiner Network Problems

Authors: Rajesh Chitnis, Andreas Emil Feldmann, and Pasin Manurangsi


Abstract
The Directed Steiner Network (DSN) problem takes as input a directed edge-weighted graph G=(V,E) and a set D subseteq V x V of k demand pairs. The aim is to compute the cheapest network N subseteq G for which there is an s -> t path for each (s,t) in D. It is known that this problem is notoriously hard, as there is no k^{1/4-o(1)}-approximation algorithm under Gap-ETH, even when parameterizing the runtime by k [Dinur & Manurangsi, ITCS 2018]. In light of this, we systematically study several special cases of DSN and determine their parameterized approximability for the parameter k. For the bi-DSN_Planar problem, the aim is to compute a planar optimum solution N subseteq G in a bidirected graph G, i.e., for every edge uv of G the reverse edge vu exists and has the same weight. This problem is a generalization of several well-studied special cases. Our main result is that this problem admits a parameterized approximation scheme (PAS) for k. We also prove that our result is tight in the sense that (a) the runtime of our PAS cannot be significantly improved, and (b) it is unlikely that a PAS exists for any generalization of bi-DSN_Planar, unless FPT=W[1]. Additionally we study several generalizations of bi-DSN_Planar and obtain upper and lower bounds on obtainable runtimes parameterized by k. One important special case of DSN is the Strongly Connected Steiner Subgraph (SCSS) problem, for which the solution network N subseteq G needs to strongly connect a given set of k terminals. It has been observed before that for SCSS a parameterized 2-approximation exists when parameterized by k [Chitnis et al., IPEC 2013]. We show a tight inapproximability result: under Gap-ETH there is no (2-epsilon)-approximation algorithm parameterized by k (for any epsilon>0). To the best of our knowledge, this is the first example of a W[1]-hard problem admitting a non-trivial parameterized approximation factor which is also known to be tight! Additionally we show that when restricting the input of SCSS to bidirected graphs, the problem remains NP-hard but becomes FPT for k.

Cite as

Rajesh Chitnis, Andreas Emil Feldmann, and Pasin Manurangsi. Parameterized Approximation Algorithms for Bidirected Steiner Network Problems. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 20:1-20:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{chitnis_et_al:LIPIcs.ESA.2018.20,
  author =	{Chitnis, Rajesh and Feldmann, Andreas Emil and Manurangsi, Pasin},
  title =	{{Parameterized Approximation Algorithms for Bidirected Steiner Network Problems}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{20:1--20:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.20},
  URN =		{urn:nbn:de:0030-drops-94833},
  doi =		{10.4230/LIPIcs.ESA.2018.20},
  annote =	{Keywords: Directed Steiner Network, Strongly Connected Steiner Subgraph, Parameterized Approximations, Bidirected Graphs, Planar Graphs}
}
Document
Online Facility Location with Deletions

Authors: Marek Cygan, Artur Czumaj, Marcin Mucha, and Piotr Sankowski


Abstract
In this paper we study three previously unstudied variants of the online Facility Location problem, considering a natural scenario in which clients and facilities are not only allowed to arrive in the system, but may also depart at any moment. We begin with the study of a natural fully-dynamic online uncapacitated model where clients can be both added and removed. When a client arrives, it has to be assigned either to an existing facility or to a new facility opened at the client's location. However, when a client that also serves as one of the open facilities is removed, the model must allow all clients connected to that facility to be reconnected. In this model, we present an optimal O(log(n_{act}) / log log(n_{act}))-competitive algorithm, where n_{act} is the number of active clients at the end of the input sequence. Next, we turn our attention to the capacitated Facility Location problem. We first note that if no deletions are allowed, then one can achieve an optimal competitive ratio of O(log n / log log n), where n is the length of the sequence. However, when deletions are allowed, the capacitated version of the problem is significantly more challenging than the uncapacitated one. We show that, using a more sophisticated algorithmic approach, one can still obtain an online O(log N + log c log n)-competitive algorithm for the capacitated Facility Location problem in the fully dynamic model, where N is the number of points in the input metric and c is the capacity of any open facility.
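
For orientation, the insertion-only uncapacitated setting already admits a very simple randomized rule due to Meyerson (FOCS 2001): when a client arrives, open a facility at its location with probability proportional to its connection cost. A minimal sketch of that classic rule with a uniform facility cost f follows; deletions, the subject of this paper, are precisely what it does not handle, and all names are ours.

import math, random

def online_facility_location(clients, f, dist):
    # Meyerson's rule: open a facility at an arriving client with
    # probability min(d/f, 1), where d is the distance to the nearest
    # open facility; otherwise connect the client and pay d.
    facilities, cost = [], 0.0
    for c in clients:
        d = min((dist(c, x) for x in facilities), default=math.inf)
        if random.random() < min(d / f, 1.0):
            facilities.append(c)   # open a new facility at c
            cost += f
        else:
            cost += d              # connect c to its nearest open facility
    return facilities, cost

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(online_facility_location(pts, f=5.0, dist=math.dist))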

Cite as

Marek Cygan, Artur Czumaj, Marcin Mucha, and Piotr Sankowski. Online Facility Location with Deletions. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 21:1-21:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{cygan_et_al:LIPIcs.ESA.2018.21,
  author =	{Cygan, Marek and Czumaj, Artur and Mucha, Marcin and Sankowski, Piotr},
  title =	{{Online Facility Location with Deletions}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{21:1--21:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.21},
  URN =		{urn:nbn:de:0030-drops-94843},
  doi =		{10.4230/LIPIcs.ESA.2018.21},
  annote =	{Keywords: online algorithms, facility location, fully-dynamic online algorithms}
}
Document
Improved Routing on the Delaunay Triangulation

Authors: Nicolas Bonichon, Prosenjit Bose, Jean-Lou De Carufel, Vincent Despré, Darryl Hill, and Michiel Smid


Abstract
A geometric graph G=(P,E) is a set of points in the plane and edges between pairs of points, where the weight of an edge is equal to the Euclidean distance between its two endpoints. In local routing we find a path through G from a source vertex s to a destination vertex t, using only knowledge of the current vertex, its incident edges, and the locations of s and t. We present an algorithm for local routing on the Delaunay triangulation, and show that it finds a path between a source vertex s and a target vertex t that is not longer than 3.56|st|, improving the previous bound of 5.9|st|.
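
The notion of local routing is easiest to see in its simplest instance, greedy routing: always forward to the incident neighbor closest to the destination. Greedy routing is known to always reach t on Delaunay triangulations (Bose and Morin), albeit without the stretch bound proved in this paper. A sketch using scipy's Delaunay triangulation; the function names are ours.

import numpy as np
from scipy.spatial import Delaunay

def greedy_route(points, s, t):
    tri = Delaunay(points)
    # build vertex adjacency from the triangles
    adj = {i: set() for i in range(len(points))}
    for a, b, c in tri.simplices:
        adj[a] |= {b, c}; adj[b] |= {a, c}; adj[c] |= {a, b}
    # step to the neighbor nearest to t; on a Delaunay triangulation this
    # strictly decreases the distance to t, so the walk terminates
    path, cur = [s], s
    while cur != t:
        cur = min(adj[cur], key=lambda v: np.linalg.norm(points[v] - points[t]))
        path.append(cur)
    return path

pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5], [2, 1]])
print(greedy_route(pts, s=0, t=5))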

Cite as

Nicolas Bonichon, Prosenjit Bose, Jean-Lou De Carufel, Vincent Despré, Darryl Hill, and Michiel Smid. Improved Routing on the Delaunay Triangulation. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 22:1-22:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{bonichon_et_al:LIPIcs.ESA.2018.22,
  author =	{Bonichon, Nicolas and Bose, Prosenjit and De Carufel, Jean-Lou and Despr\'{e}, Vincent and Hill, Darryl and Smid, Michiel},
  title =	{{Improved Routing on the Delaunay Triangulation}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{22:1--22:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.22},
  URN =		{urn:nbn:de:0030-drops-94857},
  doi =		{10.4230/LIPIcs.ESA.2018.22},
  annote =	{Keywords: Delaunay, local routing, geometric, graph}
}
Document
On Geometric Prototype and Applications

Authors: Hu Ding and Manni Liu


Abstract
In this paper, we propose to study a new geometric optimization problem called the "geometric prototype" problem in Euclidean space. Given a set of patterns, where each pattern is represented by a (weighted or unweighted) point set, the geometric prototype can be viewed as the "average pattern" minimizing the total matching cost to them. As a general model, the problem finds many applications in the real world, such as Wasserstein barycenter computation and ensemble clustering. The dimensionality could be either constant or high, depending on the application. To the best of our knowledge, the general geometric prototype problem has yet to be seriously considered by the theory community. To bridge the gap between theory and practice, we first show that a small core-set can be obtained to substantially reduce the data size. Consequently, any existing heuristic or algorithm can be run on the core-set to achieve a great improvement in efficiency. As a new application of core-sets, this requires tackling a couple of challenges particular to the theory. Finally, we test our method on both image and high-dimensional clustering datasets; the experimental results remain stable even if we run the algorithms on core-sets much smaller than the original datasets, while the running times are reduced significantly.

Cite as

Hu Ding and Manni Liu. On Geometric Prototype and Applications. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 23:1-23:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{ding_et_al:LIPIcs.ESA.2018.23,
  author =	{Ding, Hu and Liu, Manni},
  title =	{{On Geometric Prototype and Applications}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{23:1--23:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.23},
  URN =		{urn:nbn:de:0030-drops-94867},
  doi =		{10.4230/LIPIcs.ESA.2018.23},
  annote =	{Keywords: prototype, core-set, Wasserstein barycenter, ensemble clustering}
}
Document
Improved Bounds for Multipass Pairing Heaps and Path-Balanced Binary Search Trees

Authors: Dani Dorfman, Haim Kaplan, László Kozma, Seth Pettie, and Uri Zwick


Abstract
We revisit multipass pairing heaps and path-balanced binary search trees (BSTs), two classical algorithms for data structure maintenance. The pairing heap is a simple and efficient "self-adjusting" heap, introduced in 1986 by Fredman, Sedgewick, Sleator, and Tarjan. In the multipass variant (one of the original pairing heap variants described by Fredman et al.) the minimum item is extracted via repeated pairing rounds in which neighboring siblings are linked. Path-balanced BSTs, proposed by Sleator (cf. Subramanian, 1996), are a natural alternative to Splay trees (Sleator and Tarjan, 1983). In a path-balanced BST, whenever an item is accessed, the search path leading to that item is re-arranged into a balanced tree. Despite their simplicity, both algorithms turned out to be difficult to analyse. Fredman et al. showed that operations in multipass pairing heaps take amortized O(log n * log log n / log log log n) time. For searching in path-balanced BSTs, Balasubramanian and Raman showed in 1995 the same amortized time bound of O(log n * log log n / log log log n), using a different argument. In this paper we show an explicit connection between the two algorithms and improve both bounds to O(log n * 2^{log^* n} * log^* n), respectively O(log n * 2^{log^* n} * (log^* n)^2), where log^* denotes the slowly growing iterated logarithm function. These are the first improvements in more than three, resp. two decades, approaching the information-theoretic lower bound of Omega(log n).
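
To fix the operation being analysed: in the multipass variant, extract-min removes the root and then repeatedly sweeps over the list of orphaned children, linking neighboring siblings in pairs, pass after pass, until a single tree remains. A minimal sketch of this textbook pairing rule on list-based nodes (names ours; the paper's contribution is the analysis, not the operation itself):

def link(a, b):
    # pairing-heap link: the larger root becomes the first child of the smaller
    if a[0] <= b[0]:
        a[1].insert(0, b); return a
    b[1].insert(0, a); return b

def multipass_extract_min(root):
    # remove the minimum root, then pair up neighboring siblings in
    # repeated passes until one tree remains
    trees = root[1]
    while len(trees) > 1:
        trees = [link(trees[i], trees[i + 1]) if i + 1 < len(trees) else trees[i]
                 for i in range(0, len(trees), 2)]
    return root[0], (trees[0] if trees else None)

def node(k): return [k, []]   # a node is [key, list of children]
h = node(1); h[1] = [node(5), node(3), node(9), node(2), node(7)]
print(multipass_extract_min(h)[0])  # 1; the remaining heap is rooted at 2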

Cite as

Dani Dorfman, Haim Kaplan, László Kozma, Seth Pettie, and Uri Zwick. Improved Bounds for Multipass Pairing Heaps and Path-Balanced Binary Search Trees. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 24:1-24:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{dorfman_et_al:LIPIcs.ESA.2018.24,
  author =	{Dorfman, Dani and Kaplan, Haim and Kozma, L\'{a}szl\'{o} and Pettie, Seth and Zwick, Uri},
  title =	{{Improved Bounds for Multipass Pairing Heaps and Path-Balanced Binary Search Trees}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{24:1--24:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.24},
  URN =		{urn:nbn:de:0030-drops-94879},
  doi =		{10.4230/LIPIcs.ESA.2018.24},
  annote =	{Keywords: data structure, priority queue, pairing heap, binary search tree}
}
Document
Improved Time and Space Bounds for Dynamic Range Mode

Authors: Hicham El-Zein, Meng He, J. Ian Munro, and Bryce Sandlund


Abstract
Given an array A of n elements, we wish to support queries for the most frequent and least frequent element in a subrange [l, r] of A. We also wish to support updates that change a particular element at index i or insert/delete an element at index i. For the range mode problem, our data structure supports all operations in O(n^{2/3}) deterministic time using only O(n) space. This improves two results by Chan et al. [Timothy M. Chan et al., 2014]: a linear space data structure supporting update and query operations in O~(n^{3/4}) time and an O(n^{4/3}) space data structure supporting update and query operations in O~(n^{2/3}) time. For the range least frequent problem, we address two variations. In the first, we are allowed to answer with an element of A that may not appear in the query range, and in the second, the returned element must be present in the query range. For the first variation, we develop a data structure that supports queries in O~(n^{2/3}) time, updates in O(n^{2/3}) time, and occupies O(n) space. For the second variation, we develop a Monte Carlo data structure that supports queries in O(n^{2/3}) time, updates in O~(n^{2/3}) time, and occupies O~(n) space, but requires that updates are made independently of the results of previous queries. The Monte Carlo data structure is also capable of answering k-frequency queries; that is, the problem of finding an element of given frequency in the specified query range. Previously, no dynamic data structures were known for least frequent element or k-frequency queries.
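
For reference, the query semantics on A[l..r], answered here by a naive O(r - l) scan; the structures above support the same queries, plus updates, in O(n^{2/3})-type time:

from collections import Counter

def range_mode_and_least_frequent(A, l, r):
    # naive baseline: count frequencies in A[l..r] (inclusive) and return
    # (a most frequent element, a least frequent element present in the range)
    freq = Counter(A[l:r + 1])
    return max(freq, key=freq.get), min(freq, key=freq.get)

A = [3, 1, 3, 2, 2, 3, 1]
print(range_mode_and_least_frequent(A, 1, 5))  # (3, 1): 3 and 2 tie as mode; 1 is rarest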

Cite as

Hicham El-Zein, Meng He, J. Ian Munro, and Bryce Sandlund. Improved Time and Space Bounds for Dynamic Range Mode. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 25:1-25:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{elzein_et_al:LIPIcs.ESA.2018.25,
  author =	{El-Zein, Hicham and He, Meng and Munro, J. Ian and Sandlund, Bryce},
  title =	{{Improved Time and Space Bounds for Dynamic Range Mode}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{25:1--25:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.25},
  URN =		{urn:nbn:de:0030-drops-94886},
  doi =		{10.4230/LIPIcs.ESA.2018.25},
  annote =	{Keywords: dynamic data structures, range query, range mode, range least frequent, range k-frequency}
}
Document
Online Makespan Scheduling with Job Migration on Uniform Machines

Authors: Matthias Englert, David Mezlaf, and Matthias Westermann


Abstract
In the classic minimum makespan scheduling problem, we are given an input sequence of n jobs with sizes. A scheduling algorithm has to assign the jobs to m parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. In this paper, we consider online scheduling algorithms without preemption. However, we allow the online algorithm to reassign up to k jobs to different machines in the final assignment. For m identical machines, Albers and Hellwig (Algorithmica, 2017) give tight bounds on the competitive ratio in this model. The precise ratio depends on, and increases with, m. It lies between 4/3 and approximately 1.4659. They show that k = O(m) is sufficient to achieve this bound and no k = o(n) can result in a better bound. We study m uniform machines, i.e., machines with different speeds, and show that this setting is strictly harder. For sufficiently large m, there is a delta = Theta(1) such that, for m machines with only two different machine speeds, no online algorithm can achieve a competitive ratio of less than 1.4659 + delta with k = o(n). We present a new algorithm for the uniform machine setting. Depending on the speeds of the machines, our scheduling algorithm achieves a competitive ratio that lies between 4/3 and approximately 1.7992 with k = O(m). We also show that k = Omega(m) is necessary to achieve a competitive ratio below 2. Our algorithm is based on a subtle imbalance with respect to the completion times of the machines, complemented by a bicriteria approximation algorithm that minimizes the makespan and maximizes the average completion time for certain sets of machines.

Cite as

Matthias Englert, David Mezlaf, and Matthias Westermann. Online Makespan Scheduling with Job Migration on Uniform Machines. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 26:1-26:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{englert_et_al:LIPIcs.ESA.2018.26,
  author =	{Englert, Matthias and Mezlaf, David and Westermann, Matthias},
  title =	{{Online Makespan Scheduling with Job Migration on Uniform Machines}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{26:1--26:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.26},
  URN =		{urn:nbn:de:0030-drops-94890},
  doi =		{10.4230/LIPIcs.ESA.2018.26},
  annote =	{Keywords: online algorithms, competitive analysis, minimum makespan scheduling, job migration}
}
Document
Truthful Prompt Scheduling for Minimizing Sum of Completion Times

Authors: Alon Eden, Michal Feldman, Amos Fiat, and Tzahi Taub


Abstract
We give a prompt online mechanism for minimizing the sum of [weighted] completion times. This is the first prompt online algorithm for the problem. When jobs are strategic agents, delaying scheduling decisions makes little sense, so promptness is the natural requirement. Moreover, the mechanism has a particularly simple form: an anonymous menu of options.

Cite as

Alon Eden, Michal Feldman, Amos Fiat, and Tzahi Taub. Truthful Prompt Scheduling for Minimizing Sum of Completion Times. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 27:1-27:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{eden_et_al:LIPIcs.ESA.2018.27,
  author =	{Eden, Alon and Feldman, Michal and Fiat, Amos and Taub, Tzahi},
  title =	{{Truthful Prompt Scheduling for Minimizing Sum of Completion Times}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{27:1--27:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.27},
  URN =		{urn:nbn:de:0030-drops-94905},
  doi =		{10.4230/LIPIcs.ESA.2018.27},
  annote =	{Keywords: Scheduling, Mechanism design, Online algorithms}
}
Document
Weighted Model Counting on the GPU by Exploiting Small Treewidth

Authors: Johannes K. Fichte, Markus Hecher, Stefan Woltran, and Markus Zisser


Abstract
We propose a novel solver that efficiently finds an almost-exact count of the solutions of a Boolean formula (#Sat) and the weighted model count of a weighted Boolean formula (WMC) if the treewidth of the given formula is sufficiently small. The basis of our approach is dynamic programming on tree decompositions, which we engineered towards efficient parallel execution on the GPU. We provide thorough experiments and compare the runtime of our system with state-of-the-art #Sat and WMC solvers. Our results are encouraging in the sense that complex reasoning problems can also be tackled by parameterized algorithms executed on the GPU if instances have treewidth at most 30, which is the case for more than half of the counting and weighted counting benchmark instances.

Cite as

Johannes K. Fichte, Markus Hecher, Stefan Woltran, and Markus Zisser. Weighted Model Counting on the GPU by Exploiting Small Treewidth. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 28:1-28:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{fichte_et_al:LIPIcs.ESA.2018.28,
  author =	{Fichte, Johannes K. and Hecher, Markus and Woltran, Stefan and Zisser, Markus},
  title =	{{Weighted Model Counting on the GPU by Exploiting Small Treewidth}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{28:1--28:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.28},
  URN =		{urn:nbn:de:0030-drops-94915},
  doi =		{10.4230/LIPIcs.ESA.2018.28},
  annote =	{Keywords: Parameterized Algorithms, Weighted Model Counting, General Purpose Computing on Graphics Processing Units, Dynamic Programming, Tree Decompositions, Treewidth}
}
Document
Light Spanners for High Dimensional Norms via Stochastic Decompositions

Authors: Arnold Filtser and Ofer Neiman


Abstract
Spanners for low dimensional spaces (e.g. Euclidean space of constant dimension, or doubling metrics) are well understood. This lies in contrast to the situation in high dimensional spaces, where, except for the work of Har-Peled, Indyk and Sidiropoulos (SODA 2013), who showed that any n-point Euclidean metric has an O(t)-spanner with O~(n^{1+1/t^2}) edges, little is known. In this paper we study several aspects of spanners in high dimensional normed spaces. First, we build spanners for finite subsets of l_p with 1 < p <= 2. Second, our construction yields a spanner which is both sparse and light, i.e., its total weight is not much larger than that of the minimum spanning tree. In particular, we show that any n-point subset of l_p for 1 < p <= 2 has an O(t)-spanner with n^{1+O~(1/t^p)} edges and lightness n^{O~(1/t^p)}. In fact, our results are more general, and they apply to any metric space admitting a certain low diameter stochastic decomposition. It is known that arbitrary metric spaces have an O(t)-spanner with lightness O(n^{1/t}). We exhibit the following tradeoff: metrics with decomposability parameter nu = nu(t) admit an O(t)-spanner with lightness O~(nu^{1/t}). For example, n-point Euclidean metrics have nu <= n^{1/t}, metrics with doubling constant lambda have nu <= lambda, and graphs of genus g have nu <= g. While these families do admit a (1+epsilon)-spanner, its lightness depends exponentially on the dimension (resp. on log g). Our construction alleviates this exponential dependency, at the cost of incurring larger stretch.
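
A useful point of contrast is the classic greedy t-spanner: scan edges by nondecreasing weight and keep an edge only if the spanner built so far stretches its endpoints by more than a factor t. Greedy spanners are sparse and light in constant dimension but suffer exactly the exponential dependencies discussed above. A small sketch on points in the plane (names ours; edge enumeration is quadratic, so this is for small inputs only):

import heapq, itertools, math

def spanner_dist(adj, s, t):
    # Dijkstra distance in the partial spanner
    dist, pq = {s: 0.0}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t: return d
        if d > dist.get(u, math.inf): continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w; heapq.heappush(pq, (d + w, v))
    return math.inf

def greedy_spanner(points, t):
    adj = {}
    edges = sorted((math.dist(p, q), i, j)
                   for (i, p), (j, q) in itertools.combinations(enumerate(points), 2))
    for w, i, j in edges:
        if spanner_dist(adj, i, j) > t * w:   # keep the edge only if needed
            adj.setdefault(i, []).append((j, w))
            adj.setdefault(j, []).append((i, w))
    return adj

pts = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1)]
print(sum(len(v) for v in greedy_spanner(pts, t=1.5).values()) // 2)  # edge count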

Cite as

Arnold Filtser and Ofer Neiman. Light Spanners for High Dimensional Norms via Stochastic Decompositions. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 29:1-29:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{filtser_et_al:LIPIcs.ESA.2018.29,
  author =	{Filtser, Arnold and Neiman, Ofer},
  title =	{{Light Spanners for High Dimensional Norms via Stochastic Decompositions}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{29:1--29:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.29},
  URN =		{urn:nbn:de:0030-drops-94922},
  doi =		{10.4230/LIPIcs.ESA.2018.29},
  annote =	{Keywords: Spanners, Stochastic Decompositions, High Dimensional Euclidean Space, Doubling Dimension, Genus Graphs}
}
Document
On the Tractability of Optimization Problems on H-Graphs

Authors: Fedor V. Fomin, Petr A. Golovach, and Jean-Florent Raymond


Abstract
For a graph H, a graph G is an H-graph if it is an intersection graph of connected subgraphs of some subdivision of H. These graphs naturally generalize several important graph classes like interval graphs or circular-arc graphs. This notion was introduced in the early 1990s by Biro, Hujter, and Tuza. Recently, Chaplick et al. initiated the algorithmic study of H-graphs by showing that a number of fundamental optimization problems like Clique, Independent Set, or Dominating Set are solvable in polynomial time on H-graphs. We extend and complement these algorithmic findings in several directions. First we show that for every fixed H, the class of H-graphs is of logarithmically-bounded boolean-width. We also prove that H-graphs are graphs with polynomially many minimal separators. Pipelined with the plethora of known algorithms on graphs of bounded boolean-width and graphs with polynomially many minimal separators, this describes a large class of optimization problems that are solvable in polynomial time on H-graphs. The most fundamental optimization problems among those solvable in polynomial time on H-graphs are Clique, Independent Set, and Dominating Set. We provide a more refined complexity analysis of these problems from the perspective of parameterized complexity. We show that Independent Set and Dominating Set are W[1]-hard when parameterized by the size of H plus the size of the solution. On the other hand, we prove that when H is a tree, Dominating Set is fixed-parameter tractable (FPT) parameterized by the size of H. We also show that Clique admits a polynomial kernel parameterized by the size of H and the solution size.

Cite as

Fedor V. Fomin, Petr A. Golovach, and Jean-Florent Raymond. On the Tractability of Optimization Problems on H-Graphs. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 30:1-30:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{fomin_et_al:LIPIcs.ESA.2018.30,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Raymond, Jean-Florent},
  title =	{{On the Tractability of Optimization Problems on H-Graphs}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{30:1--30:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.30},
  URN =		{urn:nbn:de:0030-drops-94930},
  doi =		{10.4230/LIPIcs.ESA.2018.30},
  annote =	{Keywords: H-topological intersection graphs, parameterized complexity, minimal separators, boolean-width, mim-width}
}
Document
On the Optimality of Pseudo-polynomial Algorithms for Integer Programming

Authors: Fedor V. Fomin, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh


Abstract
In the classic Integer Programming (IP) problem, the objective is to decide whether, for a given m x n matrix A and an m-vector b=(b_1,..., b_m), there is a non-negative integer n-vector x such that Ax=b. Solving (IP) is an important step in numerous algorithms and it is important to obtain an understanding of the precise complexity of this problem as a function of natural parameters of the input. The classic pseudo-polynomial time algorithm of Papadimitriou [J. ACM 1981] for instances of (IP) with a constant number of constraints was only recently improved upon by Eisenbrand and Weismantel [SODA 2018] and Jansen and Rohwedder [ArXiv 2018]. We continue this line of work and show that under the Exponential Time Hypothesis (ETH), the algorithm of Jansen and Rohwedder is nearly optimal. We also show that when the matrix A is assumed to be non-negative, a component of Papadimitriou's original algorithm is already nearly optimal under ETH. This motivates us to pick up the line of research initiated by Cunningham and Geelen [IPCO 2007] who studied the complexity of solving (IP) with non-negative matrices in which the number of constraints may be unbounded, but the branch-width of the column-matroid corresponding to the constraint matrix is a constant. We prove a lower bound on the complexity of solving (IP) for such instances and obtain optimal results with respect to a closely related parameter, path-width. Specifically, we prove matching upper and lower bounds for (IP) when the path-width of the corresponding column-matroid is a constant.
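
The pseudo-polynomial phenomenon is easiest to see for a single non-negative constraint: whether a_1 x_1 + ... + a_n x_n = b has a non-negative integer solution can be decided by a table over all right-hand sides 0..b, in O(nb) time. A sketch of this m = 1 special case (the algorithms discussed above handle m constraints and are far sharper):

def ip_feasible_one_row(a, b):
    # reachable[v] says whether some non-negative integer combination of the
    # coefficients in a sums to v; classic pseudo-polynomial DP in O(n*b)
    reachable = [False] * (b + 1)
    reachable[0] = True
    for v in range(1, b + 1):
        reachable[v] = any(c <= v and reachable[v - c] for c in a if c > 0)
    return reachable[b]

print(ip_feasible_one_row([4, 9], 17))  # True: 2*4 + 1*9 = 17
print(ip_feasible_one_row([4, 6], 9))   # False: only even sums are reachable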

Cite as

Fedor V. Fomin, Fahad Panolan, M. S. Ramanujan, and Saket Saurabh. On the Optimality of Pseudo-polynomial Algorithms for Integer Programming. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 31:1-31:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{fomin_et_al:LIPIcs.ESA.2018.31,
  author =	{Fomin, Fedor V. and Panolan, Fahad and Ramanujan, M. S. and Saurabh, Saket},
  title =	{{On the Optimality of Pseudo-polynomial Algorithms for Integer Programming}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{31:1--31:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.31},
  URN =		{urn:nbn:de:0030-drops-94949},
  doi =		{10.4230/LIPIcs.ESA.2018.31},
  annote =	{Keywords: Integer Programming, Strong Exponential Time Hypothesis, Branch-width of a matrix, Fine-grained Complexity}
}
Document
Symmetry Exploitation for Online Machine Covering with Bounded Migration

Authors: Waldo Gálvez, José A. Soto, and José Verschae


Abstract
Online models that allow recourse are highly effective in situations where classical models are too pessimistic. One such problem is the online machine covering problem on identical machines. In this setting, jobs arrive one by one and must be assigned to machines with the objective of maximizing the minimum machine load. When a job arrives, we are allowed to reassign some jobs as long as their total size is (at most) proportional to the processing time of the arriving job. The proportionality constant is called the migration factor of the algorithm. By rounding the processing times, which yields useful structural properties for online packing and covering problems, we first design a simple (1.7 + epsilon)-competitive algorithm using a migration factor of O(1/epsilon) which maintains at every arrival a locally optimal solution with respect to the Jump neighborhood. After that, we present, as our main contribution, a more involved (4/3+epsilon)-competitive algorithm using a migration factor of O~(1/epsilon^3). At every arrival, we run an adaptation of the Largest Processing Time first (LPT) algorithm. Since in both cases the new job can cause a complete change of the assignment of smaller jobs, a low migration factor is achieved by carefully exploiting the highly symmetric structure obtained by the rounding procedure.
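
The offline heuristic behind the second algorithm, Largest Processing Time first, sorts jobs in nonincreasing order and always gives the next job to the currently least-loaded machine. A static sketch of LPT for machine covering follows; the paper's contribution, running an adaptation of it online with bounded migration, is not reproduced here.

def lpt_cover(jobs, m):
    # place jobs in nonincreasing size order on the least-loaded machine;
    # returns the minimum machine load achieved and the assignment
    loads = [0] * m
    assignment = [[] for _ in range(m)]
    for p in sorted(jobs, reverse=True):
        i = loads.index(min(loads))
        loads[i] += p
        assignment[i].append(p)
    return min(loads), assignment

print(lpt_cover([9, 7, 6, 5, 4, 3], m=2))  # minimum load 17: a perfect split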

Cite as

Waldo Gálvez, José A. Soto, and José Verschae. Symmetry Exploitation for Online Machine Covering with Bounded Migration. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 32:1-32:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{galvez_et_al:LIPIcs.ESA.2018.32,
  author =	{G\'{a}lvez, Waldo and Soto, Jos\'{e} A. and Verschae, Jos\'{e}},
  title =	{{Symmetry Exploitation for Online Machine Covering with Bounded Migration}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{32:1--32:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.32},
  URN =		{urn:nbn:de:0030-drops-94959},
  doi =		{10.4230/LIPIcs.ESA.2018.32},
  annote =	{Keywords: Machine Covering, Bounded Migration, Online, Scheduling, LPT}
}
Document
Edit Distance with Block Operations

Authors: Michal Ganczorz, Pawel Gawrychowski, Artur Jez, and Tomasz Kociumaka


Abstract
We consider the problem of edit distance in which block operations are allowed, i.e., we ask for the minimal number of (block) operations that are needed to transform a string s into a string t. We give O(log n) approximation algorithms, where n is the total length of the input strings, for the variants of the problem which allow the following sets of operations: block move; block move and block delete; block move and block copy; block move, block copy, and block uncopy. The results still hold if we additionally allow any of the following operations: character insert, character delete, block reversal, or block involution (involution is a generalisation of the reversal). Previously, algorithms were known only for the first and last variant, and they had approximation ratios O(log n log^*n) and O(log n (log^*n)^2), respectively. The edit distance with block moves is equivalent, up to a constant factor, to the common string partition problem, in which we are given two strings s, t and the goal is to partition s into a minimal number of parts such that they can be permuted in order to obtain t. Thus we also obtain an O(log n) approximation for this problem (compared to the previous O(log n log^* n)). The results use a simplification of the previously used technique of locally consistent parsing, which groups short substrings of a string into phrases so that similar substrings are guaranteed to be grouped in a similar way. Instead of a sophisticated parsing technique relying on deterministic coin tossing, we use a simple one based on a partition of the alphabet into two subalphabets. In particular, this lowers the running time from O(n log^* n) to O(n). The new algorithms (for block copy or block delete) use a similar approach, but the analysis is based on a specially tuned combinatorial function on sets of numbers.

Cite as

Michal Ganczorz, Pawel Gawrychowski, Artur Jez, and Tomasz Kociumaka. Edit Distance with Block Operations. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 33:1-33:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{ganczorz_et_al:LIPIcs.ESA.2018.33,
  author =	{Ganczorz, Michal and Gawrychowski, Pawel and Jez, Artur and Kociumaka, Tomasz},
  title =	{{Edit Distance with Block Operations}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{33:1--33:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.33},
  URN =		{urn:nbn:de:0030-drops-94963},
  doi =		{10.4230/LIPIcs.ESA.2018.33},
  annote =	{Keywords: Edit distance, Block operations, Common string partition}
}
Document
A QPTAS for Gapless MEC

Authors: Shilpa Garg and Tobias Mömke


Abstract
We consider the problem Minimum Error Correction (MEC). A MEC instance is an n x m matrix M with entries from {0,1,-}. Feasible solutions are composed of two binary m-bit strings, together with an assignment of each row of M to one of the two strings. The objective is to minimize the number of mismatches (errors) where the row has a value that differs from the assigned solution string. The symbol "-" is a wildcard that matches both 0 and 1. A MEC instance is gapless, if in each row of M all binary entries are consecutive. Gapless-MEC is a relevant problem in computational biology, and it is closely related to segmentation problems that were introduced by [Kleinberg, Papadimitriou, and Raghavan, STOC'98] in the context of data mining. Without restrictions, it is known to be UG-hard to compute an O(1)-approximate solution to MEC. For both MEC and Gapless-MEC, the best polynomial time approximation algorithm has a logarithmic performance guarantee. We partially settle the approximation status of Gapless-MEC by providing a quasi-polynomial time approximation scheme (QPTAS). Additionally, for the relevant case where the binary part of a row is not contained in the binary part of another row, we provide a polynomial time approximation scheme (PTAS).

Cite as

Shilpa Garg and Tobias Mömke. A QPTAS for Gapless MEC. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 34:1-34:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{garg_et_al:LIPIcs.ESA.2018.34,
  author =	{Garg, Shilpa and M\"{o}mke, Tobias},
  title =	{{A QPTAS for Gapless MEC}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{34:1--34:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.34},
  URN =		{urn:nbn:de:0030-drops-94978},
  doi =		{10.4230/LIPIcs.ESA.2018.34},
  annote =	{Keywords: approximation algorithms, QPTAS, minimum error correction, segmentation, computational biology}
}
Document
FPT Algorithms for Embedding into Low Complexity Graphic Metrics

Authors: Arijit Ghosh, Sudeshna Kolay, and Gopinath Mishra


Abstract
The Metric Embedding problem takes as input two metric spaces (X,D_X) and (Y,D_Y), and a positive integer d. The objective is to determine whether there is an embedding F: X -> Y such that the distortion d_F <= d. Such an embedding is called a distortion d embedding. In parameterized complexity, the Metric Embedding problem is known to be W-hard and therefore, not expected to have an FPT algorithm. In this paper, we consider the Gen-Graph Metric Embedding problem, where the two metric spaces are graph metrics. We explore the extent of tractability of the problem in the parameterized complexity setting. We determine whether an unweighted graph metric (G,D_G) can be embedded, or bijectively embedded, into another unweighted graph metric (H,D_H), where the graph H has low structural complexity. For example, H is a cycle, or H has bounded treewidth or bounded connected treewidth. The parameters for the algorithms are chosen from the upper bound d on distortion, bound Delta on the maximum degree of H, treewidth alpha of H, and the connected treewidth alpha_c of H. Our general approach to these problems can be summarized as trying to understand the behavior of the shortest paths in G under a low distortion embedding into H, and the structural relation the mapping of these paths has to shortest paths in H.
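
To make the objective concrete: the distortion of a mapping F between two graph metrics is the product of its worst-case expansion and worst-case contraction over all vertex pairs. The sketch below evaluates the distortion of a given candidate embedding from BFS distances; it checks an embedding rather than finding one, which is the hard part addressed above. All names are ours.

from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def distortion(G, H, F):
    # distortion of F = (max stretch factor) * (max shrink factor)
    DG = {u: bfs_dist(G, u) for u in G}
    DH = {x: bfs_dist(H, x) for x in H}
    expansion = contraction = 1.0
    for u, v in combinations(G, 2):
        ratio = DH[F[u]][F[v]] / DG[u][v]
        expansion = max(expansion, ratio)
        contraction = max(contraction, 1.0 / ratio)
    return expansion * contraction

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}         # path on 4 vertices
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # 4-cycle
print(distortion(path4, cycle4, {0: 0, 1: 1, 2: 2, 3: 3}))  # 3.0: the pair (0,3) shrinks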

Cite as

Arijit Ghosh, Sudeshna Kolay, and Gopinath Mishra. FPT Algorithms for Embedding into Low Complexity Graphic Metrics. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 35:1-35:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{ghosh_et_al:LIPIcs.ESA.2018.35,
  author =	{Ghosh, Arijit and Kolay, Sudeshna and Mishra, Gopinath},
  title =	{{FPT Algorithms for Embedding into Low Complexity Graphic Metrics}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{35:1--35:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.35},
  URN =		{urn:nbn:de:0030-drops-94988},
  doi =		{10.4230/LIPIcs.ESA.2018.35},
  annote =	{Keywords: Metric spaces, metric embedding, FPT, dynamic programming}
}
Document
The Stochastic Score Classification Problem

Authors: Dimitrios Gkenosis, Nathaniel Grammel, Lisa Hellerstein, and Devorah Kletenik


Abstract
Consider the following Stochastic Score Classification Problem. A doctor is assessing a patient's risk of developing a certain disease, and can perform n tests on the patient. Each test has a binary outcome, positive or negative. A positive result is an indication of risk, and a patient's score is the total number of positive test results. Test results are accurate. The doctor needs to classify the patient into one of B risk classes, depending on the score (e.g., LOW, MEDIUM, and HIGH risk). Each of these classes corresponds to a contiguous range of scores. Test i has probability p_i of being positive, and it costs c_i to perform. To reduce costs, instead of performing all tests, the doctor will perform them sequentially and stop testing when it is possible to determine the patient's risk category. The problem is to determine the order in which the doctor should perform the tests, so as to minimize expected testing cost. We provide approximation algorithms for adaptive and non-adaptive versions of this problem, and pose a number of open questions.
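
A minimal simulation of the sequential testing process for a fixed, non-adaptive order: after each test, the final score is pinned between the positives seen so far and that number plus the tests remaining, and testing stops once this interval lies within a single class. The ordering policy, which the algorithms above optimize, is simply an input here; all names are illustrative.

import random, bisect

def run_order(order, p, c, boundaries, rng):
    # boundaries[i] is the smallest score of class i+1; classes are contiguous
    cls = lambda score: bisect.bisect_right(boundaries, score)
    positives, cost, remaining = 0, 0.0, len(order)
    for i in order:
        lo, hi = positives, positives + remaining
        if cls(lo) == cls(hi):      # score interval fits in one class: stop early
            break
        positives += rng.random() < p[i]   # perform test i, pay its cost
        cost += c[i]
        remaining -= 1
    return cls(positives), cost

rng = random.Random(0)
p = [0.8, 0.3, 0.5, 0.1]   # probability each test is positive
c = [1.0, 2.0, 1.5, 0.5]   # cost of each test
print(run_order([3, 1, 2, 0], p, c, boundaries=[2], rng=rng))  # (class, cost) for this draw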

Cite as

Dimitrios Gkenosis, Nathaniel Grammel, Lisa Hellerstein, and Devorah Kletenik. The Stochastic Score Classification Problem. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{gkenosis_et_al:LIPIcs.ESA.2018.36,
  author =	{Gkenosis, Dimitrios and Grammel, Nathaniel and Hellerstein, Lisa and Kletenik, Devorah},
  title =	{{The Stochastic Score Classification Problem}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.36},
  URN =		{urn:nbn:de:0030-drops-94990},
  doi =		{10.4230/LIPIcs.ESA.2018.36},
  annote =	{Keywords: approximation algorithms, symmetric Boolean functions, stochastic probing, sequential testing, adaptivity}
}
Document
Improved Space-Time Tradeoffs for kSUM

Authors: Isaac Goldstein, Moshe Lewenstein, and Ely Porat


Abstract
In the kSUM problem we are given an array of numbers a_1,a_2,...,a_n and we are required to determine if there are k different elements in this array such that their sum is 0. This problem is a parameterized version of the well-studied SUBSET-SUM problem, and a special case is the 3SUM problem that is extensively used for proving conditional hardness. Several works investigated the interplay between time and space in the context of SUBSET-SUM. Recently, improved time-space tradeoffs were proven for kSUM using both randomized and deterministic algorithms. In this paper we obtain an improvement over the best known results for the time-space tradeoff for kSUM. A major ingredient in achieving these results is a general self-reduction from kSUM to mSUM where m<k, and several useful observations that enable this reduction and its implications. The main results we prove in this paper include the following: (i) The best known Las Vegas solution to kSUM running in approximately O(n^{k-delta sqrt{2k}}) time and using O(n^{delta}) space, for 0 <= delta <= 1. (ii) The best known deterministic solution to kSUM running in approximately O(n^{k-delta sqrt{k}}) time and using O(n^{delta}) space, for 0 <= delta <= 1. (iii) A space-time tradeoff for solving kSUM using O(n^{delta}) space, for delta>1. (iv) An algorithm for 6SUM running in O(n^4) time using just O(n^{2/3}) space. (v) A solution to 3SUM on random input using O(n^2) time and O(n^{1/3}) space, under the assumption of a random read-only access to random bits.
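
The flavor of reducing kSUM to a smaller mSUM is visible already at k = 4: 4SUM reduces to 2SUM over the n^2 pairwise sums, trading O(n^2) space for O(n^2) expected time. A sketch (ours, and far coarser than the tradeoffs above):

from itertools import combinations

def four_sum(a):
    # store every pairwise sum with its index pair, then look for a
    # complementary pair over disjoint indices (a 2SUM over derived numbers);
    # expected O(n^2) time when the pairwise sums are mostly distinct
    sums = {}
    for (i, x), (j, y) in combinations(enumerate(a), 2):
        sums.setdefault(x + y, []).append((i, j))
    for s, pairs in sums.items():
        for (i, j) in pairs:
            for (k, l) in sums.get(-s, []):
                if len({i, j, k, l}) == 4:
                    return (i, j, k, l)
    return None

print(four_sum([1, 2, 3, -6, 10, -10]))  # (0, 1, 2, 3): 1 + 2 + 3 - 6 = 0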

Cite as

Isaac Goldstein, Moshe Lewenstein, and Ely Porat. Improved Space-Time Tradeoffs for kSUM. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 37:1-37:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{goldstein_et_al:LIPIcs.ESA.2018.37,
  author =	{Goldstein, Isaac and Lewenstein, Moshe and Porat, Ely},
  title =	{{Improved Space-Time Tradeoffs for kSUM}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{37:1--37:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.37},
  URN =		{urn:nbn:de:0030-drops-95000},
  doi =		{10.4230/LIPIcs.ESA.2018.37},
  annote =	{Keywords: kSUM, space-time tradeoff, self-reduction}
}
Document
Dynamic Trees with Almost-Optimal Access Cost

Authors: Mordecai Golin, John Iacono, Stefan Langerman, J. Ian Munro, and Yakov Nekrich


Abstract
An optimal binary search tree for an access sequence on elements is a static tree that minimizes the total search cost. Constructing perfectly optimal binary search trees is expensive, so the most efficient algorithms construct almost optimal search trees. There is a long literature on constructing almost optimal search trees dynamically, i.e., when the access pattern is not known in advance. All of these trees, e.g., splay trees and treaps, provide a multiplicative approximation to the optimal search cost. In this paper we show how to maintain an almost optimal weighted binary search tree under access operations and insertions of new elements, where the approximation is an additive constant. More technically, we maintain a tree in which the depth of the leaf holding an element e_i does not exceed min(log(W/w_i), log n) + O(1), where w_i is the number of times e_i was accessed and W is the total length of the access sequence. Our techniques can also be used to encode a sequence of m symbols with a dynamic alphabetic code in O(m) time so that the encoding length is bounded by m(H+O(1)), where H is the entropy of the sequence. This is the first efficient algorithm for adaptive alphabetic coding that runs in constant time per symbol.
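
The coding guarantee in concrete terms: a sequence of m symbols with empirical entropy H is encoded in at most m(H + O(1)) bits. A quick evaluation of what that bound comes to, with the O(1) term stood in for by an explicit constant:

import math
from collections import Counter

def entropy_bound(seq, overhead=1.0):
    # m * (H + overhead) bits, where H is the empirical entropy of the
    # sequence and 'overhead' stands in for the additive O(1) term
    m = len(seq)
    H = -sum((k / m) * math.log2(k / m) for k in Counter(seq).values())
    return m * (H + overhead)

print(entropy_bound("mississippi"))  # upper bound on the encoded length in bits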

Cite as

Mordecai Golin, John Iacono, Stefan Langerman, J. Ian Munro, and Yakov Nekrich. Dynamic Trees with Almost-Optimal Access Cost. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 38:1-38:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{golin_et_al:LIPIcs.ESA.2018.38,
  author =	{Golin, Mordecai and Iacono, John and Langerman, Stefan and Munro, J. Ian and Nekrich, Yakov},
  title =	{{Dynamic Trees with Almost-Optimal Access Cost}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{38:1--38:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.38},
  URN =		{urn:nbn:de:0030-drops-95017},
  doi =		{10.4230/LIPIcs.ESA.2018.38},
  annote =	{Keywords: Data Structures, Binary Search Trees, Adaptive Alphabetic Coding}
}
Document
A Tree Structure For Dynamic Facility Location

Authors: Gramoz Goranci, Monika Henzinger, and Dariusz Leniowski


Abstract
We study the metric facility location problem with client insertions and deletions. This setting differs from the classic dynamic facility location problem, where the set of clients remains the same, but the metric space can change over time. We show a deterministic algorithm that maintains a constant factor approximation to the optimal solution in worst-case time O~(2^{O(kappa^2)}) per client insertion or deletion in metric spaces while answering queries about the cost in O(1) time, where kappa denotes the doubling dimension of the metric. For metric spaces with bounded doubling dimension, the update time is polylogarithmic in the parameters of the problem.

Cite as

Gramoz Goranci, Monika Henzinger, and Dariusz Leniowski. A Tree Structure For Dynamic Facility Location. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 39:1-39:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{goranci_et_al:LIPIcs.ESA.2018.39,
  author =	{Goranci, Gramoz and Henzinger, Monika and Leniowski, Dariusz},
  title =	{{A Tree Structure For Dynamic Facility Location}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{39:1--39:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.39},
  URN =		{urn:nbn:de:0030-drops-95026},
  doi =		{10.4230/LIPIcs.ESA.2018.39},
  annote =	{Keywords: facility location, dynamic algorithm, approximation, doubling dimension}
}
Document
Dynamic Effective Resistances and Approximate Schur Complement on Separable Graphs

Authors: Gramoz Goranci, Monika Henzinger, and Pan Peng


Abstract
We consider the problem of dynamically maintaining (approximate) all-pairs effective resistances in separable graphs, which are those that admit an n^{c}-separator theorem for some c<1. We give a fully dynamic algorithm that maintains (1+epsilon)-approximations of the all-pairs effective resistances of an n-vertex graph G undergoing edge insertions and deletions with O~(sqrt{n}/epsilon^2) worst-case update time and O~(sqrt{n}/epsilon^2) worst-case query time, if G is guaranteed to be sqrt{n}-separable (i.e., it is taken from a class satisfying a sqrt{n}-separator theorem) and its separator can be computed in O~(n) time. Our algorithm is built upon a dynamic algorithm for maintaining approximate Schur complement that approximately preserves pairwise effective resistances among a set of terminals for separable graphs, which might be of independent interest. We complement our result by proving that for any two fixed vertices s and t, no incremental or decremental algorithm can maintain the s-t effective resistance for sqrt{n}-separable graphs with worst-case update time O(n^{1/2-delta}) and query time O(n^{1-delta}) for any delta>0, unless the Online Matrix Vector Multiplication (OMv) conjecture is false. We further show that for general graphs, no incremental or decremental algorithm can maintain the s-t effective resistance problem with worst-case update time O(n^{1-delta}) and query-time O(n^{2-delta}) for any delta >0, unless the OMv conjecture is false.
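
For reference, the static quantity being maintained: the s-t effective resistance equals (e_s - e_t)^T L^+ (e_s - e_t), where L is the graph Laplacian and L^+ its Moore-Penrose pseudoinverse. A dense numpy check, cubic in n; avoiding this kind of recomputation under edge updates is precisely the point of the algorithms above.

import numpy as np

def effective_resistance(edges, n, s, t):
    # R_eff(s,t) = (e_s - e_t)^T L^+ (e_s - e_t), L the unweighted Laplacian
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    chi = np.zeros(n); chi[s], chi[t] = 1.0, -1.0
    return chi @ np.linalg.pinv(L) @ chi

# two 2-edge paths between 0 and 3 in parallel: (2*2)/(2+2) = 1 ohm
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
print(effective_resistance(edges, 4, 0, 3))  # approx 1.0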

Cite as

Gramoz Goranci, Monika Henzinger, and Pan Peng. Dynamic Effective Resistances and Approximate Schur Complement on Separable Graphs. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 40:1-40:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{goranci_et_al:LIPIcs.ESA.2018.40,
  author =	{Goranci, Gramoz and Henzinger, Monika and Peng, Pan},
  title =	{{Dynamic Effective Resistances and Approximate Schur Complement on Separable Graphs}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{40:1--40:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.40},
  URN =		{urn:nbn:de:0030-drops-95036},
  doi =		{10.4230/LIPIcs.ESA.2018.40},
  annote =	{Keywords: Dynamic graph algorithms, effective resistance, separable graphs, Schur complement, conditional lower bounds}
}
Document
Buffered Count-Min Sketch on SSD: Theory and Experiments

Authors: Mayank Goswami, Dzejla Medjedovic, Emina Mekic, and Prashant Pandey


Abstract
Frequency estimation data structures such as the count-min sketch (CMS) have found numerous applications in databases, networking, computational biology and other domains. Many applications that use the count-min sketch process massive and rapidly evolving data sets. For data-intensive applications that aim to keep the overestimate error low, the count-min sketch becomes too large to store in available RAM and may have to migrate to external storage (e.g., SSD). Due to the random-read/write nature of the hash operations of the count-min sketch, simply placing it on SSD stifles the performance of time-critical applications, requiring about 4-6 random reads/writes to SSD per estimate (lookup) and update (insert) operation. In this paper, we expand on the preliminary idea of the buffered count-min sketch (BCMS) [Eydi et al., 2017], an SSD variant of the count-min sketch that uses hash localization to scale efficiently out of RAM while keeping the total error bounded. We describe the design and implementation of the buffered count-min sketch, and empirically show that our implementation achieves a 3.7x-4.7x speedup on update and a 4.3x speedup on estimate operations compared to the traditional count-min sketch on SSD. Our design also offers an asymptotic improvement in the external-memory model over the original data structure: r random I/Os are reduced to 1 I/O for the estimate operation. For a data structure that uses k blocks on SSD, with w the word/counter size, r the number of rows, and M the number of bits in main memory, our data structure uses kwr/M amortized I/Os per update, or, if kwr/M > 1, 1 I/O in the worst case. In typical scenarios, kwr/M is much smaller than 1. This is in contrast to the O(r) I/Os incurred for each update in the original data structure. Lastly, we show mathematically that the error rate of the buffered count-min sketch does not substantially degrade relative to the traditional count-min sketch. Specifically, we prove that for any query q, our data structure provides the guarantee Pr[Error(q) >= n epsilon (1+o(1))] <= delta + o(1), which, up to o(1) terms, is the same guarantee as that of a traditional count-min sketch.
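
As a baseline for what is being buffered: an in-RAM count-min sketch keeps r rows of w counters, an update increments one hashed counter per row, and an estimate takes the minimum over the rows (always an overestimate). A minimal in-memory version, without the hash localization and SSD buffering described above:

import random

class CountMinSketch:
    def __init__(self, w, r, seed=0):
        rng = random.Random(seed)
        self.w = w
        self.seeds = [rng.randrange(1 << 61) for _ in range(r)]  # one hash per row
        self.table = [[0] * w for _ in range(r)]

    def _cell(self, seed, x):
        return hash((seed, x)) % self.w

    def update(self, x, count=1):
        # increment one counter in each of the r rows
        for row, seed in zip(self.table, self.seeds):
            row[self._cell(seed, x)] += count

    def estimate(self, x):
        # minimum over rows; never underestimates the true count
        return min(row[self._cell(seed, x)]
                   for row, seed in zip(self.table, self.seeds))

cms = CountMinSketch(w=256, r=4)
for item in "abracadabra":
    cms.update(item)
print(cms.estimate("a"))  # >= 5; equals 5 unless hash collisions inflate it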

Cite as

Mayank Goswami, Dzejla Medjedovic, Emina Mekic, and Prashant Pandey. Buffered Count-Min Sketch on SSD: Theory and Experiments. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 41:1-41:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{goswami_et_al:LIPIcs.ESA.2018.41,
  author =	{Goswami, Mayank and Medjedovic, Dzejla and Mekic, Emina and Pandey, Prashant},
  title =	{{Buffered Count-Min Sketch on SSD: Theory and Experiments}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{41:1--41:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.41},
  URN =		{urn:nbn:de:0030-drops-95042},
  doi =		{10.4230/LIPIcs.ESA.2018.41},
  annote =	{Keywords: Streaming model, Count-min sketch, Counting, Frequency, External memory, I/O efficiency, Bloom filter, Counting filter, Quotient filter}
}
Scalable Katz Ranking Computation in Large Static and Dynamic Graphs

Authors: Alexander van der Grinten, Elisabetta Bergamini, Oded Green, David A. Bader, and Henning Meyerhenke


Abstract
Network analysis defines a number of centrality measures to identify the most central nodes in a network. Fast computation of those measures is a major challenge in algorithmic network analysis. Aside from closeness and betweenness, Katz centrality is one of the established centrality measures. In this paper, we consider the problem of computing rankings for Katz centrality. In particular, we propose upper and lower bounds on the Katz score of a given node. While previous approaches relied on numerical approximation or heuristics to compute Katz centrality rankings, we construct an algorithm that iteratively improves those upper and lower bounds until a correct Katz ranking is obtained. We extend our algorithm to dynamic graphs while maintaining its correctness guarantees. Experiments demonstrate that our static graph algorithm outperforms both numerical approaches and heuristics with speedups between 1.5x and 3.5x, depending on the desired quality guarantees. Our dynamic graph algorithm improves upon the static algorithm for update batches of fewer than 10000 edges. We provide efficient parallel CPU and GPU implementations of our algorithms that enable near real-time Katz centrality computation for graphs with hundreds of millions of nodes in fractions of seconds.
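
For context, the quantity being ranked is the Katz score c(v) = sum_k alpha^k * (number of length-k walks ending at v). A truncated power-series evaluation makes this concrete; the plain-Python sketch below is our illustration of the definition, not the paper's bound-based ranking algorithm, and the graph format and default alpha = 1/(max degree + 1) are illustrative choices.

def katz_scores(adj, alpha=None, iters=100):
    """Truncated series sum_k alpha^k * (# walks of length k ending at v)
    for an undirected graph adj: {v: set of neighbours}.  Convergence
    needs alpha < 1/lambda_max; alpha = 1/(max degree + 1) is safe."""
    if alpha is None:
        alpha = 1.0 / (max(len(ns) for ns in adj.values()) + 1)
    walks = {v: 1.0 for v in adj}   # one (empty) walk ends at each node
    score = {v: 0.0 for v in adj}
    a_pow = 1.0
    for _ in range(iters):
        a_pow *= alpha
        walks = {v: sum(walks[u] for u in adj[v]) for v in adj}
        for v in adj:
            score[v] += a_pow * walks[v]
    return score

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(sorted(path, key=katz_scores(path).get, reverse=True))
# [1, 2, 0, 3]: the two interior nodes of the path rank first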

Cite as

Alexander van der Grinten, Elisabetta Bergamini, Oded Green, David A. Bader, and Henning Meyerhenke. Scalable Katz Ranking Computation in Large Static and Dynamic Graphs. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 42:1-42:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{vandergrinten_et_al:LIPIcs.ESA.2018.42,
  author =	{van der Grinten, Alexander and Bergamini, Elisabetta and Green, Oded and Bader, David A. and Meyerhenke, Henning},
  title =	{{Scalable Katz Ranking Computation in Large Static and Dynamic Graphs}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{42:1--42:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.42},
  URN =		{urn:nbn:de:0030-drops-95055},
  doi =		{10.4230/LIPIcs.ESA.2018.42},
  annote =	{Keywords: network analysis, Katz centrality, top-k ranking, dynamic graphs, parallel algorithms}
}
Round-Hashing for Data Storage: Distributed Servers and External-Memory Tables

Authors: Roberto Grossi and Luca Versari


Abstract
This paper proposes round-hashing, which is suitable for data storage on distributed servers and for implementing external-memory tables in which each lookup retrieves at most a single block of external memory, using a stash. For data storage, round-hashing is like consistent hashing as it avoids a full rehashing of the keys when new servers are added. Experiments show that requests are served ten or more times faster than with the state of the art. In distributed data storage, this guarantees better throughput for serving requests and, moreover, greatly reduces the time needed to decide which data should move to new servers, as rescanning the data is much faster.
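
For comparison, the consistent-hashing baseline mentioned above can be sketched quickly. The Python sketch below is the textbook ring with virtual nodes, not the paper's round-hashing; MD5 and the choice of 100 virtual nodes per server are arbitrary illustrative choices.

import bisect, hashlib

def _h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Textbook consistent hashing with virtual nodes: a key is served
    by the first server clockwise on the ring, so adding a server moves
    only about a 1/n fraction of the keys."""

    def __init__(self, servers, vnodes=100):
        self.ring = sorted((_h("%s#%d" % (s, i)), s)
                           for s in servers for i in range(vnodes))
        self.keys = [h for h, _ in self.ring]

    def lookup(self, key):
        i = bisect.bisect(self.keys, _h(key)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["server1", "server2", "server3"])
print(ring.lookup("user:42"))  # the server currently owning this key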

Cite as

Roberto Grossi and Luca Versari. Round-Hashing for Data Storage: Distributed Servers and External-Memory Tables. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 43:1-43:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{grossi_et_al:LIPIcs.ESA.2018.43,
  author =	{Grossi, Roberto and Versari, Luca},
  title =	{{Round-Hashing for Data Storage: Distributed Servers and External-Memory Tables}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{43:1--43:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.43},
  URN =		{urn:nbn:de:0030-drops-95062},
  doi =		{10.4230/LIPIcs.ESA.2018.43},
  annote =	{Keywords: consistent hashing, external memory, hash tables}
}
Algorithmic Building Blocks for Asymmetric Memories

Authors: Yan Gu, Yihan Sun, and Guy E. Blelloch


Abstract
The future of main memory appears to lie in the direction of new non-volatile memory technologies that provide strong capacity-to-performance ratios, but have write operations that are much more expensive than reads in terms of energy, bandwidth, and latency. This asymmetry can have a significant effect on algorithm design, and in many cases it is possible to reduce writes at the cost of more reads. This paper studies which algorithmic techniques are useful in designing practical write-efficient algorithms. We focus on several fundamental algorithmic building blocks including unordered set/map implemented using hash tables, comparison sort, and graph traversal algorithms including breadth-first search and Dijkstra's algorithm. We introduce new algorithms and implementations that can reduce writes, and analyze the performance experimentally using a software simulator. Finally, we summarize lessons and directions for designing write-efficient algorithms that we believe are valuable to share.
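
A toy version of the asymmetric cost model can make the read/write trade-off concrete. The wrapper below is our illustration only (the name AsymmetricArray and the default write cost omega = 10 are ours); it counts accesses the way a simple software simulator might.

class AsymmetricArray:
    """Array wrapper under an asymmetric cost model: reads cost 1,
    writes cost omega > 1.  Handy for comparing a read-heavy and a
    write-heavy variant of the same algorithm."""

    def __init__(self, data, omega=10):
        self._a = list(data)
        self.omega = omega
        self.reads = 0
        self.writes = 0

    def __getitem__(self, i):
        self.reads += 1
        return self._a[i]

    def __setitem__(self, i, value):
        self.writes += 1
        self._a[i] = value

    def cost(self):
        return self.reads + self.omega * self.writes

a = AsymmetricArray(range(8))
a[3] = 99
_ = a[3]
print(a.reads, a.writes, a.cost())  # 1 1 11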

Cite as

Yan Gu, Yihan Sun, and Guy E. Blelloch. Algorithmic Building Blocks for Asymmetric Memories. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 44:1-44:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{gu_et_al:LIPIcs.ESA.2018.44,
  author =	{Gu, Yan and Sun, Yihan and Blelloch, Guy E.},
  title =	{{Algorithmic Building Blocks for Asymmetric Memories}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{44:1--44:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.44},
  URN =		{urn:nbn:de:0030-drops-95070},
  doi =		{10.4230/LIPIcs.ESA.2018.44},
  annote =	{Keywords: Asymmetric Memory, I/O Cost, Write-Efficient Algorithms, Hash Tables, Graph-Traversal Algorithms}
}
On the Decision Tree Complexity of String Matching

Authors: Xiaoyu He, Neng Huang, and Xiaoming Sun


Abstract
String matching is one of the most fundamental problems in computer science. A natural problem is to determine the number of characters that need to be queried (i.e., the decision tree complexity) in a string in order to decide whether this string contains a certain pattern. Rivest showed that for every pattern p, in the worst case any deterministic algorithm needs to query at least n-|p|+1 characters, where n is the length of the string and |p| is the length of the pattern. He further conjectured that this bound is tight. By using the adversary method, Tuza disproved this conjecture and showed that more than one half of binary patterns are evasive, i.e., any algorithm needs to query all the characters (see Section 1.1 for more details). In this paper, we give a query algorithm which settles the decision tree complexity of string matching except for a negligible fraction of patterns. Our algorithm shows that Tuza's criteria of evasive patterns are almost complete. Using the algebraic approach of Rivest and Vuillemin, we also give a new sufficient condition for the evasiveness of patterns, which is beyond Tuza's criteria. In addition, our result reveals an interesting connection to Skolem's Problem in mathematics.
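
The notion itself can be evaluated exactly for tiny instances by game-tree search: the algorithm picks a position to query and an adversary answers with the worst character. The brute-force Python sketch below (exponential, so only feasible for very small n, and in no way the paper's algorithm) computes the exact worst-case query count and hence lets one test a small pattern for evasiveness.

from functools import lru_cache
from itertools import product

def decision_tree_complexity(pattern, n, alphabet="01"):
    """Minimax number of character queries needed in the worst case to
    decide whether `pattern` occurs in an unknown length-n string."""

    @lru_cache(maxsize=None)
    def verdict(state):
        # True/False if all completions agree on occurrence, else None.
        unknown = [i for i, c in enumerate(state) if c is None]
        seen = set()
        for fill in product(alphabet, repeat=len(unknown)):
            s = list(state)
            for i, c in zip(unknown, fill):
                s[i] = c
            seen.add(pattern in "".join(s))
            if len(seen) == 2:
                return None
        return seen.pop()

    @lru_cache(maxsize=None)
    def cost(state):
        if verdict(state) is not None:
            return 0
        return min(
            1 + max(cost(state[:i] + (a,) + state[i + 1:]) for a in alphabet)
            for i, c in enumerate(state) if c is None)

    return cost((None,) * n)

# A pattern is evasive at length n iff the value below equals n:
print(decision_tree_complexity("11", 5))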

Cite as

Xiaoyu He, Neng Huang, and Xiaoming Sun. On the Decision Tree Complexity of String Matching. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 45:1-45:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{he_et_al:LIPIcs.ESA.2018.45,
  author =	{He, Xiaoyu and Huang, Neng and Sun, Xiaoming},
  title =	{{On the Decision Tree Complexity of String Matching}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{45:1--45:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.45},
  URN =		{urn:nbn:de:0030-drops-95082},
  doi =		{10.4230/LIPIcs.ESA.2018.45},
  annote =	{Keywords: String Matching, Decision Tree Complexity, Boolean Function, Algebraic Method}
}
Decremental SPQR-trees for Planar Graphs

Authors: Jacob Holm, Giuseppe F. Italiano, Adam Karczmarz, Jakub Lacki, and Eva Rotenberg


Abstract
We present a decremental data structure for maintaining the SPQR-tree of a planar graph subject to edge contractions and deletions. The update time, amortized over Omega(n) operations, is O(log^2 n). Via SPQR-trees, we give a decremental data structure for maintaining 3-vertex connectivity in planar graphs. It answers queries in O(1) time and processes edge deletions and contractions in O(log^2 n) amortized time. The previous best data structure supported deletions and insertions in O(sqrt{n}) time.

Cite as

Jacob Holm, Giuseppe F. Italiano, Adam Karczmarz, Jakub Lacki, and Eva Rotenberg. Decremental SPQR-trees for Planar Graphs. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 46:1-46:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{holm_et_al:LIPIcs.ESA.2018.46,
  author =	{Holm, Jacob and Italiano, Giuseppe F. and Karczmarz, Adam and Lacki, Jakub and Rotenberg, Eva},
  title =	{{Decremental SPQR-trees for Planar Graphs}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{46:1--46:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.46},
  URN =		{urn:nbn:de:0030-drops-95091},
  doi =		{10.4230/LIPIcs.ESA.2018.46},
  annote =	{Keywords: Graph embeddings, data structures, graph algorithms, planar graphs, SPQR-trees, triconnectivity}
}
Computing the Chromatic Number Using Graph Decompositions via Matrix Rank

Authors: Bart M. P. Jansen and Jesper Nederlof


Abstract
Computing the smallest number q such that the vertices of a given graph can be properly q-colored is one of the oldest and most fundamental problems in combinatorial optimization. The q-Coloring problem has been studied intensively using the framework of parameterized algorithmics, resulting in a very good understanding of the best-possible algorithms for several parameterizations based on the structure of the graph. For example, algorithms are known to solve the problem on graphs of treewidth {tw} in time O^*(q^{tw}), while a running time of O^*((q-epsilon)^{tw}) is impossible assuming the Strong Exponential Time Hypothesis (SETH). While there is an abundance of work for parameterizations based on decompositions of the graph by vertex separators, almost nothing is known about parameterizations based on edge separators. We fill this gap by studying q-Coloring parameterized by cutwidth, and parameterized by pathwidth in bounded-degree graphs. Our research uncovers interesting new ways to exploit small edge separators. We present two algorithms for q-Coloring parameterized by cutwidth {ctw}: a deterministic one that runs in time O^*(2^{omega * {ctw}}), where omega is the matrix multiplication constant, and a randomized one with runtime O^*(2^{{ctw}}). In sharp contrast to earlier work, the running time is independent of q. The dependence on cutwidth is optimal: we prove that even 3-Coloring cannot be solved in O^*((2-epsilon)^{{ctw}}) time assuming SETH. Our algorithms rely on a new rank bound for a matrix that describes compatible colorings. Combined with a simple communication protocol for evaluating a product of two polynomials, this also yields an O^*((floor[d/2]+1)^{{pw}}) time randomized algorithm for q-Coloring on graphs of pathwidth {pw} and maximum degree d. Such a runtime was first obtained by Björklund, but only for graphs with few proper colorings. We also prove that this result is optimal in the sense that no O^*((floor[d/2]+1-epsilon)^{{pw}})-time algorithm exists assuming SETH.

Cite as

Bart M. P. Jansen and Jesper Nederlof. Computing the Chromatic Number Using Graph Decompositions via Matrix Rank. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 47:1-47:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{jansen_et_al:LIPIcs.ESA.2018.47,
  author =	{Jansen, Bart M. P. and Nederlof, Jesper},
  title =	{{Computing the Chromatic Number Using Graph Decompositions via Matrix Rank}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{47:1--47:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.47},
  URN =		{urn:nbn:de:0030-drops-95104},
  doi =		{10.4230/LIPIcs.ESA.2018.47},
  annote =	{Keywords: Parameterized Complexity, Chromatic Number, Graph Decompositions}
}
Polynomial Kernels for Hitting Forbidden Minors under Structural Parameterizations

Authors: Bart M. P. Jansen and Astrid Pieterse


Abstract
We investigate polynomial-time preprocessing for the problem of hitting forbidden minors in a graph, using the framework of kernelization. For a fixed finite set of graphs F, the F-Deletion problem is the following: given a graph G and integer k, is it possible to delete k vertices from G to ensure the resulting graph does not contain any graph from F as a minor? Earlier work by Fomin, Lokshtanov, Misra, and Saurabh [FOCS'12] showed that when F contains a planar graph, an instance (G,k) can be reduced in polynomial time to an equivalent one of size k^{O(1)}. In this work we focus on structural measures of the complexity of an instance, with the aim of giving nontrivial preprocessing guarantees for instances whose solutions are large. Motivated by several impossibility results, we parameterize the F-Deletion problem by the size of a vertex modulator whose removal results in a graph of constant treedepth eta. We prove that for each set F of connected graphs and constant eta, the F-Deletion problem parameterized by the size of a treedepth-eta modulator has a polynomial kernel. Our kernelization is fully explicit and does not depend on protrusion reduction or well-quasi-ordering, which are sources of algorithmic non-constructivity in earlier works on F-Deletion. Our main technical contribution is to analyze how models of a forbidden minor in a graph G with modulator X interact with the various connected components of G-X. Using the language of labeled minors, we analyze the fragments of potential forbidden minor models that can remain after removing an optimal F-Deletion solution from a single connected component of G-X. By bounding the number of different types of behavior that can occur by a polynomial in |X|, we obtain a polynomial kernel using a recursive preprocessing strategy. Our results extend earlier work for specific instances of F-Deletion such as Vertex Cover and Feedback Vertex Set. They also generalize earlier preprocessing results for F-Deletion parameterized by a vertex cover, which is a treedepth-one modulator.

Cite as

Bart M. P. Jansen and Astrid Pieterse. Polynomial Kernels for Hitting Forbidden Minors under Structural Parameterizations. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 48:1-48:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{jansen_et_al:LIPIcs.ESA.2018.48,
  author =	{Jansen, Bart M. P. and Pieterse, Astrid},
  title =	{{Polynomial Kernels for Hitting Forbidden Minors under Structural Parameterizations}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{48:1--48:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.48},
  URN =		{urn:nbn:de:0030-drops-95119},
  doi =		{10.4230/LIPIcs.ESA.2018.48},
  annote =	{Keywords: Kernelization, F-minor free deletion, Treedepth modulator, Structural parameterization}
}
Quantum Algorithms for Connectivity and Related Problems

Authors: Michael Jarret, Stacey Jeffery, Shelby Kimmel, and Alvaro Piedrafita


Abstract
An important family of span programs, st-connectivity span programs, has been used to design quantum algorithms in various contexts, including a number of graph problems and formula evaluation problems. The complexity of the resulting algorithms depends on the largest positive witness size of any 1-input, and the largest negative witness size of any 0-input. Belovs and Reichardt first showed that the positive witness size is exactly characterized by the effective resistance of the input graph, but only rough upper bounds were known previously on the negative witness size. We show that the negative witness size in an st-connectivity span program is exactly characterized by the capacitance of the input graph. This gives a tight analysis for algorithms based on st-connectivity span programs on any set of inputs. We use this analysis to give a new quantum algorithm for estimating the capacitance of a graph. We also describe a new quantum algorithm for deciding if a graph is connected, which improves the previous best quantum algorithm for this problem if we are promised that either the graph has at least kappa > 1 components, or the graph is connected and has small average resistance, which is upper bounded by the diameter. We also give an alternative algorithm for deciding if a graph is connected that can be better than our first algorithm when the maximum degree is small. Finally, using ideas from our second connectivity algorithm, we give an algorithm for estimating the algebraic connectivity of a graph, the second smallest eigenvalue of the Laplacian.
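
The two graph quantities at the heart of these algorithms are classical and easy to compute exactly for small graphs. The NumPy sketch below is our illustration (not the quantum algorithms themselves): effective resistance via the Laplacian pseudoinverse, and algebraic connectivity as the second smallest Laplacian eigenvalue.

import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

def effective_resistance(n, edges, s, t):
    # R_st = (e_s - e_t)^T L^+ (e_s - e_t)
    pinv = np.linalg.pinv(laplacian(n, edges))
    e = np.zeros(n); e[s], e[t] = 1.0, -1.0
    return float(e @ pinv @ e)

def algebraic_connectivity(n, edges):
    # Second smallest eigenvalue of L; positive iff the graph is connected.
    return float(np.sort(np.linalg.eigvalsh(laplacian(n, edges)))[1])

edges = [(0, 1), (1, 2)]                       # a path on 3 vertices
print(effective_resistance(3, edges, 0, 2))    # 2.0: unit resistors in series
print(algebraic_connectivity(3, edges))        # 1.0 for this path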

Cite as

Michael Jarret, Stacey Jeffery, Shelby Kimmel, and Alvaro Piedrafita. Quantum Algorithms for Connectivity and Related Problems. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 49:1-49:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{jarret_et_al:LIPIcs.ESA.2018.49,
  author =	{Jarret, Michael and Jeffery, Stacey and Kimmel, Shelby and Piedrafita, Alvaro},
  title =	{{Quantum Algorithms for Connectivity and Related Problems}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{49:1--49:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.49},
  URN =		{urn:nbn:de:0030-drops-95121},
  doi =		{10.4230/LIPIcs.ESA.2018.49},
  annote =	{Keywords: Electrical networks, Quantum algorithms, Span programs, Connectivity, Graph theory}
}
Generalized Coloring of Permutations

Authors: Vít Jelínek, Michal Opler, and Pavel Valtr


Abstract
A permutation pi is a merge of a permutation sigma and a permutation tau, if we can color the elements of pi red and blue so that the red elements have the same relative order as sigma and the blue ones as tau. We consider, for fixed hereditary permutation classes C and D, the complexity of determining whether a given permutation pi is a merge of an element of C with an element of D. We develop general algorithmic approaches for identifying polynomially tractable cases of merge recognition. Our tools include a version of nondeterministic logspace streaming recognizability of permutations, which we introduce, and a concept of bounded width decomposition, inspired by the work of Ahal and Rabinovich. As a consequence of the general results, we can provide nontrivial examples of tractable permutation merges involving commonly studied permutation classes, such as the class of layered permutations, the class of separable permutations, or the class of permutations avoiding a decreasing sequence of a given length. On the negative side, we obtain a general hardness result which implies, for example, that it is NP-complete to recognize the permutations that can be merged from two subpermutations avoiding the pattern 2413.
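
The merge relation is easy to state operationally, and for small permutations one can decide it by brute force over all red/blue colorings. The Python sketch below (exponential; the predicates are placeholders for membership tests in the classes C and D) illustrates the definition on the classical special case where both classes consist of increasing permutations.

from itertools import combinations

def is_increasing(seq):
    return all(a < b for a, b in zip(seq, seq[1:]))

def is_merge(pi, pred_c, pred_d):
    """Brute force over all 2^n colourings: can pi be coloured red/blue
    so the red subsequence satisfies pred_c and the blue one pred_d?"""
    n = len(pi)
    for r in range(n + 1):
        for red in combinations(range(n), r):
            red_set = set(red)
            if pred_c([pi[i] for i in red]) and \
               pred_d([pi[i] for i in range(n) if i not in red_set]):
                return True
    return False

# A permutation merges two increasing sequences iff it avoids 321:
print(is_merge((2, 4, 1, 5, 3), is_increasing, is_increasing))  # True
print(is_merge((3, 2, 1), is_increasing, is_increasing))        # False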

Cite as

Vít Jelínek, Michal Opler, and Pavel Valtr. Generalized Coloring of Permutations. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 50:1-50:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{jelinek_et_al:LIPIcs.ESA.2018.50,
  author =	{Jel{\'\i}nek, V{\'\i}t and Opler, Michal and Valtr, Pavel},
  title =	{{Generalized Coloring of Permutations}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{50:1--50:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.50},
  URN =		{urn:nbn:de:0030-drops-95137},
  doi =		{10.4230/LIPIcs.ESA.2018.50},
  annote =	{Keywords: Permutations, merge, generalized coloring}
}
Solving Partition Problems Almost Always Requires Pushing Many Vertices Around

Authors: Iyad Kanj, Christian Komusiewicz, Manuel Sorge, and Erik Jan van Leeuwen


Abstract
A fundamental graph problem is to recognize whether the vertex set of a graph G can be bipartitioned into sets A and B such that G[A] and G[B] satisfy properties Pi_A and Pi_B, respectively. This so-called (Pi_A,Pi_B)-Recognition problem generalizes, amongst others, the recognition of 3-colorable, bipartite, split, and monopolar graphs. A powerful algorithmic technique that can be used to obtain fixed-parameter algorithms for many cases of (Pi_A,Pi_B)-Recognition, as well as several other problems, is the pushing process. For bipartition problems, the process starts with an "almost correct" bipartition (A',B'), and pushes appropriate vertices from A' to B' and vice versa to eventually arrive at a correct bipartition. In this paper, we study whether (Pi_A,Pi_B)-Recognition problems for which the pushing process yields fixed-parameter algorithms also admit polynomial problem kernels. In our study, we focus on the first level above triviality, where Pi_A is the set of P_3-free graphs (disjoint unions of cliques, or cluster graphs), the parameter is the number of clusters in the cluster graph G[A], and Pi_B is characterized by a set H of connected forbidden induced subgraphs. We prove that, under the assumption that NP not subseteq coNP/poly, (Pi_A,Pi_B)-Recognition admits a polynomial kernel if and only if H contains a graph of order at most 2. In both the kernelization and the lower bound results, we make crucial use of the pushing process.

Cite as

Iyad Kanj, Christian Komusiewicz, Manuel Sorge, and Erik Jan van Leeuwen. Solving Partition Problems Almost Always Requires Pushing Many Vertices Around. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{kanj_et_al:LIPIcs.ESA.2018.51,
  author =	{Kanj, Iyad and Komusiewicz, Christian and Sorge, Manuel and van Leeuwen, Erik Jan},
  title =	{{Solving Partition Problems Almost Always Requires Pushing Many Vertices Around}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{51:1--51:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.51},
  URN =		{urn:nbn:de:0030-drops-95140},
  doi =		{10.4230/LIPIcs.ESA.2018.51},
  annote =	{Keywords: Fixed-parameter algorithms, Kernelization, Vertex-partition problems, Reduction rules, Cross-composition}
}
String Attractors: Verification and Optimization

Authors: Dominik Kempa, Alberto Policriti, Nicola Prezza, and Eva Rotenberg


Abstract
String attractors [STOC 2018] are combinatorial objects recently introduced to unify all known dictionary compression techniques in a single theory. A set Gamma subseteq [1..n] is a k-attractor for a string S in Sigma^n if and only if every distinct substring of S of length at most k has an occurrence crossing at least one of the positions in Gamma. Finding the smallest k-attractor is NP-hard for k >= 3, but polylogarithmic approximations can be found using reductions from dictionary compressors. It is easy to reduce the k-attractor problem to a set-cover instance where the string's positions are interpreted as sets of substrings. The main result of this paper is a much more powerful reduction based on the truncated suffix tree. Our new characterization of the problem leads to more efficient algorithms for string attractors: we show how to check the validity and minimality of a k-attractor in near-optimal time and how to quickly compute exact solutions. For example, we prove that a minimum 3-attractor can be found in O(n) time when |Sigma| in O(sqrt[3+epsilon]{log n}) for some constant epsilon > 0, despite the problem being NP-hard for large Sigma.
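
Validity of a k-attractor is straightforward to check naively, which makes the definition concrete. The Python sketch below is roughly quadratic-time and 0-indexed, nowhere near the paper's near-optimal suffix-tree-based verification.

def is_k_attractor(gamma, s, k):
    """Check that every distinct substring of s of length <= k has an
    occurrence crossing a position of gamma (0-based positions here)."""
    n = len(s)
    gamma = set(gamma)
    for length in range(1, k + 1):
        occurrences = {}
        for i in range(n - length + 1):
            occurrences.setdefault(s[i:i + length], []).append(i)
        for starts in occurrences.values():
            if not any(any(i <= g < i + length for g in gamma)
                       for i in starts):
                return False
    return True

print(is_k_attractor({1, 3}, "abaab", 2))  # True
print(is_k_attractor({0}, "abaab", 2))     # False: "b" never crosses position 0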

Cite as

Dominik Kempa, Alberto Policriti, Nicola Prezza, and Eva Rotenberg. String Attractors: Verification and Optimization. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 52:1-52:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{kempa_et_al:LIPIcs.ESA.2018.52,
  author =	{Kempa, Dominik and Policriti, Alberto and Prezza, Nicola and Rotenberg, Eva},
  title =	{{String Attractors: Verification and Optimization}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{52:1--52:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.52},
  URN =		{urn:nbn:de:0030-drops-95153},
  doi =		{10.4230/LIPIcs.ESA.2018.52},
  annote =	{Keywords: Dictionary compression, String attractors, Set cover}
}
Data Reduction for Maximum Matching on Real-World Graphs: Theory and Experiments

Authors: Viatcheslav Korenwein, André Nichterlein, Rolf Niedermeier, and Philipp Zschoche


Abstract
Finding a maximum-cardinality or maximum-weight matching in (edge-weighted) undirected graphs is among the most prominent problems of algorithmic graph theory. For n-vertex and m-edge graphs, the best known algorithms run in O~(m sqrt{n}) time. We build on recent theoretical work focusing on linear-time data reduction rules for finding maximum-cardinality matchings and complement the theoretical results by presenting and analyzing (thereby employing the kernelization methodology of parameterized complexity analysis) linear-time data reduction rules for the positive-integer-weighted case. Moreover, we experimentally demonstrate that these data reduction rules provide significant speedups of the state-of-the-art implementation for computing matchings in real-world graphs: the average speedup is 3800% in the unweighted case and "just" 30% in the weighted case.
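
Two of the classic linear-time rules in this family are easy to state: delete isolated vertices, and permanently match a degree-1 vertex along its only edge (some maximum matching always contains that edge). The Python sketch below applies them exhaustively, Karp-Sipser style; it is our simplified illustration, not the paper's full rule set for the weighted case.

from collections import deque

def reduce_matching(adj):
    """Exhaustively apply the degree-0 and degree-1 reduction rules for
    maximum-cardinality matching.  adj maps each vertex to a set of
    neighbours and is consumed; returns the forced matching edges.
    Whatever remains has minimum degree >= 2."""
    matching = []
    queue = deque(adj)
    while queue:
        v = queue.popleft()
        if v not in adj:
            continue
        if not adj[v]:                      # isolated: remove
            del adj[v]
        elif len(adj[v]) == 1:              # pendant: match v with u
            u = next(iter(adj[v]))
            matching.append((v, u))
            del adj[v]
            for x in adj.pop(u):
                if x != v:
                    adj[x].discard(u)
                    queue.append(x)         # degree dropped: revisit
    return matching

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(reduce_matching(path))  # [(0, 1), (2, 3)]: the path reduces away fully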

Cite as

Viatcheslav Korenwein, André Nichterlein, Rolf Niedermeier, and Philipp Zschoche. Data Reduction for Maximum Matching on Real-World Graphs: Theory and Experiments. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 53:1-53:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{korenwein_et_al:LIPIcs.ESA.2018.53,
  author =	{Korenwein, Viatcheslav and Nichterlein, Andr\'{e} and Niedermeier, Rolf and Zschoche, Philipp},
  title =	{{Data Reduction for Maximum Matching on Real-World Graphs: Theory and Experiments}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{53:1--53:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.53},
  URN =		{urn:nbn:de:0030-drops-95169},
  doi =		{10.4230/LIPIcs.ESA.2018.53},
  annote =	{Keywords: Maximum-cardinality matching, maximum-weight matching, linear-time algorithms, preprocessing, kernelization, parameterized complexity analysis}
}
Searching a Tree with Permanently Noisy Advice

Authors: Lucas Boczkowski, Amos Korman, and Yoav Rodeh


Abstract
We consider a search problem on trees using unreliable guiding instructions. Specifically, an agent starts a search at the root of a tree aiming to find a treasure hidden at one of the nodes by an adversary. Each visited node holds information, called advice, regarding the most promising neighbor to continue the search. However, the memory holding this information may be unreliable. Modeling this scenario, we focus on a probabilistic setting. That is, the advice at a node is a pointer to one of its neighbors. With probability q each node is faulty, independently of other nodes, in which case its advice points at an arbitrary neighbor, chosen uniformly at random. Otherwise, the node is sound and points at the correct neighbor. Crucially, the advice is permanent, in the sense that querying a node several times would yield the same answer. We evaluate efficiency by two measures: The move complexity denotes the expected number of edge traversals, and the query complexity denotes the expected number of queries. Let Delta denote the maximal degree. Roughly speaking, the main message of this paper is that a phase transition occurs when the noise parameter q is roughly 1/sqrt{Delta}. More precisely, we prove that above the threshold, every search algorithm has query complexity (and move complexity) which is both exponential in the depth d of the treasure and polynomial in the number of nodes n. Conversely, below the threshold, there exists an algorithm with move complexity O(d sqrt{Delta}), and an algorithm with query complexity O(sqrt{Delta}log Delta log^2 n). Moreover, for the case of regular trees, we obtain an algorithm with query complexity O(sqrt{Delta}log n log log n). For q that is below but close to the threshold, the bound for the move complexity is tight, and the bounds for the query complexity are not far from the lower bound of Omega(sqrt{Delta}log_Delta n). In addition, we also consider a semi-adversarial variant, in which an adversary chooses the direction of advice at faulty nodes. For this variant, the threshold for efficient moving algorithms happens when the noise parameter is roughly 1/Delta. Above this threshold a simple protocol that follows each advice with a fixed probability already achieves optimal move complexity.
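
The fixed-probability protocol from the last sentence is simple enough to simulate. In the Python sketch below (the tree encoding, the p_follow value, and the specific faulty pointer are all illustrative choices), the walk follows the advice with probability p_follow and otherwise moves to a uniformly random neighbour; the random moves are what let it escape faulty pointers.

import random

def noisy_advice_search(neighbors, root, treasure, advice,
                        p_follow=0.8, rng=random):
    """One run of the fixed-probability protocol; returns the number of
    edge traversals (the move cost of this run)."""
    cur, moves = root, 0
    while cur != treasure:
        if rng.random() < p_follow:
            cur = advice[cur]               # advice may be faulty
        else:
            cur = rng.choice(neighbors[cur])
        moves += 1
    return moves

# Path 0-1-2-3 with treasure at 3; node 1 has faulty advice pointing back.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
advice = {0: 1, 1: 0, 2: 3}
runs = [noisy_advice_search(neighbors, 0, 3, advice) for _ in range(1000)]
print(sum(runs) / len(runs))  # empirical average move count over 1000 runs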

Cite as

Lucas Boczkowski, Amos Korman, and Yoav Rodeh. Searching a Tree with Permanently Noisy Advice. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 54:1-54:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{boczkowski_et_al:LIPIcs.ESA.2018.54,
  author =	{Boczkowski, Lucas and Korman, Amos and Rodeh, Yoav},
  title =	{{Searching a Tree with Permanently Noisy Advice}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{54:1--54:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.54},
  URN =		{urn:nbn:de:0030-drops-95176},
  doi =		{10.4230/LIPIcs.ESA.2018.54},
  annote =	{Keywords: Data structures, Graph search, Average Case Analysis}
}
Efficient and Adaptive Parameterized Algorithms on Modular Decompositions

Authors: Stefan Kratsch and Florian Nelles


Abstract
We study the influence of a graph parameter called modular-width on the time complexity for optimally solving well-known polynomial problems such as Maximum Matching, Triangle Counting, and Maximum s-t Vertex-Capacitated Flow. The modular-width of a graph depends on its (unique) modular decomposition tree, and can be computed in linear time O(n+m) for graphs with n vertices and m edges. Modular decompositions are an important tool for graph algorithms, e.g., for linear-time recognition of certain graph classes. Throughout, we obtain efficient parameterized algorithms of running times O(f(mw)n + m), O(n + f(mw)m), or O(f(mw) + n + m) for low polynomial functions f and graphs of modular-width mw. Our algorithm for Maximum Matching, running in time O(mw^2 log(mw) n + m), is both faster and simpler than the recent O(mw^4 n + m) time algorithm of Coudert et al. (SODA 2018). For several other problems, e.g., Triangle Counting and Maximum b-Matching, we give adaptive algorithms, meaning that their running times match the best unparameterized algorithms for worst-case modular-width of mw=Theta(n) and they outperform them already for mw=o(n), until reaching linear time for mw=O(1).
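
The notion of a module underlying modular-width is easy to make concrete: M is a module if every vertex outside M is adjacent either to all of M or to none of it. A direct, purely definitional check (our Python sketch, quadratic and not part of any linear-time decomposition algorithm):

def is_module(adj, M):
    """Is M a module of the graph adj ({v: set of neighbours})?
    Every outside vertex must see all of M or none of M."""
    M = set(M)
    for v in set(adj) - M:
        hit = adj[v] & M
        if hit and hit != M:
            return False
    return True

# In the 4-cycle 0-1-2-3, the pair of opposite vertices {0, 2} is a module:
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_module(c4, {0, 2}))  # True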

Cite as

Stefan Kratsch and Florian Nelles. Efficient and Adaptive Parameterized Algorithms on Modular Decompositions. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 55:1-55:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{kratsch_et_al:LIPIcs.ESA.2018.55,
  author =	{Kratsch, Stefan and Nelles, Florian},
  title =	{{Efficient and Adaptive Parameterized Algorithms on Modular Decompositions}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{55:1--55:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.55},
  URN =		{urn:nbn:de:0030-drops-95187},
  doi =		{10.4230/LIPIcs.ESA.2018.55},
  annote =	{Keywords: efficient parameterized algorithms, modular-width, adaptive algorithms}
}
On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress

Authors: Marvin Künnemann


Abstract
Motivated by studying the power of randomness, certifying algorithms and barriers for fine-grained reductions, we investigate the question whether the multiplication of two n x n matrices can be performed in near-optimal nondeterministic time O~(n^2). Since a classic algorithm due to Freivalds verifies correctness of matrix products probabilistically in time O(n^2), our question is a relaxation of the open problem of derandomizing Freivalds' algorithm. We discuss consequences of a positive or negative resolution of this problem and provide potential avenues towards resolving it. Particularly, we show that sufficiently fast deterministic verifiers for 3SUM or univariate polynomial identity testing yield faster deterministic verifiers for matrix multiplication. Furthermore, we present partial algorithmic progress: distinguishing whether an integer matrix product is correct or contains between 1 and n erroneous entries can be performed in time O~(n^2). Interestingly, the difficult case of deterministic matrix product verification is not one of "finding a needle in the haystack", but rather one of cancellation effects in the presence of many errors. Our main technical contribution is a deterministic algorithm that corrects an integer matrix product containing at most t errors in time O~(sqrt{t} n^2 + t^2). To obtain this result, we show how to compute an integer matrix product with at most t nonzeroes in the same running time. This improves upon known deterministic output-sensitive integer matrix multiplication algorithms for t = Omega(n^{2/3}) nonzeroes, which is of independent interest.
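
Freivalds' verifier, the starting point of the paper, is short enough to show in full. Below is a standard Python rendition with dense list-of-lists matrices; reps controls the error probability, which is at most 2^-reps for an incorrect product.

import random

def freivalds_verify(A, B, C, reps=20):
    """Probabilistically check A * B == C for n x n matrices in
    O(reps * n^2): multiply by random 0/1 vectors instead of computing
    the full product."""
    n = len(A)
    for _ in range(reps):
        x = [random.randint(0, 1) for _ in range(n)]
        Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        if ABx != Cx:
            return False        # certainly wrong
    return True                 # correct with probability >= 1 - 2**-reps

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
print(freivalds_verify(A, B, [[19, 22], [43, 50]]))  # True (correct product)
print(freivalds_verify(A, B, [[19, 22], [43, 51]]))  # almost surely False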

Cite as

Marvin Künnemann. On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 56:1-56:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{kunnemann:LIPIcs.ESA.2018.56,
  author =	{K\"{u}nnemann, Marvin},
  title =	{{On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{56:1--56:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.56},
  URN =		{urn:nbn:de:0030-drops-95195},
  doi =		{10.4230/LIPIcs.ESA.2018.56},
  annote =	{Keywords: matrix product verification, certifying computation, fine-grained complexity and algorithms}
}
Optimal Online Contention Resolution Schemes via Ex-Ante Prophet Inequalities

Authors: Euiwoong Lee and Sahil Singla


Abstract
Online contention resolution schemes (OCRSs) were proposed by Feldman, Svensson, and Zenklusen [Moran Feldman et al., 2016] as a generic technique to round a fractional solution in the matroid polytope in an online fashion. It has found applications in several stochastic combinatorial problems where there is a commitment constraint: on seeing the value of a stochastic element, the algorithm has to immediately and irrevocably decide whether to select it while always maintaining an independent set in the matroid. Although OCRSs immediately lead to prophet inequalities, these prophet inequalities are not optimal. Can we instead use prophet inequalities to design optimal OCRSs? We design the first optimal 1/2-OCRS for matroids by reducing the problem to designing a matroid prophet inequality where we compare to the stronger benchmark of an ex-ante relaxation. We also introduce and design optimal (1-1/e)-random order CRSs for matroids, which are similar to OCRSs but the arrival order is chosen uniformly at random.
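
The single-item case gives the flavour of the prophet-inequality benchmark: the classical folklore rule that accepts the first value exceeding T = E[max]/2 already guarantees half the prophet's expected value. The Monte-Carlo sketch below is our illustration of that threshold rule, not the paper's matroid OCRS; the distributions and trial count are arbitrary choices.

import random

def threshold_rule(samplers, trials=100_000):
    """Estimate the value of the single-threshold stopping rule with
    T = E[max]/2 against the prophet benchmark E[max].  `samplers` are
    independent value distributions as zero-argument functions."""
    e_max = sum(max(s() for s in samplers) for _ in range(trials)) / trials
    T = e_max / 2
    gained = 0.0
    for _ in range(trials):
        # take the first value >= T in arrival order, else get nothing
        gained += next((v for v in (s() for s in samplers) if v >= T), 0.0)
    return gained / trials, e_max

dists = [lambda: random.uniform(0, 1),
         lambda: random.uniform(0, 2),
         lambda: 3.0 if random.random() < 0.1 else 0.0]
algo, prophet = threshold_rule(dists)
print(algo, prophet / 2)  # first value should be >= the second, up to noise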

Cite as

Euiwoong Lee and Sahil Singla. Optimal Online Contention Resolution Schemes via Ex-Ante Prophet Inequalities. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 57:1-57:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{lee_et_al:LIPIcs.ESA.2018.57,
  author =	{Lee, Euiwoong and Singla, Sahil},
  title =	{{Optimal Online Contention Resolution Schemes via Ex-Ante Prophet Inequalities}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{57:1--57:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.57},
  URN =		{urn:nbn:de:0030-drops-95208},
  doi =		{10.4230/LIPIcs.ESA.2018.57},
  annote =	{Keywords: Prophets, Contention Resolution, Stochastic Optimization, Matroids}
}
Equilibrium Computation in Atomic Splittable Routing Games

Authors: Umang Bhaskar and Phani Raj Lolakapuri


Abstract
We present polynomial-time algorithms as well as hardness results for equilibrium computation in atomic splittable routing games, for the case of general convex cost functions. These games model traffic in freight transportation, market oligopolies, data networks, and various other applications. An atomic splittable routing game is played on a network where the edges have traffic-dependent cost functions, and player strategies correspond to flows in the network. A player can thus split its traffic arbitrarily among different paths. While many properties of equilibria in these games have been studied, efficient algorithms for equilibrium computation are known for only two cases: if cost functions are affine, or if players are symmetric. Neither of these conditions is met in most practical applications. We present two algorithms for routing games with general convex cost functions on parallel links. The first algorithm is exponential in the number of players, while the second is exponential in the number of edges; thus if either of these is small, we get a polynomial-time algorithm. These are the first algorithms for these games with convex cost functions. Lastly, we show that in general networks, given input C, it is NP-hard to decide if there exists an equilibrium where every player has cost at most C.

Cite as

Umang Bhaskar and Phani Raj Lolakapuri. Equilibrium Computation in Atomic Splittable Routing Games. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 58:1-58:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{bhaskar_et_al:LIPIcs.ESA.2018.58,
  author =	{Bhaskar, Umang and Lolakapuri, Phani Raj},
  title =	{{Equilibrium Computation in Atomic Splittable Routing Games}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{58:1--58:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.58},
  URN =		{urn:nbn:de:0030-drops-95211},
  doi =		{10.4230/LIPIcs.ESA.2018.58},
  annote =	{Keywords: Routing Games, Equilibrium Computation, Convex costs, Splittable flows}
}
Online Non-Preemptive Scheduling to Minimize Weighted Flow-time on Unrelated Machines

Authors: Giorgio Lucarelli, Benjamin Moseley, Nguyen Kim Thang, Abhinav Srivastav, and Denis Trystram


Abstract
In this paper, we consider the online problem of scheduling independent jobs non-preemptively so as to minimize the weighted flow-time on a set of unrelated machines. There has been a considerable amount of work on this problem in the preemptive setting where several competitive algorithms are known in the classical competitive model. However, the problem in the non-preemptive setting admits a strong lower bound. Recently, Lucarelli et al. presented an algorithm that achieves a O(1/epsilon^2)-competitive ratio when the algorithm is allowed to reject epsilon-fraction of total weight of jobs and has an epsilon-speed augmentation. They further showed that speed augmentation alone is insufficient to derive any competitive algorithm. An intriguing open question is whether there exists a scalable competitive algorithm that rejects a small fraction of total weights. In this paper, we affirmatively answer this question. Specifically, we show that there exists a O(1/epsilon^3)-competitive algorithm for minimizing weighted flow-time on a set of unrelated machines that rejects at most O(epsilon)-fraction of total weight of jobs. The design and analysis of the algorithm is based on the primal-dual technique. Our result asserts that alternative models beyond speed augmentation should be explored when designing online schedulers in the non-preemptive setting in an effort to find provably good algorithms.

Cite as

Giorgio Lucarelli, Benjamin Moseley, Nguyen Kim Thang, Abhinav Srivastav, and Denis Trystram. Online Non-Preemptive Scheduling to Minimize Weighted Flow-time on Unrelated Machines. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 59:1-59:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{lucarelli_et_al:LIPIcs.ESA.2018.59,
  author =	{Lucarelli, Giorgio and Moseley, Benjamin and Thang, Nguyen Kim and Srivastav, Abhinav and Trystram, Denis},
  title =	{{Online Non-Preemptive Scheduling to Minimize Weighted Flow-time on Unrelated Machines}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{59:1--59:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.59},
  URN =		{urn:nbn:de:0030-drops-95226},
  doi =		{10.4230/LIPIcs.ESA.2018.59},
  annote =	{Keywords: Online Algorithms, Scheduling, Resource Augmentation}
}
Finding Stable Matchings That Are Robust to Errors in the Input

Authors: Tung Mai and Vijay V. Vazirani


Abstract
In this paper, we introduce the issue of finding solutions to the stable matching problem that are robust to errors in the input and we obtain the first algorithmic results on this topic. In the process, we also initiate work on a new structural question concerning the stable matching problem, namely finding relationships between the lattices of solutions of two "nearby" instances. Our main algorithmic result is the following: We identify a polynomially large class of errors, D, that can be introduced in a stable matching instance. Given an instance A of stable matching, let B be the instance that results after introducing one error from D, chosen via a discrete probability distribution. The problem is to find a stable matching for A that maximizes the probability of being stable for B as well. Via new structural properties of the type described in the question stated above, we give a polynomial time algorithm for this problem.
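
For readers who want the baseline in hand, the classic Gale-Shapley deferred-acceptance algorithm, which produces one particular stable matching, fits in a few lines of Python; the lattice machinery the robust variant relies on is not shown.

def gale_shapley(men_prefs, women_prefs):
    """Man-proposing deferred acceptance; returns the man-optimal
    stable matching as a {man: woman} dict.  Each prefs dict maps an
    agent to its full preference list, most preferred first."""
    rank = {w: {m: i for i, m in enumerate(p)}
            for w, p in women_prefs.items()}
    free = list(men_prefs)
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}                       # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        current = engaged.get(w)
        if current is None:
            engaged[w] = m
        elif rank[w][m] < rank[w][current]:
            engaged[w] = m             # w trades up; current is free again
            free.append(current)
        else:
            free.append(m)             # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))  # {'m2': 'w1', 'm1': 'w2'}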

Cite as

Tung Mai and Vijay V. Vazirani. Finding Stable Matchings That Are Robust to Errors in the Input. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 60:1-60:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{mai_et_al:LIPIcs.ESA.2018.60,
  author =	{Mai, Tung and Vazirani, Vijay V.},
  title =	{{Finding Stable Matchings That Are Robust to Errors in the Input}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{60:1--60:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.60},
  URN =		{urn:nbn:de:0030-drops-95238},
  doi =		{10.4230/LIPIcs.ESA.2018.60},
  annote =	{Keywords: Stable Matching, Robust}
}
Disconnected Cuts in Claw-free Graphs

Authors: Barnaby Martin, Daniël Paulusma, and Erik Jan van Leeuwen


Abstract
A disconnected cut of a connected graph is a vertex cut that itself also induces a disconnected subgraph. The corresponding decision problem is called Disconnected Cut. It is known that Disconnected Cut is NP-hard on general graphs, while polynomial-time algorithms exist for several graph classes. However, the complexity of the problem on claw-free graphs remained an open question. Its connection to the complexity of the problem to contract a claw-free graph to the 4-vertex cycle C_4 led Ito et al. (TCS 2011) to explicitly ask to resolve this open question. We prove that Disconnected Cut is polynomial-time solvable on claw-free graphs, answering the question of Ito et al. The basis for our result is a decomposition theorem for claw-free graphs of diameter 2, which we believe is of independent interest and builds on the research line initiated by Chudnovsky and Seymour (JCTB 2007-2012) and Hermelin et al. (ICALP 2011). On our way to exploit this decomposition theorem, we characterize how disconnected cuts interact with certain cobipartite subgraphs, and prove two further algorithmic results, namely that Disconnected Cut is polynomial-time solvable on circular-arc graphs and line graphs.
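
The definition is easy to operationalize by brute force for small graphs, which helps in forming intuition. The Python sketch below is exponential in |V| and purely illustrative; connectivity is checked by plain DFS on the induced subgraph.

from itertools import combinations

def connected(vertices, adj):
    vertices = set(vertices)
    if not vertices:
        return True
    stack = [next(iter(vertices))]
    seen = set(stack)
    while stack:
        v = stack.pop()
        for u in adj[v]:
            if u in vertices and u not in seen:
                seen.add(u)
                stack.append(u)
    return seen == vertices

def has_disconnected_cut(adj):
    """Is there a vertex set U with both G[U] and G - U disconnected?"""
    V = set(adj)
    for r in range(2, len(V) - 1):
        for U in combinations(V, r):
            if not connected(U, adj) and not connected(V - set(U), adj):
                return True
    return False

c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(has_disconnected_cut(c4))  # True: U = {0, 2} works in the 4-cycle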

Cite as

Barnaby Martin, Daniël Paulusma, and Erik Jan van Leeuwen. Disconnected Cuts in Claw-free Graphs. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 61:1-61:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{martin_et_al:LIPIcs.ESA.2018.61,
  author =	{Martin, Barnaby and Paulusma, Dani\"{e}l and van Leeuwen, Erik Jan},
  title =	{{Disconnected Cuts in Claw-free Graphs}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{61:1--61:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.61},
  URN =		{urn:nbn:de:0030-drops-95249},
  doi =		{10.4230/LIPIcs.ESA.2018.61},
  annote =	{Keywords: disconnected cut, surjective homomorphism, biclique cover, claw-freeness}
}
Practical Low-Dimensional Halfspace Range Space Sampling

Authors: Michael Matheny and Jeff M. Phillips


Abstract
We develop, analyze, implement, and compare new algorithms for creating epsilon-samples of range spaces defined by halfspaces which have size sub-quadratic in 1/epsilon, and have runtime linear in the input size and near-quadratic in 1/epsilon. The key to our solution is an efficient construction of partition trees. Although our solution requires no techniques developed after the early 1990s, apparently such a result was never explicitly described before. We demonstrate that our implementations, including new implementations of several variants of partition trees, do indeed run in time linear in the input, appear to run in time linear in the output size, and observe smaller error for the same sample size compared to the ubiquitous random sample (which requires size quadratic in 1/epsilon). This result has direct applications in speeding up discrepancy evaluation, approximate range counting, and spatial anomaly detection.
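
The error notion being optimized is simple to estimate empirically: an epsilon-sample must approximate, for every halfspace, the fraction of points it contains. The Monte-Carlo harness below (our Python sketch; random lines through point pairs serve as test halfspaces, a heuristic rather than an exhaustive check) is one crude way to compare samples.

import random

def halfspace_error(points, sample, trials=2000, rng=random):
    """Largest observed discrepancy between the fraction of `points`
    and of `sample` on one side of random test lines (each through two
    data points): a heuristic lower bound on the true sample error."""
    worst = 0.0
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2          # normal of the line through them
        c = a * x1 + b * y1
        f_pts = sum(a * x + b * y >= c for x, y in points) / len(points)
        f_smp = sum(a * x + b * y >= c for x, y in sample) / len(sample)
        worst = max(worst, abs(f_pts - f_smp))
    return worst

pts = [(random.random(), random.random()) for _ in range(5000)]
print(halfspace_error(pts, random.sample(pts, 400)))  # observed error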

Cite as

Michael Matheny and Jeff M. Phillips. Practical Low-Dimensional Halfspace Range Space Sampling. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 62:1-62:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{matheny_et_al:LIPIcs.ESA.2018.62,
  author =	{Matheny, Michael and Phillips, Jeff M.},
  title =	{{Practical Low-Dimensional Halfspace Range Space Sampling}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{62:1--62:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.62},
  URN =		{urn:nbn:de:0030-drops-95250},
  doi =		{10.4230/LIPIcs.ESA.2018.62},
  annote =	{Keywords: Partitions, Range Spaces, Sampling, Halfspaces}
}
Document
Nearly-Optimal Mergesorts: Fast, Practical Sorting Methods That Optimally Adapt to Existing Runs

Authors: J. Ian Munro and Sebastian Wild


Abstract
We present two stable mergesort variants, "peeksort" and "powersort", that exploit existing runs and find nearly-optimal merging orders with negligible overhead. Previous methods either require substantial effort to determine the merging order (Takaoka 2009; Barbay & Navarro 2013) or lack an optimal worst-case guarantee (Peters 2002; Auger, Nicaud & Pivoteau 2015; Buss & Knop 2018). We demonstrate that our methods are competitive in terms of running time with state-of-the-art implementations of stable sorting methods.
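
As a rough illustration of the run-adaptive idea (a simplified sketch of the peeksort strategy as we read it, not the authors' exact algorithm), the following Python code never splits inside an existing run: it cuts at the boundary of the run containing the midpoint that lies nearest the midpoint, recurses, and merges.

def run_end(A, i, hi):
    # end (exclusive) of the maximal weakly increasing run starting at i
    j = i + 1
    while j < hi and A[j - 1] <= A[j]:
        j += 1
    return j

def merge(A, lo, mid, hi):
    left, right = A[lo:mid], A[mid:hi]
    i = j = 0
    k = lo
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            A[k] = left[i]; i += 1
        else:
            A[k] = right[j]; j += 1
        k += 1
    A[k:hi] = left[i:] if i < len(left) else right[j:]

def peeksort(A, lo=0, hi=None):
    if hi is None:
        hi = len(A)
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    s = mid
    while s > lo and A[s - 1] <= A[s]:       # walk left through the run
        s -= 1
    e = run_end(A, mid, hi)                  # the run containing mid is A[s:e]
    if s == lo and e == hi:                  # whole range is one run: sorted
        return
    if s == lo:
        cut = e
    elif e == hi:
        cut = s
    else:
        cut = s if mid - s <= e - mid else e
    peeksort(A, lo, cut)
    peeksort(A, cut, hi)
    merge(A, lo, cut, hi)

data = [5, 6, 7, 1, 2, 3, 4, 0]
peeksort(data)
print(data)                                  # [0, 1, 2, 3, 4, 5, 6, 7]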

Cite as

J. Ian Munro and Sebastian Wild. Nearly-Optimal Mergesorts: Fast, Practical Sorting Methods That Optimally Adapt to Existing Runs. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 63:1-63:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{munro_et_al:LIPIcs.ESA.2018.63,
  author =	{Munro, J. Ian and Wild, Sebastian},
  title =	{{Nearly-Optimal Mergesorts: Fast, Practical Sorting Methods That Optimally Adapt to Existing Runs}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{63:1--63:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.63},
  URN =		{urn:nbn:de:0030-drops-95265},
  doi =		{10.4230/LIPIcs.ESA.2018.63},
  annote =	{Keywords: adaptive sorting, nearly-optimal binary search trees, Timsort}
}
Document
On a Problem of Danzer

Authors: Nabil H. Mustafa and Saurabh Ray


Abstract
Let C be a bounded convex object in R^d, and P a set of n points lying outside C. Further let c_p, c_q be two integers with 1 <= c_q <= c_p <= n - floor[d/2], such that every set of c_p + floor[d/2] points of P contains a subset of size c_q + floor[d/2] whose convex hull is disjoint from C. Then our main theorem establishes the existence of a partition of P into a small number of subsets, each of whose convex hull is disjoint from C. Our proof is constructive and implies that such a partition can be computed in polynomial time. In particular, our general theorem implies polynomial bounds for Hadwiger-Debrunner (p, q) numbers for balls in R^d. For example, it follows from our theorem that when p > q >= (1+beta) * d/2 for beta > 0, any set of balls satisfying the HD(p,q) property can be hit by O(q^2 p^{1+1/(beta)} log p) points. This is the first improvement over a nearly 60-year-old exponential bound of roughly O(2^d). Our results also complement those obtained in a recent work of Keller et al., where, apart from improvements to the bound on HD(p, q) for convex sets in R^d for various ranges of p and q, a polynomial bound is obtained for regions with low union complexity in the plane.

Cite as

Nabil H. Mustafa and Saurabh Ray. On a Problem of Danzer. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 64:1-64:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{mustafa_et_al:LIPIcs.ESA.2018.64,
  author =	{Mustafa, Nabil H. and Ray, Saurabh},
  title =	{{On a Problem of Danzer}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{64:1--64:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.64},
  URN =		{urn:nbn:de:0030-drops-95271},
  doi =		{10.4230/LIPIcs.ESA.2018.64},
  annote =	{Keywords: Convex polytopes, Hadwiger-Debrunner numbers, Epsilon-nets, Balls}
}
Document
Quasi-Polynomial Time Approximation Schemes for Packing and Covering Problems in Planar Graphs

Authors: Michal Pilipczuk, Erik Jan van Leeuwen, and Andreas Wiese


Abstract
We consider two optimization problems in planar graphs. In Maximum Weight Independent Set of Objects we are given a graph G and a family D of objects, each being a connected subgraph of G with a prescribed weight, and the task is to find a maximum-weight subfamily of D consisting of pairwise disjoint objects. In Minimum Weight Distance Set Cover we are given an edge-weighted graph G, two sets D, C of vertices of G, where vertices of D have prescribed weights, and a nonnegative radius r. The task is to find a minimum-weight subset of D such that every vertex of C is at distance at most r from some selected vertex. Via simple reductions, these two problems generalize a number of geometric optimization tasks, notably Maximum Weight Independent Set for polygons in the plane and Weighted Geometric Set Cover for unit disks and unit squares. We present quasi-polynomial time approximation schemes (QPTASs) for both of the above problems in planar graphs: given an accuracy parameter epsilon > 0, we can compute a solution whose weight is within a multiplicative factor of (1+epsilon) of the optimum in time 2^{poly(1/epsilon, log |D|)} * n^{O(1)}, where n is the number of vertices of the input graph. Our main technical contribution is to transfer the techniques used in recursive approximation schemes for geometric problems due to Adamaszek, Har-Peled, and Wiese [Adamaszek and Wiese, 2013; Adamaszek and Wiese, 2014; Sariel Har-Peled, 2014] to the setting of planar graphs. In particular, this yields a purely combinatorial viewpoint on these methods.

Cite as

Michal Pilipczuk, Erik Jan van Leeuwen, and Andreas Wiese. Quasi-Polynomial Time Approximation Schemes for Packing and Covering Problems in Planar Graphs. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 65:1-65:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{pilipczuk_et_al:LIPIcs.ESA.2018.65,
  author =	{Pilipczuk, Michal and van Leeuwen, Erik Jan and Wiese, Andreas},
  title =	{{Quasi-Polynomial Time Approximation Schemes for Packing and Covering Problems in Planar Graphs}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{65:1--65:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.65},
  URN =		{urn:nbn:de:0030-drops-95282},
  doi =		{10.4230/LIPIcs.ESA.2018.65},
  annote =	{Keywords: QPTAS, planar graphs, Voronoi diagram}
}
Document
On Learning Linear Functions from Subset and Its Applications in Quantum Computing

Authors: Gábor Ivanyos, Anupam Prakash, and Miklos Santha


Abstract
Let F_{q} be the finite field of size q and let l: F_{q}^{n} -> F_{q} be a linear function. We introduce the Learning From Subset problem LFS(q,n,d) of learning l, given samples u in F_{q}^{n} from a special distribution depending on l: the probability of sampling u is a function of l(u) and is nonzero for at most d values of l(u). We provide a randomized algorithm for LFS(q,n,d) with sample complexity (n+d)^{O(d)} and running time polynomial in log q and (n+d)^{O(d)}. Our algorithm generalizes and improves upon previous results [Friedl et al., 2014; Gábor Ivanyos, 2008] that provided algorithms for LFS(q,n,q-1) with running time (n+q)^{O(q)}. We further present applications of our result to the Hidden Multiple Shift problem HMS(q,n,r) in quantum computation, where the goal is to determine the hidden shift s given oracle access to r shifted copies of an injective function f: Z_{q}^{n} -> {0, 1}^{l}; that is, we can make queries of the form f_{s}(x,h) = f(x-hs), where h can assume r possible values. We reduce HMS(q,n,r) to LFS(q,n, q-r+1) to obtain a polynomial-time algorithm for HMS(q,n,r) when q=n^{O(1)} is prime and q-r=O(1). The best known algorithms [Andrew M. Childs and Wim van Dam, 2007; Friedl et al., 2014] for HMS(q,n,r) with these parameters require exponential time.

Cite as

Gábor Ivanyos, Anupam Prakash, and Miklos Santha. On Learning Linear Functions from Subset and Its Applications in Quantum Computing. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 66:1-66:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{ivanyos_et_al:LIPIcs.ESA.2018.66,
  author =	{Ivanyos, G\'{a}bor and Prakash, Anupam and Santha, Miklos},
  title =	{{On Learning Linear Functions from Subset and Its Applications in Quantum Computing}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{66:1--66:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.66},
  URN =		{urn:nbn:de:0030-drops-95299},
  doi =		{10.4230/LIPIcs.ESA.2018.66},
  annote =	{Keywords: Learning from subset, hidden shift problem, quantum algorithms, linearization}
}
Document
Strong Collapse for Persistence

Authors: Jean-Daniel Boissonnat, Siddharth Pritam, and Divyansh Pareek


Abstract
We introduce a fast and memory-efficient approach to computing the persistent homology (PH) of a sequence of simplicial complexes. The basic idea is to simplify the complexes of the input sequence by using strong collapses, as introduced by J. Barmak and E. Minian [DCG (2012)], and to compute the PH of an induced sequence of reduced simplicial complexes that has the same PH as the initial one. Our approach has several salient features that distinguish it from previous work. It is not limited to filtrations (i.e., sequences of nested simplicial subcomplexes) but works for other types of sequences like towers and zigzags. To strong collapse a simplicial complex, we only need to store the maximal simplices of the complex, not the full set of all its simplices, which saves a lot of space and time. Moreover, the complexes in the sequence can be strong collapsed independently and in parallel. As a result, and as demonstrated by numerous experiments on publicly available data sets, our approach is extremely fast and memory-efficient in practice.
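
The core primitive, removing a dominated vertex, can be sketched directly on the maximal-simplex representation emphasized in the abstract. The following Python toy (our illustration; the paper's implementation is far more efficient and handles towers and zigzags) repeatedly deletes a vertex v whenever some other vertex lies in every maximal simplex containing v.

def dominated_by(v, maximal):
    # a vertex dominating v lies in every maximal simplex that contains v
    star = [s for s in maximal if v in s]
    common = frozenset.intersection(*star) - {v}
    return next(iter(common), None)

def remove_vertex(maximal, v):
    # delete v and keep only the simplices that remain maximal
    sims = {s - {v} for s in maximal}
    return [s for s in sims if not any(s < t for t in sims)]

def strong_collapse(maximal):
    maximal = [frozenset(s) for s in maximal]
    while True:
        verts = set().union(*maximal)
        v = next((u for u in verts
                  if dominated_by(u, maximal) is not None), None)
        if v is None:
            return [set(s) for s in maximal]
        maximal = remove_vertex(maximal, v)

# A contractible complex (triangle plus a pendant edge) collapses to a point
print(strong_collapse([{0, 1, 2}, {2, 3}]))   # a single vertex remains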

Cite as

Jean-Daniel Boissonnat, Siddharth Pritam, and Divyansh Pareek. Strong Collapse for Persistence. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 67:1-67:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{boissonnat_et_al:LIPIcs.ESA.2018.67,
  author =	{Boissonnat, Jean-Daniel and Pritam, Siddharth and Pareek, Divyansh},
  title =	{{Strong Collapse for Persistence}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{67:1--67:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.67},
  URN =		{urn:nbn:de:0030-drops-95302},
  doi =		{10.4230/LIPIcs.ESA.2018.67},
  annote =	{Keywords: Computational Topology, Topological Data Analysis, Strong Collapse, Persistent homology}
}
Document
On the Complexity of the (Approximate) Nearest Colored Node Problem

Authors: Maximilian Probst


Abstract
Consider a graph G=(V,E) where each vertex is assigned a color from the set C={c_1, c_2, ..., c_sigma}. In the (approximate) nearest colored node problem, we want to query, given v in V and c in C, the (approximate) distance dist^(v, c) from v to the nearest node of color c. For any integer 1 <= k <= log n, we present a Color Distance Oracle (often also referred to as a Vertex-label Distance Oracle) of stretch 4k-5 using space O(kn sigma^{1/k}) and query time O(log k). This improves the query time from O(k) to O(log k) over the best known Color Distance Oracle by Chechik [Chechik, 2012]. We then prove a lower bound in the cell-probe model showing that even for unweighted undirected paths, any static data structure that uses space S requires at least Omega(log(log sigma / log(S/n) + log log n)) query time to give a distance estimate of stretch O(polylog(n)). This implies, for the important case when sigma = Theta(n^{epsilon}) for some constant 0 < epsilon < 1, that our Color Distance Oracle has asymptotically optimal query time in regard to k, and that recent Color Distance Oracles for trees [Tsur, 2018] and planar graphs [Mozes and Skop, 2018] achieve asymptotically optimal query time in regard to n. We also investigate the setting where the data structure additionally has to support color reassignments. We present the first Color Distance Oracle that achieves query times matching our lower bound from the static setting for large stretch, yielding an exponential improvement over the best known query time [Chechik, 2014]. Finally, we give new conditional lower bounds proving the hardness of answering queries if edge insertions and deletions are allowed, strictly improving over recent bounds in both time and generality.
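
For contrast with the space/stretch trade-offs above, here is the exact baseline with Theta(n sigma) space and O(1) query time, sketched with networkx (our illustration, not the paper's oracle): one multi-source shortest-path computation per color.

import networkx as nx

class NaiveColorOracle:
    # Exact baseline: stretch 1, O(1) query, but Theta(n * sigma) space -
    # precisely the space cost that sub-quadratic-space oracles trade
    # stretch to avoid.
    def __init__(self, G, color):
        self.dist = {}
        for c in set(color.values()):
            sources = [v for v in G if color[v] == c]
            for v, d in nx.multi_source_dijkstra_path_length(G, sources).items():
                self.dist[v, c] = d

    def query(self, v, c):
        return self.dist.get((v, c), float("inf"))

G = nx.path_graph(5)                                # 0 - 1 - 2 - 3 - 4
color = {0: "r", 1: "b", 2: "r", 3: "b", 4: "g"}
oracle = NaiveColorOracle(G, color)
print(oracle.query(0, "g"))                         # 4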

Cite as

Maximilian Probst. On the Complexity of the (Approximate) Nearest Colored Node Problem. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 68:1-68:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{probst:LIPIcs.ESA.2018.68,
  author =	{Probst, Maximilian},
  title =	{{On the Complexity of the (Approximate) Nearest Colored Node Problem}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{68:1--68:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.68},
  URN =		{urn:nbn:de:0030-drops-95313},
  doi =		{10.4230/LIPIcs.ESA.2018.68},
  annote =	{Keywords: Nearest Colored Node, Distance Oracles, Cell-probe lower bounds}
}
Document
Planar Support for Non-piercing Regions and Applications

Authors: Rajiv Raman and Saurabh Ray


Abstract
Given a hypergraph H=(X,S), a planar support for H is a planar graph G with vertex set X such that, for each hyperedge S in S, the subgraph of G induced by the vertices in S is connected. Planar supports for hypergraphs have found several algorithmic applications, including packing and covering problems, hypergraph coloring, and hypergraph visualization. The main result proved in this paper is the following: given two families of regions R and B in the plane, each consisting of connected, non-piercing regions, the intersection hypergraph H_R(B) = (B, {B_r}_{r in R}), where B_r = {b in B: b cap r != empty set}, has a planar support. Further, such a planar support can be computed in time polynomial in |R|, |B|, and the number of vertices in the arrangement of the regions in R cup B. Special cases of this result include the setting where either the family R or the family B is a set of points. Our result unifies and generalizes several previous results on planar supports, PTASs for packing and covering problems on non-piercing regions in the plane, and coloring of intersection hypergraphs of non-piercing regions.
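
The defining property of a planar support is easy to verify mechanically; the following sketch (our illustration, assuming networkx) checks planarity of G and connectivity of each induced hyperedge.

import networkx as nx

def is_planar_support(G, hyperedges):
    # G must be planar and every hyperedge must induce a connected subgraph
    planar, _ = nx.check_planarity(G)
    return planar and all(nx.is_connected(G.subgraph(S)) for S in hyperedges)

G = nx.star_graph(3)                                  # center 0, leaves 1, 2, 3
print(is_planar_support(G, [{0, 1, 2}, {0, 2, 3}]))   # True
print(is_planar_support(G, [{1, 2}]))                 # False: leaves non-adjacent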

Cite as

Rajiv Raman and Saurabh Ray. Planar Support for Non-piercing Regions and Applications. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 69:1-69:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{raman_et_al:LIPIcs.ESA.2018.69,
  author =	{Raman, Rajiv and Ray, Saurabh},
  title =	{{Planar Support for Non-piercing Regions and Applications}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{69:1--69:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.69},
  URN =		{urn:nbn:de:0030-drops-95320},
  doi =		{10.4230/LIPIcs.ESA.2018.69},
  annote =	{Keywords: Geometric optimization, packing and covering, non-piercing regions}
}
Document
An Exact Algorithm for the Steiner Forest Problem

Authors: Daniel R. Schmidt, Bernd Zey, and François Margot


Abstract
The Steiner forest problem asks for a minimum-weight forest that spans a given collection of terminal sets. The problem has famous linear-programming-based 2-approximations [Agrawal et al., 1995; Goemans and Williamson, 1995; Jain, 2001] whose bottleneck is the fact that the most natural formulation of the problem as an integer linear program (ILP) has an integrality gap of 2. We propose new cut-based ILP formulations for the problem along with exact branch-and-bound based algorithms. While our new formulations cannot improve the integrality gap, we prove that one of them yields stronger linear programming bounds than the two previously strongest formulations: the directed cut formulation [Balakrishnan et al., 1989; Chopra and Rao, 1994] and the advanced flow-based formulation by Magnanti and Raghavan [Magnanti and Raghavan, 2005]. In an experimental evaluation, we show that the linear programming bounds of the new formulations are indeed strong on practical instances and that our new branch-and-bound algorithms outperform branch-and-bound algorithms based on the previous formulations. Our formulations can be seen as a cut-based analogue of [Magnanti and Raghavan, 2005], whose existence was an open problem.
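
For very small instances, the optimum that these ILP formulations bound can be found by exhaustive enumeration; this sketch (our illustration in Python with networkx, not the paper's branch-and-bound) serves purely as a correctness baseline.

from itertools import combinations
import networkx as nx

def steiner_forest_bruteforce(G, pairs):
    # Optimal Steiner forest by trying all edge subsets: O(2^m) time,
    # usable only on tiny instances
    edges = list(G.edges(data="weight", default=1))
    best_w, best = float("inf"), None
    for k in range(len(edges) + 1):
        for E in combinations(edges, k):
            H = nx.Graph()
            H.add_nodes_from(G)
            H.add_edges_from((u, v) for u, v, _ in E)
            if all(nx.has_path(H, s, t) for s, t in pairs):
                w = sum(w for *_, w in E)
                if w < best_w:
                    best_w, best = w, [(u, v) for u, v, _ in E]
    return best_w, best

G = nx.cycle_graph(4)                            # unit weights by default
nx.set_edge_attributes(G, {(0, 1): 3}, "weight") # make one edge expensive
print(steiner_forest_bruteforce(G, [(0, 2)]))    # (2, [(2, 3), (0, 3)])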

Cite as

Daniel R. Schmidt, Bernd Zey, and François Margot. An Exact Algorithm for the Steiner Forest Problem. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 70:1-70:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{schmidt_et_al:LIPIcs.ESA.2018.70,
  author =	{Schmidt, Daniel R. and Zey, Bernd and Margot, Fran\c{c}ois},
  title =	{{An Exact Algorithm for the Steiner Forest Problem}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{70:1--70:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.70},
  URN =		{urn:nbn:de:0030-drops-95339},
  doi =		{10.4230/LIPIcs.ESA.2018.70},
  annote =	{Keywords: branch-and-bound algorithms, Steiner network problems}
}
Document
Large Low-Diameter Graphs are Good Expanders

Authors: Michael Dinitz, Michael Schapira, and Gal Shahaf


Abstract
We revisit the classical question of the relationship between the diameter of a graph and its expansion properties. One direction is well understood: expander graphs exhibit essentially the lowest possible diameter. We focus on the reverse direction, showing that "sufficiently large" graphs of fixed diameter and degree must be "good" expanders. We prove this statement for various definitions of "sufficiently large" (multiplicative/additive factor from the largest possible size), for different forms of expansion (edge, vertex, and spectral expansion), and for both directed and undirected graphs. A recurring theme is that the lower the diameter of the graph and (more importantly) the larger its size, the better the expansion guarantees. Aside from inherent theoretical interest, our motivation stems from the domain of network design. Both low-diameter networks and expanders are prominent approaches to designing high-performance networks in parallel computing, HPC, datacenter networking, and beyond. Our results establish that these two approaches are, in fact, inextricably intertwined. We leave the reader with many intriguing questions for future research.
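
The phenomenon is easy to observe empirically; the following sketch (our illustration with networkx and numpy, not part of the paper) measures the diameter and the normalized second adjacency eigenvalue of a random 4-regular graph, where a value well below 1 indicates spectral expansion.

import networkx as nx
import numpy as np

G = nx.random_regular_graph(d=4, n=500, seed=1)
if not nx.is_connected(G):                   # rare for random regular graphs
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
eig = np.sort(np.abs(np.linalg.eigvalsh(nx.to_numpy_array(G))))
print("diameter:", nx.diameter(G))
print("second eigenvalue / d:", eig[-2] / 4)  # well below 1 => good expansion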

Cite as

Michael Dinitz, Michael Schapira, and Gal Shahaf. Large Low-Diameter Graphs are Good Expanders. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 71:1-71:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{dinitz_et_al:LIPIcs.ESA.2018.71,
  author =	{Dinitz, Michael and Schapira, Michael and Shahaf, Gal},
  title =	{{Large Low-Diameter Graphs are Good Expanders}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{71:1--71:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.71},
  URN =		{urn:nbn:de:0030-drops-95348},
  doi =		{10.4230/LIPIcs.ESA.2018.71},
  annote =	{Keywords: Network design, Expander graphs, Spectral graph theory}
}
Document
Improved Dynamic Graph Coloring

Authors: Shay Solomon and Nicole Wein


Abstract
This paper studies the fundamental problem of graph coloring in fully dynamic graphs. Since the problem of computing an optimal coloring, or even approximating it to within n^{1-epsilon} for any epsilon > 0, is NP-hard in static graphs, there is no hope of achieving any meaningful computational results for general graphs in the dynamic setting. It is therefore only natural to consider the combinatorial aspects of dynamic coloring, or alternatively, to study restricted families of graphs. Towards understanding the combinatorial aspects of this problem, one may assume black-box access to a static algorithm for C-coloring any subgraph of the dynamic graph, and investigate the trade-off between the number of colors and the number of recolorings per update step. Optimizing the number of recolorings, sometimes referred to as the recourse bound, is important for various practical applications. In WADS'17, Barba et al. devised two complementary algorithms: for any beta > 0, the first (respectively, second) maintains an O(C beta n^{1/beta}) (resp., O(C beta))-coloring while recoloring O(beta) (resp., O(beta n^{1/beta})) vertices per update. Barba et al. also showed that the second trade-off appears to exhibit the right behavior, at least for beta = O(1): any algorithm that maintains a c-coloring of an n-vertex dynamic forest must recolor Omega(n^{2/(c(c-1))}) vertices per update, for any constant c >= 2. Our contribution is two-fold: - We devise a new algorithm for general graphs that improves significantly upon the first trade-off in a wide range of parameters: for any beta > 0, we get an O~((C/beta) log^2 n)-coloring with O(beta) recolorings per update, where the O~ notation suppresses polyloglog(n) factors. In particular, for beta = O(1) we get constant recolorings with polylog(n) colors; not only is this an exponential improvement over the previous bound, but it also unveils a rather surprising phenomenon: the trade-off between the number of colors and recolorings is highly non-symmetric. - For uniformly sparse graphs, we use low out-degree orientations to strengthen the above result by bounding the update time of the algorithm rather than the number of recolorings. Then, we further improve this result by introducing a new data structure that refines bounded out-degree edge orientations and is of independent interest.
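
As a point of reference for the colors-versus-recolorings trade-off, the following toy (our illustration, not the paper's algorithm) maintains a proper coloring under edge insertions with at most Delta+1 colors and at most one recoloring per update.

class GreedyDynamicColoring:
    # Trivial baseline on the trade-off: at most Delta + 1 colors and at
    # most one recoloring per edge insertion; the paper's algorithms
    # achieve polylog(n) colors with O(1) recolorings.
    def __init__(self):
        self.adj, self.color = {}, {}

    def _ensure(self, v):
        self.adj.setdefault(v, set())
        self.color.setdefault(v, 0)

    def add_edge(self, u, v):
        self._ensure(u); self._ensure(v)
        self.adj[u].add(v); self.adj[v].add(u)
        if self.color[u] == self.color[v]:
            used = {self.color[w] for w in self.adj[v]}
            # smallest color absent from v's neighborhood (always <= deg(v))
            self.color[v] = next(c for c in range(len(used) + 1)
                                 if c not in used)

g = GreedyDynamicColoring()
for e in [(0, 1), (1, 2), (0, 2)]:
    g.add_edge(*e)
print(g.color)   # a proper coloring of the triangle, e.g. {0: 0, 1: 1, 2: 2}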

Cite as

Shay Solomon and Nicole Wein. Improved Dynamic Graph Coloring. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 72:1-72:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{solomon_et_al:LIPIcs.ESA.2018.72,
  author =	{Solomon, Shay and Wein, Nicole},
  title =	{{Improved Dynamic Graph Coloring}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{72:1--72:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.72},
  URN =		{urn:nbn:de:0030-drops-95357},
  doi =		{10.4230/LIPIcs.ESA.2018.72},
  annote =	{Keywords: coloring, dynamic graph algorithms, graph arboricity, edge orientations}
}
Document
Soft Subdivision Motion Planning for Complex Planar Robots

Authors: Bo Zhou, Yi-Jen Chiang, and Chee Yap


Abstract
The design and implementation of theoretically sound robot motion planning algorithms is challenging. Within the framework of resolution-exact algorithms, it is possible to exploit soft predicates for collision detection. The design of soft predicates is a balancing act between easily implementable predicates and their accuracy/effectivity. In this paper, we focus on the class of planar polygonal rigid robots with arbitrarily complex geometry. We exploit the remarkable decomposability property of soft collision-detection predicates of such robots, and introduce a general technique to produce such a decomposition. If the robot is an m-gon, the complexity of this approach scales linearly in m. This contrasts with the O(m^3) complexity known for exact planners. It follows that we can now routinely produce soft predicates for any rigid polygonal robot. This results in resolution-exact planners for such robots within the general Soft Subdivision Search (SSS) framework, a significant advancement in the theory of sound and complete planners for planar robots. We implemented the decomposed predicates in our open-source Core Library. The experiments show that our algorithms are effective, perform in real time on non-trivial environments, and can outperform many sampling-based methods.

Cite as

Bo Zhou, Yi-Jen Chiang, and Chee Yap. Soft Subdivision Motion Planning for Complex Planar Robots. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 73:1-73:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{zhou_et_al:LIPIcs.ESA.2018.73,
  author =	{Zhou, Bo and Chiang, Yi-Jen and Yap, Chee},
  title =	{{Soft Subdivision Motion Planning for Complex Planar Robots}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{73:1--73:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.73},
  URN =		{urn:nbn:de:0030-drops-95361},
  doi =		{10.4230/LIPIcs.ESA.2018.73},
  annote =	{Keywords: Computational Geometry, Algorithmic Motion Planning, Resolution-Exact Algorithms, Soft Predicates, Planar Robots with Complex Geometry}
}
