OASIcs, Volume 69

2nd Symposium on Simplicity in Algorithms (SOSA 2019)




Event

SOSA 2019, January 8-9, 2019, San Diego, CA, USA

Editors

Jeremy T. Fineman
Michael Mitzenmacher

Publication Details

  • Published at: 2019-01-08
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-099-6
  • DBLP: db/conf/sosa/sosa2019

Documents

Complete Volume
OASIcs, Volume 69, SOSA'19, Complete Volume

Authors: Jeremy T. Fineman and Michael Mitzenmacher


Abstract
OASIcs, Volume 69, SOSA'19, Complete Volume

Cite as

Jeremy T. Fineman and Michael Mitzenmacher, editors. OASIcs, Volume 69, SOSA'19, Complete Volume. 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Proceedings{fineman_et_al:OASIcs.SOSA.2019,
  title =	{{OASIcs, Volume 69, SOSA'19, Complete Volume}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019},
  URN =		{urn:nbn:de:0030-drops-101683},
  doi =		{10.4230/OASIcs.SOSA.2019},
  annote =	{Keywords: Theory of computation, Design and analysis of algorithms}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Jeremy T. Fineman and Michael Mitzenmacher


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

Jeremy T. Fineman and Michael Mitzenmacher. Front Matter, Table of Contents, Preface, Conference Organization. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 0:i-0:x, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{fineman_et_al:OASIcs.SOSA.2019.0,
  author =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{0:i--0:x},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.0},
  URN =		{urn:nbn:de:0030-drops-100263},
  doi =		{10.4230/OASIcs.SOSA.2019.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Isotonic Regression by Dynamic Programming

Authors: Günter Rote


Abstract
For a given sequence of numbers, we want to find a monotonically increasing sequence of the same length that best approximates it in the sense of minimizing the weighted sum of absolute values of the differences. A conceptually easy dynamic programming approach leads to an algorithm with running time O(n log n). While other algorithms with the same running time are known, our algorithm is very simple. The only auxiliary data structure that it requires is a priority queue. The approach extends to other error measures.
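
To make the priority-queue idea concrete, here is a minimal Python sketch of a well-known special case: computing the optimal L1 cost of an unweighted isotonic fit with a single max-heap. It illustrates the flavor of the approach; the weighted case and the recovery of the fitted sequence, as treated in the paper, require more machinery.

import heapq

def l1_isotonic_cost(a):
    # Minimum total L1 adjustment that makes the sequence non-decreasing
    # (unweighted case). The max-heap is the only auxiliary structure.
    heap, cost = [], 0          # max-heap simulated by negation
    for x in a:
        heapq.heappush(heap, -x)
        top = -heap[0]
        if top > x:             # the current maximum must be lowered to x
            cost += top - x
            heapq.heapreplace(heap, -x)
    return cost

print(l1_isotonic_cost([1, 3, 2, 5, 4]))  # 2, e.g. fit 1, 2.5, 2.5, 4.5, 4.5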

Cite as

Günter Rote. Isotonic Regression by Dynamic Programming. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 1:1-1:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{rote:OASIcs.SOSA.2019.1,
  author =	{Rote, G\"{u}nter},
  title =	{{Isotonic Regression by Dynamic Programming}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{1:1--1:18},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.1},
  URN =		{urn:nbn:de:0030-drops-100274},
  doi =		{10.4230/OASIcs.SOSA.2019.1},
  annote =	{Keywords: Convex functions, dynamic programming, convex hull, isotonic regression}
}
An Illuminating Algorithm for the Light Bulb Problem

Authors: Josh Alman


Abstract
The Light Bulb Problem is one of the most basic problems in data analysis. One is given as input n vectors in {-1,1}^d, which are all independently and uniformly random, except for a planted pair of vectors with inner product at least rho * d for some constant rho > 0. The task is to find the planted pair. The most straightforward algorithm leads to a runtime of Omega(n^2). Algorithms based on techniques like Locality-Sensitive Hashing achieve runtimes of n^{2 - O(rho)}; as rho gets small, these approach quadratic. Building on prior work, we give a new algorithm for this problem which runs in time O(n^{1.582} + nd), regardless of how small rho is. This matches the best known runtime due to Karppa et al. Our algorithm combines techniques from previous work on the Light Bulb Problem with the so-called `polynomial method in algorithm design,' and has a simpler analysis than previous work. Our algorithm is also easily derandomized, leading to a deterministic algorithm for the Light Bulb Problem with the same runtime of O(n^{1.582} + nd), improving previous results.
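
As a point of reference, the quadratic baseline mentioned above is easy to state in code. The sketch below (with arbitrary planted indices 3 and 17) generates an instance and finds the planted pair by computing all pairwise inner products with one matrix product; it is the Omega(n^2) strawman, not Alman's subquadratic algorithm.

import numpy as np

rng = np.random.default_rng(0)
n, d, rho = 200, 512, 0.5
X = rng.choice([-1, 1], size=(n, d))
# Plant a correlated pair: each coordinate of X[17] copies X[3] with
# probability (1 + rho) / 2, so the expected inner product is rho * d.
X[17] = np.where(rng.random(d) < (1 + rho) / 2, X[3], -X[3])

G = X @ X.T                  # all pairwise inner products: Theta(n^2 d) work
np.fill_diagonal(G, 0)
i, j = np.unravel_index(np.argmax(G), G.shape)
print(int(i), int(j))        # 3 17 with high probability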

Cite as

Josh Alman. An Illuminating Algorithm for the Light Bulb Problem. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 2:1-2:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{alman:OASIcs.SOSA.2019.2,
  author =	{Alman, Josh},
  title =	{{An Illuminating Algorithm for the Light Bulb Problem}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{2:1--2:11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.2},
  URN =		{urn:nbn:de:0030-drops-100289},
  doi =		{10.4230/OASIcs.SOSA.2019.2},
  annote =	{Keywords: Light Bulb Problem, Polynomial Method, Finding Correlations}
}
Simple Concurrent Labeling Algorithms for Connected Components

Authors: Sixue Liu and Robert E. Tarjan


Abstract
We present new concurrent labeling algorithms for finding connected components, and we study their theoretical efficiency. Even though many such algorithms have been proposed and many experiments with them have been done, our algorithms are simpler. We obtain an O(lg n) step bound for two of our algorithms using a novel multi-round analysis. We conjecture that our other algorithms also take O(lg n) steps but are only able to prove an O(lg^2 n) bound. We also point out some gaps in previous analyses of similar algorithms. Our results show that even a basic problem like connected components still has secrets to reveal.
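
For intuition, here is a minimal sequential sketch of the basic min-label propagation scheme that concurrent labeling algorithms refine. The paper's algorithms use more careful update rules, and their analysis is the point, so treat this only as an illustration of the labeling idea.

def min_label_cc(n, edges):
    # Every vertex starts with its own label; sweep the edges until no
    # label changes, always propagating the smaller label. Each sweep
    # plays the role of one concurrent round.
    label = list(range(n))
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            lo = min(label[u], label[v])
            if label[u] != lo or label[v] != lo:
                label[u] = label[v] = lo
                changed = True
    return label

print(min_label_cc(5, [(0, 1), (1, 2), (3, 4)]))  # [0, 0, 0, 3, 3]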

Cite as

Sixue Liu and Robert E. Tarjan. Simple Concurrent Labeling Algorithms for Connected Components. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{liu_et_al:OASIcs.SOSA.2019.3,
  author =	{Liu, Sixue and Tarjan, Robert E.},
  title =	{{Simple Concurrent Labeling Algorithms for Connected Components}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{3:1--3:20},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.3},
  URN =		{urn:nbn:de:0030-drops-100292},
  doi =		{10.4230/OASIcs.SOSA.2019.3},
  annote =	{Keywords: Connected Components, Concurrent Algorithms}
}
A Framework for Searching in Graphs in the Presence of Errors

Authors: Dariusz Dereniowski, Stefan Tiegel, Przemyslaw Uznanski, and Daniel Wolleb-Graf


Abstract
We consider the problem of searching for an unknown target vertex t in a (possibly edge-weighted) graph. Each vertex-query points to a vertex v and the response either admits that v is the target or provides a neighbor s of v that lies on a shortest path from v to t. This model has been introduced for trees by Onak and Parys [FOCS 2006] and for general graphs by Emamjomeh-Zadeh et al. [STOC 2016]. In the latter, the authors provide algorithms for the errorless case and for the independent noise model (where each query independently receives an erroneous answer with known probability p < 1/2 and a correct one with probability 1-p). We study this problem in both the adversarial-error and the independent-noise models. First, we show an algorithm that needs at most (log_2 n)/(1 - H(r)) queries in the case of adversarial errors, where the adversary's rate of errors is bounded by a known constant r < 1/2. Our algorithm is in fact a simplification of previous work, and our refinement lies in invoking an amortization argument. We then show that our algorithm, coupled with a Chernoff bound argument, leads to a simpler algorithm for the independent noise model, with a query complexity that is both simpler to state and asymptotically better than that of Emamjomeh-Zadeh et al. [STOC 2016]. Our approach has a wide range of applications. First, it improves and simplifies the Robust Interactive Learning framework proposed by Emamjomeh-Zadeh and Kempe [NIPS 2017]. Second, performing an analogous analysis for edge-queries (where a query to an edge e returns its endpoint that is closer to the target), we recover (as a special case) a noisy binary search algorithm that is asymptotically optimal, matching the complexity of Feige et al. [SIAM J. Comput. 1994]. Third, we improve upon and simplify an algorithm for searching in unbounded domains due to Aslam and Dhagat [STOC 1991].
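
The special case of noisy binary search conveys the weighting idea well. The following sketch, under the independent noise model with a hypothetical noisy comparison oracle, keeps a weight per candidate, queries the weighted median, and multiplies weights by (1-p) or p according to consistency with the answer. It illustrates the general technique, not the paper's algorithm.

import math, random

def noisy_search(n, query, p=0.3):
    # Keep a weight per candidate in {0, ..., n-1}; repeatedly ask the
    # noisy oracle "is the target <= m?" at the weighted median m and
    # multiply weights of consistent candidates by (1 - p), others by p.
    w = [1.0] * n
    for _ in range(int(4 * math.log(n) / (1 - 2 * p) ** 2) + 1):
        total, acc, m = sum(w), 0.0, 0
        for i in range(n):
            acc += w[i]
            if acc >= total / 2:
                m = i
                break
        ans = query(m)
        for i in range(n):
            w[i] *= (1 - p) if (i <= m) == ans else p
    return max(range(n), key=w.__getitem__)

random.seed(1)
target = 42
oracle = lambda m: (target <= m) if random.random() >= 0.3 else (target > m)
print(noisy_search(100, oracle))  # 42 with high probability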

Cite as

Dariusz Dereniowski, Stefan Tiegel, Przemyslaw Uznanski, and Daniel Wolleb-Graf. A Framework for Searching in Graphs in the Presence of Errors. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 4:1-4:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{dereniowski_et_al:OASIcs.SOSA.2019.4,
  author =	{Dereniowski, Dariusz and Tiegel, Stefan and Uznanski, Przemyslaw and Wolleb-Graf, Daniel},
  title =	{{A Framework for Searching in Graphs in the Presence of Errors}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{4:1--4:17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.4},
  URN =		{urn:nbn:de:0030-drops-100305},
  doi =		{10.4230/OASIcs.SOSA.2019.4},
  annote =	{Keywords: graph algorithms, noisy binary search, query complexity, reliability}
}
Selection from Heaps, Row-Sorted Matrices, and X+Y Using Soft Heaps

Authors: Haim Kaplan, László Kozma, Or Zamir, and Uri Zwick


Abstract
We use soft heaps to obtain simpler optimal algorithms for selecting the k-th smallest item, and the set of k smallest items, from a heap-ordered tree, from a collection of sorted lists, and from X+Y, where X and Y are two unsorted sets. Our results match, and in some ways extend and improve, classical results of Frederickson (1993) and Frederickson and Johnson (1982). In particular, for selecting the k-th smallest item, or the set of k smallest items, from a collection of m sorted lists we obtain a new optimal "output-sensitive" algorithm that performs only O(m + sum_{i=1}^m log(k_i+1)) comparisons, where k_i is the number of items of the i-th list that belong to the overall set of k smallest items.
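
For contrast, here is the classic simple routine this line of work starts from: listing the k smallest items of a binary min-heap using an ordinary auxiliary priority queue, in O(k log k) comparisons. The paper's contribution is showing how soft heaps remove the log factor; this sketch only sets the stage.

import heapq

def k_smallest_from_heap(h, k):
    # h is an array-based binary min-heap. Best-first exploration: the
    # frontier always contains the next smallest candidate, so only O(k)
    # nodes of h are ever touched, at O(log k) cost each.
    out, frontier = [], [(h[0], 0)]
    while frontier and len(out) < k:
        val, i = heapq.heappop(frontier)
        out.append(val)
        for c in (2 * i + 1, 2 * i + 2):
            if c < len(h):
                heapq.heappush(frontier, (h[c], c))
    return out

print(k_smallest_from_heap([1, 3, 2, 7, 4, 9, 2], 4))  # [1, 2, 2, 3]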

Cite as

Haim Kaplan, László Kozma, Or Zamir, and Uri Zwick. Selection from Heaps, Row-Sorted Matrices, and X+Y Using Soft Heaps. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 5:1-5:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{kaplan_et_al:OASIcs.SOSA.2019.5,
  author =	{Kaplan, Haim and Kozma, L\'{a}szl\'{o} and Zamir, Or and Zwick, Uri},
  title =	{{Selection from Heaps, Row-Sorted Matrices, and X+Y Using Soft Heaps}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{5:1--5:21},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.5},
  URN =		{urn:nbn:de:0030-drops-100315},
  doi =		{10.4230/OASIcs.SOSA.2019.5},
  annote =	{Keywords: selection, soft heap}
}
Approximating Optimal Transport With Linear Programs

Authors: Kent Quanrud


Abstract
In the regime of bounded transportation costs, additive approximations for the optimal transport problem are reduced (rather simply) to relative approximations for positive linear programs, resulting in faster additive approximation algorithms for optimal transport.
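
For concreteness, the underlying linear program is the textbook optimal transport LP. The sketch below solves it exactly with scipy's LP solver on a toy instance; the paper's point is to approximate this LP much faster, so this is only the reference formulation.

import numpy as np
from scipy.optimize import linprog

def optimal_transport(mu, nu, C):
    # min <C, T>  s.t.  T @ 1 = mu,  T.T @ 1 = nu,  T >= 0
    n, m = C.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1      # row sums equal mu
    for j in range(m):
        A_eq[n + j, j::m] = 1               # column sums equal nu
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]))
    return res.fun, res.x.reshape(n, m)

mu, nu = np.array([0.5, 0.5]), np.array([0.25, 0.75])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
print(optimal_transport(mu, nu, C)[0])      # 0.25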

Cite as

Kent Quanrud. Approximating Optimal Transport With Linear Programs. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 6:1-6:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{quanrud:OASIcs.SOSA.2019.6,
  author =	{Quanrud, Kent},
  title =	{{Approximating Optimal Transport With Linear Programs}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{6:1--6:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.6},
  URN =		{urn:nbn:de:0030-drops-100321},
  doi =		{10.4230/OASIcs.SOSA.2019.6},
  annote =	{Keywords: optimal transport, fast approximations, linear programming}
}
LP Relaxation and Tree Packing for Minimum k-cuts

Authors: Chandra Chekuri, Kent Quanrud, and Chao Xu


Abstract
Karger used spanning tree packings [Karger, 2000] to derive a near-linear-time randomized algorithm for the global minimum cut problem as well as a bound on the number of approximate minimum cuts. This is a different approach from his well-known random contraction algorithm [Karger, 1995; Karger and Stein, 1996]. Thorup developed a fast deterministic algorithm for the minimum k-cut problem via greedy recursive tree packings [Thorup, 2008]. In this paper we revisit properties of an LP relaxation for k-cut proposed by Naor and Rabani [Naor and Rabani, 2001] and analyzed in [Chekuri et al., 2006]. We show that the dual of the LP yields a tree packing which, when combined with an upper bound on the integrality gap of the LP, easily and transparently extends Karger's analysis for mincut to the k-cut problem. In addition to the simplicity of the algorithm and its analysis, this allows us to improve the running time of Thorup's algorithm by a factor of n. We also improve the bound on the number of alpha-approximate k-cuts. Second, we give a simple proof that the integrality gap of the LP is 2(1-1/n). Third, we show that an optimum solution to the LP relaxation, for all values of k, is fully determined by the principal sequence of partitions of the input graph. This allows us to relate the LP relaxation to the Lagrangean relaxation approach of Barahona [Barahona, 2000] and Ravi and Sinha [Ravi and Sinha, 2008]; it also shows that the idealized recursive tree packing considered by Thorup gives an optimum dual solution to the LP. This work arose from an effort to understand and simplify the results of Thorup [Thorup, 2008].

Cite as

Chandra Chekuri, Kent Quanrud, and Chao Xu. LP Relaxation and Tree Packing for Minimum k-cuts. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chekuri_et_al:OASIcs.SOSA.2019.7,
  author =	{Chekuri, Chandra and Quanrud, Kent and Xu, Chao},
  title =	{{LP Relaxation and Tree Packing for Minimum k-cuts}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{7:1--7:18},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.7},
  URN =		{urn:nbn:de:0030-drops-100335},
  doi =		{10.4230/OASIcs.SOSA.2019.7},
  annote =	{Keywords: k-cut, LP relaxation, tree packing}
}
On Primal-Dual Circle Representations

Authors: Stefan Felsner and Günter Rote


Abstract
The Koebe-Andreev-Thurston Circle Packing Theorem states that every triangulated planar graph has a contact representation by circles. The theorem has been generalized in various ways. The most prominent generalization assures the existence of a primal-dual circle representation for every 3-connected planar graph. We present a simple and elegant elementary proof of this result.

Cite as

Stefan Felsner and Günter Rote. On Primal-Dual Circle Representations. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{felsner_et_al:OASIcs.SOSA.2019.8,
  author =	{Felsner, Stefan and Rote, G\"{u}nter},
  title =	{{On Primal-Dual Circle Representations}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{8:1--8:18},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.8},
  URN =		{urn:nbn:de:0030-drops-100349},
  doi =		{10.4230/OASIcs.SOSA.2019.8},
  annote =	{Keywords: Disk packing, planar graphs, contact representation}
}
Asymmetric Convex Intersection Testing

Authors: Luis Barba and Wolfgang Mulzer


Abstract
We consider asymmetric convex intersection testing (ACIT). Let P subset R^d be a set of n points and H a set of m halfspaces in d dimensions. We denote by ch(P) the polytope obtained by taking the convex hull of P, and by fh(H) the polytope obtained by taking the intersection of the halfspaces in H. Our goal is to decide whether these two polytopes intersect. Even though ACIT is a natural variant of classic LP-type problems that have been studied at length in the literature, and despite its applications in the analysis of high-dimensional data sets, it appears that the problem has not been studied before. We discuss how known approaches can be used to attack the ACIT problem, and we provide a very simple strategy that leads to a deterministic algorithm, linear in n and m, whose running time depends reasonably on the dimension d.
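
One way to see that ACIT is LP-flavored: with H given as A x <= b, the polytope ch(P) meets fh(H) iff some convex combination of the points satisfies all halfspaces, which is a single LP feasibility question. The sketch below tests this with scipy; it is a naive polynomial-time check, not the paper's algorithm.

import numpy as np
from scipy.optimize import linprog

def intersects(P, A, b):
    # conv(P) meets {x : A x <= b} iff some lambda >= 0 with sum = 1
    # satisfies A @ (P.T @ lambda) <= b, a pure feasibility LP.
    n = P.shape[0]
    res = linprog(np.zeros(n), A_ub=A @ P.T, b_ub=b,
                  A_eq=np.ones((1, n)), b_eq=[1.0])
    return res.success

P = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])   # a triangle
A = np.array([[-1.0, 0.0], [0.0, -1.0]])             # halfspaces x >= 0, y >= 0
print(intersects(P, A, b=np.array([0.0, 0.0])))      # True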

Cite as

Luis Barba and Wolfgang Mulzer. Asymmetric Convex Intersection Testing. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{barba_et_al:OASIcs.SOSA.2019.9,
  author =	{Barba, Luis and Mulzer, Wolfgang},
  title =	{{Asymmetric Convex Intersection Testing}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{9:1--9:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.9},
  URN =		{urn:nbn:de:0030-drops-100358},
  doi =		{10.4230/OASIcs.SOSA.2019.9},
  annote =	{Keywords: polytope intersection, LP-type problem, randomized algorithm}
}
Relaxed Voronoi: A Simple Framework for Terminal-Clustering Problems

Authors: Arnold Filtser, Robert Krauthgamer, and Ohad Trabelsi


Abstract
We reprove three known algorithmic bounds for terminal-clustering problems, using a single framework that leads to simpler proofs. In this genre of problems, the input is a metric space (X,d) (possibly arising from a graph) and a subset of terminals K subset X, and the goal is to partition the points X such that each part, called a cluster, contains exactly one terminal (possibly with connectivity requirements) so as to minimize some objective. The three bounds we reprove are for Steiner Point Removal on trees [Gupta, SODA 2001], for Metric 0-Extension in bounded doubling dimension [Lee and Naor, unpublished 2003], and for Connected Metric 0-Extension [Englert et al., SICOMP 2014]. A natural approach is to cluster each point with its closest terminal, which would partition X into so-called Voronoi cells, but this approach can fail miserably due to its stringent cluster boundaries. A now-standard fix, which we call the Relaxed-Voronoi framework, is to use enlarged Voronoi cells, but to obtain disjoint clusters, the cells are computed greedily according to some order. This method, first proposed by Calinescu, Karloff and Rabani [SICOMP 2004], was employed successfully to provide state-of-the-art results for terminal-clustering problems on general metrics. However, for restricted families of metrics, e.g., trees and doubling metrics, only more complicated, ad-hoc algorithms are known. Our main contribution is to demonstrate that the Relaxed-Voronoi algorithm is applicable to restricted metrics, and actually leads to relatively simple algorithms and analyses.
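
The following toy sketch conveys the template. The specific claiming rule used here (a terminal claims unclaimed points within a (1+eps) factor of their distance to the nearest terminal) is our simplification for illustration; the choice of radii and of the processing order is exactly where the paper's per-problem analysis lives.

def relaxed_voronoi(points, terminals, d, eps=0.25):
    # Terminals, in a fixed order, greedily claim every still-unclaimed
    # point that lies in their enlarged Voronoi cell.
    cluster = {}
    for t in terminals:
        for x in points:
            if x not in cluster:
                nearest = min(d(x, s) for s in terminals)
                if d(x, t) <= (1 + eps) * nearest:
                    cluster[x] = t
    return cluster

pts = [0, 1, 2, 3, 10, 11]
print(relaxed_voronoi(pts, [0, 10], d=lambda a, b: abs(a - b)))
# {0: 0, 1: 0, 2: 0, 3: 0, 10: 10, 11: 10}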

Cite as

Arnold Filtser, Robert Krauthgamer, and Ohad Trabelsi. Relaxed Voronoi: A Simple Framework for Terminal-Clustering Problems. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 10:1-10:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{filtser_et_al:OASIcs.SOSA.2019.10,
  author =	{Filtser, Arnold and Krauthgamer, Robert and Trabelsi, Ohad},
  title =	{{Relaxed Voronoi: A Simple Framework for Terminal-Clustering Problems}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{10:1--10:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.10},
  URN =		{urn:nbn:de:0030-drops-100369},
  doi =		{10.4230/OASIcs.SOSA.2019.10},
  annote =	{Keywords: Clustering, Steiner point removal, Zero extension, Doubling dimension, Relaxed voronoi}
}
Towards a Unified Theory of Sparsification for Matching Problems

Authors: Sepehr Assadi and Aaron Bernstein


Abstract
In this paper, we present a construction of a "matching sparsifier", that is, a sparse subgraph of the given graph that preserves large matchings approximately and is robust to modifications of the graph. We use this matching sparsifier to obtain several new algorithmic results for the maximum matching problem:

  • An almost (3/2)-approximation one-way communication protocol for the maximum matching problem, significantly simplifying the (3/2)-approximation protocol of Goel, Kapralov, and Khanna (SODA 2012) and extending it from bipartite graphs to general graphs.
  • An almost (3/2)-approximation algorithm for the stochastic matching problem, improving upon and significantly simplifying the previous 1.999-approximation algorithm of Assadi, Khanna, and Li (EC 2017).
  • An almost (3/2)-approximation algorithm for the fault-tolerant matching problem, which, to our knowledge, is the first non-trivial algorithm for this problem.

Our matching sparsifier is obtained by proving new properties of the edge-degree constrained subgraph (EDCS) of Bernstein and Stein (ICALP 2015; SODA 2016) - designed in the context of maintaining matchings in dynamic graphs - that identify the EDCS as an excellent choice for a matching sparsifier. This leads to surprisingly simple and non-technical proofs of the above results in a unified way. Along the way, we also provide a much simpler proof of the fact that an EDCS is guaranteed to contain a large matching, which may be of independent interest.
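
For reference, the EDCS property itself is easy to state and check. The sketch below verifies the two degree conditions, using beta and beta_minus for the upper and lower thresholds (the parameter names are ours).

from collections import Counter

def is_edcs(G_edges, H_edges, beta, beta_minus):
    # (i)  every edge (u, v) of H satisfies deg_H(u) + deg_H(v) <= beta;
    # (ii) every edge of G missing from H satisfies
    #      deg_H(u) + deg_H(v) >= beta_minus.
    deg = Counter()
    H = {frozenset(e) for e in H_edges}
    for u, v in H_edges:
        deg[u] += 1
        deg[v] += 1
    return (all(deg[u] + deg[v] <= beta for u, v in H_edges) and
            all(deg[u] + deg[v] >= beta_minus
                for u, v in G_edges if frozenset((u, v)) not in H))

print(is_edcs([(0, 1), (1, 2), (2, 3)], [(0, 1), (2, 3)], 3, 2))  # True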

Cite as

Sepehr Assadi and Aaron Bernstein. Towards a Unified Theory of Sparsification for Matching Problems. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 11:1-11:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{assadi_et_al:OASIcs.SOSA.2019.11,
  author =	{Assadi, Sepehr and Bernstein, Aaron},
  title =	{{Towards a Unified Theory of Sparsification for Matching Problems}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{11:1--11:20},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.11},
  URN =		{urn:nbn:de:0030-drops-100370},
  doi =		{10.4230/OASIcs.SOSA.2019.11},
  annote =	{Keywords: Maximum matching, matching sparsifiers, one-way communication complexity, stochastic matching, fault-tolerant matching}
}
A New Application of Orthogonal Range Searching for Computing Giant Graph Diameters

Authors: Guillaume Ducoffe


Abstract
A well-known problem for which it is difficult to improve the textbook algorithm is computing the graph diameter. We present two versions of a simple algorithm (one being Monte Carlo and the other deterministic) that for every fixed h and unweighted undirected graph G with n vertices and m edges, either correctly concludes that diam(G) < n/h or outputs diam(G), in time O(m + n^{1+o(1)}). The algorithm combines a simple randomized strategy for this problem (Damaschke, IWOCA'16) with a popular framework for computing graph distances that is based on range trees (Cabello and Knauer, Computational Geometry'09). We also prove that under the Strong Exponential Time Hypothesis (SETH), we cannot compute the diameter of a given n-vertex graph in truly subquadratic time, even if the diameter is Theta(n/log n).
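
The textbook algorithm referred to above is simply one BFS per vertex, for O(nm) total time on a connected unweighted graph; the sketch below states it for comparison with the O(m + n^{1+o(1)}) bound.

from collections import deque

def diameter(adj):
    # One BFS per vertex on a connected unweighted graph: O(nm) total.
    n, best = len(adj), 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist))
    return best

print(diameter([[1], [0, 2], [1, 3], [2]]))  # path on 4 vertices: 3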

Cite as

Guillaume Ducoffe. A New Application of Orthogonal Range Searching for Computing Giant Graph Diameters. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 12:1-12:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{ducoffe:OASIcs.SOSA.2019.12,
  author =	{Ducoffe, Guillaume},
  title =	{{A New Application of Orthogonal Range Searching for Computing Giant Graph Diameters}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{12:1--12:7},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.12},
  URN =		{urn:nbn:de:0030-drops-100383},
  doi =		{10.4230/OASIcs.SOSA.2019.12},
  annote =	{Keywords: Graph diameter, Orthogonal Range Queries, Hardness in P, FPT in P}
}
Simplified and Space-Optimal Semi-Streaming (2+epsilon)-Approximate Matching

Authors: Mohsen Ghaffari and David Wajc


Abstract
In a recent breakthrough, Paz and Schwartzman (SODA'17) presented a single-pass (2+epsilon)-approximation algorithm for the maximum weight matching problem in the semi-streaming model. Their algorithm uses O(n log^2 n) bits of space, for any constant epsilon>0. We present a simplified and more intuitive primal-dual analysis, for essentially the same algorithm, which also improves the space complexity to the optimal bound of O(n log n) bits - this is optimal as the output matching requires Omega(n log n) bits.
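
As we understand the Paz-Schwartzman scheme, it maintains a potential phi(v) per vertex, stacks an arriving edge only when its weight beats (1+epsilon) times the potentials it must pay, and unwinds the stack greedily at the end. The sketch below is our paraphrase of that single-pass algorithm, not a verified transcription.

from collections import defaultdict

def stream_matching(edges, eps=0.1):
    # One pass: an arriving edge is stacked only if its weight beats
    # (1 + eps) times the potentials of its endpoints; the stack is
    # then unwound greedily, later edges first.
    phi = defaultdict(float)
    stack = []
    for u, v, w in edges:
        if w >= (1 + eps) * (phi[u] + phi[v]):
            gain = w - phi[u] - phi[v]
            phi[u] += gain
            phi[v] += gain
            stack.append((u, v))
    matched, M = set(), []
    for u, v in reversed(stack):
        if u not in matched and v not in matched:
            M.append((u, v))
            matched |= {u, v}
    return M

print(stream_matching([(0, 1, 1.0), (1, 2, 3.0), (2, 3, 1.0)]))  # [(1, 2)]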

Cite as

Mohsen Ghaffari and David Wajc. Simplified and Space-Optimal Semi-Streaming (2+epsilon)-Approximate Matching. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 13:1-13:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{ghaffari_et_al:OASIcs.SOSA.2019.13,
  author =	{Ghaffari, Mohsen and Wajc, David},
  title =	{{Simplified and Space-Optimal Semi-Streaming (2+epsilon)-Approximate Matching}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{13:1--13:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.13},
  URN =		{urn:nbn:de:0030-drops-100396},
  doi =		{10.4230/OASIcs.SOSA.2019.13},
  annote =	{Keywords: Streaming, Semi-Streaming, Space-Optimal, Matching}
}
Simple Greedy 2-Approximation Algorithm for the Maximum Genus of a Graph

Authors: Michal Kotrbcík and Martin Skoviera


Abstract
The maximum genus gamma_M(G) of a graph G is the largest genus of an orientable surface into which G has a cellular embedding. Combinatorially, it coincides with the maximum number of disjoint pairs of adjacent edges of G whose removal results in a connected spanning subgraph of G. In this paper we describe a greedy 2-approximation algorithm for maximum genus by proving that removing pairs of adjacent edges from G arbitrarily while retaining connectedness leads to at least gamma_M(G)/2 pairs of edges removed. As a consequence of our approach we also obtain a 2-approximate counterpart of Xuong's combinatorial characterisation of maximum genus.
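
The greedy procedure from the abstract is short enough to state directly: repeatedly remove any adjacent pair of edges whose removal keeps the graph connected, and count the removed pairs. The sketch below does this naively, with a BFS connectivity check, so it illustrates the guarantee rather than an efficient implementation.

def greedy_pairs(n, edges):
    # Remove adjacent edge pairs while the graph stays connected; the
    # number of pairs removed is at least gamma_M(G) / 2.
    def connected(es):
        adj = {v: [] for v in range(n)}
        for u, v in es:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {0}, [0]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n
    es, pairs = list(edges), 0
    while True:
        step = next(((i, j)
                     for i in range(len(es)) for j in range(i + 1, len(es))
                     if set(es[i]) & set(es[j])  # adjacent pair
                     and connected([g for k, g in enumerate(es) if k not in (i, j)])),
                    None)
        if step is None:
            return pairs
        es = [g for k, g in enumerate(es) if k not in step]
        pairs += 1

print(greedy_pairs(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))  # K4: 1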

Cite as

Michal Kotrbcík and Martin Skoviera. Simple Greedy 2-Approximation Algorithm for the Maximum Genus of a Graph. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 14:1-14:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{kotrbcik_et_al:OASIcs.SOSA.2019.14,
  author =	{Kotrbc{\'\i}k, Michal and Skoviera, Martin},
  title =	{{Simple Greedy 2-Approximation Algorithm for the Maximum Genus of a Graph}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{14:1--14:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.14},
  URN =		{urn:nbn:de:0030-drops-100409},
  doi =		{10.4230/OASIcs.SOSA.2019.14},
  annote =	{Keywords: maximum genus, embedding, graph, greedy algorithm}
}
A Note on Max k-Vertex Cover: Faster FPT-AS, Smaller Approximate Kernel and Improved Approximation

Authors: Pasin Manurangsi


Abstract
In Maximum k-Vertex Cover (Max k-VC), the input is an edge-weighted graph G and an integer k, and the goal is to find a subset S of k vertices that maximizes the total weight of edges covered by S. Here we say that an edge is covered by S iff at least one of its endpoints lies in S. We present an FPT approximation scheme (FPT-AS) that runs in (1/epsilon)^{O(k)} poly(n) time for the problem, which improves upon the (k/epsilon)^{O(k)} poly(n)-time FPT-AS of Gupta, Lee and Li [Gupta et al., 2018]. Our algorithm is simple: just use brute force to find the best k-vertex subset among the O(k/epsilon) vertices with maximum weighted degrees. Our algorithm naturally yields an (efficient) approximate kernelization scheme of O(k/epsilon) vertices; previously, an O(k^5/epsilon^2)-vertex approximate kernel was known only for the unweighted version of Max k-VC [Lokshtanov et al., 2017]. Interestingly, this also has an application outside of parameterized complexity: using our approximate kernelization as a preprocessing step, we can directly apply Raghavendra and Tan's SDP-based algorithm for 2SAT with cardinality constraint [Raghavendra and Tan, 2012] to give a 0.92-approximation algorithm for Max k-VC in polynomial time. This improves upon the best known polynomial time approximation algorithm of Feige and Langberg [Feige and Langberg, 2001], which yields a (0.75 + delta)-approximation for some (small and unspecified) constant delta > 0. We also consider the minimization version of the problem (called Min k-VC), where the goal is to find a set S of k vertices that minimizes the total weight of edges covered by S. We provide an FPT-AS for Min k-VC with a similar running time of (1/epsilon)^{O(k)} poly(n). Once again, this improves on the (k/epsilon)^{O(k)} poly(n)-time FPT-AS of Gupta et al. On the other hand, we show, assuming a variant of the Small Set Expansion Hypothesis [Raghavendra and Steurer, 2010] and NP !subseteq coNP/poly, that there is no polynomial size approximate kernelization for Min k-VC for any factor less than two.
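
The abstract's one-line algorithm translates almost verbatim into code: restrict attention to the ceil(k/epsilon) vertices of largest weighted degree and brute-force all k-subsets. A minimal sketch:

import math
from collections import defaultdict
from itertools import combinations

def max_k_vc(edges, k, eps):
    # Brute force over the k-subsets of the ceil(k / eps) vertices of
    # largest weighted degree, exactly as described in the abstract.
    wdeg = defaultdict(float)
    for u, v, w in edges:
        wdeg[u] += w
        wdeg[v] += w
    top = sorted(wdeg, key=wdeg.get, reverse=True)[:math.ceil(k / eps)]
    covered = lambda S: sum(w for u, v, w in edges if u in S or v in S)
    return max((covered(set(S)) for S in combinations(top, k)), default=0.0)

print(max_k_vc([(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0)], k=1, eps=0.5))  # 4.0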

Cite as

Pasin Manurangsi. A Note on Max k-Vertex Cover: Faster FPT-AS, Smaller Approximate Kernel and Improved Approximation. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 15:1-15:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{manurangsi:OASIcs.SOSA.2019.15,
  author =	{Manurangsi, Pasin},
  title =	{{A Note on Max k-Vertex Cover: Faster FPT-AS, Smaller Approximate Kernel and Improved Approximation}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{15:1--15:21},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.15},
  URN =		{urn:nbn:de:0030-drops-100417},
  doi =		{10.4230/OASIcs.SOSA.2019.15},
  annote =	{Keywords: Maximum k-Vertex Cover, Minimum k-Vertex Cover, Approximation Algorithms, Fixed Parameter Algorithms, Approximate Kernelization}
}
Simple Contention Resolution via Multiplicative Weight Updates

Authors: Yi-Jun Chang, Wenyu Jin, and Seth Pettie


Abstract
We consider the classic contention resolution problem, in which devices conspire to share some common resource, for which they each need temporary and exclusive access. To ground the discussion, suppose (identical) devices wake up at various times, and must send a single packet over a shared multiple-access channel. In each time step they may attempt to send their packet; they receive ternary feedback {0,1,2^+} from the channel, 0 indicating silence (no one attempted transmission), 1 indicating success (one device successfully transmitted), and 2^+ indicating noise. We prove that a simple strategy suffices to achieve a channel utilization rate of 1/e - O(epsilon), for any epsilon > 0. In each step, device i attempts to send its packet with probability p_i, then applies a rudimentary multiplicative weight-type update to p_i:

  p_i <- p_i * e^{epsilon}           upon hearing silence (0),
  p_i <- p_i                         upon hearing success (1),
  p_i <- p_i * e^{-epsilon/(e-2)}    upon hearing noise (2^+).

This scheme works well even if the introduction of devices/packets is adversarial, and even if the adversary can jam time slots (make noise) at will. We prove that if the adversary jams J time slots, then this scheme will achieve channel utilization 1/e - epsilon, excluding O(J) wasted slots. Similar results were achieved earlier (Bender, Fineman, Gilbert, Young, SODA 2016), but with a lower constant efficiency (less than 0.05) and a more complex algorithm.
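
The update rule can be simulated directly. The sketch below uses a static population of backlogged devices that never leave after success, a simplification of the paper's dynamic arrival model, so it only illustrates how the rule steers the aggregate transmission probability toward utilization near 1/e.

import math, random

def simulate(devices, eps=0.1, steps=5000):
    p = [0.5 / devices] * devices
    successes = 0
    for _ in range(steps):
        sending = sum(random.random() < q for q in p)
        if sending == 0:      # silence: everyone speeds up
            p = [min(1.0, q * math.exp(eps)) for q in p]
        elif sending == 1:    # success: probabilities stay put
            successes += 1
        else:                 # noise: everyone backs off
            p = [q * math.exp(-eps / (math.e - 2)) for q in p]
    return successes / steps

random.seed(0)
print(simulate(50))           # hovers near 1/e ~ 0.37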

Cite as

Yi-Jun Chang, Wenyu Jin, and Seth Pettie. Simple Contention Resolution via Multiplicative Weight Updates. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 16:1-16:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chang_et_al:OASIcs.SOSA.2019.16,
  author =	{Chang, Yi-Jun and Jin, Wenyu and Pettie, Seth},
  title =	{{Simple Contention Resolution via Multiplicative Weight Updates}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{16:1--16:16},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.16},
  URN =		{urn:nbn:de:0030-drops-100426},
  doi =		{10.4230/OASIcs.SOSA.2019.16},
  annote =	{Keywords: Contention resolution, multiplicative weight update method}
}
A Simple Near-Linear Pseudopolynomial Time Randomized Algorithm for Subset Sum

Authors: Ce Jin and Hongxun Wu


Abstract
Given a multiset S of n positive integers and a target integer t, the Subset Sum problem asks to determine whether there exists a subset of S that sums up to t. The current best deterministic algorithm, by Koiliaris and Xu [SODA'17], runs in O~(sqrt{n} t) time, where O~ hides polylogarithmic factors. Bringmann [SODA'17] later gave a randomized O~(n + t) time algorithm using two-stage color-coding. The O~(n + t) running time is believed to be near-optimal. In this paper, we present a simple and elegant randomized algorithm for Subset Sum in O~(n + t) time. Our new algorithm actually solves its counting version modulo a prime p > t, by manipulating generating functions using FFT.
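
The generating-function view is already visible in the classic pseudopolynomial baseline: the bits of an integer f encode which coefficients of prod_i (1 + x^{s_i}), truncated at degree t, are nonzero. This is the folklore O(nt/wordsize) method, not the paper's O~(n + t) algorithm, which additionally tracks counts modulo a prime.

def subset_sum(S, t):
    # Bit j of f records whether some subset sums to j; the shift-or
    # multiplies the presence polynomial by (1 + x^a), truncated at t.
    f, mask = 1, (1 << (t + 1)) - 1
    for a in S:
        f = (f | (f << a)) & mask
    return bool((f >> t) & 1)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)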

Cite as

Ce Jin and Hongxun Wu. A Simple Near-Linear Pseudopolynomial Time Randomized Algorithm for Subset Sum. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 17:1-17:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{jin_et_al:OASIcs.SOSA.2019.17,
  author =	{Jin, Ce and Wu, Hongxun},
  title =	{{A Simple Near-Linear Pseudopolynomial Time Randomized Algorithm for Subset Sum}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{17:1--17:6},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.17},
  URN =		{urn:nbn:de:0030-drops-100436},
  doi =		{10.4230/OASIcs.SOSA.2019.17},
  annote =	{Keywords: subset sum, formal power series, FFT}
}
Submodular Optimization in the MapReduce Model

Authors: Paul Liu and Jan Vondrak


Abstract
Submodular optimization has received significant attention in both practice and theory, as a wide array of problems in machine learning, auction theory, and combinatorial optimization have submodular structure. In practice, these problems often involve large amounts of data, and must be solved in a distributed way. One popular framework for running such distributed algorithms is MapReduce. In this paper, we present two simple algorithms for cardinality constrained submodular optimization in the MapReduce model: the first is a (1/2-o(1))-approximation in 2 MapReduce rounds, and the second is a (1-1/e-epsilon)-approximation in (1+o(1))/epsilon MapReduce rounds.
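
A two-round distributed greedy skeleton, in the spirit of the first algorithm (partition the ground set, run greedy on each machine, then run greedy again on the union of the local solutions), looks as follows. This is our illustrative sketch; the randomized partitioning and the approximation analysis are the paper's substance.

import random

def greedy(f, ground, k):
    S = []
    for _ in range(k):
        best = max((x for x in ground if x not in S),
                   key=lambda x: f(S + [x]) - f(S), default=None)
        if best is None:
            break
        S.append(best)
    return S

def two_round(f, ground, k, machines=4):
    random.shuffle(ground)
    parts = [ground[i::machines] for i in range(machines)]
    local = [greedy(f, part, k) for part in parts]           # round 1 (map)
    return greedy(f, [x for loc in local for x in loc], k)   # round 2 (reduce)

sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}   # toy coverage instance
f = lambda S: len(set().union(*(sets[i] for i in S)))
print(two_round(f, list(sets), 2))                    # [3, 1]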

Cite as

Paul Liu and Jan Vondrak. Submodular Optimization in the MapReduce Model. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 18:1-18:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{liu_et_al:OASIcs.SOSA.2019.18,
  author =	{Liu, Paul and Vondrak, Jan},
  title =	{{Submodular Optimization in the MapReduce Model}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{18:1--18:10},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.18},
  URN =		{urn:nbn:de:0030-drops-100447},
  doi =		{10.4230/OASIcs.SOSA.2019.18},
  annote =	{Keywords: mapreduce, submodular, optimization, approximation algorithms}
}
Compressed Sensing with Adversarial Sparse Noise via L1 Regression

Authors: Sushrut Karmalkar and Eric Price


Abstract
We present a simple and effective algorithm for the problem of sparse robust linear regression. In this problem, one would like to estimate a sparse vector w^* in R^n from linear measurements corrupted by sparse noise that can arbitrarily change an adversarially chosen eta fraction of measured responses y, as well as introduce bounded norm noise to the responses. For Gaussian measurements, we show that a simple algorithm based on L1 regression can successfully estimate w^* for any eta < eta_0 ~~ 0.239, and that this threshold is tight for the algorithm. The number of measurements required by the algorithm is O(k log n/k) for k-sparse estimation, which is within constant factors of the number needed without any sparse noise. Of the three properties we show - the ability to estimate sparse, as well as dense, w^*; the tolerance of a large constant fraction of outliers; and tolerance of adversarial rather than distributional (e.g., Gaussian) dense noise - to the best of our knowledge, no previous polynomial time algorithm was known to achieve more than two.
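
The decoder itself is plain L1 regression, which reduces to a linear program. Below is a minimal sketch with scipy: slack variables s bound the residuals and their sum is minimized; with Gaussian X and a small fraction of grossly corrupted responses, the recovered w should be close to the truth.

import numpy as np
from scipy.optimize import linprog

def l1_regression(X, y):
    # min_w ||y - X w||_1 as an LP over (w, s): minimize sum(s)
    # subject to -s <= y - X w <= s.
    m, n = X.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A = np.block([[X, -np.eye(m)], [-X, -np.eye(m)]])
    b = np.concatenate([y, -y])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * (n + m))
    return res.x[:n]

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 3))
y = X @ np.array([1.0, -2.0, 0.0])
y[:5] += 10.0                            # a few grossly corrupted responses
print(np.round(l1_regression(X, y), 3))  # close to [ 1. -2.  0.]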

Cite as

Sushrut Karmalkar and Eric Price. Compressed Sensing with Adversarial Sparse Noise via L1 Regression. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 19:1-19:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{karmalkar_et_al:OASIcs.SOSA.2019.19,
  author =	{Karmalkar, Sushrut and Price, Eric},
  title =	{{Compressed Sensing with Adversarial Sparse Noise via L1 Regression}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{19:1--19:19},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.19},
  URN =		{urn:nbn:de:0030-drops-100455},
  doi =		{10.4230/OASIcs.SOSA.2019.19},
  annote =	{Keywords: Robust Regression, Compressed Sensing}
}
Approximating Maximin Share Allocations

Authors: Jugal Garg, Peter McGlaughlin, and Setareh Taki


Abstract
We study the problem of fair allocation of M indivisible items among N agents using the popular notion of maximin share as our measure of fairness. The maximin share of an agent is the largest value she can guarantee herself if she is allowed to choose a partition of the items into N bundles (one for each agent), on the condition that she receives her least preferred bundle. A maximin share allocation provides each agent a bundle worth at least their maximin share. While it is known that such an allocation need not exist [Procaccia and Wang, 2014; Kurokawa et al., 2016], a series of works [Procaccia and Wang, 2014; Kurokawa et al., 2018; Amanatidis et al., 2017; Barman and Krishna Murthy, 2017] provided 2/3-approximation algorithms in which each agent receives a bundle worth at least 2/3 times their maximin share. Recently, [Ghodsi et al., 2018] improved the approximation guarantee to 3/4. Prior works utilize intricate algorithms, with the exception of [Barman and Krishna Murthy, 2017], which gives a simple greedy solution but relies on sophisticated analysis techniques. In this paper, we propose an alternative 2/3-approximation of the maximin share that offers both a simple algorithm and a straightforward analysis. In contrast to other algorithms, our approach allows for a simple and intuitive understanding of why it works.
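
The definition is easy to make concrete on tiny instances: an agent's maximin share is the best achievable value of the worst bundle over all partitions into N bundles. The brute-force sketch below enumerates all assignments, so it is exponential and purely illustrative.

from itertools import product

def maximin_share(values, n):
    # values: one agent's value per item. Enumerate every assignment of
    # items to n bundles and maximize the value of the worst bundle.
    best = 0
    for assign in product(range(n), repeat=len(values)):
        bundles = [0] * n
        for value, agent in zip(values, assign):
            bundles[agent] += value
        best = max(best, min(bundles))
    return best

print(maximin_share([3, 1, 1, 2, 2], 2))  # 4, e.g. {3, 1} versus {1, 2, 2}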

Cite as

Jugal Garg, Peter McGlaughlin, and Setareh Taki. Approximating Maximin Share Allocations. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 20:1-20:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{garg_et_al:OASIcs.SOSA.2019.20,
  author =	{Garg, Jugal and McGlaughlin, Peter and Taki, Setareh},
  title =	{{Approximating Maximin Share Allocations}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{20:1--20:11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.20},
  URN =		{urn:nbn:de:0030-drops-100465},
  doi =		{10.4230/OASIcs.SOSA.2019.20},
  annote =	{Keywords: Fair division, Maximin share, Approximation algorithm}
}
