Dagstuhl Publishing
http://www.dagstuhl.de
RSS feed announcing the latest publications published by Schloss Dagstuhl (2010-06-08)
http://www.dagstuhl.de/rss.php

Front Matter, Table of Contents, Preface, External Reviewers
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8208
Mon, 04 Dec 2017 18:53:08 +0100

Succinct Color Searching in One Dimension
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8209
In this paper we study succinct data structures for one-dimensional color reporting and color counting problems. We are given a set of n points with integer coordinates in the range [1,m], and every point is assigned a color from the set {1,...,\sigma}. A color reporting query asks for the list of distinct colors that occur in a query interval [a,b], and a color counting query asks for the number of distinct colors in [a,b]. We describe a succinct data structure that answers approximate color counting queries in O(1) time and uses \mathcal{B}(n,m) + O(n) + o(\mathcal{B}(n,m)) bits, where \mathcal{B}(n,m) is the minimum number of bits required to represent an arbitrary set of size n from a universe of m elements. Thus we show, somewhat counterintuitively, that it is not necessary to store colors of points in order to answer approximate color counting queries. In the special case when points are in the rank space (i.e., when n=m), our data structure needs only O(n) bits. We also show that \Omega(n) bits are necessary in that case. Then we turn to succinct data structures for color reporting. We describe a data structure that uses \mathcal{B}(n,m) + nH_d(S) + o(\mathcal{B}(n,m)) + o(n\lg\sigma) bits and answers queries in O(k+1) time, where k is the number of colors in the answer and nH_d(S) (d=\log_\sigma n) is the d-th order empirical entropy of the color sequence. Finally, we consider succinct color reporting under restricted updates. Our dynamic data structure uses nH_d(S)+o(n\lg\sigma) bits and supports queries in O(k+1) time.
Mon, 04 Dec 2017 18:53:08 +0100

Faster DBScan and HDBScan in Low-Dimensional Euclidean Spaces
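The term \mathcal{B}(n,m) in the color-counting abstract above is the information-theoretic minimum, \lceil \log_2 \binom{m}{n} \rceil bits. A minimal sketch (Python; an illustration of the bound, not code from the paper) showing how it behaves:

```python
from math import comb, log2, ceil

def B(n, m):
    # Minimum number of bits needed to represent an arbitrary
    # n-element subset of a universe of size m: ceil(log2(C(m, n))).
    return ceil(log2(comb(m, n)))

# In the rank space (n = m) there is only one possible point set,
# so B(n, n) = 0 and the O(n) term of the data structure dominates.
print(B(1000, 10**6), B(5, 5))
```

This also makes concrete why the O(n) extra bits are a lower-order cost whenever m is much larger than n.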
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8210
We present a new algorithm for the widely used density-based clustering method DBScan. Our algorithm computes the DBScan clustering in O(n log n) time in R^2, irrespective of the scale parameter \eps, but assuming the second parameter MinPts is set to a fixed constant, as is the case in practice. We also present an O(n log n) randomized algorithm for HDBScan in the plane (HDBScan is a hierarchical version of DBScan introduced recently), and we show how to compute an approximate version of HDBScan in near-linear time in any fixed dimension.
Mon, 04 Dec 2017 18:53:08 +0100

Maximizing the Strong Triadic Closure in Split Graphs and Proper Interval Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8211
In social networks, the Strong Triadic Closure is an assignment of the edges with strong or weak labels such that any two vertices that are connected to a common neighbor by strong edges are adjacent. The problem of maximizing the number of strong edges that satisfy the strong triadic closure was recently shown to be NP-complete for general graphs. Here we initiate the study of graph classes for which the problem is solvable. We show that the problem admits a polynomial-time algorithm for two unrelated classes of graphs: proper interval graphs and trivially perfect graphs. To complement our result, we show that the problem remains NP-complete on split graphs, and consequently also on chordal graphs. Thus we take a first step toward defining the border between graph classes on which the problem is polynomially solvable and those on which it remains NP-complete.
Mon, 04 Dec 2017 18:53:08 +0100

A (1.4 + epsilon)-Approximation Algorithm for the 2-Max-Duo Problem
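The closure condition in the abstract above can be verified directly from its definition: a labeling violates strong triadic closure exactly when two non-adjacent vertices are both joined to a common neighbor by strong edges. A small sketch (Python; illustrative helper, not from the paper):

```python
from itertools import combinations

def satisfies_stc(vertices, edges, strong):
    # edges and strong are sets of frozensets({u, v}); strong <= edges.
    # STC: if {u,w} and {v,w} are both strong, then {u,v} must be an edge.
    adj = {v: set() for v in vertices}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    for w in vertices:
        strong_nbrs = [u for u in adj[w] if frozenset((u, w)) in strong]
        for u, v in combinations(strong_nbrs, 2):
            if frozenset((u, v)) not in edges:
                return False
    return True
```

Maximizing the number of strong edges subject to this check is the hard optimization problem the paper studies.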
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8212
The maximum duo-preservation string mapping (Max-Duo) problem is the complement of the well-studied minimum common string partition (MCSP) problem; both have applications in many fields, including text compression and bioinformatics. k-Max-Duo is the restricted version of Max-Duo in which every letter of the alphabet occurs at most k times in each of the strings; it readily reduces to the well-known maximum independent set (MIS) problem on a graph of maximum degree \Delta \le 6(k-1). In particular, 2-Max-Duo can then be approximated arbitrarily close to 1.8 using the state-of-the-art approximation algorithm for the MIS problem. 2-Max-Duo was proved APX-hard, and very recently a (1.6 + \epsilon)-approximation was claimed, for any \epsilon > 0. In this paper, we present a vertex-degree reduction technique, based on which we show that 2-Max-Duo can be approximated arbitrarily close to 1.4.
Mon, 04 Dec 2017 18:53:08 +0100

Shortcuts for the Circle
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8213
Let C be the unit circle in R^2. We can view C as a plane graph whose vertices are all the points on C, and the distance between any two points on C is the length of the smaller arc between them. We consider a graph augmentation problem on C, where we want to place k >= 1 shortcuts on C such that the diameter of the resulting graph is minimized. We analyze, for each k with 1 <= k <= 7, what the optimal set of shortcuts is. Interestingly, the minimum diameter one can obtain is not a strictly decreasing function of k. For example, with seven shortcuts one cannot obtain a smaller diameter than with six shortcuts. Finally, we prove that the optimal diameter is 2 + Theta(1/k^(2/3)) for any k.
Mon, 04 Dec 2017 18:53:08 +0100

Placing your Coins on a Shelf
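The metric in the circle abstract above is simply the smaller-arc length, under which the diameter of C without shortcuts is pi. A one-function sketch (Python; illustration of the metric only, not the paper's construction):

```python
from math import pi

def arc_dist(a, b):
    # Length of the smaller arc between points at angles a and b
    # on the unit circle. Without shortcuts the diameter is pi,
    # attained by diametrically opposite points.
    d = abs(a - b) % (2 * pi)
    return min(d, 2 * pi - d)
```

Adding a shortcut (a chord) replaces some of these arc paths by shorter chord-plus-arc paths, which is what drives the 2 + Theta(1/k^(2/3)) bound.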
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8214
We consider the problem of packing a family of disks 'on a shelf,' that is, such that each disk touches the x-axis from above and no two disks overlap. We prove that the problem of minimizing the distance between the leftmost point and the rightmost point of any disk is NP-hard. On the positive side, we show how to approximate this problem within a factor of 4/3 in O(n log n) time, and provide an O(n log n)-time exact algorithm for a special case, namely when the ratio between the largest and smallest radius is at most four.
Mon, 04 Dec 2017 18:53:08 +0100

Voronoi Diagrams for Parallel Halflines and Line Segments in Space
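An elementary fact underlying such shelf packings: if two disks of radii r and s both touch the x-axis from above and are tangent to each other, their centers are exactly 2*sqrt(r*s) apart horizontally, since (r+s)^2 = dx^2 + (r-s)^2. A minimal sketch of this geometry (Python; not the paper's algorithm):

```python
from math import sqrt

def center_gap(r, s):
    # Horizontal distance between the centers of two mutually tangent
    # disks resting on the x-axis: from (r+s)^2 = dx^2 + (r-s)^2
    # we get dx = 2*sqrt(r*s).
    return 2 * sqrt(r * s)
```

Because the gap grows only like the geometric mean of the radii, a small disk can tuck tightly next to a large one, which is a source of the problem's combinatorial difficulty.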
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8215
We consider the Euclidean Voronoi diagram for a set of n parallel halflines in 3-space. A relation of this diagram to planar power diagrams is shown and used to analyze its geometric and topological properties. Moreover, an easy-to-implement space sweep algorithm is proposed that computes the Voronoi diagram for parallel halflines at logarithmic cost per face. Previously, only an approximation algorithm for this problem was known. Our method of construction generalizes to Voronoi diagrams for parallel line segments, and to higher dimensions.
Mon, 04 Dec 2017 18:53:08 +0100

Conflict-Free Coloring of Intersection Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8216
A conflict-free k-coloring of a graph G=(V,E) assigns one of k different colors to some of the vertices such that, for every vertex v, there is a color that is assigned to exactly one vertex among v and v's neighbors. Such colorings have applications in wireless networking, robotics, and geometry, and are well studied in graph theory. Here we study the conflict-free coloring of geometric intersection graphs. We demonstrate that the intersection graph of n geometric objects without fatness properties and size restrictions may have conflict-free chromatic number in \Omega(log n/log log n), and in \Omega(\sqrt{\log n}) for disks or squares of different sizes; it is known for general graphs that the worst case is in \Theta(log^2 n). For unit-disk intersection graphs, we prove that it is NP-complete to decide the existence of a conflict-free coloring with one color; we also show that six colors always suffice, using an algorithm that colors unit disk graphs of restricted height with two colors. We conjecture that four colors are sufficient, which we prove for unit squares instead of unit disks. For interval graphs, we establish a tight worst-case bound of two.
Mon, 04 Dec 2017 18:53:08 +0100

Maximum Induced Matching Algorithms via Vertex Ordering Characterizations
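The conflict-free condition in the abstract above is easy to check mechanically: every closed neighborhood must contain some color exactly once. A sketch of such a checker (Python; illustrative, not from the paper), where uncolored vertices simply carry no color:

```python
def is_conflict_free(adj, color):
    # adj: vertex -> set of neighbors; color: partial map vertex -> color.
    # For every v, some color must occur exactly once among v and its neighbors.
    for v in adj:
        counts = {}
        for u in [v, *adj[v]]:
            c = color.get(u)
            if c is not None:
                counts[c] = counts.get(c, 0) + 1
        if not any(cnt == 1 for cnt in counts.values()):
            return False
    return True
```

Note that a single colored vertex can already serve an entire clique, which is why very few colors can suffice even for dense intersection graphs.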
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8217
We study the maximum induced matching problem on a graph G. Induced matchings correspond to independent sets in L^2(G), the square of the line graph of G. The problem is NP-complete on bipartite graphs. In this work, we show that for a number of graph families with forbidden vertex orderings, almost all forbidden patterns on three vertices are preserved when taking the square of the line graph. These orderings can be computed in linear time in the size of the input graph. In particular, given a graph class \mathcal{G} characterized by a vertex ordering, and a graph G=(V,E) \in \mathcal{G} with a corresponding vertex ordering \sigma of V, one can produce (in linear time in the size of G) an ordering on the vertices of L^2(G) that shows that L^2(G) \in \mathcal{G} - for a number of graph classes \mathcal{G} - without computing the line graph or the square of the line graph of G. These results generalize and unify previous ones on showing closure under L^2(\cdot) for various graph families. Furthermore, these orderings on L^2(G) can be exploited algorithmically to compute a maximum induced matching on G faster. We illustrate this latter fact in the second half of the paper, where we focus on cocomparability graphs, a large graph class that includes interval, permutation, trapezoid graphs, and cographs, and we present the first \mathcal{O}(mn) time algorithm to compute a maximum weighted induced matching on cocomparability graphs, an improvement over the best known \mathcal{O}(n^4) time algorithm for the unweighted case.
Mon, 04 Dec 2017 18:53:08 +0100

Approximate Nearest Neighbors Search Without False Negatives For l_2 For c>sqrt{loglog{n}}
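The correspondence stated in the abstract above is concrete: a set M of edges is an induced matching exactly when no two edges of M share an endpoint and no edge of G joins endpoints of two distinct edges of M, i.e., M is an independent set in L^2(G). A direct checker (Python; illustrative sketch):

```python
from itertools import combinations

def is_induced_matching(edges, M):
    # edges: iterable of 2-tuples (the graph); M: candidate edge set.
    # Induced matching = matching whose edges are pairwise at distance >= 2,
    # i.e., an independent set in the square of the line graph.
    E = {frozenset(e) for e in edges}
    M = [frozenset(e) for e in M]
    for e, f in combinations(M, 2):
        if e & f:                              # shared endpoint: not a matching
            return False
        for u in e:
            for v in f:
                if frozenset((u, v)) in E:     # an edge of G links the two matched edges
                    return False
    return True
```

The paper's contribution is computing vertex orderings of L^2(G) without ever materializing this quadratic-size graph.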
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8218
In this paper, we report progress on answering the open problem presented by Pagh [11], who considered the near neighbor search without false negatives for the Hamming distance. We show new data structures for solving the c-approximate near neighbors problem without false negatives for the Euclidean high-dimensional space \mathbb{R}^d. These data structures work for any c = \omega(\sqrt{\log{\log{n}}}), where n is the number of points in the input set, with poly-logarithmic query time and polynomial pre-processing time. This improves over the known algorithms, which require c to be \Omega(\sqrt{d}). The improvement is obtained by applying a sequence of reductions, which are interesting on their own. First, we reduce the problem to d instances of dimension logarithmic in n. Next, these instances are reduced to a number of c-approximate near neighbor search without false negatives instances in the space (\mathbb{R}^k)^L equipped with the metric m(x,y) = \max_{1 \le i \le L} \|x_i - y_i\|_2.
Mon, 04 Dec 2017 18:53:08 +0100

Faster Algorithms for Growing Prioritized Disks and Rectangles
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8219
Motivated by map labeling, we study the problem in which we are given a collection of n disks in the plane that grow at possibly different speeds. Whenever two disks meet, the one with the higher index disappears. This problem was introduced by Funke, Krumpe, and Storandt [IWOCA 2016]. We provide the first general subquadratic algorithm for computing the times and the order of disappearance. Our algorithm also works for other shapes (such as rectangles) and in any fixed dimension. Using quadtrees, we provide an alternative algorithm that runs in near-linear time, although this second algorithm has a logarithmic dependence on either the ratio of the fastest speed to the slowest speed of the disks or the spread of the disk centers (the ratio of the maximum to the minimum distance between them). Our result improves the running times of previous algorithms by Funke, Krumpe, and Storandt [IWOCA 2016], Bahrdt et al. [ALENEX 2017], and Funke and Storandt [EWCG 2017]. Finally, we give an \Omega(n log n) lower bound on the problem, showing that our quadtree algorithms are almost tight.
Mon, 04 Dec 2017 18:53:08 +0100

Envy-free Matchings with Lower Quotas
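For intuition on the growing-disks process above, here is the quadratic baseline the paper improves on (Python; a sketch under the natural reading that disk i has radius speeds[i]*t at time t, so two disks at distance D first touch at t = D/(v_i+v_j) regardless of the other disks):

```python
from math import dist

def elimination_times(centers, speeds):
    # Brute-force O(n^2 log n) baseline: enumerate all pairwise meeting
    # times, sweep them in order, and kill the higher-index disk whenever
    # both disks of a pair are still alive at their meeting time.
    n = len(centers)
    events = sorted(
        (dist(centers[i], centers[j]) / (speeds[i] + speeds[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    dead = {}
    for t, i, j in events:
        if i not in dead and j not in dead:
            dead[j] = t        # the disk with the higher index disappears
    return dead                # eliminated index -> disappearance time
```

The sweep is correct because a disk's radius depends only on time, not on other disks, so each pair's meeting time is fixed in advance; the paper's contribution is avoiding the Theta(n^2) pair enumeration.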
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8220
While every instance of the Hospitals/Residents problem admits a stable matching, the problem with lower quotas (HR-LQ) has instances with no stable matching. For such an instance, we expect the existence of an envy-free matching, a relaxation of a stable matching that preserves a kind of fairness property. In this paper, we investigate the existence of an envy-free matching in several settings in which hospitals have lower quotas. We first provide an algorithm that decides whether a given HR-LQ instance has an envy-free matching or not. Then, we consider envy-freeness in the Classified Stable Matching model due to Huang (2010), i.e., each hospital has lower and upper quotas on subsets of doctors. We show that, for this model, deciding the existence of an envy-free matching is NP-hard in general, but solvable in polynomial time if the quotas are paramodular.
Mon, 04 Dec 2017 18:53:08 +0100

Tilt Assembly: Algorithms for Micro-Factories that Build Objects with Uniform External Forces
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8221
We present algorithmic results for the parallel assembly of many micro-scale objects in two and three dimensions from tiny particles, which has been proposed in the context of programmable matter and self-assembly for building high-yield micro-factories. The underlying model has particles moving under the influence of uniform external forces until they hit an obstacle; particles can bond when being forced together with another appropriate particle. Due to the physical and geometric constraints, not all shapes can be built in this manner; this gives rise to the Tilt Assembly Problem (TAP) of deciding constructibility. For simply-connected polyominoes P in 2D consisting of N unit-squares ("tiles"), we prove that TAP can be decided in O(N log N) time. For the optimization variant MaxTAP (in which the objective is to construct a subshape of maximum possible size), we show polyAPX-hardness: unless P=NP, MaxTAP cannot be approximated within a factor of N^(1/3); for tree-shaped structures, we give an N^(1/2)-approximation algorithm. For the efficiency of the assembly process itself, we show that any constructible shape allows pipelined assembly, which produces copies of P in O(1) amortized time, i.e., N copies of P in O(N) time steps. These considerations can be extended to three-dimensional objects: For the class of polycubes P, we prove that it is NP-hard to decide whether it is possible to construct a path between two points of P; it is also NP-hard to decide constructibility of a polycube P. Moreover, it is expAPX-hard to maximize a path from a given start point.
Mon, 04 Dec 2017 18:53:08 +0100

Routing on the Visibility Graph
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8222
We consider the problem of routing on a network in the presence of line segment constraints (i.e., obstacles that edges in our network are not allowed to cross). Let P be a set of n vertices in the plane and let S be a set of line segments between the vertices in P, with no two line segments intersecting properly. We present two 1-local O(1)-memory routing algorithms on the visibility graph of P with respect to a set of constraints S (i.e., the algorithms never look beyond the direct neighbours of the current location and store only a constant amount of information). Contrary to all existing routing algorithms, our routing algorithms do not require us to compute a plane subgraph of the visibility graph in order to route on it.
Mon, 04 Dec 2017 18:53:08 +0100

Embedding Graphs into Embedded Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8223
A (possibly degenerate) drawing of a graph G in the plane is approximable by an embedding if it can be turned into an embedding by an arbitrarily small perturbation. We show that testing whether a drawing of a planar graph G in the plane is approximable by an embedding can be carried out in polynomial time if a desired embedding of G belongs to a fixed isotopy class, i.e., the rotation system (or equivalently the faces) of the embedding of G and the choice of outer face are fixed. In other words, we show that c-planarity with embedded pipes is tractable for graphs with fixed embeddings. To the best of our knowledge, an analogous result was previously known essentially only when G is a cycle.
Mon, 04 Dec 2017 18:53:08 +0100

Jointly Stable Matchings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8224
In the stable marriage problem, we are given a set of men, a set of women, and each person's preference list. Our task is to find a stable matching, that is, a matching admitting no (man, woman) pair, matched to others or unmatched, both of whom would improve their situation by being matched together. It is known that any instance admits at least one stable matching. In this paper, we consider a natural extension in which k (>= 2) sets of preference lists L_i (1 <= i <= k) over the same set of people are given, and the aim is to find a jointly stable matching, a matching that is stable with respect to all L_i. We show that the decision problem is NP-complete already for k=2, even if each person's preference list has length at most four, while it is solvable in linear time for any k if each man's preference list has length at most two (women's lists can be of unbounded length). We also show that if each woman's preference lists are the same in all L_i, then the problem can be solved in linear time.
Mon, 04 Dec 2017 18:53:08 +0100

Study of a Combinatorial Game in Graphs Through Linear Programming
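Joint stability as defined above is just stability checked against every preference profile simultaneously. A checker sketch (Python; illustrative, assuming the matching dict maps both directions, man->woman and woman->man, and that being unmatched is worse than any listed partner):

```python
def is_stable(matching, men_pref, women_pref):
    # No blocking pair: a man m and woman w, not matched together,
    # who both strictly prefer each other to their current partners.
    def prefers(pref, person, a, b):
        lst = pref[person]
        ra = lst.index(a) if a in lst else len(lst)   # unlisted/None = worst
        rb = lst.index(b) if b in lst else len(lst)
        return ra < rb
    for m in men_pref:
        for w in men_pref[m]:
            if matching.get(m) == w:
                continue
            if prefers(men_pref, m, w, matching.get(m)) and \
               prefers(women_pref, w, m, matching.get(w)):
                return False
    return True

def is_jointly_stable(matching, profiles):
    # Stable with respect to every (men_pref, women_pref) profile L_i.
    return all(is_stable(matching, mp, wp) for mp, wp in profiles)
```

The hardness result says that, already for two profiles, deciding whether any matching passes this joint test is NP-complete.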
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8225
In the Spy Game played on a graph G, a single spy travels the vertices of G at speed s, while multiple slow guards strive to have, at all times, one of them within distance d of that spy. In order to determine the smallest number of guards necessary for this task, we analyze the game through a Linear Programming formulation and the fractional strategies it yields for the guards. We then show the equivalence of fractional and integral strategies in trees. This allows us to design a polynomial-time algorithm for computing an optimal strategy in this class of graphs. Using duality in Linear Programming, we also provide non-trivial bounds on the fractional guard number of grids and tori. We believe that the approach using fractional relaxation and Linear Programming is promising for obtaining new results in the field of combinatorial games.
Mon, 04 Dec 2017 18:53:08 +0100

Dominance Product and High-Dimensional Closest Pair under L_infty
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8226
Given a set S of n points in \mathbb{R}^d, the Closest Pair problem is to find a pair of distinct points in S at minimum distance. When d is constant, there are efficient algorithms that solve this problem, and fast approximate solutions exist for general d. However, obtaining an exact solution in very high dimensions seems to be much less understood. We consider the high-dimensional L_\infty Closest Pair problem, where d = n^r for some r > 0 and the underlying metric is L_\infty. We improve and simplify previous results for L_\infty Closest Pair, showing that it can be solved by a deterministic strongly-polynomial algorithm that runs in O(DP(n,d)\log n) time, and by a randomized algorithm that runs in O(DP(n,d)) expected time, where DP(n,d) is the time bound for computing the dominance product for n points in \mathbb{R}^d. That is the matrix D such that D[i,j] = \bigl| \{k \mid p_i[k] \leq p_j[k]\} \bigr|, the number of coordinates at which p_j dominates p_i. For integer coordinates from some interval [-M, M], we obtain an algorithm that runs in \tilde{O}\left(\min\{Mn^{\omega(1,r,1)},\, DP(n,d)\}\right) time, where \omega(1,r,1) is the exponent of multiplying an n \times n^r matrix by an n^r \times n matrix. We also give slightly better bounds for DP(n,d) by using more recent rectangular matrix multiplication bounds. Computing the dominance product itself is an important task, since it is applied in many algorithms as a major black-box ingredient, such as algorithms for APBP (all pairs bottleneck paths) and variants of APSP (all pairs shortest paths).
Mon, 04 Dec 2017 18:53:08 +0100

On the Convergence Time of a Natural Dynamics for Linear Programming
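The dominance product defined in the abstract above has an obvious O(n^2 d) baseline, which is what the fast-matrix-multiplication bounds DP(n,d) improve on. A reference sketch (Python; illustrative baseline, not the paper's algorithm):

```python
def dominance_product(points):
    # D[i][j] = number of coordinates k with points[i][k] <= points[j][k],
    # i.e., how many coordinates at which p_j dominates p_i.
    n = len(points)
    d = len(points[0])
    D = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            D[i][j] = sum(points[i][k] <= points[j][k] for k in range(d))
    return D
```

Note that D[i][i] = d always, and D[i][j] + D[j][i] >= d, with equality exactly when no coordinates tie.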
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8227
We consider a system of nonlinear ordinary differential equations for the solution of linear programming (LP) problems that was first proposed in the mathematical biology literature as a model for the foraging behavior of the acellular slime mold Physarum polycephalum, and more recently considered as a method to solve LP instances. We study the convergence time of the continuous Physarum dynamics in the context of the linear programming problem, and derive a new time bound to approximate optimality that depends on the relative entropy between projected versions of the optimal point and of the initial point. The bound scales logarithmically with the LP cost coefficients and linearly with the inverse of the relative accuracy, establishing the efficiency of the dynamics for arbitrary LP instances with positive costs.
Mon, 04 Dec 2017 18:53:08 +0100

On Maximal Cliques with Connectivity Constraints in Directed Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8228
Finding communities in the form of cohesive subgraphs is a fundamental problem in network analysis. In domains that model networks as undirected graphs, communities are generally associated with dense subgraphs, and many community models have been proposed. Maximal cliques are arguably the most widely studied among such models, with early works dating back to the '60s, and a continuous stream of research up to the present. In domains that model networks as directed graphs, several approaches for community detection have been proposed, but there seems to be no clear model of cohesive subgraph, i.e., of what a community should look like. We extend the fundamental model of clique to directed graphs, adding the natural constraint of strong connectivity within the clique. We characterize the problem by giving a tight bound for the number of such cliques in a graph, and highlighting useful structural properties. We then exploit these properties to produce the first algorithm with polynomial delay for enumerating maximal strongly connected cliques.
Mon, 04 Dec 2017 18:53:08 +0100

Improved Algorithms for Scheduling Unsplittable Flows on Paths
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8229
In this paper, we investigate offline and online algorithms for Round-UFPP, the problem of minimizing the number of rounds required to schedule a set of unsplittable flows of non-uniform sizes on a given path with non-uniform edge capacities. Round-UFPP is NP-hard, and constant-factor approximation algorithms are known under the no bottleneck assumption (NBA), which stipulates that the maximum size of a flow is at most the minimum edge capacity. We study Round-UFPP without the NBA, and present improved online and offline algorithms. We first study offline Round-UFPP for a restricted class of instances called alpha-small, where the size of each flow is at most alpha times the capacity of its bottleneck edge, and present an O(log(1/(1 - alpha)))-approximation algorithm. Our main result is an online O(log log c_max)-competitive algorithm for Round-UFPP for general instances, where c_max is the largest edge capacity, improving upon the previous best bound of O(log c_max) due to [16]. Our result leads to an offline O(min(log n, log m, log log c_max))-approximation algorithm and an online O(min(log m, log log c_max))-competitive algorithm for Round-UFPP, where n is the number of flows and m is the number of edges.
Mon, 04 Dec 2017 18:53:08 +0100

Independent Feedback Vertex Set for P_5-free Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8230
The NP-complete problem Feedback Vertex Set is to decide if it is possible, for a given integer k>=0, to delete at most k vertices from a given graph so that what remains is a forest. The variant in which the deleted vertices must form an independent set is called Independent Feedback Vertex Set and is also NP-complete. In fact, even deciding if an independent feedback vertex set exists is NP-complete, and this problem is closely related to the 3-Colouring problem, or equivalently, to the problem of deciding if a graph has an independent odd cycle transversal, that is, an independent set of vertices whose deletion makes the graph bipartite. We initiate a systematic study of the complexity of Independent Feedback Vertex Set for H-free graphs. We prove that it is NP-complete if H contains a claw or a cycle. Tamura, Ito and Zhou proved that it is polynomial-time solvable for P_4-free graphs. We show that it remains in P for P_5-free graphs. We prove analogous results for the Independent Odd Cycle Transversal problem, which asks if a graph has an independent odd cycle transversal of size at most k for a given integer k>=0.
Mon, 04 Dec 2017 18:53:08 +0100

Range-Efficient Consistent Sampling and Locality-Sensitive Hashing for Polygons
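The object defined in the abstract above is easy to verify even though it is hard to find: S must be independent, and G - S must be acyclic. A verification sketch using union-find for cycle detection (Python; illustrative, not from the paper):

```python
def is_independent_fvs(n, edges, S):
    # S is an independent feedback vertex set of G = ({0..n-1}, edges) iff
    # (1) no edge joins two vertices of S (independence), and
    # (2) the graph induced on V \ S is a forest (no cycle survives).
    S = set(S)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        if u in S and v in S:
            return False                    # S is not independent
        if u in S or v in S:
            continue                        # edge removed along with S
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # cycle in G - S
        parent[ru] = rv
    return True
```

The same checker with condition (2) replaced by "G - S is bipartite" would verify an independent odd cycle transversal.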
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8231
Locality-sensitive hashing (LSH) is a fundamental technique for similarity search and similarity estimation in high-dimensional spaces. The basic idea is that similar objects should produce hash collisions with probability significantly larger than objects with low similarity. We consider LSH for objects that can be represented as point sets in either one or two dimensions. To make the point sets finite, we consider the subset of points on a grid. Directly applying LSH (e.g., min-wise hashing) to these point sets would require time proportional to the number of points. We seek to achieve time that is much lower than such direct approaches. Technically, we introduce new primitives for range-efficient consistent sampling (of independent interest) and show how to turn such samples into LSH values. Another application of our technique is a data structure for quickly estimating the size of the intersection or union of a set of preprocessed polygons. Curiously, our consistent sampling method uses a transformation to a geometric problem.
Mon, 04 Dec 2017 18:53:08 +0100

Fast Compressed Self-Indexes with Deterministic Linear-Time Construction
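The "direct approach" the abstract above seeks to beat is plain min-wise hashing, which touches every point of the set once per hash function. A minimal sketch (Python; the linear hash family is an illustrative choice, not the paper's construction):

```python
import random

def make_hash_funcs(t, p=2**61 - 1, seed=0):
    # t random linear hash functions x -> (a*x + b) mod p (illustrative).
    rng = random.Random(seed)
    return [
        (lambda x, a=rng.randrange(1, p), b=rng.randrange(p): (a * x + b) % p)
        for _ in range(t)
    ]

def minhash_signature(s, hash_funcs):
    # Min-wise hashing: keep the minimum hash value per function.
    # Two sets agree on a coordinate with probability equal to their
    # Jaccard similarity |A ∩ B| / |A ∪ B|.
    return [min(h(x) for x in s) for h in hash_funcs]

def estimate_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Each signature costs O(|s|) per hash function, i.e., time proportional to the number of points; range-efficient consistent sampling replaces this per-point work when the set is a union of grid ranges.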
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8232
We introduce a compressed suffix array representation that, on a text T of length n over an alphabet of size \sigma, can be built in O(n) deterministic time, within O(n\log\sigma) bits of working space, and counts the number of occurrences of any pattern P in T in time O(|P| + \log\log_w \sigma) on a RAM machine of w=\Omega(\log n)-bit words. This new index outperforms all the other compressed indexes that can be built in linear deterministic time, and some others. The only faster indexes can be built in linear time only in expectation, or require \Theta(n\log n) bits.
Mon, 04 Dec 2017 18:53:08 +0100

An FPTAS of Minimizing Total Weighted Completion Time on Single Machine with Position Constraint
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8233
In this paper we study the classical scheduling problem of minimizing the total weighted completion time on a single machine with the constraint that one specific job must be scheduled at a specified position. We give dynamic programs with pseudo-polynomial running time, and a fully polynomial-time approximation scheme (FPTAS).
Mon, 04 Dec 2017 18:53:08 +0100

Non-Crossing Geometric Steiner Arborescences
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8234
Motivated by the question of simultaneous embedding of several flow maps, we consider the problem of drawing multiple geometric Steiner arborescences with no crossings in the rectilinear and in the angle-restricted setting. When terminal-to-root paths are allowed to turn freely, we show that two rectilinear Steiner arborescences have a non-crossing drawing if neither tree necessarily completely disconnects the other tree and if the roots of both trees are "free". If the roots are not free, then we can reduce the decision problem to 2SAT. If terminal-to-root paths are allowed to turn only at Steiner points, then it is NP-hard to decide whether multiple rectilinear Steiner arborescences have a non-crossing drawing. The setting of angle-restricted Steiner arborescences is more subtle than the rectilinear case. Our NP-hardness result extends, but testing whether there exists a non-crossing drawing if the roots of both trees are free requires additional conditions to be fulfilled.
Mon, 04 Dec 2017 18:53:08 +0100

Settlement Fund Circulation Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8235
In economic activities, the central bank has an important role: to cover payments of banks when they are short of funds to clear their debts. For this purpose, the central bank puts in funds in a timely manner so that economic activities run smoothly. Since payments in this mechanism are processed sequentially, the total amount of funds put in by the central bank critically depends on the order of the payments. The question is then how much to prepare, both when the order of the payments can be controlled by the central bank and when it is determined under a worst-case scenario. This motivates us to introduce a brand-new problem, which we call the settlement fund circulation problem. The problems are formulated as follows: Let G=(V,A) be a directed multigraph with a vertex set V and an arc set A. Each arc a\in A is endowed with a debt d(a)\ge 0, and the debts are settled sequentially under a sequence \pi of arcs. Each vertex v\in V is supplied funds in the amount of p_{\pi}(v)\ge 0 under the sequence. The minimum/maximum settlement fund circulation problem (Min-SFC/Max-SFC) in a given graph G with debts d: A\rightarrow \mathbb{R}_{+}\cup \{0\} asks to find a bijection \pi:A\to \{1,2,\dots,|A|\} that minimizes/maximizes the total funds \sum_{v\in V}p_{\pi}(v). In this paper, we show that both Min-SFC and Max-SFC are NP-hard; in particular, Min-SFC is (I) strongly NP-hard even if G is (i) a multigraph with |V|=2 or (ii) a simple graph with treewidth at most two, and is (II) (not necessarily strongly) NP-hard for simple trees of diameter four, while it is solvable in polynomial time for stars. Also, we identify several polynomial time solvable cases for both problems.
Mon, 04 Dec 2017 18:53:08 +0100

Improved Bounds for Online Dominating Sets of Trees
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8236
The online dominating set problem is an online variant of the minimum dominating set problem, which is one of the most important NP-hard problems on graphs. This problem is defined as follows: we are given an undirected graph G = (V, E), where V is a set of vertices and E is a set of edges. We say that a set D \subseteq V of vertices is a dominating set of G if for each v \in V \setminus D, there exists a vertex u \in D such that {u, v} \in E. The vertices are revealed to an online algorithm one by one over time. When a vertex is revealed, edges between the vertex and vertices revealed in the past are also revealed. The revealed subtree is connected at all times. Immediately after the revelation of each vertex, an online algorithm can irrevocably choose vertices that were already revealed, and must maintain a dominating set of the graph revealed so far. The cost of an algorithm on a given tree is the number of vertices chosen by it, and its objective is to minimize the cost. Eidenbenz (Technical report, Institute of Theoretical Computer Science, ETH Zurich, 2002) and Boyar et al. (SWAT 2016) studied the case in which the given graphs are trees. They designed a deterministic online algorithm whose competitive ratio is at most three, and proved that a lower bound on the competitive ratio of any deterministic algorithm is two. In this paper, we also focus on trees. We establish a matching lower bound for any deterministic algorithm. Moreover, we design a randomized online algorithm whose competitive ratio is at most 5/2 = 2.5, and show that the competitive ratio of any randomized algorithm is at least 4/3 \approx 1.333.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8236Routing in Polygonal Domains
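To make the online setting concrete, here is a minimal sketch of an online dominator on a tree revealed vertex by vertex, using a natural parent-greedy rule. It is an illustrative heuristic in the spirit of the setting, not necessarily the competitive algorithms of Eidenbenz or Boyar et al.

```python
class OnlineDominator:
    """Maintain a dominating set of a tree revealed one vertex at a
    time. Parent-greedy rule (our illustrative heuristic): when a
    newly revealed vertex is not yet dominated, irrevocably select
    its parent, or the vertex itself if it is the root. The chosen
    set only grows, so domination of earlier vertices is preserved.
    """
    def __init__(self):
        self.adj = {}        # revealed tree, adjacency lists
        self.chosen = set()  # current dominating set

    def _dominated(self, v):
        return v in self.chosen or any(u in self.chosen for u in self.adj[v])

    def reveal(self, v, parent=None):
        self.adj[v] = []
        if parent is not None:
            self.adj[v].append(parent)
            self.adj[parent].append(v)
        if not self._dominated(v):
            self.chosen.add(parent if parent is not None else v)
```

On the path 1-2-3 revealed in order, the rule picks {1, 2}, maintaining a dominating set after every revelation.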
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8237
We consider the problem of routing a data packet through the visibility graph of a polygonal domain P with n vertices and h holes. We may preprocess P to obtain a label and a routing table for each vertex. Then, we must be able to route a data packet between any two vertices p and q of P, where each step must use only the label of the target node q and the routing table of the current node. For any fixed eps > 0, we present a routing scheme that always achieves a routing path that exceeds the shortest path by a factor of at most 1 + eps. The labels have O(log n) bits, and the routing tables are of size O((eps^{-1} + h) log n). The preprocessing time is O(n^2 log n + hn^2 + eps^{-1}hn). It can be improved to O(n^2 + eps^{-1}n) for simple polygons.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8237Smart Contract Execution - the (+-)-Biased Ballot Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8238
Transaction systems built on top of blockchains, especially smart contracts, are becoming an important part of the world economy. However, there is a lack of formal study on the behavior of users in these systems, which leaves the correctness and security of such systems without a solid foundation. Unlike mining, in which the reward for mining a block is fixed, different execution results of a smart contract may lead to significantly different payoffs for users, which gives more incentive for some users to follow a branch that contains a wrong result, even if the branch is shorter. It is thus important to understand the exact probability that a branch is selected by the system. We formulate this problem as the (+-)-Biased Ballot Problem as follows: there are n voters voting one by one for either of two candidates A and B. The probability of a voter voting for A or B depends on whether the difference between the current votes of A and B is positive or negative. Our model takes into account the behavior of three different kinds of users when a branch occurs in the system -- users having a preference for a certain branch based on the history of their transactions, and users being indifferent and simply following the longest chain. We study two important probabilities that are closely related to a blockchain-based system - the probability that A wins in the end, and the probability that A receives d votes first. We show how to recursively calculate the two probabilities for any fixed n and d, and also discuss their asymptotic values when n and d are sufficiently large.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8238Orthogonal Vectors Indexing
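For small n, the recursive calculation the abstract alludes to can be sketched as a direct dynamic program over A's current lead. The parameter names (p_plus, p_minus, p_zero for the probability that the next vote goes to A when the lead is positive, negative, or zero) are our assumptions about the model, not the paper's notation.

```python
def prob_A_wins(n, p_plus, p_minus, p_zero):
    """Probability that A ends with strictly more votes than B after
    n votes, when the chance that the next vote goes to A depends on
    the sign of A's current lead. Plain DP over (votes cast, lead);
    a sketch of the recursive calculation, not the paper's analysis.
    """
    from collections import defaultdict
    dist = {0: 1.0}                      # lead of A -> probability
    for _ in range(n):
        nxt = defaultdict(float)
        for lead, pr in dist.items():
            pA = p_plus if lead > 0 else p_minus if lead < 0 else p_zero
            nxt[lead + 1] += pr * pA          # next vote goes to A
            nxt[lead - 1] += pr * (1 - pA)    # next vote goes to B
        dist = nxt
    return sum(pr for lead, pr in dist.items() if lead > 0)
```

With all probabilities 1/2 the model degenerates to the unbiased ballot walk, which gives a quick sanity check (e.g. probability 1/4 that A leads after two votes).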
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8239
In recent years, intensive research work has been dedicated to proving conditional lower bounds in order to reveal the inner structure of the class P. These conditional lower bounds are based on many popular conjectures on well-studied problems. One of the most heavily used conjectures is the celebrated Strong Exponential Time Hypothesis (SETH). It turns out that conditional hardness proved based on SETH goes, in many cases, through an intermediate problem - the Orthogonal Vectors (OV) problem. Almost all research work regarding conditional lower bounds has concentrated on time complexity. Very little attention was directed toward space complexity. In a recent work, Goldstein et al. [WADS '17] set the stage for proving conditional lower bounds regarding space and its interplay with time. In this spirit, it is tempting to investigate the space complexity of a data structure variant of OV which is called OV indexing. In this problem n boolean vectors of size c log n are given for preprocessing. As a query, a vector v is given and we are required to verify whether there is an input vector that is orthogonal to it or not. This OV indexing problem is interesting in its own right, but it is also likely to have strong implications for problems known to be conditionally hard, in terms of time complexity, based on OV. Having this in mind, we study OV indexing in this paper from many aspects. We give some space-efficient algorithms for the problem, show a tradeoff between space and query time, describe how to solve its reporting variant, shed light on an interesting connection between this problem and the well-studied SetDisjointness problem and demonstrate how it can be solved more efficiently on random input.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8239A New Balanced Subdivision of a Simple Polygon for Time-Space Trade-off Algorithms
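To pin down the problem statement, here is the trivial linear-scan-per-query baseline that the space-efficient structures in the paper improve upon, using machine-word bitsets. This is a sketch of the problem semantics only, not the paper's data structure.

```python
def build_ov_index(vectors):
    """Baseline OV index: store each boolean input vector as an
    integer bitmask. A query vector has an orthogonal partner iff
    some stored mask shares no set bit with the query's mask.
    O(n) work per query -- the trivial bound the paper's
    space/time tradeoffs are measured against.
    """
    masks = [sum(1 << i for i, b in enumerate(vec) if b) for vec in vectors]

    def query(v):
        qm = sum(1 << i for i, b in enumerate(v) if b)
        # orthogonal over {0,1} means no coordinate where both are 1
        return any(m & qm == 0 for m in masks)

    return query
```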
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8240
We are given a read-only memory for input and a write-only stream for output. For a positive integer parameter s, an s-workspace algorithm is an algorithm using only O(s) words of workspace in addition to the memory for input. In this paper, we present an O(n^2/s)-time s-workspace algorithm for subdividing a simple polygon into O(\min\{n/s,s\}) subpolygons of complexity O(\max\{n/s,s\}). As applications of the subdivision, the previously best known time-space trade-offs for the following three geometric problems are improved immediately: (1) computing the shortest path between two points inside a simple n-gon, (2) computing the shortest path tree from a point inside a simple n-gon, (3) computing a triangulation of a simple n-gon. In addition, we improve the algorithm for the second problem even further.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8240Finding Pairwise Intersections of Rectangles in a Query Rectangle
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8241
We consider the following problem: Preprocess a set S of n axis-parallel boxes in \mathbb{R}^d so that given a query of an axis-parallel box Q in \mathbb{R}^d, the pairs of boxes of S whose intersection intersects the query box can be reported efficiently. For the case that d=2, we present a data structure of size O(n\log n) supporting O(\log n+k) query time, where k is the size of the output. This improves the previously best known result by de Berg et al., which requires O(\log n\log^* n+ k\log n) query time using O(n\log n) space. There has been no known result for this problem for higher dimensions, except that for d=3, the best known data structure supports O(\sqrt{n}+k\log^2\log^* n) query time using O(n\sqrt {n}\log n) space. For a constant d>2, we present a data structure supporting O(n^{1-\delta}\log^{d-1} n + k \polylog n) query time for any constant 1/d\leq\delta<1. The size of the data structure is O(n^{\delta d}\log n) if 1/d\leq\delta<1/2, or O(n^{\delta d-2\delta+1}) if 1/2\leq \delta<1. Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8241Satisfiability Algorithm for Syntactic Read-$k$-times Branching Programs
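The query semantics can be made concrete with a quadratic brute-force baseline; the paper's data structures exist precisely to avoid this pairwise scan, so this is for illustration only.

```python
def pairs_intersecting_query(boxes, Q):
    """Report index pairs (i, j) of axis-parallel boxes whose
    intersection meets the query box Q. A box is a tuple of
    per-dimension intervals ((x1, x2), (y1, y2), ...) with x1 <= x2.
    Quadratic reference implementation of the query, not the
    O(log n + k)-time structure from the paper.
    """
    def inter(a, b):
        # intersection of two boxes, or None if it is empty
        box = tuple((max(l1, l2), min(h1, h2))
                    for (l1, h1), (l2, h2) in zip(a, b))
        return box if all(l <= h for l, h in box) else None

    out = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            common = inter(boxes[i], boxes[j])
            if common is not None and inter(common, Q) is not None:
                out.append((i, j))
    return out
```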
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8242
The satisfiability problem for a given branching program asks to determine whether there exists a consistent path from the root to the 1-sink. In a syntactic read-k-times branching program, each variable appears at most k times in any path from the root to a sink. We provide a satisfiability algorithm for syntactic read-k-times branching programs with n variables and m edges that runs in time O\left(\poly(n, m^{k^2})\cdot 2^{(1-\mu(k))n}\right), where \mu(k) = \frac{1}{4^{k+1}}. Our algorithm is based on the decomposition technique shown by Borodin, Razborov and Smolensky [Computational Complexity, 1993]. Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8242Non-approximability and Polylogarithmic Approximations of the Single-Sink Unsplittable and Confluent Dynamic Flow Problems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8243
Dynamic Flows were introduced by Ford and Fulkerson in 1958 to model flows over time. They define edge capacities to be the total amount of flow that can enter an edge in one time unit. Each edge also has a length, representing the time needed to traverse it. Dynamic Flows have been used to model many problems including traffic congestion, hop-routing of packets and evacuation protocols in buildings. While the basic problem of moving the maximal amount of supplies from sources to sinks is polynomial time solvable, natural minor modifications can make it NP-hard. One such modification is that flows be confluent, i.e., all flows leaving a vertex must leave along the same edge. This corresponds to natural conditions in, e.g., evacuation planning and hop routing. We investigate the single-sink Confluent Quickest Flow problem. The input is a graph with edge capacities and lengths, sources with supplies and a sink. The problem is to find a confluent flow minimizing the time required to send supplies to the sink. Our main results include: a) Logarithmic Non-Approximability: Directed Confluent Quickest Flows cannot be approximated in polynomial time with an O(\log n) approximation factor, unless P=NP. b) Polylogarithmic Bicriteria Approximations: Polynomial time (O(\log^8 n), O(\log^2 \kappa)) bicriteria approximation algorithms for the Confluent Quickest Flow problem where \kappa is the number of sinks, in both directed and undirected graphs. Corresponding results are also developed for the Confluent Maximum Flow over time problem. The techniques developed also improve recent approximation algorithms for static confluent flows.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8243Structural Parameters, Tight Bounds, and Approximation for (k,r)-Center
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8244
In (k,r)-Center we are given a (possibly edge-weighted) graph and are asked to select at most k vertices (centers), so that all other vertices are at distance at most r from a center. In this paper we provide a number of tight fine-grained bounds on the complexity of this problem with respect to various standard graph parameters. Specifically: - For any r>=1, we show an algorithm that solves the problem in O*((3r+1)^cw) time, where cw is the clique-width of the input graph, as well as a tight SETH lower bound matching this algorithm's performance. As a corollary, for r=1, this closes the gap that previously existed on the complexity of Dominating Set parameterized by cw. - We strengthen previously known FPT lower bounds, by showing that (k,r)-Center is W[1]-hard parameterized by the input graph's vertex cover (if edge weights are allowed), or feedback vertex set, even if k is an additional parameter. Our reductions imply tight ETH-based lower bounds. Finally, we devise an algorithm parameterized by vertex cover for unweighted graphs. - We show that the complexity of the problem parameterized by tree-depth is 2^Theta(td^2) by showing an algorithm of this complexity and a tight ETH-based lower bound. We complement these mostly negative results by providing FPT approximation schemes parameterized by clique-width or treewidth which work efficiently independently of the values of k,r. In particular, we give algorithms which, for any epsilon>0, run in time O*((tw/epsilon)^O(tw)), O*((cw/epsilon)^O(cw)) and return a (k,(1+epsilon)r)-center, if a (k,r)-center exists, thus circumventing the problem's W-hardness. Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8244An Efficient Fixed-Parameter Algorithm for the 2-Plex Bipartition Problem
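The (k,r)-Center condition itself is easy to verify with a multi-source BFS; the following checker makes the unweighted definition concrete (a verifier only, not one of the parameterized algorithms discussed above).

```python
from collections import deque

def is_kr_center(adj, centers, r):
    """Check whether `centers` is a valid (k, r)-center of an
    unweighted graph given as an adjacency dict: every vertex must
    lie within distance r of some center. Runs a multi-source BFS
    truncated at depth r.
    """
    dist = {c: 0 for c in centers}
    q = deque(centers)
    while q:
        u = q.popleft()
        if dist[u] == r:      # do not expand past radius r
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return all(v in dist for v in adj)
```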
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8245
Given a graph G=(V, E), an s-plex S\subseteq V is a vertex subset such that for each v\in S the degree of v in G[S] is at least |S|-s. An s-plex bipartition \mathcal{P}=(V_1, V_2) is a bipartition of G=(V, E), V=V_1\uplus V_2, satisfying that both V_1 and V_2 are s-plexes. Given an instance G=(V, E) and a parameter k, the s-Plex Bipartition problem asks whether there exists an s-plex bipartition of G such that \min\{|V_1|, |V_2|\}\leq k. The s-Plex Bipartition problem is NP-complete. However, it is still open whether this problem is fixed-parameter tractable. In this paper, we give a fixed-parameter algorithm for 2-Plex Bipartition running in time O*(2.4143^k). A graph G = (V, E) is called defective (p, d)-colorable if it admits a vertex coloring with p colors such that each color class in G induces a subgraph of maximum degree at most d. A graph G admits an s-plex bipartition if and only if the complement graph of G, \bar{G}, admits a defective (2, s-1)-coloring such that one of the two color classes is of size at most k. By applying our fixed-parameter algorithm as a subroutine, one can find a defective (2,1)-coloring with one of the two colors of minimum cardinality for a given graph in O*(1.5539^n) time, where n is the number of vertices in the input graph.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8245On Directed Covering and Domination Problems
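The s-plex and s-plex-bipartition conditions can be checked directly from the definitions above; a small sketch:

```python
def is_s_plex(adj, S, s):
    """Check the s-plex condition: in the subgraph induced by S,
    every vertex of S has degree at least |S| - s."""
    S = set(S)
    return all(len(S & set(adj[v])) >= len(S) - s for v in S)

def is_s_plex_bipartition(adj, V1, V2, s):
    """(V1, V2) is an s-plex bipartition iff it partitions the
    vertex set and both sides are s-plexes."""
    assert set(V1) | set(V2) == set(adj) and not set(V1) & set(V2)
    return is_s_plex(adj, V1, s) and is_s_plex(adj, V2, s)
```

For example, splitting the 4-cycle into its two edges gives a 1-plex bipartition, while splitting it into the two non-adjacent pairs only satisfies the weaker 2-plex condition.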
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8246
In this paper, we study covering and domination problems on directed graphs. Although undirected Vertex Cover and Edge Dominating Set are well-studied classical graph problems, the directed versions have not been studied much due to the lack of clear definitions. We give natural definitions for Directed r-In (Out) Vertex Cover and Directed (p,q)-Edge Dominating Set as directed generalizations of Vertex Cover and Edge Dominating Set. For these problems, we show that (1) Directed r-In (Out) Vertex Cover and Directed (p,q)-Edge Dominating Set are NP-complete on planar directed acyclic graphs except when r=1 or (p,q)=(0,0), (2) if r>=2, Directed r-In (Out) Vertex Cover is W[2]-hard and (c*ln k)-inapproximable on directed acyclic graphs, (3) if either p or q is greater than 1, Directed (p,q)-Edge Dominating Set is W[2]-hard and (c*ln k)-inapproximable on directed acyclic graphs, (4) all problems can be solved in polynomial time on trees, and (5) Directed (0,1)-, (1,0)-, and (1,1)-Edge Dominating Set are fixed-parameter tractable in general graphs. The first result implies that (directed) r-Dominating Set on directed line graphs is NP-complete even if r=1.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8246Hybrid VCSPs with Crisp and Valued Conservative Templates
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8247
A constraint satisfaction problem (CSP) is a problem of computing a homomorphism R -> G between two relational structures, e.g. between two directed graphs. Analyzing its complexity has been a very fruitful research direction, especially for fixed template CSPs (or, non-uniform CSPs), denoted CSP(G), in which the right side structure G is fixed and the left side structure R is unconstrained. Recently, the hybrid setting, written CSP_H(G), where both sides are restricted simultaneously, attracted some attention. It assumes that R is taken from a class of relational structures H (called the structural restriction) that additionally is closed under inverse homomorphisms. The last property allows one to exploit an algebraic machinery that has been developed for fixed template CSPs. The key concept that connects hybrid CSPs with fixed-template CSPs is the so-called lifted language. Namely, this is a constraint language G_R that can be constructed from an input R. The tractability of the language G_R for any input R from H is a necessary condition for the tractability of the hybrid problem. In the first part we investigate templates G for which the latter condition is not only necessary, but also sufficient. We call such templates G widely tractable. For this purpose, we construct from G a new finite relational structure G' and define a maximal structural restriction H_0 as a class of structures homomorphic to G'. For the so-called strongly BJK templates, which probably capture all templates, we prove that wide tractability is equivalent to the tractability of CSP_{H_0}(G). Our proof is based on the key observation that R is homomorphic to G' if and only if the core of G_R is preserved by a Siggers polymorphism. An analogous result is shown for conservative valued CSPs.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8247An Efficient Sum Query Algorithm for Distance-based Locally Dominating Functions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8248
In this paper, we consider the following sum query problem: Given a point set P in R^d, and a distance-based function f(p,q) (i.e. a function of the distance between p and q) satisfying some general properties, the goal is to develop a data structure and a query algorithm for efficiently computing a (1+epsilon)-approximate solution to the sum sum_{p in P} f(p,q) for any query point q in R^d and any small constant epsilon>0. Existing techniques for this problem are mainly based on core-set techniques, which often have difficulty dealing with functions with the local domination property. Based on several new insights into this problem, we develop in this paper a novel technique to overcome these difficulties. Our algorithm is capable of answering queries with high success probability in time no more than ~O_{epsilon,d}(n^{0.5 + c}), and the underlying data structure can be constructed in ~O_{epsilon,d}(n^{1+c}) time for any c>0, where the hidden constant has only polynomial dependence on 1/epsilon and d. Our technique is simple and can be easily implemented for practical purposes.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8248On the Number of p4-Tilings by an n-Omino
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8249
A plane tiling by the copies of a polyomino is called isohedral if every pair of copies in the tiling has a symmetry of the tiling that maps one copy to the other. We show that, for every n-omino (i.e., a polyomino consisting of n cells), the number of non-equivalent isohedral tilings generated by 90 degree rotations, so-called p4-tilings or quarter-turn tilings, is bounded by a constant (independent of n). The proof relies on the analysis of the factorization of the boundary word of a polyomino.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8249Dynamic Conflict-Free Colorings in the Plane
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8250
We study dynamic conflict-free colorings in the plane, where the goal is to maintain a conflict-free coloring (CF-coloring for short) under insertions and deletions. - First we consider CF-colorings of a set S of unit squares with respect to points. Our method maintains a CF-coloring that uses O(log n) colors at any time, where n is the current number of squares in S, at the cost of only O(log n) recolorings per insertion or deletion. We generalize the method to rectangles whose sides have lengths in the range [1, c], where c is a fixed constant. Here the number of used colors becomes O(log^2 n). The method also extends to arbitrary rectangles whose coordinates come from a fixed universe of size N, yielding O(log^2 N log^2 n) colors. The number of recolorings for both methods stays in O(log n). - We then present a general framework to maintain a CF-coloring under insertions for sets of objects that admit a unimax coloring with a small number of colors in the static case. As an application we show how to maintain a CF-coloring with O(log^3 n) colors for disks (or other objects with linear union complexity) with respect to points at the cost of O(log n) recolorings per insertion. We extend the framework to the fully-dynamic case when the static unimax coloring admits weak deletions. As an application we show how to maintain a CF-coloring with O(sqrt(n) log^2 n) colors for points with respect to rectangles, at the cost of O(log n) recolorings per insertion and O(1) recolorings per deletion. These are the first results on fully-dynamic CF-colorings in the plane, and the first results for semi-dynamic CF-colorings for non-congruent objects.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8250Temporal Hierarchical Clustering
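The static condition being maintained can be stated as a checker: every point covered by at least one square must see some color exactly once among the squares containing it. A sketch of this definition for unit squares (the checker only, not the dynamic structure):

```python
from collections import Counter

def is_conflict_free(squares, colors, points):
    """Verify a CF-coloring of unit squares with respect to points.
    `squares` holds lower-left corners of axis-parallel unit squares,
    `colors[i]` is the color of square i. For each covered point,
    some color must occur exactly once among the covering squares.
    """
    for px, py in points:
        cnt = Counter(colors[i] for i, (x, y) in enumerate(squares)
                      if x <= px <= x + 1 and y <= py <= y + 1)
        if cnt and 1 not in cnt.values():
            return False      # covered point with no unique color
    return True
```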
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8251
We study hierarchical clusterings of metric spaces that change over time. This is a natural geometric primitive for the analysis of dynamic data sets. Specifically, we introduce and study the problem of finding a temporally coherent sequence of hierarchical clusterings from a sequence of unlabeled point sets. We encode the clustering objective by embedding each point set into an ultrametric space, which naturally induces a hierarchical clustering of the set of points. We enforce temporal coherence among the embeddings by finding correspondences between successive pairs of ultrametric spaces which exhibit small distortion in the Gromov-Hausdorff sense. We present both upper and lower bounds on the approximability of the resulting optimization problems.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8251On-the-Fly Array Initialization in Less Space
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8252
We show that for all given n,t,w in {1,2,...} with n<2^w, an array of n entries of w bits each can be represented on a word RAM with a word length of w bits in at most nw+ceil(n(t/(2 w))^t) bits of uninitialized memory to support constant-time initialization of the whole array and O(t)-time reading and writing of individual array entries. At one end of this tradeoff, we achieve initialization and access (i.e., reading and writing) in constant time with nw+ceil(n/w^t) bits for arbitrary fixed t, to be compared with nw+Theta(n) bits for the best previous solution, and at the opposite end, still with constant-time initialization, we support O(log n)-time access with just nw+1 bits, which is optimal for arbitrary access times if the initialization executes fewer than n steps.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8252Complexity of the Multi-Service Center Problem
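For contrast, the folklore constant-time-initializable array achieves O(1) initialization, reading, and writing, but spends Theta(n) extra words on bookkeeping — far more than the o(n) extra bits achieved above. A sketch of that classic construction, shown only to illustrate what the paper optimizes:

```python
class InitArray:
    """Folklore initializable array: `reset`, `read`, and `write`
    are all O(1). Two auxiliary arrays and a stack certify which
    slots have been written since the last reset, so the value
    array may hold arbitrary garbage elsewhere. Theta(n) extra
    words of overhead, versus the paper's few extra bits.
    """
    def __init__(self, n, default=0):
        self.default = default
        self.val = [0] * n    # may hold stale values for unwritten slots
        self.when = [0] * n   # claimed position of i in `stack`
        self.stack = []       # indices written since the last reset

    def reset(self):
        self.stack = []       # O(1): invalidates all certificates

    def _live(self, i):
        w = self.when[i]
        return w < len(self.stack) and self.stack[w] == i

    def write(self, i, x):
        if not self._live(i):
            self.when[i] = len(self.stack)
            self.stack.append(i)
        self.val[i] = x

    def read(self, i):
        return self.val[i] if self._live(i) else self.default
```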
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8253
The multi-service center problem is a variant of facility location problems. In the problem, we consider locating p facilities on a graph, each of which provides distinct service required by all vertices. Each vertex incurs the cost determined by the sum of the weighted distances to the p facilities. The aim of the problem is to minimize the maximum cost among all vertices. This problem is known to be NP-hard for general graphs, while it is solvable in polynomial time when p is a fixed constant. In this paper, we give sharp analyses for the complexity of the problem from the viewpoint of graph classes and weights on vertices. We first propose a polynomial-time algorithm for trees when p is a part of input. In contrast, we prove that the problem becomes strongly NP-hard even for cycles. We also show that when vertices are allowed to have negative weights, the problem becomes NP-hard for paths of only three vertices and strongly NP-hard for stars.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8253On Using Toeplitz and Circulant Matrices for Johnson-Lindenstrauss Transforms
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8254
The Johnson-Lindenstrauss lemma is one of the cornerstone results in dimensionality reduction. It says that for any set of N vectors X \subset R^n, there exists a mapping f : X --> R^m such that f(X) preserves all pairwise distances between vectors in X to within (1 ± \eps) if m = O(\eps^{-2} lg N). Much effort has gone into developing fast embedding algorithms, with the Fast Johnson-Lindenstrauss transform of Ailon and Chazelle being one of the most well-known techniques. The current fastest algorithm that yields the optimal m = O(\eps^{-2} lg N) dimensions has an embedding time of O(n lg n + \eps^{-2} lg^3 N). An exciting approach towards improving this, due to Hinrichs and Vybíral, is to use a random m times n Toeplitz matrix for the embedding. Using the Fast Fourier Transform, the embedding of a vector can then be computed in O(n lg m) time. The big question is of course whether m = O(\eps^{-2} lg N) dimensions suffice for this technique. If so, this would end a decades-long quest to obtain faster and faster Johnson-Lindenstrauss transforms. The current best analysis of the embedding of Hinrichs and Vybíral shows that m = O(\eps^{-2} lg^2 N) dimensions suffice. The main result of this paper is a proof that this analysis unfortunately cannot be tightened any further, i.e., there exists a set of N vectors requiring m = \Omega(\eps^{-2} lg^2 N) for the Toeplitz approach to work.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8254On Structural Parameterizations of the Edge Disjoint Paths Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8255
In this paper we revisit the classical Edge Disjoint Paths (EDP) problem, where one is given an undirected graph G and a set of terminal pairs P and asked whether G contains a set of pairwise edge-disjoint paths connecting every terminal pair in P. Our focus lies on structural parameterizations for the problem that allow for efficient (polynomial-time or fpt) algorithms. As our first result, we answer an open question stated in Fleszar, Mnich, and Spoerhase (2016), by showing that the problem can be solved in polynomial time if the input graph has a feedback vertex set of size one. We also show that EDP parameterized by the treewidth and the maximum degree of the input graph is fixed-parameter tractable. Having developed two novel algorithms for EDP using structural restrictions on the input graph, we then turn our attention towards the augmented graph, i.e., the graph obtained from the input graph after adding one edge between every terminal pair. In contrast to the input graph, where EDP is known to remain NP-hard even for treewidth two, a result by Zhou et al. (2000) shows that EDP can be solved in non-uniform polynomial time if the augmented graph has constant treewidth; we note that the possible improvement of this result to an fpt-algorithm has remained open since then. We show that this is highly unlikely by establishing the W[1]-hardness of the problem parameterized by the treewidth (and even feedback vertex set) of the augmented graph. Finally, we develop an fpt-algorithm for EDP by exploiting a novel structural parameter of the augmented graph. Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8255Structural Pattern Matching - Succinctly
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8256
Let T be a text of length n containing characters from an alphabet \Sigma, which is the union of two disjoint sets: \Sigma_s containing static characters (s-characters) and \Sigma_p containing parameterized characters (p-characters). Each character in \Sigma_p has an associated complementary character from \Sigma_p. A pattern P (also over \Sigma) matches an equal-length substring S of T iff the s-characters match exactly, there exists a one-to-one function that renames the p-characters in S to the p-characters in P, and if a p-character x is renamed to another p-character y then the complement of x is renamed to the complement of y. The task is to find the starting positions (occurrences) of all such substrings S. The previous indexing solution [Shibuya, SWAT 2000], known as the Structural Suffix Tree, requires \Theta(n\log n) bits of space, and can find all occ occurrences in time O(|P|\log \sigma+ occ), where \sigma = |\Sigma|. In this paper, we present the first succinct index for this problem, which occupies n \log \sigma + O(n) bits and offers O(|P|\log\sigma+ occ\cdot \log n \log\sigma) query time.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8256Crossing Number for Graphs with Bounded Pathwidth
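The matching relation defined above can be transcribed into a quadratic-time reference matcher (our direct reading of the definition, written to make the renaming and complement constraints concrete; not the succinct index):

```python
def structural_matches(T, P, p_chars, comp):
    """Naive structural pattern matching: characters in `p_chars`
    may be renamed by a one-to-one map that must also respect the
    complement pairing `comp`; all other (static) characters must
    match exactly. Returns all starting positions of matches.
    """
    occ = []
    m = len(P)
    for start in range(len(T) - m + 1):
        fwd, bwd = {}, {}      # renaming map and its inverse
        ok = True
        for s, p in zip(T[start:start + m], P):
            if s in p_chars or p in p_chars:
                if not (s in p_chars and p in p_chars):
                    ok = False
                    break
                # renaming s -> p forces comp(s) -> comp(p) as well
                for a, b in ((s, p), (comp[s], comp[p])):
                    if fwd.get(a, b) != b or bwd.get(b, a) != a:
                        ok = False
                        break
                    fwd[a], bwd[b] = b, a
                if not ok:
                    break
            elif s != p:       # static characters must match exactly
                ok = False
                break
        if ok:
            occ.append(start)
    return occ
```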
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8257
The crossing number is the smallest number of pairwise edge crossings when drawing a graph into the plane. There are only very few graph classes for which the exact crossing number is known or for which at least constant approximation ratios exist. Furthermore, up to now, general crossing number computations have never been successfully tackled using bounded width of graph decompositions, like treewidth or pathwidth. In this paper, we show for the first time that the crossing number is tractable (even in linear time) for maximal graphs of pathwidth 3. The technique also shows that the crossing number and the rectilinear (a.k.a. straight-line) crossing number are identical for this graph class, and that we require only an O(n)xO(n)-grid to achieve such a drawing. Our techniques can further be extended to devise a 2-approximation for general graphs with pathwidth 3, and a 4w^3-approximation for maximal graphs of pathwidth w. This is a constant approximation for bounded pathwidth graphs.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8257Complexity of Coloring Reconfiguration under Recolorability Constraints
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8258
For an integer k \ge 1, k-coloring reconfiguration is one of the most well-studied reconfiguration problems, defined as follows: In the problem, we are given two (vertex-)colorings of a graph using k colors, and asked to transform one into the other by recoloring only one vertex at a time, while at all times maintaining a proper coloring. The problem is known to be PSPACE-complete if k \ge 4, and solvable for any graph in polynomial time if k \le 3. In this paper, we introduce a recolorability constraint on the k colors, which forbids some pairs of colors to be recolored directly. The recolorability constraint is given in terms of an undirected graph R such that each node in R corresponds to a color and each edge in R represents a pair of colors that can be recolored directly. We study the hardness of the problem based on the structure of recolorability constraints R. More specifically, we prove that the problem is PSPACE-complete if R is of maximum degree at least four, or has a connected component containing more than one cycle.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8258Barrier Coverage with Non-uniform Lengths to Minimize Aggregate Movements
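A single reconfiguration step under a recolorability graph R can be validated as follows; a small sketch of the model (it assumes the old coloring is already proper, so only the recolored vertex's edges need rechecking):

```python
def valid_recoloring_step(adj, c_old, c_new, R):
    """Validate one step of coloring reconfiguration under a
    recolorability graph R (given as adjacency sets over colors):
    exactly one vertex may change color, its old and new colors
    must be adjacent in R, and the result must remain a proper
    coloring. A checker for the model, not the hardness proof.
    """
    changed = [v for v in c_old if c_old[v] != c_new[v]]
    if len(changed) != 1:
        return False
    v = changed[0]
    if c_new[v] not in R.get(c_old[v], set()):
        return False   # this pair of colors cannot be swapped directly
    # old coloring assumed proper, so only v's edges can conflict
    return all(c_new[v] != c_new[u] for u in adj[v])
```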
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8259
Given a line segment I=[0,L], the so-called barrier, and a set of n sensors with varying ranges positioned on the line containing I, the barrier coverage problem is to move the sensors so that they cover I, while minimising the total movement. In the case when all the sensors have the same radius the problem can be solved in O(n log n) time (Andrews and Wang, Algorithmica 2017). If the sensors have different radii the problem is known to be NP-hard to approximate within a constant factor (Czyzowicz et al., ADHOC-NOW 2009). We strengthen this result and prove that no polynomial time \rho^{1-\epsilon}-approximation algorithm exists unless P=NP, where \rho is the ratio between the largest radius and the smallest radius. Even when we restrict the number of sensors that are allowed to move by a parameter k, the problem turns out to be W[1]-hard. On the positive side we show that a ((2+\epsilon)\rho+2/\epsilon)-approximation can be computed in O(n^3/\epsilon^2) time, and we prove fixed-parameter tractability when parameterized by the total movement, assuming all numbers in the input are integers.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8259Fully Dynamic Connectivity Oracles under General Vertex Updates
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8260
We study the following dynamic graph problem: given an undirected graph G, maintain an oracle answering connectivity queries between any two vertices of G under an arbitrary on-line sequence of vertex deletions and insertions (with incident edges). We propose two algorithms for this problem: a deterministic one with amortized update time, and a Monte Carlo one with worst-case update time. Both of them allow an arbitrary number of new vertices to be inserted. The update time of the former algorithm is no worse than that of the existing algorithms, which allow only a limited number of vertex insertions. Moreover, for relatively dense graphs, we can expect that the update time bound of the former algorithm meets a lower bound, and that of the latter algorithm can be seen as a substantial improvement over the existing result, obtained by introducing randomization.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8260Almost Linear Time Computation of Maximal Repetitions in Run Length Encoded Strings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8261
We consider the problem of computing all maximal repetitions contained in a string that is given in run-length encoding. Given a run-length encoding of a string, we show that the maximum number of maximal repetitions contained in the string is at most m+k-1, where m is the size of the run-length encoding, and k is the number of run-length factors whose exponent is at least 2. We also show an algorithm for computing all maximal repetitions in O(m \alpha(m)) time and O(m) space, where \alpha denotes the inverse Ackermann function.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8261Decomposing a Graph into Shortest Paths with Bounded Eccentricity
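As a point of reference for what is being counted, here is a naive quadratic-time enumeration of maximal repetitions (runs) on the decoded string; the paper's algorithm works directly on the run-length encoding in O(m \alpha(m)) time, which this sketch does not attempt (all names are illustrative):

```python
def maximal_repetitions(s):
    """Enumerate runs (i, j, p): s[i:j] has minimal period p, length >= 2p,
    and cannot be extended left or right while keeping period p."""
    n, found = len(s), {}
    for p in range(1, n // 2 + 1):
        i = 0
        while i + p < n:
            if s[i] != s[i + p]:
                i += 1
                continue
            j = i
            while j + p < n and s[j] == s[j + p]:
                j += 1
            if (j + p) - i >= 2 * p:        # exponent at least 2
                key = (i, j + p)
                if key not in found or p < found[key]:
                    found[key] = p          # keep the minimal period
            i = j + 1
    return sorted((i, j, p) for (i, j), p in found.items())

# "aabaabaa" is a^2 b a^2 b a^2 in run-length form: m = 5 factors, k = 3 of
# exponent >= 2, so the abstract's bound m + k - 1 = 7 covers the 4 runs found.
print(maximal_repetitions("aabaabaa"))
# [(0, 2, 1), (0, 8, 3), (3, 5, 1), (6, 8, 1)]
```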
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8262
We introduce the problem of hub-laminar decomposition, which generalizes that of computing a shortest path with minimum eccentricity (MESP). Intuitively, it consists in decomposing a graph into several paths that collectively have small eccentricity and meet only near their extremities. The problem is related to computing an isometric cycle with minimum eccentricity (MEIC). It is also linked to DNA reconstruction in the context of metagenomics in biology. We show that a graph having such a decomposition with long enough paths can be decomposed in polynomial time with approximation guarantees on the parameters of the decomposition. Moreover, such a decomposition with few paths allows one to compute a compact representation of distances with additive distortion. We also show that having an isometric cycle with small eccentricity is related to the possibility of embedding the graph in a cycle with low distortion.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8262A Simple Greedy Algorithm for Dynamic Graph Orientation
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8263
Graph orientations with low out-degree are one of several ways to efficiently store sparse graphs. If the graphs allow for insertion and deletion of edges, one may have to flip the orientation of some edges to prevent blowing up the maximum out-degree. We use arboricity as our sparsity measure. With an immensely simple greedy algorithm, we get parameterized trade-off bounds between out-degree and worst-case number of flips, which previously only existed for the amortized number of flips. We match the previous best worst-case algorithm (in O(log n) flips) for general arboricity and beat it for either constant or super-logarithmic arboricity. We also match a previous best amortized result for at least logarithmic arboricity, and give the first results with worst-case O(1) and O(sqrt(log n)) flips whose degree bounds nearly match those of their respective amortized solutions.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8263Precedence-Constrained Min Sum Set Cover
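The basic greedy idea behind low out-degree maintenance can be sketched as follows. This is a hedged illustration of the insertion rule only (orient each new edge out of the endpoint with smaller out-degree); the abstract's algorithm additionally flips existing edges on updates to bound the worst-case out-degree, which this sketch omits. All names are hypothetical:

```python
from collections import defaultdict

class GreedyOrientation:
    """Maintain an orientation of an undirected dynamic graph."""

    def __init__(self):
        self.out = defaultdict(set)   # out[u] = set of vertices u points to

    def insert(self, u, v):
        # Greedy rule: orient the edge away from the endpoint whose
        # current out-degree is smaller.
        if len(self.out[u]) <= len(self.out[v]):
            self.out[u].add(v)
        else:
            self.out[v].add(u)

    def delete(self, u, v):
        self.out[u].discard(v)
        self.out[v].discard(u)

    def max_out_degree(self):
        return max((len(s) for s in self.out.values()), default=0)

g = GreedyOrientation()
for leaf in (1, 2, 3, 4):          # a star: sparse (arboricity 1)
    g.insert("center", leaf)
print(g.max_out_degree())          # 1: the leaves absorb the out-edges
```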
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8264
We introduce a version of the Min Sum Set Cover (MSSC) problem in which there are "AND" precedence constraints on the m sets. In the Precedence-Constrained Min Sum Set Cover (PCMSSC) problem, the constraints, when interpreted as directed edges, induce an acyclic directed graph. PCMSSC models the aim of scheduling software tests to prioritize the rate of fault detection subject to dependencies between tests. Our greedy scheme for PCMSSC is similar to the approaches of Feige, Lovasz, and Tetali for MSSC, and of Chekuri and Motwani for precedence-constrained scheduling to minimize weighted completion time. With a factor-4 increase in the approximation ratio, we reduce PCMSSC to the problem of finding a maximum-density precedence-closed sub-family of sets, where density is the ratio of sub-family union size to cardinality. We provide a greedy factor-sqrt(m) algorithm for maximizing density; on forests of in-trees, we show this algorithm finds an optimal solution. Harnessing an alternative greedy argument of Chekuri and Kumar for Maximum Coverage with Group Budget Constraints, we design, for forests of out-trees, an algorithm with approximation ratio equal to the maximum tree height. Finally, with a reduction from the Planted Dense Subgraph detection problem, we show that its conjectured hardness implies there is no polynomial-time algorithm for PCMSSC with approximation factor in O(m^{1/12-epsilon}).Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8264Sorting with Recurrent Comparison Errors
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8265
We present a sorting algorithm for the case of recurrent random comparison errors. The algorithm simultaneously achieves, in essence, the good properties of previous algorithms for sorting n distinct elements in this model. In particular, it runs in O(n^2) time, the maximum dislocation of the elements in the output is O(log n), and the total dislocation is O(n). These guarantees are the best possible, since we prove that even randomized algorithms cannot achieve o(log n) maximum dislocation with high probability, or o(n) total dislocation in expectation, regardless of their running time.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8265An Improved Algorithm for Computing All the Best Swap Edges of a Tree Spanner
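The two quality measures used above can be computed as follows; this is a minimal sketch of the definitions (dislocation of an element is the distance between its position in the output and its position in the correct sorted order), not code from the paper:

```python
def dislocations(output):
    """Return (max dislocation, total dislocation) of a sequence of
    distinct elements, relative to their correctly sorted positions."""
    rank = {x: i for i, x in enumerate(sorted(output))}
    d = [abs(i - rank[x]) for i, x in enumerate(output)]
    return max(d), sum(d)

# Each of 2, 1, 5, 4 sits one position away from its sorted slot; 3 is exact.
print(dislocations([2, 1, 3, 5, 4]))  # (1, 4)
print(dislocations([1, 2, 3, 4, 5]))  # (0, 0)
```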
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8266
A tree sigma-spanner of a positively real-weighted n-vertex and m-edge undirected graph G is a spanning tree T of G which approximately preserves (i.e., up to a multiplicative stretch factor sigma) distances in G. Tree spanners with provably good stretch factors find applications in communication networks, distributed systems, and network design. However, finding an optimal or even a good tree spanner is a very hard computational task. Thus, if one has to face a transient edge failure in T, the overall effort that has to be afforded to rebuild a new tree spanner (i.e., computational costs, set-up of new links, updating of the routing tables, etc.) can be rather prohibitive. To circumvent this drawback, an effective alternative is to associate with each tree edge a best possible (in terms of resulting stretch) swap edge -- a well-established approach in the literature for several other tree topologies. Correspondingly, the problem of computing all the best swap edges of a tree spanner is a challenging algorithmic problem, since solving it efficiently means exploiting the structure of shortest paths not only in G, but also in all the scenarios in which an edge of T has failed. For this problem we provide a very efficient solution, running in O(n^2 log^4 n) time, which drastically improves (almost by a quadratic factor in n in dense graphs!) on the previously known best result.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8266Square-Contact Representations of Partial 2-Trees and Triconnected Simply-Nested Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8267
A square-contact representation of a planar graph G = (V,E) maps vertices in V to interior-disjoint axis-aligned squares in the plane and edges in E to adjacencies between the sides of the corresponding squares. In this paper, we study proper square-contact representations of planar graphs, in which any two squares are either disjoint or share infinitely many points. We characterize the partial 2-trees and the triconnected cycle-trees allowing for such representations. For partial 2-trees our characterization uses a simple forbidden subgraph whose structure forces a separating triangle in any embedding. For the triconnected cycle-trees, a subclass of the triconnected simply-nested graphs, we use a new structural decomposition for the graphs in this family, which may be of independent interest. Finally, we study square-contact representations of general triconnected simply-nested graphs with respect to their outerplanarity index.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8267Fully-Dynamic and Kinetic Conflict-Free Coloring of Intervals with Respect to Points
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8268
We introduce the fully-dynamic conflict-free coloring problem for a set S of intervals in R^1 with respect to points, where the goal is to maintain a conflict-free coloring for S under insertions and deletions. A coloring is conflict-free if for each point p contained in some interval, p is contained in an interval whose color is not shared with any other interval containing p. We investigate trade-offs between the number of colors used and the number of intervals that are recolored upon insertion or deletion of an interval. Our results include: (i) a lower bound on the number of recolorings as a function of the number of colors, which implies that with O(1) recolorings per update the worst-case number of colors is Omega(log n/log log n), and that any strategy using O(1/epsilon) colors needs Omega(epsilon n^epsilon) recolorings; (ii) a coloring strategy that uses O(log n) colors at the cost of O(log n) recolorings, and another strategy that uses O(1/epsilon) colors at the cost of O(n^epsilon/epsilon) recolorings; (iii) stronger upper and lower bounds for special cases. We also consider the kinetic setting where the intervals move continuously (but there are no insertions or deletions); here we show how to maintain a coloring with only four colors at the cost of three recolorings per event, and show that this is tight.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8268Tight Approximation for Partial Vertex Cover with Hard Capacities
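The conflict-free condition stated above can be checked directly; the following is a minimal sketch of that check on a finite set of query points (illustrative names, not code from the paper):

```python
from collections import Counter

def is_conflict_free(intervals, colors, points):
    """A coloring is conflict-free if every point covered by some interval
    lies in an interval whose color is unique among the intervals
    covering that point."""
    for p in points:
        covering = Counter(c for (lo, hi), c in zip(intervals, colors)
                           if lo <= p <= hi)
        if covering and 1 not in covering.values():
            return False
    return True

ivals = [(0, 2), (1, 3)]
print(is_conflict_free(ivals, [1, 1], [1.5]))  # False: both covering intervals share color 1
print(is_conflict_free(ivals, [1, 2], [1.5]))  # True: each color is unique at 1.5
```

The dynamic problem maintains this invariant while intervals are inserted and deleted, trading colors against recolorings as described in the abstract.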
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8269
We consider the partial vertex cover problem with hard capacity constraints (Partial VC-HC) on hypergraphs. In this problem we are given a hypergraph G=(V,E) with a maximum edge size f and a covering requirement R. Each edge is associated with a demand, and each vertex is associated with a capacity and an (integral) available multiplicity. The objective is to compute a minimum vertex multiset such that at least R units of demand from the edges are covered by the capacities of the vertices in the multiset and the multiplicity of each vertex does not exceed its available multiplicity. In this paper we present an f-approximation for this problem, improving the previous (2f+2)(1+epsilon)-approximation of Cheung et al. to the tight extent possible. The new ingredient of this work is a generalized analysis of the extreme points of the natural LP, developed from previous works, together with a strengthened LP lower bound on the optimal solutions.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8269Network Optimization on Partitioned Pairs of Points
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8270
Given n pairs of points, S = {{p_1, q_1}, {p_2, q_2}, ..., {p_n, q_n}}, in some metric space, we study the problem of two-coloring the points within each pair, red and blue, so as to optimize the cost of a pair of node-disjoint networks, one over the red points and one over the blue points. In this paper the network structures we consider are spanning trees, traveling salesman tours, or matchings. We consider several different weight functions computed over the induced network structures, as well as several different objective functions. We show that some of these problems are NP-hard, and provide constant-factor approximation algorithms in all cases.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8270Optimal Matroid Partitioning Problems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8271
This paper studies optimal matroid partitioning problems for various objective functions. In the problem, we are given a finite set E and k weighted matroids (E, \mathcal{I}_i, w_i), i = 1, \dots, k, and our task is to find a minimum partition (I_1,\dots,I_k) of E such that I_i \in \mathcal{I}_i for all i. For each objective function, we give a polynomial-time algorithm or prove NP-hardness. In particular, for the case when the given weighted matroids are identical and the objective function is the sum of the maximum weight in each set (i.e., \sum_{i=1}^k\max_{e\in I_i}w_i(e)), we show that the problem is strongly NP-hard but admits a PTAS.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8271Agnostically Learning Boolean Functions with Finite Polynomial Representation
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8272
Agnostic learning is an extremely hard task in computational learning theory. In this paper we revisit the results of [Kalai et al. SIAM J. Comput. 2008] on agnostically learning boolean functions with finite polynomial representation, as well as functions that can be approximated by the former. An example of the former is the class of all boolean low-degree polynomials. For the former, [Kalai et al. SIAM J. Comput. 2008] introduced the l_1-polynomial regression method to learn them to error opt+epsilon. We present a simple instantiation of one step in the method and give the corresponding analysis. Moreover, we show that even ignoring this step still yields a learning result with error 2opt+epsilon. We then apply the result for learning concept classes that can be approximated by the former to learn richer concrete classes. Our result is that the class of s-term DNF formulae can be agnostically learned to error opt+epsilon with respect to arbitrary distributions for any epsilon in time poly(n^d, 1/epsilon), where d=O(\sqrt{n}\cdot s\cdot \log s\log^2(1/epsilon)).Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8272Weighted Linear Matroid Parity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8273
The matroid parity (or matroid matching) problem, introduced as a common generalization of matching and matroid intersection problems, is so general that it requires an exponential number of oracle calls. Nevertheless, Lovasz (1978) showed that this problem admits a min-max formula and a polynomial algorithm for linearly represented matroids. Since then efficient algorithms have been developed for the linear matroid parity problem. This talk presents a recently developed polynomial-time algorithm for the weighted linear matroid parity problem. The algorithm builds on a polynomial matrix formulation using Pfaffian and adopts a primal-dual approach based on the augmenting path algorithm of Gabow and Stallmann (1986) for the unweighted problem.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8273Computational Philosophy: On Fairness in Automated Decision Making
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8274
As more and more of our lives are taken over by automated decision making systems (whether it be for hiring, college admissions, criminal justice or loans), we have begun to ask whether these systems are making decisions that humans would consider fair, or non-discriminatory. The problem is that notions of fairness, discrimination, transparency and accountability are concepts in society and the law that have no obvious formal analog. But our algorithms speak the language of mathematics. And so if we want to encode our beliefs into automated decision systems, we must formalize them precisely, while still capturing the natural imprecision and ambiguity in these ideas. In this talk, I'll survey the new field of fairness, accountability and transparency in computer science. I'll focus on how we formalize these notions, how they connect to traditional notions in theoretical computer science, and even describe some impossibility results that arise from this formalization. I'll conclude with some open questions. Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8274Faster Algorithms for Half-Integral T-Path Packing
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8275
Let G = (V, E) be an undirected graph and let T, a subset of its vertices, be a set of terminals. A natural combinatorial problem then consists in finding the maximum number of vertex-disjoint paths connecting distinct terminals. For this problem, a clever construction suggested by Gallai reduces it to computing a maximum non-bipartite matching and thus gives an O(mn^1/2 log(n^2/m)/log(n))-time algorithm (hereinafter n := |V|, m := |E|). Now let us consider the fractional relaxation, i.e. allow T-path packings with arbitrary nonnegative real weights. It is known that there always exists a half-integral solution, that is, one only needs to assign weights 0, 1/2, and 1 to maximize the total weight of T-paths. It is also known that an optimum half-integral packing can be found in strongly polynomial time, but the actual time bounds are far from satisfactory. In this paper we present a novel algorithm that solves the half-integral problem within O(mn^1/2 log(n^2/m)/log(n)) time, thus matching the complexities of the integral and half-integral versions.Mon, 04 Dec 2017 18:53:08 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8275LIPIcs, Volume 83, MFCS'17, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8207
LIPIcs, Volume 83, MFCS'17, Complete VolumeMon, 04 Dec 2017 08:31:50 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8207LIPIcs, Volume 67, ITCS'17, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8206
LIPIcs, Volume 67, ITCS'17, Complete VolumeMon, 04 Dec 2017 08:26:42 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8206LIPIcs, Volume 72, CALCO'17, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8205
LIPIcs, Volume 72, CALCO'17, Complete VolumeMon, 04 Dec 2017 08:21:27 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8205The Duality Gap for Two-Team Zero-Sum Games
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8142
We consider multiplayer games in which the players fall into two teams of size k, with payoffs equal within, and of opposite sign across, the two teams. In the classical case of k=1, such zero-sum games possess a unique value, independent of the order of play, due to the von Neumann minimax theorem. However, this fails for all k>1; we can measure this failure by a duality gap, which quantifies the benefit of being the team to commit last to its strategy. In our main result we show that the gap equals 2(1-2^{1-k}) for m=2 and 2(1-m^{-(1-o(1))k}) for m>2, with m being the size of the action space of each player. At a finer level, the cost to a team of individual players acting independently while the opposition employs joint randomness is 1-2^{1-k} for m=2, and 1-m^{-(1-o(1))k} for m>2. This class of multiplayer games, apart from being a natural bridge between two-player zero-sum games and general multiplayer games, is motivated from Biology (the weak selection model of evolution) and Economics (players with shared utility but poor coordination).Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8142An Improved Homomorphism Preservation Theorem From Lower Bounds in Circuit Complexity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8143
Previous work of the author [Rossmann'08] showed that the Homomorphism Preservation Theorem of classical model theory remains valid when its statement is restricted to finite structures. In this paper, we give a new proof of this result via a reduction to lower bounds in circuit complexity, specifically on the AC0 formula size of the colored subgraph isomorphism problem. Formally, we show the following: if a first-order sentence of quantifier-rank k is preserved under homomorphisms on finite structures, then it is equivalent on finite structures to an existential-positive sentence of quantifier-rank poly(k). Quantitatively, this improves the result of [Rossmann'08], where the upper bound on quantifier-rank is a non-elementary function of k.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8143Towards Hardness of Approximation for Polynomial Time Problems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8144
Proving hardness of approximation is a major challenge in the field of fine-grained complexity and conditional lower bounds in P. How well can the Longest Common Subsequence (LCS) or the Edit Distance be approximated by an algorithm that runs in near-linear time? In this paper, we make progress towards answering these questions. We introduce a framework that exhibits barriers for truly subquadratic and deterministic algorithms with good approximation guarantees. Our framework highlights a novel connection between deterministic approximation algorithms for natural problems in P and circuit lower bounds. In particular, we discover a curious connection of the following form: if there exists a \delta>0 such that for all \eps>0 there is a deterministic (1+\eps)-approximation algorithm for LCS on two sequences of length n over an alphabet of size n^{o(1)} that runs in O(n^{2-\delta}) time, then a certain plausible hypothesis is refuted, and the class E^NP does not have non-uniform linear-size Valiant Series-Parallel circuits. Thus, designing a "truly subquadratic PTAS" for LCS is as hard as resolving an old open question in complexity theory.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8144Inferential Privacy Guarantees for Differentially Private Mechanisms
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8145
The following is a summary of the paper "Inferential Privacy Guarantees for Differentially Private Mechanisms", presented at the Eighth Innovations in Theoretical Computer Science Conference in January 2017. The full version of the paper can be found on arXiv at the URL https://arxiv.org/abs/1603.01508.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8145Random Walks in Polytopes and Negative Dependence
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8146
We present a Gaussian random walk in a polytope that starts at a point inside and continues until it gets absorbed at a vertex. Our main result is that the probability distribution induced on the vertices by this random walk has strong negative dependence properties for matroid polytopes. Such distributions are highly sought after in randomized algorithms as they imply concentration properties. Our random walk is simple to implement, computationally efficient and can be viewed as an algorithm to round the starting point in an unbiased manner. The proof relies on a simple inductive argument that synthesizes the combinatorial structure of matroid polytopes with the geometric structure of multivariate Gaussian distributions. Our result not only implies a long line of past results in a unified and transparent manner, but also implies new results about constructing negatively associated distributions for all matroids.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8146Parameterized Property Testing of Functions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8147
We investigate the parameters in terms of which the complexity of sublinear-time algorithms should be expressed. Our goal is to find input parameters that are tailored to the combinatorics of the specific problem being studied and design algorithms that run faster when these parameters are small. This direction enables us to surpass the (worst-case) lower bounds, expressed in terms of the input size, for several problems. Our aim is to develop a similar level of understanding of the complexity of sublinear-time algorithms to the one that was enabled by research in parameterized complexity for classical algorithms. Specifically, we focus on testing properties of functions. By parameterizing the query complexity in terms of the size r of the image of the input function, we obtain testers for monotonicity and convexity of functions of the form f:[n]\to \mathbb{R} with query complexity O(\log r), with no dependence on n. The result for monotonicity circumvents the \Omega(\log n) lower bound by Fischer (Inf. Comput., 2004) for this problem. We present several other parameterized testers, providing compelling evidence that expressing the query complexity of property testers in terms of the input size is not always the best choice.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8147Nash Social Welfare, Matrix Permanent, and Stable Polynomials
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8148
We study the problem of allocating m items to n agents subject to maximizing the Nash social welfare (NSW) objective. We write a novel convex programming relaxation for this problem, and we show that a simple randomized rounding algorithm gives a 1/e approximation factor for the objective, improving on the 1/(2e^(1/e)) approximation factor of Cole and Gkatzelis. Our main technical contribution is an extension of Gurvits's lower bound on the coefficient of the square-free monomial of a degree-m homogeneous stable polynomial on m variables to all homogeneous polynomials. We use this extension to analyze the expected welfare of the allocation returned by our randomized rounding algorithm.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8148Detecting communities is Hard (And Counting Them is Even Harder)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8149
We consider the algorithmic problem of community detection in networks. Given an undirected friendship graph G, a subset S of vertices is an (a,b)-community if: * every member of the community is friends with an a-fraction of the community; and * every non-member is friends with at most a b-fraction of the community. [Arora, Ge, Sachdeva, Schoenebeck 2012] gave a quasi-polynomial-time algorithm for enumerating all the (a,b)-communities for any constants a>b. Here, we prove that, assuming the Exponential Time Hypothesis (ETH), quasi-polynomial time is in fact necessary - and even for a much weaker approximation desideratum. Namely, it is hard to distinguish between: * G contains a (1,o(1))-community; and * G does not contain a (b,b+o(1))-community for any b. We also prove that counting the number of (1,o(1))-communities requires quasi-polynomial time assuming the weaker #ETH.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8149Algorithmic Aspects of Private Bayesian Persuasion
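The (a,b)-community definition above lends itself to a direct membership check. A minimal sketch (illustrative names; one common convention is assumed, in which a member's friends inside S are counted against a*|S| without counting the member itself):

```python
def is_ab_community(adj, S, a, b):
    """Check whether S is an (a,b)-community in the friendship graph adj:
    every member has at least a*|S| friends inside S, and every
    non-member has at most b*|S| friends inside S."""
    S = set(S)
    for v, nbrs in adj.items():
        inside = sum(1 for u in nbrs if u in S)
        if v in S and inside < a * len(S):
            return False
        if v not in S and inside > b * len(S):
            return False
    return True

# Triangle {1, 2, 3} plus an outsider 4 attached to vertex 1.
adj = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2], 4: [1]}
print(is_ab_community(adj, {1, 2, 3}, 2/3, 1/3))  # True
print(is_ab_community(adj, {1, 2, 3}, 2/3, 0))    # False: vertex 4 has a friend inside S
```

The hardness results concern enumerating and counting such sets, not verifying a single one, which is easy as shown here.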
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8150
We consider a multi-receiver Bayesian persuasion model where an informed sender tries to persuade a group of receivers to take a certain action. The state of nature is known to the sender, but it is unknown to the receivers. The sender is allowed to commit to a signaling policy where she sends a private signal to every receiver. This work studies the computational aspects of finding a signaling policy that maximizes the sender's revenue. We show that if the sender's utility is a submodular function of the set of receivers that take the desired action, then we can efficiently find a signaling policy whose revenue is at least (1-1/e) times the optimal. We also prove that approximating the sender's optimal revenue by a factor better than (1-1/e) is NP-hard and, hence, the developed approximation guarantee is essentially tight. When the sender's utility is a function of the number of receivers that take the desired action (i.e., the utility function is anonymous), we show that an optimal signaling policy can be computed in polynomial time. Our results are based on an interesting connection between the Bayesian persuasion problem and the evaluation of the concave closure of a set function. Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8150Conditional Sparse Linear Regression
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8151
Machine learning and statistics typically focus on building models that capture the vast majority of the data, possibly ignoring a small subset of data as "noise" or "outliers." By contrast, here we consider the problem of jointly identifying a significant (but perhaps small) segment of a population in which there is a highly sparse linear regression fit, together with the coefficients for the linear fit. We contend that such tasks are of interest both because the models themselves may be able to achieve better predictions in such special cases, but also because they may aid our understanding of the data. We give algorithms for such problems under the sup norm, when this unknown segment of the population is described by a k-DNF condition and the regression fit is s-sparse for constant k and s. For the variants of this problem when the regression fit is not so sparse or using expected error, we also give a preliminary algorithm and highlight the question as a challenge for future work.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8151Constant-Rate Interactive Coding Is Impossible, Even In Constant-Degree Networks
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8152
Multiparty interactive coding allows a network of n parties to perform distributed computations when the communication channels suffer from noise. Previous results (Rajagopalan and Schulman, STOC '94) obtained a multiparty interactive coding protocol, resilient to random noise, with a blowup of O(log(\Delta + 1)) for networks whose topology has a maximal degree \Delta. Vitally, the communication model in their work forces all the parties to send one message at every round of the protocol, even if they have nothing to send. We re-examine the question of multiparty interactive coding, lifting the requirement that forces all the parties to communicate at each and every round. We use the recently developed information-theoretic machinery of Braverman et al. (STOC '16) to show that if the network's topology is a cycle, then there is a specific "cycle task" for which any coding scheme has a communication blowup of \Omega(log n). This is quite surprising since the cycle has a maximal degree of \Delta = 2, implying a coding with a constant blowup when all parties are forced to speak at all rounds. We complement our lower bound with a matching coding scheme for the "cycle task" that has a communication blowup of O(log n). This makes our lower bound for the cycle task tight.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8152A Hierarchy Theorem for Interactive Proofs of Proximity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8153
The number of rounds, or round complexity, used in an interactive protocol is a fundamental resource. In this work we consider the significance of round complexity in the context of Interactive Proofs of Proximity (IPPs). Roughly speaking, IPPs are interactive proofs in which the verifier runs in sublinear time and is only required to reject inputs that are far from the language. Our main result is a round hierarchy theorem for IPPs, showing that the power of IPPs grows with the number of rounds. More specifically, we show that there exists a gap function g(r) = \Theta(r^2) such that for every constant r \geq 1 there exists a language that (1) has a g(r)-round IPP with verification time t=t(n,r) but (2) does not have an r-round IPP with verification time t (or even verification time t'=\poly(t)). In fact, we prove a stronger result by exhibiting a single language L such that, for every constant r \geq 1, there is an O(r^2)-round IPP for L with t=n^{O(1/r)} verification time, whereas the verifier in any r-round IPP for L must run in time at least t^{100}. Moreover, we show an IPP for L with a poly-logarithmic number of rounds and only poly-logarithmic verification time, yielding a sub-exponential separation between the power of constant-round IPPs versus general (unbounded round) IPPs. From our hierarchy theorem we also derive implications to standard interactive proofs (in which the verifier can run in polynomial time). Specifically, we show that the round reduction technique of Babai and Moran (JCSS, 1988) is (almost) optimal among all black-box transformations, and we show a connection to the algebrization framework of Aaronson and Wigderson (TOCT, 2009).Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8153Quantum Recommendation Systems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8154
A recommendation system uses the past purchases or ratings of n products by a group of m users, in order to provide personalized recommendations to individual users. The information is modeled as an m \times n preference matrix which is assumed to have a good rank-k approximation, for a small constant k. In this work, we present a quantum algorithm for recommendation systems that has running time O(\text{poly}(k)\text{polylog}(mn)). All known classical algorithms for recommendation systems that work through reconstructing an approximation of the preference matrix run in time polynomial in the matrix dimension. Our algorithm provides good recommendations by sampling efficiently from an approximation of the preference matrix, without reconstructing the entire matrix. For this, we design an efficient quantum procedure to project a given vector onto the row space of a given matrix. This is the first algorithm for recommendation systems that runs in time polylogarithmic in the dimensions of the matrix and provides an example of a quantum machine learning algorithm for a real world application. Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8154Finding Clearing Payments in Financial Networks with Credit Default Swaps is PPAD-complete
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8155
We consider the problem of clearing a system of interconnected banks that have been exposed to a shock on their assets. Eisenberg and Noe (2001) showed that when banks can only enter into simple debt contracts with each other, then a clearing vector of payments can be computed in polynomial time. In this paper, we show that the situation changes radically when banks can also enter into credit default swaps (CDSs), i.e., financial derivative contracts that depend on the default of another bank. We prove that computing an approximate solution to the clearing problem with sufficiently small constant error is PPAD-complete. To do this, we demonstrate how financial networks with debt and CDSs can encode arithmetic operations such as addition and multiplication. Our results have practical impact for network stress tests and reveal computational complexity as a new concern regarding the stability of the financial system.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8155Inherent Trade-Offs in the Fair Determination of Risk Scores
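The polynomial-time base case due to Eisenberg and Noe (simple debt contracts, no CDSs) admits a concrete fixed-point computation. The sketch below is an illustrative iteration under that simplification, not the paper's PPAD-hardness construction; all names in it are ours.

```python
# Illustrative sketch of the Eisenberg-Noe clearing computation for simple
# debt contracts only (the polynomial-time case the abstract contrasts with).
# L[i][j] is the nominal liability of bank i to bank j; e[i] is bank i's
# external assets. The clearing vector is the greatest fixed point of
# p_i = min(pbar_i, e_i + sum_j p_j * pi_{ji}), found by iterating from pbar.

def clearing_vector(L, e, tol=1e-12):
    n = len(e)
    pbar = [sum(row) for row in L]            # total liability of each bank
    # relative liabilities: share of bank i's payments owed to bank j
    pi = [[(L[i][j] / pbar[i]) if pbar[i] > 0 else 0.0 for j in range(n)]
          for i in range(n)]
    p = [float(x) for x in pbar]              # start from full payment
    while True:
        new_p = []
        for i in range(n):
            assets = e[i] + sum(p[j] * pi[j][i] for j in range(n))
            new_p.append(min(pbar[i], assets))
        if max(abs(a - b) for a, b in zip(new_p, p)) < tol:
            return new_p
        p = new_p

# Two banks: bank 0 owes 10 to bank 1 but holds only 4 in external assets;
# bank 1 owes 3 to bank 0 and holds 5. Bank 0 defaults and pays 4 + 3 = 7.
p = clearing_vector([[0, 10], [3, 0]], [4, 5])
```

The monotone-decreasing iteration converges because each step can only lower payments, which is the standard fictitious-default argument.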
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8156
Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8156Multiplayer Parallel Repetition for Expanding Games
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8157
We investigate the value of parallel repetition of one-round games with any number of players k>=2. It has been an open question whether an analogue of Raz's Parallel Repetition Theorem holds for games with more than two players, i.e., whether the value of the repeated game decays exponentially with the number of repetitions. Verbitsky has shown, via a reduction to the density Hales-Jewett theorem, that the value of the repeated game must approach zero, as the number of repetitions increases. However, the rate of decay obtained in this way is extremely slow, and it is an open question whether the true rate is exponential as is the case for all two-player games. Exponential decay bounds are known for several special cases of multi-player games, e.g., free games and anchored games. In this work, we identify a certain expansion property of the base game and show that all games with this property satisfy an exponential decay parallel repetition bound. Free games and anchored games satisfy this expansion property, and thus our parallel repetition theorem reproduces all earlier exponential-decay bounds for multiplayer games. More generally, our parallel repetition bound applies to all multiplayer games that are *connected* in a certain sense. We also describe a very simple game, called the GHZ game, that does not satisfy this connectivity property, and for which we do not know an exponential decay bound. We suspect that progress on bounding the value of the parallel repetition of the GHZ game will lead to further progress on the general question.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8157Testing k-Monotonicity
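The GHZ game mentioned above is small enough to analyze exhaustively: the sketch below (our own illustration, not the paper's analysis) enumerates all deterministic classical strategies and recovers the well-known classical value of 3/4.

```python
from itertools import product

# The GHZ game: players A, B, C receive bits (x, y, z) drawn uniformly from
# {(0,0,0), (0,1,1), (1,0,1), (1,1,0)} and win iff a XOR b XOR c == x OR y OR z.
# Brute force over all deterministic classical strategies (each player is a
# function {0,1} -> {0,1}, so 4 per player, 64 in total).

QUESTIONS = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def game_value():
    strategies = list(product([0, 1], repeat=2))   # (answer on 0, answer on 1)
    best = 0.0
    for sa, sb, sc in product(strategies, repeat=3):
        wins = sum((sa[x] ^ sb[y] ^ sc[z]) == (x | y | z)
                   for x, y, z in QUESTIONS)
        best = max(best, wins / len(QUESTIONS))
    return best
```

XOR-ing the four winning constraints shows they are jointly unsatisfiable, so no strategy wins more than 3 of the 4 questions, and 3 is achievable (e.g., always answer 1).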
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8158
A Boolean k-monotone function defined over a finite poset domain D alternates between the values 0 and 1 at most k times on any ascending chain in D. Therefore, k-monotone functions are natural generalizations of the classical monotone functions, which are the 1-monotone functions. Motivated by the recent interest in k-monotone functions in the context of circuit complexity and learning theory, and by the central role that monotonicity testing plays in the context of property testing, we initiate a systematic study of k-monotone functions in the property testing model. In this model, the goal is to distinguish functions that are k-monotone (or are close to being k-monotone) from functions that are far from being k-monotone. Our results include the following: (1) We demonstrate a separation between testing k-monotonicity and testing monotonicity, on the hypercube domain {0,1}^d, for k >= 3. (2) We demonstrate a separation between testing and learning on {0,1}^d, for k=\omega(\log d): testing k-monotonicity can be performed with 2^{O(\sqrt{d} \cdot \log d \cdot \log(1/\eps))} queries, while learning k-monotone functions requires 2^{\Omega(k \cdot \sqrt{d} \cdot 1/\eps)} queries (Blais et al. (RANDOM 2015)). (3) We present a tolerant test for functions f\colon[n]^d\to \{0,1\} with complexity independent of n, which makes progress on a problem left open by Berman et al. (STOC 2014). Our techniques exploit the testing-by-learning paradigm, use novel applications of Fourier analysis on the grid [n]^d, and draw connections to distribution testing techniques.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8158Coding in Undirected Graphs Is Either Very Helpful or Not Helpful at All
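The chain-alternation definition above is easy to state in code; this minimal sketch (our own illustration, with hypothetical helper names) checks k-monotonicity along a single ascending chain.

```python
# A Boolean function restricted to an ascending chain is k-monotone iff its
# 0/1 value sequence along the chain changes value at most k times.

def alternations(values):
    """Number of value changes along a chain of 0/1 values."""
    return sum(1 for a, b in zip(values, values[1:]) if a != b)

def is_k_monotone_on_chain(values, k):
    return alternations(values) <= k

# 0 -> 1 is monotone (1-monotone); 0 -> 1 -> 0 -> 1 has 3 alternations,
# so it is 3-monotone but not 2-monotone.
```

A function on a poset is k-monotone when this check passes on every ascending chain, which is what a tester must probe with few queries.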
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8159
While it is known that using network coding can significantly improve the throughput of directed networks, it is a notorious open problem whether coding yields any advantage over the multicommodity flow (MCF) rate in undirected networks. It was conjectured that the answer is no. In this paper we show that even a small advantage over MCF can be amplified to yield a near-maximum possible gap. We prove that any undirected network with k source-sink pairs that exhibits a (1+epsilon) gap between its MCF rate and its network coding rate can be used to construct a family of graphs G' whose gap is log(|G'|)^c for some constant c < 1. The resulting gap is close to the best currently known upper bound, log(|G'|), which follows from the connection between MCF and sparsest cuts. Our construction relies on a gap-amplifying graph tensor product that, given two graphs G1,G2 with small gaps, creates another graph G with a gap that is equal to the product of the previous two, at the cost of increasing the size of the graph. We iterate this process to obtain a gap of log(|G'|)^c from any initial gap. Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8159Condorcet-Consistent and Approximately Strategyproof Tournament Rules
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8160
We consider the manipulability of tournament rules for round-robin tournaments of n competitors. Specifically, n competitors are competing for a prize, and a tournament rule r maps the result of all n(n-1)/2 pairwise matches (called a tournament, T) to a distribution over winners. Rule r is Condorcet-consistent if whenever i wins all n-1 of her matches, r selects i with probability 1. We consider strategic manipulation of tournaments where player j might throw their match to player i in order to increase the likelihood that one of them wins the tournament. Regardless of the reason why j chooses to do this, the potential for manipulation exists as long as Pr[r(T) = i] increases by more than Pr[r(T) = j] decreases. Unfortunately, it is known that every Condorcet-consistent rule is manipulable. In this work, we address the question of how manipulable Condorcet-consistent rules must necessarily be, by trying to minimize the difference between the increase in Pr[r(T) = i] and the decrease in Pr[r(T) = j] for any potential manipulating pair. We show that every Condorcet-consistent rule is in fact 1/3-manipulable, and that selecting a winner according to a random single elimination bracket is not alpha-manipulable for any alpha > 1/3. We also show that many previously studied tournament formats are all 1/2-manipulable, and that the popular class of Copeland rules (any rule that selects a player with the most wins) are all in fact 1-manipulable, the worst possible. Finally, we consider extensions to match-fixing among sets of more than two players.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8160Testing Submodularity and Other Properties of Valuation Functions
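The "random single elimination bracket" rule above is simple to simulate; this sketch (our own illustration, assuming n is a power of two) shows its Condorcet-consistency on a small tournament: a player who beats everyone wins under every random seeding.

```python
import random

# T[i][j] = 1 iff player i beats player j in the round-robin tournament.
# The rule seeds players uniformly at random and plays a single-elimination
# bracket; a Condorcet winner (beats all others) always survives every round.

def bracket_winner(T, rng):
    players = list(range(len(T)))
    rng.shuffle(players)                              # random seeding
    while len(players) > 1:
        players = [a if T[a][b] else b
                   for a, b in zip(players[::2], players[1::2])]
    return players[0]

# Player 0 beats everyone; the other results form an arbitrary consistent cycle.
T = [[0, 1, 1, 1],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 1, 0, 0]]
rng = random.Random(0)
winners = {bracket_winner(T, rng) for _ in range(50)}
```

The paper's 1/3-manipulability bound concerns how much a colluding pair can shift this rule's winner distribution, which the sketch does not attempt to reproduce.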
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8161
We show that for any constant \epsilon > 0 and p \ge 1, it is possible to distinguish functions f : \{0,1\}^n \to [0,1] that are submodular from those that are \epsilon-far from every submodular function in \ell_p distance with a constant number of queries. More generally, we extend the testing-by-implicit-learning framework of Diakonikolas et al. (2007) to show that every property of real-valued functions that is well-approximated in \ell_2 distance by a class of k-juntas for some k = O(1) can be tested in the \ell_p-testing model with a constant number of queries. This result, combined with a recent junta theorem of Feldman and Vondrak (2016), yields the constant-query testability of submodularity. It also yields constant-query testing algorithms for a variety of other natural properties of valuation functions, including fractionally additive (XOS) functions, OXS functions, unit demand functions, coverage functions, and self-bounding functions.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8161The Journey from NP to TFNP Hardness
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8162
The class TFNP is the search analog of NP with the additional guarantee that any instance has a solution. TFNP has attracted extensive attention due to its natural syntactic subclasses that capture the computational complexity of important search problems from algorithmic game theory, combinatorial optimization and computational topology. Thus, one of the main research objectives in the context of TFNP is to search for efficient algorithms for its subclasses, and at the same time proving hardness results where efficient algorithms cannot exist. Currently, no problem in TFNP is known to be hard under assumptions such as NP hardness, the existence of one-way functions, or even public-key cryptography. The only known hardness results are based on less general assumptions such as the existence of collision-resistant hash functions, one-way permutations, or less established cryptographic primitives (e.g. program obfuscation or functional encryption). Several works explained this status by showing various barriers to proving hardness of TFNP. In particular, it has been shown that hardness of TFNP cannot be based on worst-case NP hardness, unless NP=coNP. Therefore, we ask the following question: What is the weakest assumption sufficient for showing hardness in TFNP? In this work, we answer this question and show that hard-on-average TFNP problems can be based on the weak assumption that there exists a hard-on-average language in NP. In particular, this includes the assumption of the existence of one-way functions. In terms of techniques, we show an interesting interplay between problems in TFNP, derandomization techniques, and zero-knowledge proofs.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8162Well-Supported vs. Approximate Nash Equilibria: Query Complexity of Large Games
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8163
In this paper we present a generic reduction from the problem of finding an \epsilon-well-supported Nash equilibrium (WSNE) to that of finding a \Theta(\epsilon)-approximate Nash equilibrium (ANE), in large games with n players and a bounded number of strategies for each player. Our reduction complements the existing literature on relations between WSNE and ANE, and can be applied to extend hardness results on WSNE to similar results on ANE. This allows one to focus on WSNE first, which is in general easier to analyze and control in hardness constructions. As an application we prove a 2^{\Omega(n/\log n)} lower bound on the randomized query complexity of finding an \epsilon-ANE in binary-action n-player games, for some constant \epsilon>0. This answers an open problem posed by Hart and Nisan and by Babichenko, and is very close to the trivial upper bound of 2^n. Previously for WSNE, Babichenko showed a 2^{\Omega(n)} lower bound on the randomized query complexity of finding an \epsilon-WSNE for some constant \epsilon>0. Our result follows directly from combining Babichenko's result and our new reduction from WSNE to ANE.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8163Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8164
Given a twice continuously differentiable cost function f, we prove that the set of initial conditions so that gradient descent converges to saddle points where \nabla^2 f has at least one strictly negative eigenvalue, has (Lebesgue) measure zero, even for cost functions f with non-isolated critical points, answering an open question in [Lee, Simchowitz, Jordan, Recht, COLT 2016]. Moreover, this result extends to forward-invariant convex subspaces, allowing for weak (non-globally Lipschitz) smoothness assumptions. Finally, we produce an upper bound on the allowable step-size.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8164Mutation, Sexual Reproduction and Survival in Dynamic Environments
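The measure-zero phenomenon above can be seen on the textbook saddle f(x, y) = (x^2 - y^2)/2, whose only critical point (0, 0) has a strictly negative Hessian eigenvalue. This sketch (our own illustration, not the paper's proof) shows that gradient descent started on the stable manifold y = 0 converges to the saddle, while a generic initialization escapes it.

```python
# Gradient descent on f(x, y) = (x^2 - y^2)/2, so grad f = (x, -y).
# The update contracts x by (1 - step) and expands y by (1 + step):
# only the measure-zero set {y = 0} converges to the saddle at the origin.

def gradient_descent(x, y, step=0.1, iters=200):
    for _ in range(iters):
        x, y = x - step * x, y + step * y
    return x, y

x1, y1 = gradient_descent(1.0, 0.0)    # stable manifold: converges to the saddle
x2, y2 = gradient_descent(1.0, 1e-9)   # generic start: the y-coordinate escapes
```

The step-size bound in the abstract corresponds here to keeping |1 - step| < 1 so the dynamics remain a local diffeomorphism.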
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8165
A new approach to understanding evolution [Valiant, JACM 2009], namely viewing it through the lens of computation, has already started yielding new insights, e.g., natural selection under sexual reproduction can be interpreted as the Multiplicative Weight Update (MWU) Algorithm in coordination games played among genes [Chastain, Livnat, Papadimitriou, Vazirani, PNAS 2014]. Using this machinery, we study the role of mutation in changing environments in the presence of sexual reproduction. Following [Wolf, Vazirani, Arkin, J. Theor. Biology], we model changing environments via a Markov chain, with the states representing environments, each with its own fitness matrix. In this setting, we show that in the absence of mutation, the population goes extinct, but in the presence of mutation, the population survives with positive probability. On the way to proving the above theorem, we need to establish some facts about dynamics in games. We provide the first, to our knowledge, polynomial convergence bound for noisy MWU in a coordination game. Finally, we also show that in static environments, sexual evolution with mutation converges, for any level of mutation.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8165Constructive Non-Commutative Rank Computation Is in Deterministic Polynomial Time
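A minimal sketch of an MWU dynamic in a two-strategy coordination game, in the spirit of the abstract (this is one standard linear-update variant, chosen by us for illustration; it is not the paper's noisy dynamic, and the parameters are hypothetical).

```python
# Both players share the payoff matrix A of a coordination game. Each round,
# every strategy's weight is multiplied by (1 + eta * expected payoff) and
# renormalized; play drifts toward the pure coordinated outcome favored by
# the initial bias.

A = [[1.0, 0.0],
     [0.0, 1.0]]

def mwu_step(p, q, eta=0.1):
    """One multiplicative-weights update of both players' mixed strategies."""
    payoff_p = [sum(A[i][j] * q[j] for j in range(2)) for i in range(2)]
    payoff_q = [sum(A[i][j] * p[i] for i in range(2)) for j in range(2)]
    new_p = [p[i] * (1 + eta * payoff_p[i]) for i in range(2)]
    new_q = [q[j] * (1 + eta * payoff_q[j]) for j in range(2)]
    zp, zq = sum(new_p), sum(new_q)
    return [w / zp for w in new_p], [w / zq for w in new_q]

p, q = [0.6, 0.4], [0.6, 0.4]          # slight initial bias toward strategy 0
for _ in range(500):
    p, q = mwu_step(p, q)
```

In a coordination game the players' expected payoff acts as a potential function for this dynamic, which is why convergence is plausible; the paper's contribution is a polynomial bound for the noisy version.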
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8166
Let {\mathcal B} be a linear space of matrices over a field {\mathbb F} spanned by n\times n matrices B_1, \dots, B_m. The non-commutative rank of {\mathcal B} is the minimum r\in {\mathbb N} such that there exists U\leq {\mathbb F}^n satisfying \dim(U)-\dim({\mathcal B}(U))\geq n-r, where {\mathcal B}(U):={\mathrm span}(\cup_{i\in[m]} B_i(U)). Computing the non-commutative rank generalizes some well-known problems including the bipartite graph maximum matching problem and the linear matroid intersection problem. In this paper we give a deterministic polynomial-time algorithm to compute the non-commutative rank over any field {\mathbb F}. Prior to our work, such an algorithm was only known over the rational number field {\mathbb Q}, a result due to Garg et al. [GGOW]. Our algorithm is constructive and produces a witness certifying the non-commutative rank, a feature that is missing in the algorithm from [GGOW]. Our result is built on techniques which we developed in a previous paper [IQS1], with a new reduction procedure that helps to keep the blow-up parameter small. There are two ways to realize this reduction. The first involves constructivizing a key result of Derksen and Makam [DM2] which they developed in order to prove that the null cone of matrix semi-invariants is cut out by generators whose degree is polynomial in the size of the matrices involved. We also give a second, simpler method to achieve this. This gives another proof of the polynomial upper bound on the degree of the generators cutting out the null cone of matrix semi-invariants. Both the invariant-theoretic result and the algorithmic result rely crucially on the regularity lemma proved in [IQS1]. 
In this paper we improve on the constructive version of the regularity lemma from [IQS1] by removing a technical coprime condition that was assumed there.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8166Parallel Repetition via Fortification: Analytic View and the Quantum Case
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8167
In a recent work, Moshkovitz [FOCS'14] presented a transformation on two-player games called "fortification", and gave an elementary proof of an (exponential decay) parallel repetition theorem for fortified two-player projection games. In this paper, we give an analytic reformulation of Moshkovitz's fortification framework, which was originally cast in combinatorial terms. This reformulation allows us to expand the scope of the fortification method to new settings. First, we show that any game (not just projection games) can be fortified, and give a simple proof of parallel repetition for general fortified games. Then, we prove parallel repetition and fortification theorems for games with players sharing quantum entanglement, as well as games with more than two players. This gives a new gap amplification method for general games in the quantum and multiplayer settings, which has recently received much interest. An important component of our work is a variant of the fortification transformation, called "ordered fortification", that preserves the entangled value of a game. The original fortification of Moshkovitz does not in general preserve the entangled value of a game, and this was a barrier to extending the fortification framework to the quantum setting.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8167The Distortion of Locality Sensitive Hashing
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8168
Given a pairwise similarity notion between objects, locality sensitive hashing (LSH) aims to construct a hash function family over the universe of objects such that the probability that two objects hash to the same value is their similarity. LSH is a powerful algorithmic tool for large-scale applications and much work has been done to understand LSHable similarities, i.e., similarities that admit an LSH. In this paper we focus on similarities that are provably non-LSHable and propose a notion of distortion to capture the approximation of such a similarity by a similarity that is LSHable. We consider several well-known non-LSHable similarities and show tight upper and lower bounds on their distortion. We also experimentally show that our upper bounds translate to eFri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8168Approximating Approximate Distance Oracles
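The canonical example of an LSHable similarity is Jaccard similarity via MinHash: the collision probability of the hash under a random permutation equals the similarity exactly. This sketch is our own illustration of that textbook fact (the function names and parameters are hypothetical), not a construction from the paper.

```python
import random

# MinHash: hash a set to the minimum rank of its elements under a random
# permutation of the universe. Pr[h(A) == h(B)] = |A & B| / |A | B|,
# so estimating the collision frequency estimates the Jaccard similarity.

def minhash(s, perm):
    return min(perm[x] for x in s)

def estimate_jaccard(a, b, universe, trials=2000, seed=0):
    rng = random.Random(seed)                 # fixed seed: deterministic output
    hits = 0
    for _ in range(trials):
        order = rng.sample(universe, len(universe))   # random permutation
        perm = {x: i for i, x in enumerate(order)}
        hits += minhash(a, perm) == minhash(b, perm)
    return hits / trials

A, B = {0, 1, 2, 3}, {2, 3, 4, 5}             # Jaccard similarity = 2/6 = 1/3
est = estimate_jaccard(A, B, list(range(6)))
```

Similarities for which no such family exists are exactly the non-LSHable ones whose distortion the paper studies.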
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8169
Given a finite metric space (V,d), an approximate distance oracle is a data structure which, when queried on two points u,v \in V, returns an approximation to the actual distance between u and v which is within some bounded stretch factor of the true distance. There has been significant work on the tradeoff between the important parameters of approximate distance oracles (and in particular between the size, stretch, and query time), but in this paper we take a different point of view, that of per-instance optimization. If we are given a particular input metric space and stretch bound, can we find the smallest possible approximate distance oracle for that particular input? Since this question is not even well-defined, we restrict our attention to well-known classes of approximate distance oracles, and study whether we can optimize over those classes. In particular, we give an O(\log n)-approximation to the problem of finding the smallest stretch 3 Thorup-Zwick distance oracle, as well as the problem of finding the smallest P\v{a}tra\c{s}cu-Roditty distance oracle. We also prove a matching \Omega(\log n) lower bound for both problems, and an \Omega(n^{\frac{1}{k}-\frac{1}{2^{k-1}}}) integrality gap for the more general stretch (2k-1) Thorup-Zwick distance oracle. We also consider the problem of approximating the best TZ or PR approximate distance oracle with outliers, and show that more advanced techniques (SDP relaxations in particular) allow us to optimize even in the presence of outliers. Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8169Quantum Codes from High-Dimensional Manifolds
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8170
We construct toric codes on various high-dimensional manifolds. Assuming a conjecture in geometry we find families of quantum CSS stabilizer codes on N qubits with logarithmic weight stabilizers and distance N^{1-\epsilon} for any \epsilon>0. The conjecture is that there is a constant C>0 such that for any n-dimensional torus {\mathbb T}^n={\mathbb R}^n/\Lambda, where \Lambda is a lattice, the least volume unoriented n/2-dimensional cycle (using the Euclidean metric) representing nontrivial homology has volume at least C^n times the volume of the least volume n/2-dimensional hyperplane representing nontrivial homology; in fact, it would suffice to have this result for \Lambda an integral lattice with the cycle restricted to faces of a cubulation by unit hypercubes. The main technical result is an estimate of Rankin invariants for certain random lattices, showing that in a certain sense they are optimal. Additionally, we construct codes with square-root distance, logarithmic weight stabilizers, and inverse polylogarithmic soundness factor (considered as quantum locally testable codes). We also provide a short, alternative proof that the shortest vector in the exterior power of a lattice may be non-split.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8170Self-Sustaining Iterated Learning
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8171
An important result from psycholinguistics (Griffiths & Kalish, 2005) states that no language can be learned iteratively by rational agents in a self-sustaining manner. We show how to modify the learning process slightly in order to achieve self-sustainability. Our work is in two parts. First, we characterize iterated learnability in geometric terms and show how a slight, steady increase in the lengths of the training sessions ensures self-sustainability for any discrete language class. In the second part, we tackle the nondiscrete case and investigate self-sustainability for iterated linear regression. We discuss the implications of our findings to issues of non-equilibrium dynamics in natural algorithms.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8171What Circuit Classes Can Be Learned with Non-Trivial Savings?
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8172
Despite decades of intensive research, efficient - or even sub-exponential time - distribution-free PAC learning algorithms are not known for many important Boolean function classes. In this work we suggest a new perspective on these learning problems, inspired by a surge of recent research in complexity theory, in which the goal is to determine whether and how much of a savings over a naive 2^n runtime can be achieved. We establish a range of exploratory results towards this end. In more detail, (1) We first observe that a simple approach building on known uniform-distribution learning results gives non-trivial distribution-free learning algorithms for several well-studied classes including AC0, arbitrary functions of a few linear threshold functions (LTFs), and AC0 augmented with mod_p gates. (2) Next we present an approach, based on the method of random restrictions from circuit complexity, which can be used to obtain several distribution-free learning algorithms that do not appear to be achievable by approach (1) above. The results achieved in this way include learning algorithms with non-trivial savings for LTF-of-AC0 circuits and improved savings for learning parity-of-AC0 circuits. (3) Finally, our third contribution is a generic technique for converting lower bounds proved using Neciporuk's method to learning algorithms with non-trivial savings. This technique, which is the most involved of our three approaches, yields distribution-free learning algorithms for a range of classes where previously even non-trivial uniform-distribution learning algorithms were not known; these classes include full-basis formulas, branching programs, span programs, etc. up to some fixed polynomial size. Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8172The Classification of Reversible Bit Operations
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8173
We present a complete classification of all possible sets of classical reversible gates acting on bits, in terms of which reversible transformations they generate, assuming swaps and ancilla bits are available for free. Our classification can be seen as the reversible-computing analogue of Post's lattice, a central result in mathematical logic from the 1940s. It is a step toward the ambitious goal of classifying all possible quantum gate sets acting on qubits. Our theorem implies a linear-time algorithm (which we have implemented) that takes as input the truth tables of reversible gates G and H, and that decides whether G generates H. Previously, this problem was not even known to be decidable (though with effort, one can derive from abstract considerations an algorithm that takes triply-exponential time). The theorem also implies that any n-bit reversible circuit can be "compressed" to an equivalent circuit, over the same gates, that uses at most 2^{n}poly(n) gates and O(1) ancilla bits; these are the first upper bounds on these quantities known, and are close to optimal. Finally, the theorem implies that every non-degenerate reversible gate can implement either every reversible transformation, or every affine transformation, when restricted to an "encoded subspace." Briefly, the theorem says that every set of reversible gates generates either all reversible transformations on n-bit strings (as the Toffoli gate does); no transformations; all transformations that preserve Hamming weight (as the Fredkin gate does); all transformations that preserve Hamming weight mod k for some k; all affine transformations (as the Controlled-NOT gate does); all affine transformations that preserve Hamming weight mod 2 or mod 4, inner products mod 2, or a combination thereof; or a previous class augmented by a NOT or NOTNOT gate. Prior to this work, it was not even known that every class was finitely generated. 
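One invariant from the classification above is easy to check by brute force: the Fredkin gate preserves Hamming weight on every input while the Toffoli gate does not. This sketch is our own illustration of that single fact, not the paper's decision algorithm.

```python
from itertools import product

# Toffoli flips its target bit when both controls are set (changing the
# Hamming weight on input 110); Fredkin is a controlled swap, so the
# multiset of bits, and hence the weight, is always preserved.

def toffoli(a, b, c):
    return a, b, c ^ (a & b)

def fredkin(a, b, c):
    return (a, c, b) if a else (a, b, c)

def preserves_hamming_weight(gate):
    return all(sum(bits) == sum(gate(*bits))
               for bits in product([0, 1], repeat=3))
```

In the theorem's language, this invariant is what separates the Fredkin-generated class (weight-preserving transformations) from the Toffoli-generated class (all reversible transformations).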
Ruling out the possibility of additional classes, not in the list, requires involved arguments about polynomials, lattices, and Diophantine equations.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8173Cube vs. Cube Low Degree Test
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8174
We revisit the Raz-Safra plane-vs.-plane test and study the closely related cube vs. cube test. In this test the tester has access to a "cubes table" which assigns to every cube a low degree polynomial. The tester randomly selects two cubes (affine sub-spaces of dimension 3) that intersect on a point x in F^m, and checks that the assignments to the cubes agree with each other on the point x. Our main result is a new combinatorial proof for a low degree test that comes closer to the soundness limit, as it works for all epsilon >= poly(d)/{|F|}^{1/2}, where d is the degree. This should be compared to the previously best soundness value of epsilon >= poly(m, d)/|F|^{1/8}. Our soundness limit improves upon the dependence on the field size and does not depend on the dimension of the ambient space. Our proof is combinatorial and direct: unlike the Raz-Safra proof, it proceeds in one shot and does not require induction on the dimension of the ambient space. The ideas in our proof come from works on direct product testing, which are even simpler in the current setting thanks to the low degree. Along the way we also prove a somewhat surprising fact about the connection between different agreement tests: it does not matter if the tester chooses the cubes to intersect on points or on lines: for every given table, its success probability in either test is nearly the same.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8174The Complexity of Problems in P Given Correlated Instances
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8175
Instances of computational problems do not exist in isolation. Rather, multiple and correlated instances of the same problem arise naturally in the real world. The challenge is how to gain computationally from correlations when they can be found. [DGH, ITCS 2015] showed that significant computational gains can be made by having access to auxiliary instances which are correlated to the primary problem instance via the solution space. They demonstrate this for constraint satisfaction problems, which are NP-hard in their general worst-case form. Here, we set out to study the impact of having access to correlated instances on the complexity of polynomial time problems. Namely, for a problem P that is conjectured to require time n^c for c>0, we ask whether access to a few instances of P that are correlated in some natural way can be used to solve P on one of them (the designated "primary instance") faster than the conjectured lower bound of n^c. We focus our attention on a number of problems: the Longest Common Subsequence (LCS), the minimum Edit Distance between sequences, and Dynamic Time Warping Distance (DTWD) of curves, for all of which the best known algorithms achieve O(n^2/polylog(n)) runtime via dynamic programming. These problems form an interesting case in point to study, as it has been shown that an O(n^{2-epsilon})-time algorithm for a worst-case instance would imply improved algorithms for a host of other problems as well as disprove complexity hypotheses such as the Strong Exponential Time Hypothesis. We show how to use access to a logarithmic number of auxiliary correlated instances to design novel o(n^2) time algorithms for LCS, EDIT, DTWD, and more generally improved algorithms for computing any tuple-based similarity measure - a generalization which we define on strings. For the multiple sequence alignment problem on k strings, this yields an O(nk\log n) algorithm contrasting with the classical O(n^k) dynamic programming.
Our results hold for several correlation models between the primary and the auxiliary instances. In the most general correlation model we address, we assume that the primary instance is a worst-case instance and the auxiliary instances are chosen with uniform distribution subject to the constraint that their alignments are epsilon-correlated with the optimal alignment of the primary instance. We emphasize that optimal solutions for the auxiliary instances will not generally coincide with optimal solutions for the worst-case primary instance. We view our work as pointing out a new avenue for looking for significant improvements for sequence alignment problems and computing similarity measures, by taking advantage of access to sequences which are correlated through natural generating processes. In this first work we show how to take advantage of simple, clean, mathematically inspired models of correlation - the intriguing question, looking forward, is to find correlation models which coincide with evolutionary models and other relationships and for which our approach to multiple sequence alignment gives provable guarantees.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8175Compression in a Distributed Setting
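For context, the quadratic dynamic-programming baseline that the abstract above measures against can be sketched as follows. This is the standard textbook LCS recurrence, not the paper's correlated-instance algorithm:

```python
def lcs_length(a, b):
    # Classic O(|a| * |b|) dynamic program: dp[i][j] is the LCS length of
    # the prefixes a[:i] and b[:j].
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]
```

The paper's point is that, with access to correlated auxiliary instances, this quadratic barrier can be beaten, something conditional lower bounds rule out in the worst case without such access.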
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8176
Motivated by an attempt to understand the formation and development of (human) language, we introduce a "distributed compression" problem. In our problem a sequence of pairs of players from a set of K players are chosen and tasked to communicate messages drawn from an unknown distribution Q. Arguably languages are created and evolve to compress frequently occurring messages, and we focus on this aspect. The only knowledge that players have about the distribution Q is from previously drawn samples, but these samples differ from player to player. The only common knowledge between the players is restricted to a common prior distribution P and some constant number of bits of information (such as a learning algorithm). Letting T_epsilon denote the number of iterations it would take for a typical player to obtain an epsilon-approximation to Q in total variation distance, we ask whether T_epsilon iterations suffice to compress the messages down roughly to their entropy, and give a partial positive answer. We show that a natural uniform algorithm can compress the communication down to an average cost per message of O(H(Q) + log D(P || Q)) in tilde{O}(T_epsilon) iterations while allowing for O(epsilon)-error, where D(. || .)
denotes the KL-divergence between distributions. For large divergences this compares favorably with the static algorithm that ignores all samples and compresses down to H(Q) + D(P || Q) bits, while not requiring the T_epsilon * K iterations it would take players to develop optimal but separate compressions for each pair of players. Along the way we introduce a "data-structural" view of the task of communicating with a natural language and show that our natural algorithm can also be implemented by an efficient data structure, whose storage is comparable to the storage requirements of Q and whose query complexity is comparable to the lengths of the messages to be compressed. Our results give a plausible mathematical analogy to the mechanisms by which human languages get created and evolve, and in particular highlight the possibility of coordination towards a joint task (agreeing on a language) while engaging in distributed learning.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8176Multi-Clique-Width
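The static baseline in the abstract above can be made concrete: a code optimized for the shared prior P assigns ideal length -log2 P(m) to message m, and under the true distribution Q its expected length is the cross-entropy, which decomposes by the standard identity as H(Q) plus a KL-divergence term (the abstract states this with its own divergence convention). A minimal numerical sketch, with illustrative example distributions:

```python
import math

def entropy(q):
    # Shannon entropy H(Q) in bits.
    return -sum(qi * math.log2(qi) for qi in q if qi > 0)

def kl(q, p):
    # KL-divergence D(Q || P) in bits.
    return sum(qi * math.log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def expected_code_length(q, p):
    # Expected length of an idealized code built for prior P, used on
    # messages drawn from Q: the cross-entropy H(Q) + D(Q || P).
    return sum(qi * -math.log2(pi) for qi, pi in zip(q, p) if qi > 0)

q = [0.5, 0.25, 0.25]   # true (unknown) message distribution Q
p = [0.25, 0.25, 0.5]   # shared prior P
```

Here the penalty for coding with the wrong distribution is exactly the divergence term, which is what the adaptive algorithm in the paper reduces to a logarithmic dependence.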
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8177
Multi-clique-width is obtained by a simple modification of the definition of clique-width. It has the advantage of providing a natural extension of tree-width. Unlike clique-width, it does not explode exponentially compared to tree-width. Efficient algorithms based on multi-clique-width are still possible for interesting tasks like computing the independent set polynomial or testing c-colorability. In particular, c-colorability can be tested in time linear in n and singly exponential in c and the width k of a given multi-k-expression. For these tasks, the running time as a function of the multi-clique-width is the same as the running time of the fastest known algorithm as a function of the clique-width. This results in an exponential speed-up for some graphs, if the corresponding graph-generating expressions are given. The reason is that the multi-clique-width is never larger than the clique-width, but is exponentially smaller for many graphs. This gap shows up when the tree-width is basically equal to the multi-clique-width, as well as when the tree-width is not bounded by any function of the clique-width.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8177Conditional Hardness for Sensitivity Problems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8178
In recent years it has become popular to study dynamic problems in a sensitivity setting: instead of allowing an arbitrary sequence of updates, the sensitivity model only allows applying batch updates of small size to the original input data. The sensitivity model is particularly appealing since recent strong conditional lower bounds ruled out fast algorithms for many dynamic problems, such as shortest paths, reachability, or subgraph connectivity. In this paper we prove conditional lower bounds for these and additional problems in a sensitivity setting. For example, we show that under the Boolean Matrix Multiplication (BMM) conjecture combinatorial algorithms cannot compute the (4/3-\varepsilon)-approximate diameter of an undirected unweighted dense graph with truly subcubic preprocessing time and truly subquadratic update/query time. This result is surprising since in the static setting it is not clear whether a reduction from BMM to diameter is possible. We further show under the BMM conjecture that many problems, such as reachability or approximate shortest paths, cannot be solved faster than by recomputation from scratch even after only one or two edge insertions. We extend our reduction from BMM to Diameter to give a reduction from All Pairs Shortest Paths to Diameter under one deletion in weighted graphs. This is intriguing, as in the static setting it is a big open problem whether Diameter is as hard as APSP. We further get a nearly tight lower bound for shortest paths after two edge deletions based on the APSP conjecture. We give more lower bounds under the Strong Exponential Time Hypothesis. Many of our lower bounds also hold for static oracle data structures where no sensitivity is required. Finally, we give the first algorithm for the (1+\varepsilon)-approximate radius, diameter, and eccentricity problems in directed or undirected unweighted graphs in the case of single-edge failures.
The algorithm has a truly subcubic running time for graphs with a truly subquadratic number of edges; it is tight w.r.t. the conditional lower bounds we obtain.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8178Non-Backtracking Spectrum of Degree-Corrected Stochastic Block Models
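The "recomputation from scratch" baseline that these lower bounds compare against is, for the diameter, simply one BFS per vertex after every update. A minimal illustrative sketch for unweighted undirected graphs (O(n(n+m)) per recomputation), with a small example path graph:

```python
from collections import deque

def diameter(adj):
    # Recompute-from-scratch: run one BFS per vertex and take the largest
    # eccentricity. adj maps each vertex to a list of neighbors; the graph
    # is assumed connected.
    def eccentricity(s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(eccentricity(v) for v in adj)

# Example: the path 0 - 1 - 2 - 3 has diameter 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

The conditional lower bounds in the paper say that, for combinatorial algorithms under BMM, one essentially cannot do much better than rerunning such a computation after each batch update.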
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8179
Motivated by community detection, we characterise the spectrum of the non-backtracking matrix B in the Degree-Corrected Stochastic Block Model. Specifically, we consider a random graph on n vertices partitioned into two asymptotically equal-sized clusters. The vertices have i.i.d. weights {\phi_u}_{u=1}^n with second moment \Phi_2. The intra-cluster connection probability for vertices u and v is \frac{\phi_u \phi_v}{n}a and the inter-cluster connection probability is \frac{\phi_u \phi_v}{n}b. We show that with high probability, the following holds: the leading eigenvalue of the non-backtracking matrix B is asymptotic to \rho = \frac{a+b}{2} \Phi_2. The second eigenvalue is asymptotic to \mu_2 = \frac{a-b}{2} \Phi_2 when \mu_2^2 > \rho, but asymptotically bounded by \sqrt{\rho} when \mu_2^2 \leq \rho. All the remaining eigenvalues are asymptotically bounded by \sqrt{\rho}. As a result, a clustering positively correlated with the true communities can be obtained based on the second eigenvector of B in the regime where \mu_2^2 > \rho. In previous work we showed that detection is impossible when \mu_2^2 \leq \rho, meaning that a phase transition occurs in the sparse regime of the Degree-Corrected Stochastic Block Model. As a corollary, we obtain that Degree-Corrected Erdős-Rényi graphs asymptotically satisfy the graph Riemann hypothesis, a quasi-Ramanujan property. A by-product of our proof is a weak law of large numbers for local functionals on Degree-Corrected Stochastic Block Models, which could be of independent interest.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8179On the Power of Learning from k-Wise Queries
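The non-backtracking matrix B in the abstract above is easy to construct explicitly: it is indexed by directed edges ("darts"), with B[(u,v),(v,w)] = 1 exactly when w != u. A small sketch of our own, with a triangle as the example graph:

```python
def non_backtracking_matrix(edges):
    # Build B for an undirected graph given as a list of edges (u, v).
    # Rows/columns are indexed by darts (directed edges); a dart (u, v) can
    # step to (v, w) only if it does not immediately backtrack (w != u).
    darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    idx = {d: i for i, d in enumerate(darts)}
    n = len(darts)
    B = [[0] * n for _ in range(n)]
    for (u, v) in darts:
        for (x, w) in darts:
            if x == v and w != u:
                B[idx[(u, v)]][idx[(x, w)]] = 1
    return B, darts

# Triangle: 2-regular, so each dart has exactly one non-backtracking successor.
B, darts = non_backtracking_matrix([(0, 1), (1, 2), (2, 0)])
```

For a d-regular graph every row of B sums to d-1, which is why the leading eigenvalue tracks the average (weighted) branching factor \rho in the theorem.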
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8180
Several well-studied models of access to data samples, including statistical queries, local differential privacy and low-communication algorithms, rely on queries that provide information about a function of a single sample. (For example, a statistical query (SQ) gives an estimate of E_{x ~ D}[q(x)] for any choice of the query function q mapping X to the reals, where D is an unknown data distribution over X.) Yet some data analysis algorithms rely on properties of functions that depend on multiple samples. Such algorithms would be naturally implemented using k-wise queries, each of which is specified by a function q mapping X^k to the reals. Hence it is natural to ask whether algorithms using k-wise queries can solve learning problems more efficiently, and by how much. Blum, Kalai and Wasserman (2003) showed that for any weak PAC learning problem over a fixed distribution, the complexity of learning with k-wise SQs is smaller than the (unary) SQ complexity by a factor of at most 2^k. We show that for more general problems over distributions the picture is substantially richer. For every k, the complexity of distribution-independent PAC learning with k-wise queries can be exponentially larger than learning with (k+1)-wise queries. We then give two approaches for simulating a k-wise query using unary queries. The first approach exploits the structure of the problem that needs to be solved. It generalizes and strengthens (exponentially) the results of Blum et al. It allows us to derive strong lower bounds for learning DNF formulas and stochastic constraint satisfaction problems that hold against algorithms using k-wise queries. The second approach exploits the k-party communication complexity of the k-wise query function.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8180Nondeterministic Quantum Communication Complexity: the Cyclic Equality Game and Iterated Matrix Multiplication
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8181
We study nondeterministic multiparty quantum communication with a quantum generalization of broadcasts. We show that, with number-in-hand classical inputs, the communication complexity of a Boolean function in this communication model equals the logarithm of the support rank of the corresponding tensor, whereas the approximation complexity in this model equals the logarithm of the border support rank. This characterisation allows us to prove a log-rank conjecture posed by Villagra et al. for nondeterministic multiparty quantum communication with message passing.The support rank characterization of the communication model connects quantum communication complexity intimately to the theory of asymptotic entanglement transformation and algebraic complexity theory. In this context, we introduce the graphwise equality problem. For a cycle graph, the complexity of this communication problem is closely related to the complexity of the computational problem of multiplying matrices, or more precisely, it equals the logarithm of the support rank of the iterated matrix multiplication tensor. We employ Strassen’s laser method to show that asymptotically there exist nontrivial protocols for every odd-player cyclic equality problem. We exhibit an efficient protocol for the 5-player problem for small inputs, and we show how Young flattenings yield nontrivial complexity lower bounds.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8181Overlapping Qubits
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8182
An ideal system of n qubits has 2^n dimensions. This exponential grants power, but also hinders characterizing the system's state and dynamics. We study a new problem: the qubits in a physical system might not be independent. They can "overlap," in the sense that an operation on one qubit slightly affects the others. We show that allowing for slight overlaps, n qubits can fit in just polynomially many dimensions. (Defined in a natural way, all pairwise overlaps can be <= epsilon in n^{O(1/epsilon^2)} dimensions.) Thus, even before considering issues like noise, a real system of n qubits might inherently lack any potential for exponential power. On the other hand, we also provide an efficient test to certify exponential dimensionality. Unfortunately, the test is sensitive to noise. It is important to devise more robust tests on the arrangements of qubits in quantum devices.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8182High Dimensional Random Walks and Colorful Expansion
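As a simplified real-vector analogue of the overlap phenomenon in the abstract above (the paper works with qubit subsystems, not raw state vectors; this sketch only illustrates the underlying geometric fact that many nearly orthogonal directions fit in far fewer dimensions than exact orthogonality would require):

```python
import math
import random

rng = random.Random(0)  # fixed seed so the experiment is reproducible

def random_unit_vector(d):
    # A uniformly random direction in R^d: normalize a Gaussian vector.
    v = [rng.gauss(0, 1) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

d, n = 500, 20
states = [random_unit_vector(d) for _ in range(n)]

# Pairwise "overlaps": absolute inner products, expected to be ~ 1/sqrt(d).
overlaps = [abs(sum(a * b for a, b in zip(u, w)))
            for i, u in enumerate(states) for w in states[i + 1:]]
```

With d = 500 the typical overlap is around 1/sqrt(500), i.e. about 0.04, even though 20 exactly orthogonal vectors would need only 20 dimensions; the paper's point is the quantitative version of this trade-off for qubits, where tolerating overlap epsilon collapses the dimension from 2^n to n^{O(1/epsilon^2)}.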
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8183
Random walks on bounded degree expander graphs have numerous applications, both in theoretical and practical computational problems. A key property of these walks is that they converge rapidly to their stationary distribution.In this work we define high order random walks: These are generalizations of random walks on graphs to high dimensional simplicial complexes, which are the high dimensional analogues of graphs. A simplicial complex of dimension d has vertices, edges, triangles, pyramids, up to d-dimensional cells. For any 0 \leq i < d, a high order random walk on dimension i moves between neighboring i-faces (e.g., edges) of the complex, where two i-faces are considered neighbors if they share a common (i+1)-face (e.g., a triangle). The case of i=0 recovers the well studied random walk on graphs.We provide a local-to-global criterion on a complex which implies rapid convergence of all high order random walks on it. Specifically, we prove that if the 1-dimensional skeletons of all the links of a complex are spectral expanders, then for all 0 \le i < d the high order random walk on dimension i converges rapidly to its stationary distribution.We derive our result through a new notion of high dimensional combinatorial expansion of complexes which we term colorful expansion. This notion is a natural generalization of combinatorial expansion of graphs and is strongly related to the convergence rate of the high order random walks.We further show an explicit family of bounded degree complexes which satisfy this criterion. Specifically, we show that Ramanujan complexes meet this criterion, and thus form an explicit family of bounded degree high dimensional simplicial complexes in which all of the high order random walks converge rapidly to their stationary distribution.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8183Towards Human Computable Passwords
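The dimension-1 walk described in the abstract above (two edges are neighbors when they share a triangle) can be materialized directly from a complex's triangle list. A small sketch of our own:

```python
from itertools import combinations

def edge_walk_graph(triangles):
    # Build the adjacency structure of the high order random walk on
    # 1-faces: two edges are neighbors iff they lie in a common 2-face.
    adj = {}
    for tri in triangles:
        edges = [tuple(sorted(p)) for p in combinations(tri, 2)]
        for e, f in combinations(edges, 2):
            adj.setdefault(e, set()).add(f)
            adj.setdefault(f, set()).add(e)
    return adj

# Two triangles glued along the edge (1, 2).
adj = edge_walk_graph([(0, 1, 2), (1, 2, 3)])
```

The i = 0 case of this construction recovers the ordinary graph random walk; the paper's local-to-global criterion says that spectral expansion of all the links forces every such walk, at every dimension, to mix rapidly.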
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8184
An interesting challenge for the cryptography community is to design authentication protocols that are so simple that a human can execute them without relying on a fully trusted computer. We propose several candidate authentication protocols for a setting in which the human user can only receive assistance from a semi-trusted computer - a computer that stores information and performs computations correctly but does not provide confidentiality. Our schemes use a semi-trusted computer to store and display public challenges C_i\in[n]^k. The human user memorizes a random secret mapping \sigma:[n]\rightarrow \mathbb{Z}_d and authenticates by computing responses f(\sigma(C_i)) to a sequence of public challenges, where f:\mathbb{Z}_d^k\rightarrow \mathbb{Z}_d is a function that is easy for the human to evaluate. We prove that any statistical adversary needs to sample m=\tilde{\Omega}(n^{s(f)}) challenge-response pairs to recover \sigma, for a security parameter s(f) that depends on two key properties of f. Our lower bound generalizes recent results of Feldman et al. [Feldman'15], who proved analogous results for the special case d=2. To obtain our results, we apply the general hypercontractivity theorem [O'Donnell'14] to lower bound the statistical dimension of the distribution over challenge-response pairs induced by f and \sigma. Our statistical dimension lower bounds apply to arbitrary functions f:\mathbb{Z}_d^k\rightarrow \mathbb{Z}_d (not just to functions that are easy for a human to evaluate). As an application, we propose a family of human computable password functions f_{k_1,k_2} in which the user needs to perform 2k_1+2k_2+1 primitive operations (e.g., adding two digits or remembering a secret value \sigma(i)), and we show that s(f) = \min\{k_1+1, (k_2+1)/2\}. For these schemes, we prove that forging passwords is equivalent to recovering the secret mapping.
Thus, our human computable password schemes can maintain strong security guarantees even after an adversary has observed the user login to many different accounts. Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8184Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
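The challenge-response flow described above can be sketched concretely. Here f is a toy sum-mod-d function standing in for the paper's f_{k_1,k_2}, and all parameter values are illustrative only:

```python
import random

d, n, k = 10, 30, 4          # digit alphabet, universe size, challenge length
rng = random.Random(1)

# The user's memorized secret mapping sigma: [n] -> Z_d.
sigma = {i: rng.randrange(d) for i in range(n)}

def f(digits):
    # Toy "human computable" response function: sum of the secret digits
    # mod d. The paper's f_{k1,k2} is more structured; this is a stand-in.
    return sum(digits) % d

# The semi-trusted computer displays a public challenge; the user responds.
challenge = rng.sample(range(n), k)
response = f([sigma[i] for i in challenge])
```

The security claim is that an adversary observing many such (challenge, response) pairs still needs roughly n^{s(f)} of them before it can statistically pin down sigma.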
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8185
First-order methods play a central role in large-scale machine learning. Even though many variations exist, each suited to a particular problem, almost all such methods fundamentally rely on two types of algorithmic steps: gradient descent, which yields primal progress, and mirror descent, which yields dual progress. We observe that the performance guarantees of gradient and mirror descent are complementary, so that faster algorithms can be designed by "linearly coupling" the two. We show how to reconstruct Nesterov's accelerated gradient methods using linear coupling, which gives a cleaner interpretation than Nesterov's original proofs. We also discuss the power of linear coupling by extending it to many other settings that Nesterov's methods do not apply to.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8185Low-Sensitivity Functions from Unambiguous Certificates
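A minimal sketch of the coupling on a toy quadratic: each iteration queries the gradient at a point y that linearly interpolates a gradient-descent iterate x (primal progress) and a mirror-descent iterate z (dual progress). The step sizes below follow the standard linear-coupling template; the objective and constants are illustrative, not from the paper:

```python
def f(p):
    a, b = p
    return a * a + 10 * b * b      # smooth convex; gradient is 20-Lipschitz

def grad(p):
    a, b = p
    return (2 * a, 20 * b)

L = 20.0
x = z = (5.0, 5.0)                 # x: gradient iterate, z: mirror iterate
for k in range(200):
    tau = 2.0 / (k + 2)
    # Linearly couple the two iterates to pick the query point.
    y = tuple(tau * zi + (1 - tau) * xi for zi, xi in zip(z, x))
    g = grad(y)
    # Gradient step (primal progress).
    x = tuple(yi - gi / L for yi, gi in zip(y, g))
    # Mirror step with growing step size (dual progress; Euclidean mirror map).
    alpha = (k + 2) / (2 * L)
    z = tuple(zi - alpha * gi for zi, gi in zip(z, g))
```

Neither x alone (plain gradient descent) nor z alone (plain mirror descent) achieves the accelerated 1/k^2 rate; the coupled sequence does, which is the paper's reinterpretation of Nesterov's method.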
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8186
We provide new query complexity separations against sensitivity for total Boolean functions: a power 3 separation between deterministic (and even randomized or quantum) query complexity and sensitivity, and a power 2.22 separation between certificate complexity and sensitivity. We get these separations by using a new connection between sensitivity and a seemingly unrelated measure called one-sided unambiguous certificate complexity. We also show that one-sided unambiguous certificate complexity is lower-bounded by fractional block sensitivity, which means we cannot use these techniques to get a super-quadratic separation between block sensitivity and sensitivity. Along the way, we give a power 1.22 separation between certificate complexity and one-sided unambiguous certificate complexity, improving the power 1.128 separation due to Goos [FOCS 2015]. As a consequence, we obtain an improved lower-bound on the co-nondeterministic communication complexity of the Clique vs. Independent Set problem.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8186Expander Construction in VNC1
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8187
We give a combinatorial analysis (using edge expansion) of a variant of the iterative expander construction due to Reingold, Vadhan, and Wigderson (2002), and show that this analysis can be formalized in the bounded arithmetic system VNC^1 (corresponding to "NC^1 reasoning"). As a corollary, we prove the assumption made by Jerabek (2011) that a construction of certain bipartite expander graphs can be formalized in VNC^1. This in turn implies that every proof in Gentzen's sequent calculus LK of a monotone sequent can be simulated in the monotone version of LK (MLK) with only polynomial blowup in proof size, strengthening the quasipolynomial simulation result of Atserias, Galesi, and Pudlak (2002).Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8187Outlaw Distributions and Locally Decodable Codes
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8188
Locally decodable codes (LDCs) are error correcting codes that allow for decoding of a single message bit using a small number of queries to a corrupted encoding. Despite decades of study, the optimal trade-off between query complexity and codeword length is far from understood. In this work, we give a new characterization of LDCs using distributions over Boolean functions whose expectation is hard to approximate (in L_\infty norm) with a small number of samples. We coin the term 'outlaw distributions' for such distributions since they 'defy' the Law of Large Numbers. We show that the existence of outlaw distributions over sufficiently 'smooth' functions implies the existence of constant query LDCs and vice versa. We give several candidates for outlaw distributions over smooth functions coming from finite field incidence geometry and from hypergraph (non)expanders.We also prove a useful lemma showing that (smooth) LDCs which are only required to work on average over a random message and a random message index can be turned into true LDCs at the cost of only constant factors in the parameters.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8188The Flow of Information in Interactive Quantum Protocols: the Cost of Forgetting
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8189
In two-party interactive quantum communication protocols, we study a recently defined notion of quantum information cost (QIC), which has most of the important properties of its classical analogue (IC). Notably, its link with amortized quantum communication complexity has been used to prove an (almost) tight lower bound on the bounded-round quantum complexity of Disjointness. However, QIC was defined through a purification of the input state. This is valid for fully quantum inputs and tasks but difficult to interpret even for classical tasks. Also, its link with other notions of information cost that had appeared in the literature was not clear. We settle both these issues: for quantum communication with classical inputs, we characterize QIC in terms of information about the input registers, avoiding any reference to the notion of a purification of the classical input state. We provide an operational interpretation of this new characterization as the sum of the costs of revealing and of forgetting information about the inputs. To obtain this result, we prove a general Information Flow Lemma assessing the transfer of information in general interactive quantum processes. Specializing this lemma to interactive quantum protocols accomplishing classical tasks, we are able to demystify the link between QIC and other previous notions of information cost in quantum protocols. Furthermore, we clarify the link between QIC and IC by quantumly simulating classical protocols. Finally, we apply these concepts to argue that any quantum protocol that does not forget information solves Disjointness on n bits in Omega(n) communication, completely losing the quadratic quantum speedup. Hence forgetting information is here a necessary feature in order to obtain any significant improvement over classical protocols.
We also prove that QIC at zero error is exactly n for Inner Product, and n(1 - o(1)) for a random Boolean function on n+n bits.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8189Low-Complexity Cryptographic Hash Functions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8190
Cryptographic hash functions are efficiently computable functions that shrink a long input into a shorter output while achieving some of the useful security properties of a random function. The most common type of such hash functions is collision-resistant hash functions (CRH), which prevent an efficient attacker from finding a pair of inputs on which the function has the same output.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8190Cumulative Space in Black-White Pebbling and Resolution
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8191
We study space complexity and time-space trade-offs with a focus not on peak memory usage but on overall memory consumption throughout the computation. Such a cumulative space measure was introduced for the computational model of parallel black pebbling by [Alwen and Serbinenko 2015] as a tool for obtaining results in cryptography. We consider instead the nondeterministic black-white pebble game and prove optimal cumulative space lower bounds and trade-offs, where in order to minimize pebbling time the space has to remain large during a significant fraction of the pebbling. We also initiate the study of cumulative space in proof complexity, an area where other space complexity measures have been extensively studied during the last 10-15 years. Using and extending the connection between proof complexity and pebble games in [Ben-Sasson and Nordström 2008, 2011], we obtain several strong cumulative space results for (even parallel versions of) the resolution proof system, and outline some possible future directions of study of this, in our opinion, natural and interesting space measure.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8191Rigorous RG Algorithms and Area Laws for Low Energy Eigenstates in 1D
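The distinction drawn in the abstract above between peak and cumulative space can be read off a pebbling transcript directly. A minimal sketch of both measures over a sequence of place/remove moves (the transcript format is our own illustration):

```python
def space_profile(steps):
    # steps: sequence of (op, node) pairs with op in {"place", "remove"}.
    # Returns the number of pebbles on the graph after each move.
    pebbled, profile = set(), []
    for op, v in steps:
        if op == "place":
            pebbled.add(v)
        else:
            pebbled.discard(v)
        profile.append(len(pebbled))
    return profile

def peak_space(profile):
    # Classic space measure: maximum number of pebbles at any one time.
    return max(profile)

def cumulative_space(profile):
    # Cumulative measure: total memory consumed over the whole computation.
    return sum(profile)

steps = [("place", 1), ("place", 2), ("remove", 1),
         ("place", 3), ("remove", 2), ("remove", 3)]
profile = space_profile(steps)
```

Two pebblings with the same peak can have very different cumulative space, which is exactly the gap the paper's trade-off results quantify: keeping the space low for most of the time can force the pebbling to take much longer.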
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8192
One of the central challenges in the study of quantum many-body systems is the complexity of simulating them on a classical computer. A recent advance by Landau et al. gave a polynomial time algorithm to compute a succinct classical description for unique ground states of gapped 1D quantum systems. Despite this progress many questions remained unresolved, including whether there exist rigorous efficient algorithms when the ground space is degenerate (and poly(n) dimensional), or for the poly(n) lowest energy states for 1D systems, or even whether such states admit succinct classical descriptions or area laws. In this paper we give a new algorithm for finding low energy states for 1D systems, based on a rigorously justified renormalization group (RG)-type transformation. In the process we resolve some of the aforementioned open questions, including giving a polynomial time algorithm for poly(n) degenerate ground spaces and an n^{O(\log n)} algorithm for the poly(n) lowest energy states for 1D systems (under a mild density condition). We note that for these classes of systems the existence of a succinct classical description and area laws were not rigorously proved before this work. The algorithms are natural and efficient, and for the case of finding unique ground states for frustration-free Hamiltonians the running time is O(nM(n)), where M(n) is the time required to multiply two n by n matrices.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8192Fast Cross-Polytope Locality-Sensitive Hashing
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8193
We provide a variant of cross-polytope locality sensitive hashing with respect to angular distance which is provably optimal in asymptotic sensitivity and enjoys \mathcal{O}(d \ln d ) hash computation time. Building on a recent result in (Andoni, Indyk, Laarhoven, Razenshteyn '15), we show that optimal asymptotic sensitivity for cross-polytope LSH is retained even when the dense Gaussian matrix is replaced by a fast Johnson-Lindenstrauss transform followed by discrete pseudo-rotation, reducing the hash computation time from \mathcal{O}(d^2) to \mathcal{O}(d \ln d ). Moreover, our scheme achieves the optimal rate of convergence for sensitivity. By incorporating a low-randomness Johnson-Lindenstrauss transform, our scheme can be modified to require only \mathcal{O}(\ln^9(d)) random bits.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8193Metatheorems for Dynamic Weighted Matching
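The basic cross-polytope hash described above maps a vector to the nearest signed standard basis direction after a random rotation; the paper's contribution is replacing the dense Gaussian matrix with a fast transform. A sketch of the dense O(d^2) variant being accelerated (parameters illustrative):

```python
import random

rng = random.Random(42)
d = 8
# Dense Gaussian matrix playing the role of a random rotation.
G = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(d)]

def cross_polytope_hash(v, G):
    # Hash = index and sign of the largest-magnitude coordinate of Gv,
    # i.e. the vertex of the cross-polytope {+e_i, -e_i} closest in angle.
    y = [sum(g_ij * x_j for g_ij, x_j in zip(row, v)) for row in G]
    i = max(range(len(y)), key=lambda j: abs(y[j]))
    return (i, 1 if y[i] >= 0 else -1)

v = [1.0] + [0.0] * (d - 1)
h = cross_polytope_hash(v, G)
h_neg = cross_polytope_hash([-x for x in v], G)
```

By symmetry, antipodal points hash to the same axis with opposite signs, and nearby vectors tend to share a bucket; the fast JL-plus-pseudo-rotation scheme in the paper computes the same kind of hash in O(d ln d) time.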
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8194
We consider the maximum weight matching (MWM) problem in dynamic graphs. We provide two reductions. The first reduces the dynamic MWM problem on m-edge, n-node graphs with weights bounded by N to the problem with weights bounded by (n/eps)^2, so that if the MWM problem can be alpha-approximated with update time t(m,n,N), then it can also be (1+eps)alpha-approximated with update time O(t(m,n,(n/eps)^2) log^2 n + log n loglog N). The second reduction reduces the dynamic MWM problem to the dynamic maximum cardinality matching (MCM) problem, in which the graph is unweighted. This reduction shows that if there is an alpha-approximation algorithm for MCM with update time t(m,n) in m-edge n-node graphs, then there is also a (2+eps)alpha-approximation algorithm for MWM with update time O(t(m,n) eps^{-2} log^2 N). We also obtain better bounds in our reductions if the ratio between the largest and the smallest edge weight is small. Combined with recent work on MCM, these two reductions substantially improve upon the state of the art of dynamic MWM algorithms.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8194Computational Tradeoffs in Biological Neural Networks: Self-Stabilizing Winner-Take-All Networks
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8195
We initiate a line of investigation into biological neural networks from an algorithmic perspective. We develop a simplified but biologically plausible model for distributed computation in stochastic spiking neural networks and study tradeoffs between computation time and network complexity in this model. Our aim is to abstract real neural networks in a way that, while not capturing all interesting features, preserves high-level behavior and allows us to make biologically relevant conclusions. In this paper, we focus on the important 'winner-take-all' (WTA) problem, which is analogous to a neural leader election unit: a network consisting of n input neurons and n corresponding output neurons must converge to a state in which a single output corresponding to a firing input (the 'winner') fires, while all other outputs remain silent. Neural circuits for WTA rely on inhibitory neurons, which suppress the activity of competing outputs and drive the network towards a converged state with a single firing winner. We attempt to understand how the number of inhibitors used affects network convergence time. We show that it is possible to significantly outperform naive WTA constructions through a more refined use of inhibition, solving the problem in O(\theta) rounds in expectation with just O(\log^{1/\theta} n) inhibitors for any \theta. An alternative construction gives convergence in O(\log^{1/\theta} n) rounds with O(\theta) inhibitors. We complement these upper bounds with our main technical contribution, a nearly matching lower bound for networks using \ge \log \log n inhibitors. Our lower bound uses familiar indistinguishability and locality arguments from distributed computing theory, applied to the neural setting.
It lets us derive a number of interesting conclusions about the structure of any network solving WTA with good probability, and the use of randomness and inhibition within such a network.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8195Real Stability Testing
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8196
We give a strongly polynomial time algorithm which determines whether or not a bivariate polynomial is real stable. As a corollary, this implies an algorithm for testing whether a given linear transformation on univariate polynomials preserves real-rootedness. The proof exploits properties of hyperbolic polynomials to reduce real stability testing to testing nonnegativity of a finite number of polynomials on an interval.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8196Separators in Region Intersection Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8197
For undirected graphs G=(V,E) and G_0=(V_0,E_0), say that G is a region intersection graph over G_0 if there is a family of connected subsets {R_u \subseteq V_0 : u \in V} of G_0 such that {u,v} \in E \iff R_u \cap R_v \neq \emptyset. We show that if G_0 excludes the complete graph K_h as a minor for some h \geq 1, then every region intersection graph G over G_0 with m edges has a balanced separator with at most c_h \sqrt{m} nodes, where c_h is a constant depending only on h. If G additionally has uniformly bounded vertex degrees, then such a separator can be found by spectral partitioning. A string graph is the intersection graph of continuous arcs in the plane. String graphs are precisely the region intersection graphs over planar graphs. Thus the preceding result implies that every string graph with m edges has a balanced separator of size O(\sqrt{m}). This bound is optimal, as it generalizes the planar separator theorem. It confirms a conjecture of Fox and Pach (2010), and improves over the O(\sqrt{m} \log m) bound of Matousek (2013).Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8197SOS Is Not Obviously Automatizable, Even Approximately
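The definition can be made concrete with a small sketch (a hypothetical helper, not from the paper): G has an edge {u,v} exactly when the connected regions R_u and R_v share a vertex of G_0.

```python
def region_intersection_edges(regions):
    """regions[u] = set of G_0-vertices forming the (connected) region R_u.
    Returns the edge set of the region intersection graph G."""
    names = sorted(regions)
    return {frozenset((u, v))
            for i, u in enumerate(names)
            for v in names[i + 1:]
            if regions[u] & regions[v]}

# Three subpaths of a path G_0 on vertices 0..5; only R_a and R_b overlap (at 1):
regions = {"a": {0, 1}, "b": {1, 2, 3}, "c": {4, 5}}
edges = region_intersection_edges(regions)
```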
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8198
Suppose we want to minimize a polynomial p(x) = p(x_1,...,x_n), subject to some polynomial constraints q_1(x),...,q_m(x) \geq 0, using the Sum-of-Squares (SOS) SDP hierarchy. Assume we are in the "explicitly bounded" ("Archimedean") case where the constraints include x_i^2 \leq 1 for all 1 \leq i \leq n. It is often stated that the degree-d version of the SOS hierarchy can be solved, to high accuracy, in time n^O(d). Indeed, I myself have stated this in several previous works. The point of this note is to state (or remind the reader) that this is not obviously true. The difficulty comes not from the "r" in the Ellipsoid Algorithm, but from the "R"; a priori, we only know an exponential upper bound on the number of bits needed to write down the SOS solution. An explicit example is given of a degree-2 SOS program illustrating the difficulty.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8198Hierarchical Functional Encryption
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8199
Functional encryption provides fine-grained access control for encrypted data, allowing each user to learn only specific functions of the encrypted data. We study the notion of hierarchical functional encryption, which augments functional encryption with delegation capabilities, offering significantly more expressive access control. We present a generic transformation that converts any general-purpose public-key functional encryption scheme into a hierarchical one without relying on any additional assumptions. This significantly refines our understanding of the power of functional encryption, showing that the existence of functional encryption is equivalent to that of its hierarchical generalization. Instantiating our transformation with the existing functional encryption schemes yields a variety of hierarchical schemes offering various trade-offs between their delegation capabilities (i.e., the depth and width of their hierarchical structures) and underlying assumptions. When starting with a scheme secure against an unbounded number of collusions, we can support arbitrary hierarchical structures. In addition, even when starting with schemes that are secure against a bounded number of collusions (which are known to exist under rather minimal assumptions such as the existence of public-key encryption and shallow pseudorandom generators), we can support hierarchical structures of bounded depth and width.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8199Simultaneously Load Balancing for Every p-norm, With Reassignments
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8200
This paper investigates the task of load balancing where the objective function is to minimize the p-norm of loads, for p\geq 1, in both static and incremental settings. We consider two closely related load balancing problems. In the bipartite matching problem we are given a bipartite graph G=(C\cup S, E) and the goal is to assign each client c\in C to a server s\in S so that the p-norm of assignment loads on S is minimized. In the graph orientation problem the goal is to orient (direct) the edges of a given undirected graph while minimizing the p-norm of the out-degrees. The graph orientation problem is a special case of the bipartite matching problem, but its simpler structure leads to simpler algorithms. For the graph orientation problem we show that the celebrated Chiba-Nishizeki peeling algorithm provides a simple linear-time load balancing scheme whose output is an orientation that is 2-competitive, in a p-norm sense, for all p\geq 1. For the bipartite matching problem we first provide an offline algorithm that computes an optimal assignment. We then extend this solution to the online bipartite matching problem with reassignments, where vertices from C arrive in an online fashion together with their corresponding edges, and we are allowed to reassign an amortized O(1) vertices from C each time a new vertex arrives. In this online scenario we show how to maintain a single assignment that is 8-competitive, in a p-norm sense, for all p\geq 1.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8200Very Simple and Efficient Byzantine Agreement
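The peeling idea can be sketched as follows (my own toy reconstruction, not the paper's algorithm or its 2-competitiveness analysis): repeatedly remove a minimum-degree vertex and orient its remaining edges away from it, so each vertex's out-degree equals its degree at removal time.

```python
import heapq

def peel_orientation(n, edges):
    """Orient each edge of an undirected graph by min-degree peeling."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    heap = [(len(adj[v]), v) for v in range(n)]
    heapq.heapify(heap)
    removed, oriented = set(), []
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != len(adj[v]):
            continue  # stale heap entry left over from an earlier degree
        removed.add(v)
        for u in adj[v]:
            oriented.append((v, u))  # edge leaves the peeled vertex
            adj[u].discard(v)
            heapq.heappush(heap, (len(adj[u]), u))
        adj[v] = set()
    return oriented

# Triangle plus a pendant edge:
demo_edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
oriented = peel_orientation(4, demo_edges)
```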
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8201
We present a very simple, cryptographic, binary Byzantine-agreement protocol that, with n >= 3t+1 >= 3 players, at most t of which are malicious, halts in expected 9 rounds.Fri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8201Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8202
Front Matter, Table of Contents, Preface, Conference OrganizationFri, 24 Nov 2017 12:11:43 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8202Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8056
Front Matter, Table of Contents, Preface, Conference OrganizationWed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8056Dividing Splittable Goods Evenly and With Limited Fragmentation
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8057
A splittable good provided in n pieces shall be divided as evenly as possible among m agents, where every agent can take shares of at most F pieces. We call F the fragmentation. For F=1 we can solve the max-min and min-max problems in linear time. The case F=2 has neat formulations and structural characterizations in terms of weighted graphs. Here we focus on perfectly balanced solutions. While the problem is strongly NP-hard in general, it can be solved in linear time if m>=n-1, and a solution always exists in this case. Moreover, case F=2 is fixed-parameter tractable in the parameter 2m-n. The results also give rise to various open problems. Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8057Binary Search in Graphs Revisited
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8058
In the classical binary search in a path, the aim is to detect an unknown target by asking as few queries as possible, where each query reveals the direction to the target. This binary search algorithm has been recently extended by [Emamjomeh-Zadeh et al., STOC, 2016] to the problem of detecting a target in an arbitrary graph. Similarly to the classical case in the path, the algorithm of Emamjomeh-Zadeh et al. maintains a candidates' set for the target, while each query asks an appropriately chosen vertex (the "median") which minimises a potential \Phi among the vertices of the candidates' set. In this paper we address three open questions posed by Emamjomeh-Zadeh et al., namely (a) detecting a target when the query response is a direction to an approximately shortest path to the target, (b) detecting a target when querying a vertex that is an approximate median of the current candidates' set (instead of an exact one), and (c) detecting multiple targets, for which to the best of our knowledge no progress has been made so far. We resolve questions (a) and (b) by providing appropriate upper and lower bounds, as well as a new potential \Gamma that guarantees efficient target detection even by querying an approximate median each time. With respect to (c), we initiate a systematic study for detecting two targets in graphs and we identify sufficient conditions on the queries that allow for strong (linear) lower bounds and strong (polylogarithmic) upper bounds on the number of queries. All of our positive results can be derived using our new potential \Gamma that allows querying approximate medians.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8058Counting Problems for Parikh Images
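The classical path case that Emamjomeh-Zadeh et al. generalize can be sketched as follows (a standard reconstruction; the `direction` oracle stands in for the query response pointing toward the target):

```python
def find_on_path(n, direction):
    """Binary search on the path 0..n-1. direction(v) returns 0 if v is
    the target, -1 if the target lies left of v, +1 if it lies right."""
    lo, hi = 0, n - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # the "median" of the candidates' set
        d = direction(mid)
        if d == 0:
            return mid
        lo, hi = (mid + 1, hi) if d > 0 else (lo, mid - 1)
    return None

target = 13
oracle = lambda v: (target > v) - (target < v)
found = find_on_path(100, oracle)
```

In the graph setting, the interval [lo, hi] becomes an arbitrary candidates' set and `mid` becomes the vertex minimising the potential \Phi.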
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8059
Given finite-state automata (or context-free grammars) A,B over the same alphabet and a Parikh vector p, we study the complexity of deciding whether the number of words in the language of A with Parikh image p is greater than the number of such words in the language of B. Recently, this problem turned out to be tightly related to the cost problem for weighted Markov chains. We classify the complexity according to whether A and B are deterministic, the size of the alphabet, and the encoding of p (binary or unary).Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8059Structured Connectivity Augmentation
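A brute-force sketch of the counting comparison (with the languages given as hypothetical predicates rather than automata or grammars): enumerate the words with the given Parikh image and count how many each language accepts.

```python
from itertools import permutations

def parikh_image(word, alphabet):
    """Parikh image of a word: the vector of letter multiplicities."""
    return tuple(word.count(a) for a in alphabet)

def count_with_parikh(accepts, p, alphabet):
    """Number of distinct words with Parikh image p accepted by `accepts`."""
    letters = "".join(a * k for a, k in zip(alphabet, p))
    words = {"".join(t) for t in permutations(letters)}
    return sum(1 for w in words if accepts(w))

p = (2, 1)  # two a's, one b; the candidates are aab, aba, baa
nA = count_with_parikh(lambda w: w.startswith("a"), p, "ab")  # stand-in for L(A)
nB = count_with_parikh(lambda w: w.endswith("b"), p, "ab")    # stand-in for L(B)
decision = nA > nB  # the comparison the paper's problem asks about
```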
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8060
We initiate the algorithmic study of the following "structured augmentation" question: is it possible to increase the connectivity of a given graph G by superposing it with another given graph H? More precisely, graph F is the superposition of G and H with respect to an injective mapping \phi:V(H)->V(G) if every edge uv of F is either an edge of G, or \phi^{-1}(u)\phi^{-1}(v) is an edge of H. Thus F contains both G and H as subgraphs, and the edge set of F is the union of the edge sets of G and \phi(H). We consider the following optimization problem. Given graphs G, H, and a weight function \omega assigning non-negative weights to pairs of vertices of V(G), the task is to find \phi of minimum weight \omega(\phi)=\sum_{xy\in E(H)}\omega(\phi(x)\phi(y)) such that the edge connectivity of the superposition F of G and H with respect to \phi is higher than the edge connectivity of G. Our main result is the following "dichotomy" complexity classification. We say that a class of graphs C has bounded vertex-cover number if there is a constant t, depending on C only, such that the vertex-cover number of every graph from C does not exceed t. We show that for every class of graphs C with bounded vertex-cover number, the problems of superposing into a connected graph F and into a 2-edge-connected graph F are solvable in polynomial time when H\in C. On the other hand, for any hereditary class C with unbounded vertex-cover number, both problems are NP-hard when H\in C. For the unweighted variants of the structured augmentation problems, i.e. the problems where the task is to identify whether there is a superposition of graphs of the required connectivity, we provide necessary and sufficient combinatorial conditions for the existence of such superpositions. These conditions imply polynomial-time algorithms solving the unweighted variants of the problems. Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8060On Multidimensional and Monotone k-SUM
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8061
The well-known k-SUM conjecture is that integer k-SUM requires time \Omega(n^{\ceil{k/2}-o(1)}). Recent work has studied multidimensional k-SUM in F_p^d, where the best known algorithm takes time \tilde O(n^{\ceil{k/2}}). Bhattacharyya et al. [ICS 2011] proved a min(2^{\Omega(d)},n^{\Omega(k)}) lower bound for k-SUM in F_p^d under the Exponential Time Hypothesis. We give a more refined lower bound under the standard k-SUM conjecture: for sufficiently large p, k-SUM in F_p^d requires time \Omega(n^{k/2-o(1)}) if k is even, and \Omega(n^{\ceil{k/2}-2k(log k)/(log p)-o(1)}) if k is odd. For a special case of the multidimensional problem, bounded monotone d-dimensional 3SUM, Chan and Lewenstein [STOC 2015] gave a surprising \tilde O(n^{2-2/(d+13)}) algorithm using additive combinatorics. We show this algorithm is essentially optimal. To be more precise, bounded monotone d-dimensional 3SUM requires time \Omega(n^{2-\frac{4}{d}-o(1)}) under the standard 3SUM conjecture, and time \Omega(n^{2-\frac{2}{d}-o(1)}) under the so-called strong 3SUM conjecture. Thus, even though one might hope to further exploit the structural advantage of monotonicity, no substantial improvements beyond those obtained by Chan and Lewenstein are possible for bounded monotone d-dimensional 3SUM. Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8061The Complexity of Boolean Surjective General-Valued CSPs
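For reference, the problem being lower-bounded is easy to state as a naive baseline (a brute-force sketch, far from the \tilde O(n^{\ceil{k/2}}) algorithm the paper discusses):

```python
from itertools import combinations

def ksum_fpd(vectors, k, p):
    """Multidimensional k-SUM in F_p^d by exhaustive search: is there a
    k-subset of the d-dimensional vectors summing to zero coordinatewise mod p?"""
    return any(all(sum(col) % p == 0 for col in zip(*combo))
               for combo in combinations(vectors, k))

vecs = [(1, 2), (2, 1), (4, 4), (0, 1), (3, 3)]
hit = ksum_fpd(vecs, 3, 7)  # (1,2)+(2,1)+(4,4) = (7,7), which is 0 mod 7
```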
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8062
Valued constraint satisfaction problems (VCSPs) are discrete optimisation problems with the objective function given as a sum of fixed-arity functions; the values are rational numbers or infinity. In Boolean surjective VCSPs, variables take on labels from D={0,1} and an optimal assignment is required to use both labels from D. A classic example is the global min-cut problem in graphs. Building on the work of Uppman, we establish a dichotomy theorem and thus give a complete complexity classification of Boolean surjective VCSPs. The newly discovered tractable case has an interesting structure related to projections of downsets and upsets. Our work generalises the dichotomy for {0,infinity}-valued constraint languages (corresponding to CSPs) obtained by Creignou and Hebrard, and the dichotomy for {0,1}-valued constraint languages (corresponding to Min-CSPs) obtained by Uppman.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8062New Insights on the (Non-)Hardness of Circuit Minimization and Related Problems
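As a toy illustration of the setup (my own brute-force sketch, not the paper's dichotomy algorithm), global min-cut is the Boolean surjective VCSP whose constraints charge cost 1 for each edge whose endpoints receive different labels:

```python
from itertools import product

def boolean_surjective_vcsp(n, constraints):
    """Brute force: minimise the total cost over {0,1}-assignments
    that use both labels (the surjectivity requirement)."""
    best = float("inf")
    for x in product((0, 1), repeat=n):
        if 0 in x and 1 in x:  # surjective assignments only
            cost = sum(f(*(x[i] for i in scope)) for scope, f in constraints)
            best = min(best, cost)
    return best

def cut(a, b):
    return int(a != b)  # cost 1 iff the edge is cut

# Global min-cut of a 4-cycle, phrased as a surjective VCSP instance:
cycle = [((0, 1), cut), ((1, 2), cut), ((2, 3), cut), ((3, 0), cut)]
min_cut = boolean_surjective_vcsp(4, cycle)
```

Without the surjectivity check, the all-zero assignment would trivially achieve cost 0; surjectivity is what makes this the min-cut problem.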
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8063
The Minimum Circuit Size Problem (MCSP) and a related problem (MKTP) that deals with time-bounded Kolmogorov complexity are prominent candidates for NP-intermediate status. We show that, under very modest cryptographic assumptions (such as the existence of one-way functions), the problem of approximating the minimum circuit size (or time-bounded Kolmogorov complexity) within a factor of n^{1 - o(1)} is indeed NP-intermediate. To the best of our knowledge, these problems are the first natural NP-intermediate problems under the existence of an arbitrary one-way function. We also prove that MKTP is hard for the complexity class DET under non-uniform NC^0 reductions. This is surprising, since prior work on MCSP and MKTP had highlighted weaknesses of "local" reductions such as NC^0 reductions. We exploit this local reduction to obtain several new consequences: (1) MKTP is not in AC^0[p]; (2) circuit size lower bounds are equivalent to hardness of a relativized version MKTP^A of MKTP under a class of uniform AC^0 reductions, for a large class of sets A; (3) hardness of MCSP^A implies hardness of MKTP^A for a wide class of sets A. This is the first result directly relating the complexity of MCSP^A and MKTP^A, for any A.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8063Emptiness Problems for Integer Circuits
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8064
We study the computational complexity of emptiness problems for circuits over sets of natural numbers with the operations union, intersection, complement, addition, and multiplication. For most settings of allowed operations we precisely characterize the complexity in terms of completeness for classes like NL, NP, and PSPACE. The case where intersection, addition, and multiplication are allowed turns out to be equivalent to the complement of polynomial identity testing (PIT). Our results imply the following improvements and insights on problems studied in earlier papers. We improve the bounds for the membership problem MC(\cup,\cap,¯,+,×) studied by McKenzie and Wagner 2007 and for the equivalence problem EQ(\cup,\cap,¯,+,×) studied by Glaßer et al. 2010. Moreover, it turns out that the following problems are equivalent to PIT, which shows that the challenge to improve their bounds is just a reformulation of a major open problem in algebraic computing complexity: (1) the membership problem MC(\cap,+,×) studied by McKenzie and Wagner 2007; (2) the integer membership problems MC_Z(+,×) and MC_Z(\cap,+,×) studied by Travers 2006; (3) the equivalence problem EQ(+,×) studied by Glaßer et al. 2010.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8064One-Dimensional Logic over Trees
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8065
A one-dimensional fragment of first-order logic is obtained by restricting quantification to blocks of existential quantifiers that leave at most one variable free. This fragment contains two-variable logic, and it is known that over words both formalisms have the same complexity and expressive power. Here we investigate the one-dimensional fragment over trees. We consider unranked unordered trees accessible by one or both of the descendant and child relations, as well as ordered trees equipped additionally with sibling relations. We show that over unordered trees the satisfiability problem is ExpSpace-complete when only the descendant relation is available, and 2ExpTime-complete with both the descendant and child relations or with only the child relation. Over ordered trees the problem remains 2ExpTime-complete. Regarding expressivity, we show that over ordered trees, and over unordered trees accessible by both the descendant and child relations, the one-dimensional fragment is equivalent to the two-variable fragment with counting quantifiers.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8065Better Complexity Bounds for Cost Register Automata
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8066
Cost register automata (CRAs) are one-way finite automata whose transitions have the side effect that a register is set to the result of applying a state-dependent semiring operation to a pair of registers. Here it is shown that CRAs over the tropical semiring (N U {infinity},\min,+) can simulate polynomial time computation, proving along the way that a naturally defined width-k circuit value problem over the tropical semiring is P-complete. Then the copyless variant of the CRA, requiring that semiring operations be applied to distinct registers, is shown to be no more powerful than NC^1 when the semiring is (Z,+,×) or (Gamma^*,max,concat). This relates questions left open in recent work on the complexity of CRA-computable functions to long-standing class separation conjectures in complexity theory, such as NC versus P and NC^1 versus GapNC^1.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8066Timed Network Games
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8067
Network games are widely used as a model for selfish resource-allocation problems. In the classical model, each player selects a path connecting her source and target vertex. The cost of traversing an edge depends on the number of players that traverse it. Thus, the classical model abstracts away the fact that different users may use a resource at different times and for different durations, which plays an important role in defining the costs of the users in reality. For example, when transmitting packets in a communication network, routing traffic in a road network, or processing a task in a production system, the traversal of the network involves an inherent delay, and so sharing and congestion of resources crucially depends on time. We study timed network games, which add a time component to network games. Each vertex v in the network is associated with a cost function, mapping the load on v to the price that a player pays for staying in v for one time unit with this load. In addition, each edge has a guard, describing time intervals in which the edge can be traversed, forcing the players to spend time on vertices. Unlike earlier work that adds a time component to network games, the time in our model is continuous and cannot be discretized. In particular, players have uncountably many strategies, and a game may have uncountably many pure Nash equilibria. We study properties of timed network games with cost-sharing or congestion cost functions: their stability, equilibrium inefficiency, and complexity. In particular, we show that the answer to the question whether we can restrict attention to boundary strategies, namely ones in which edges are traversed only at the boundaries of guards, is mixed.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8067A Characterisation of Pi^0_2 Regular Tree Languages
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8068
We show an algorithm that, for a given regular tree language L, decides if L is in Pi^0_2, that is, if L belongs to the second level of the Borel hierarchy. Moreover, if L is in Pi^0_2, then we construct a weak alternating automaton of index (0, 2) which recognises L. We also prove that for a given language L, L is recognisable by a weak alternating (1, 3)-automaton if and only if it is recognisable by a weak non-deterministic (1, 3)-automaton.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8068Efficient Identity Testing and Polynomial Factorization in Nonassociative Free Rings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8069
In this paper we study arithmetic computations in the nonassociative, noncommutative free polynomial ring F{X}. Prior to this work, nonassociative arithmetic computation was considered by Hrubes, Wigderson, and Yehudayoff, who showed lower bounds and proved completeness results. We consider Polynomial Identity Testing and Polynomial Factorization in F{X} and show the following results. (1) Given an arithmetic circuit C computing a polynomial f in F{X} of degree d, we give a deterministic polynomial-time algorithm to decide if f is identically zero. Our result is obtained by a suitable adaptation of the PIT algorithm of Raz and Shpilka for noncommutative ABPs. (2) Given an arithmetic circuit C computing a polynomial f in F{X} of degree d, we give an efficient deterministic algorithm to compute circuits for the irreducible factors of f in polynomial time when F is the field of rationals. Over finite fields of characteristic p, our algorithm runs in time polynomial in the input size and p.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8069Computing the Maximum using (min, +) Formulas
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8070
We study computation by formulas over (min,+). We consider the computation of max{x_1,...,x_n} over N as a difference of (min,+) formulas, and show that size n + n \log n is sufficient and necessary. Our proof also shows that any (min,+) formula computing the minimum of all sums of n-1 out of n variables must have n \log n leaves; this too is tight. Our proofs use a complexity measure for (min,+) functions based on minterm-like behaviour and on the entropy of an associated graph.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8070Time Complexity of Constraint Satisfaction via Universal Algebra
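The identity behind this representation can be checked numerically (a sketch of the identity only; the formula-size accounting is the paper's contribution): max{x_1,...,x_n} equals the total sum minus the minimum over all sums of n-1 out of the n variables.

```python
def max_via_minplus(xs):
    """max(xs) as a difference of two (min,+)-style expressions:
    (x_1 + ... + x_n) - min_i (sum of all x_j with j != i)."""
    total = sum(xs)                                  # the (+)-only part
    min_n_minus_1_sum = min(total - x for x in xs)   # min over (n-1)-sums
    return total - min_n_minus_1_sum

checked = all(max_via_minplus(xs) == max(xs)
              for xs in ([3, 1, 4, 1, 5], [7], [-2, -7, 0], [0, 0]))
```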
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8071
The exponential-time hypothesis (ETH) states that 3-SAT is not solvable in subexponential time, i.e. not solvable in O(c^n) time for arbitrary c > 1, where n denotes the number of variables. Problems like k-SAT can be viewed as special cases of the constraint satisfaction problem (CSP), which is the problem of determining whether a set of constraints is satisfiable. In this paper we study the worst-case time complexity of NP-complete CSPs. Our main interest is in the CSP problem parameterized by a constraint language Gamma (CSP(Gamma)), and how the choice of Gamma affects the time complexity. It is believed that CSP(Gamma) is either tractable or NP-complete, and the algebraic CSP dichotomy conjecture gives a sharp delineation of these two classes based on algebraic properties of constraint languages. Under this conjecture and the ETH, we first rule out the existence of subexponential algorithms for finite-domain NP-complete CSP(Gamma) problems. This result also extends to certain infinite-domain CSPs and structurally restricted CSP(Gamma) problems. We then begin a study of the complexity of NP-complete CSPs where one is allowed to arbitrarily restrict the values of individual variables, which is a very well-studied subclass of CSPs. For such CSPs with finite domain D, we identify a relation S_D such that (1) CSP({S_D}) is NP-complete and (2) if CSP(Gamma) over D is NP-complete and solvable in O(c^n) time, then CSP({S_D}) is solvable in O(c^n) time, too. Hence, the time complexity of CSP({S_D}) is a lower bound for all CSPs of this particular kind. We also prove that the complexity of CSP({S_D}) decreases as |D| increases, unless the ETH is false. This implies, for instance, that for every c>1 there exists a finite-domain Gamma such that CSP(Gamma) is NP-complete and solvable in O(c^n) time.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8071Parameterized Algorithms for Partitioning Graphs into Highly Connected Clusters
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8072
Clustering is a well-known and important problem with numerous applications. The graph-based model is one of the typical cluster models. In the graph model, clusters are generally defined as cliques. However, such an approach might be too restrictive, as in some applications not all objects from the same cluster must be connected. That is why different types of clique relaxations are often considered as clusters. In our work, we consider the problem of partitioning a graph into clusters and the problem of isolating a cluster of a special type, where by a cluster we mean a highly connected subgraph. Such clusterization was initially proposed by Hartuv and Shamir, and their HCS clustering algorithm has been extensively applied in practice: it was used to cluster cDNA fingerprints, to find complexes in protein-protein interaction data, to group protein sequences hierarchically into superfamily and family clusters, and to find families of regulatory RNA structures. The HCS algorithm partitions a graph into highly connected subgraphs, but not necessarily by deleting the minimum number of edges. In our work, we try to minimize the number of edge deletions. We consider the problems from the parameterized point of view, where the main parameter is the number of allowed edge deletions. The presented algorithms significantly improve the previously known running times for the Highly Connected Deletion (improved from O*(81^k) to O*(3^k)), Isolated Highly Connected Subgraph (from O*(4^k) to O*(k^{O(k^{2/3})})), and Seeded Highly Connected Edge Deletion (from O*(16^{k^{3/4}}) to O*(k^{\sqrt{k}})) problems. Furthermore, we present a subexponential algorithm for the Highly Connected Deletion problem when the number of clusters is bounded. 
Overall, our work contains three subexponential algorithms, which is notable as until recently very few problems were known to admit subexponential parameterized algorithms.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8072Membership Problem in GL(2, Z) Extended by Singular Matrices
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8073
We consider the membership problem for matrix semigroups: decide whether a matrix belongs to a given finitely generated matrix semigroup. In general, the decidability and complexity of this problem for two-dimensional matrix semigroups remain open. Recently, significant progress was made on this open problem by showing that membership is decidable for 2x2 nonsingular integer matrices. In this paper we focus on membership for singular integer matrices and prove that this problem is decidable for 2x2 integer matrices whose determinants equal 0, 1, or -1 (i.e. for matrices from GL(2,Z) and any singular matrices). Our algorithm relies on a translation of numerical problems on matrices into combinatorial problems on words, and on a conversion of the membership problem into a decision problem on regular languages.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8073Recognizing Graphs Close to Bipartite Graphs
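The problem itself is easy to state as a bounded brute-force search (a toy semi-decision sketch with a hypothetical length cap, nothing like the paper's word-combinatorial algorithm):

```python
def matrix_in_semigroup(target, generators, max_len=6):
    """Is `target` a product of at most max_len generators?
    2x2 integer matrices are encoded row-major as tuples (a, b, c, d)."""
    def mul(x, y):
        a, b, c, d = x
        e, f, g, h = y
        return (a * e + b * g, a * f + b * h, c * e + d * g, c * f + d * h)

    seen = set(generators)       # all products of length <= current bound
    frontier = set(generators)
    for _ in range(max_len - 1):
        frontier = {mul(x, g) for x in frontier for g in generators} - seen
        seen |= frontier
    return target in seen

# Two integer shear matrices as generators:
gens = [(1, 1, 0, 1), (1, 0, 1, 1)]
found = matrix_in_semigroup((2, 1, 1, 1), gens)  # [1 1;0 1]*[1 0;1 1]
```

This only semi-decides membership up to the cap; the point of the paper is a genuine decision procedure for the GL(2,Z)-plus-singular case.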
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8074
We continue research into a well-studied family of problems that ask whether the vertices of a graph can be partitioned into sets A and B, where A is an independent set and B induces a graph from some specified graph class G. We let G be the class of k-degenerate graphs. The problem is known to be polynomial-time solvable if k=0 (bipartite graphs) and NP-complete if k=1 (near-bipartite graphs) even for graphs of diameter 4, as shown by Yang and Yuan, who also proved polynomial-time solvability for graphs of diameter 2. We show that recognizing near-bipartite graphs of diameter 3 is NP-complete, resolving their open problem. To answer another open problem, we consider graphs of maximum degree D on n vertices. We show how to find A and B in O(n) time for k=1 and D=3, and in O(n^2) time for k >= 2 and D >= 4. These results also provide an algorithmic version of a result of Catlin [JCTB, 1979] and enable us to complete the complexity classification of another problem: finding a path in the vertex colouring reconfiguration graph between two given k-colourings of a graph of bounded maximum degree.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8074Clique-Width for Graph Classes Closed under Complementation
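For the k=1 case above, B must induce a 1-degenerate graph, i.e. a forest. A brute-force checker for tiny graphs makes the definition concrete (exponential time, purely illustrative, hypothetical names; the algorithms of the paper are far more efficient):

```python
from itertools import combinations

def is_independent(adj, s):
    # no edge inside s (adj maps each vertex to its neighbour set)
    return all(v not in adj[u] for u, v in combinations(s, 2))

def induces_forest(adj, s):
    # the subgraph induced on s is a forest iff it is acyclic; union-find check
    parent = {v: v for v in s}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in combinations(s, 2):
        if v in adj[u]:
            ru, rv = find(u), find(v)
            if ru == rv:          # this edge would close a cycle
                return False
            parent[ru] = rv
    return True

def is_near_bipartite(adj):
    """Try every split of V into A (independent set) and B (forest)."""
    vertices = list(adj)
    for r in range(len(vertices) + 1):
        for a in combinations(vertices, r):
            b = [v for v in vertices if v not in a]
            if is_independent(adj, a) and induces_forest(adj, b):
                return True
    return False
```

A triangle is near-bipartite (take A to be one vertex), whereas K4 is not: any independent A has at most one vertex, leaving a triangle in B.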
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8075
Clique-width is an important graph parameter due to its algorithmic and structural properties. A graph class is hereditary if it can be characterized by a (not necessarily finite) set H of forbidden induced subgraphs. We initiate a systematic study into the boundedness of clique-width of hereditary graph classes closed under complementation. First, we extend the known classification for the |H|=1 case by classifying the boundedness of clique-width for every set H of self-complementary graphs. We then completely settle the |H|=2 case. In particular, we determine one new class of (H1, complement of H1)-free graphs of bounded clique-width (as a side effect, this leaves only six classes of (H1, H2)-free graphs for which it is not known whether their clique-width is bounded). Once we have obtained the classification of the |H|=2 case, we study the effect of forbidding self-complementary graphs on the boundedness of clique-width. Surprisingly, we show that for a set F of self-complementary graphs on at least five vertices, the classification of the boundedness of clique-width for ({H1, complement of H1} + F)-free graphs coincides with the one for the |H|=2 case if and only if F does not include the bull (the only non-empty self-complementary graphs on fewer than five vertices are P_1 and P_4, and P_4-free graphs have clique-width at most 2). Finally, we discuss the consequences of our results for COLOURING.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8075Satisfiable Tseitin Formulas Are Hard for Nondeterministic Read-Once Branching Programs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8076
We consider satisfiable Tseitin formulas TS_{G,c} based on d-regular expanders G with the absolute value of the second largest eigenvalue less than d/3. We prove that any nondeterministic read-once branching program (1-NBP) representing TS_{G,c} has size 2^{\Omega(n)}, where n is the number of vertices in G. This extends the recent result by Itsykson et al. [STACS 2017] from OBDDs to 1-NBPs. On the other hand, it is easy to see that TS_{G,c} can be represented as a read-2 branching program (2-BP) of size O(n), as the negation of a nondeterministic read-once branching program (1-coNBP) of size O(n), and as a CNF formula of size O(n). Thus TS_{G,c} gives the best possible separations (up to a constant in the exponent) between 1-NBP and 2-BP, between 1-NBP and 1-coNBP, and between 1-NBP and CNF.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8076Combinatorial Properties and Recognition of Unit Square Visibility Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8077
Unit square (grid) visibility graphs (USV and USGV, resp.) are described by axis-parallel visibility between unit squares placed (on integer grid coordinates) in the plane. We investigate combinatorial properties of these graph classes and the hardness of variants of the recognition problem, i.e., the problem of representing USGV with fixed visibilities within a small area and, for USV, the general recognition problem.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8077Grammars for Indentation-Sensitive Parsing
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8078
Adams' extension of parsing expression grammars enables specifying indentation sensitivity using two non-standard grammar constructs - indentation by a binary relation and alignment. This paper is a theoretical study of Adams' grammars. It proposes a step-by-step transformation of well-formed Adams' grammars for elimination of the alignment construct from the grammar. The idea that alignment could be avoided was suggested by Adams, but no process for achieving this aim has been described before. This paper also establishes general conditions that binary relations used in indentation constructs must satisfy in order to enable efficient parsing.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8078The Complexity of Quantified Constraints Using the Algebraic Formulation
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8079
Let A be an idempotent algebra on a finite domain. We combine results of Chen, Zhuk and Carvalho et al. to argue that if A satisfies the polynomially generated powers property (PGP), then QCSP(Inv(A)) is in NP. We then use the result of Zhuk to prove a converse, that if Inv(A) satisfies the exponentially generated powers property (EGP), then QCSP(Inv(A)) is co-NP-hard. Since Zhuk proved that only PGP and EGP are possible, we derive a full dichotomy for the QCSP, justifying the moral correctness of what we term the Chen Conjecture. We examine in closer detail the situation for domains of size three. Over any finite domain, the only type of PGP that can occur is switchability. Switchability was introduced by Chen as a generalisation of the already-known Collapsibility. For three-element domain algebras A that are Switchable, we prove that for every finite subset Delta of Inv(A), Pol(Delta) is Collapsible. The significance of this is that, for QCSP on finite structures (over a three-element domain), all QCSP tractability explained by Switchability is already explained by Collapsibility. Finally, we present a three-element domain complexity classification vignette, using known as well as derived results. Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8079Faster Algorithms for Mean-Payoff Parity Games
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8080
Graph games provide the foundation for modeling and synthesis of reactive processes. Such games are played over graphs where the vertices are controlled by two adversarial players. We consider graph games where the objective of the first player is the conjunction of a qualitative objective (specified as a parity condition) and a quantitative objective (specified as a mean-payoff condition). There are two variants of the problem, namely, the threshold problem where the quantitative goal is to ensure that the mean-payoff value is above a threshold, and the value problem where the quantitative goal is to ensure the optimal mean-payoff value; in both cases ensuring the qualitative parity objective. The previous best-known algorithms for game graphs with n vertices, m edges, parity objectives with d priorities, and maximal absolute reward value W for mean-payoff objectives, are as follows: O(n^(d+1)·m·W) for the threshold problem, and O(n^(d+2)·m·W) for the value problem. Our main contributions are faster algorithms, and the running times of our algorithms are as follows: O(n^(d-1)·m·W) for the threshold problem, and O(n^d·m·W·log(n·W)) for the value problem. For mean-payoff parity objectives with two priorities, our algorithms match the best-known bounds of the algorithms for mean-payoff games (without conjunction with parity objectives). Our results are relevant in synthesis of reactive systems with both functional requirement (given as a qualitative objective) and performance requirement (given as a quantitative objective).Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8080Two-Planar Graphs Are Quasiplanar
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8081
It is shown that every 2-planar graph is quasiplanar, that is, if a simple graph admits a drawing in the plane such that every edge is crossed at most twice, then it also admits a drawing in which no three edges pairwise cross. We further show that quasiplanarity is witnessed by a simple topological drawing, that is, any two edges cross at most once and adjacent edges do not cross.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8081The Complexity of SORE-definability Problems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8082
Single occurrence regular expressions (SORE) are a special kind of deterministic regular expressions, which are extensively used in the schema languages DTD and XSD for XML documents. In this paper, with motivations from the simplification of XML schemas, we consider the SORE-definability problem: Given a regular expression, decide whether it has an equivalent SORE. We investigate extensively the complexity of the SORE-definability problem: We consider both (standard) regular expressions and regular expressions with counting, and distinguish between the alphabets of size at least two and unary alphabets. In all cases, we obtain tight complexity bounds. In addition, we consider another variant of this problem, the bounded SORE-definability problem, which is to decide, given a regular expression E and a number M (encoded in unary or binary), whether there is an SORE, which is equivalent to E on the set of words of length at most M. We show that in several cases, there is an exponential decrease in the complexity when switching from the SORE-definability problem to its bounded variant.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8082Another Characterization of the Higher K-Trivials
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8083
In algorithmic randomness, the class of K-trivial sets has proved itself to be remarkable, due to its numerous different characterizations. In this paper we pursue work already initiated on K-trivials in the context of higher randomness. In particular, we give another characterization of the non-hyperarithmetic higher K-trivial sets.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8083Communication Complexity of Pairs of Graph Families with Applications
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8084
Given a graph G and a pair (\mathcal{F}_1,\mathcal{F}_2) of graph families, the function GDISJ_{G,\mathcal{F}_1,\mathcal{F}_2} takes as input two induced subgraphs G_1 and G_2 of G, such that G_1 \in \mathcal{F}_1 and G_2 \in \mathcal{F}_2, and returns 1 if V(G_1)\cap V(G_2)=\emptyset and 0 otherwise. We study the communication complexity of this problem in the two-party model. In particular, we look at pairs of hereditary graph families. We show that the communication complexity of this function, when the two graph families are hereditary, is sublinear if and only if there are finitely many graphs in the intersection of these two families. Then, using concepts from parameterized complexity, we obtain nuanced upper bounds on the communication complexity of GDISJ_{G,\mathcal{F}_1,\mathcal{F}_2}. A concept related to communication protocols is that of a (\mathcal{F}_1,\mathcal{F}_2)-separating family of a graph G. A collection \mathcal{F} of subsets of V(G) is called a (\mathcal{F}_1,\mathcal{F}_2)-separating family for G if, for any two vertex-disjoint induced subgraphs G_1\in \mathcal{F}_1, G_2\in \mathcal{F}_2, there is a set F \in \mathcal{F} with V(G_1) \subseteq F and V(G_2) \cap F = \emptyset. Given a graph G on n vertices, for any pair (\mathcal{F}_1,\mathcal{F}_2) of hereditary graph families with sublinear communication complexity for GDISJ_{G,\mathcal{F}_1,\mathcal{F}_2}, we give an enumeration algorithm that finds a subexponential-sized (\mathcal{F}_1,\mathcal{F}_2)-separating family. In fact, we give an enumeration algorithm that finds a 2^{o(k)}n^{O(1)}-sized (\mathcal{F}_1,\mathcal{F}_2)-separating family, where k denotes the size of a minimum-sized set S of vertices such that V(G)\setminus S has a bipartition (V_1,V_2) with G[V_1] \in \mathcal{F}_1 and G[V_2]\in \mathcal{F}_2. We exhibit a wide range of applications for these separating families, to obtain combinatorial bounds, enumeration algorithms, as well as exact and FPT algorithms for several problems.
Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8084Selecting Nodes and Buying Links to Maximize the Information Diffusion in a Network
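The GDISJ function itself is simple once the family-membership promise is granted; a minimal sketch on vertex sets (the membership tests are hypothetical callables, and the interesting object in the paper is the two-party protocol computing this, not the function):

```python
def gdisj(V1, V2, in_F1, in_F2):
    """Evaluate the GDISJ predicate on the vertex sets of two induced
    subgraphs: returns 1 iff the vertex sets are disjoint. in_F1 and
    in_F2 are membership tests for the two graph families."""
    assert in_F1(V1) and in_F2(V2), "inputs must satisfy the family promise"
    return 1 if not set(V1) & set(V2) else 0
```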
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8085
The Independent Cascade Model (ICM) is a widely studied model that aims to capture the dynamics of information diffusion in social networks and in complex networks in general. In this model, we can distinguish between active nodes, which spread the information, and inactive ones. The process starts from a set of initially active nodes called seeds. Recursively, currently active nodes can activate their neighbours according to a probability distribution on the set of edges. After a certain number of these recursive cycles, a large number of nodes might become active. The process terminates when no further node gets activated. Starting from the work of Domingos and Richardson [Domingos et al. 2001], several studies have been conducted with the aim of shaping a given diffusion process so as to maximize the number of activated nodes at the end of the process. One of the most studied problems has been formalized by Kempe et al. and consists in finding a set of initial seeds that maximizes the expected number of active nodes under a budget constraint [Kempe et al. 2003]. In this paper we study a generalization of the problem of Kempe et al. in which we are allowed to spend part of the budget to create new edges incident to the seeds. That is, the budget can be spent to buy seeds or edges according to a cost function. The problem does not admit a PTAS, unless P=NP. We propose two approximation algorithms: the former gives an approximation ratio that depends on the edge costs and increases when these costs are high; the latter gives a constant approximation guarantee, which is greater than that of the first algorithm when the edge costs can be small.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8085Kernelization of the Subset General Position Problem in Geometry
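The cascade process described above can be sketched in a few lines; this is a single simulated run under assumed names (edge probabilities as a dict, undirected edges), not any of the approximation algorithms from the paper:

```python
import random

def independent_cascade(edges, prob, seeds, rng=None):
    """One run of the Independent Cascade process: every newly activated
    node gets a single chance to activate each still-inactive neighbour,
    succeeding independently with that edge's probability. Returns the
    set of nodes active once the process stabilizes."""
    rng = rng or random.Random(0)
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    active, frontier = set(seeds), set(seeds)
    while frontier:
        nxt = set()
        for u in frontier:
            for v in adj.get(u, []):
                p = prob.get((u, v), prob.get((v, u), 0.0))
                if v not in active and rng.random() < p:
                    nxt.add(v)
        active |= nxt
        frontier = nxt
    return active
```

With all edge probabilities equal to 1 the cascade is simply a breadth-first traversal from the seeds; with probability 0 nothing beyond the seeds activates.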
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8086
In this paper, we consider variants of the Geometric Subset General Position problem. In defining this problem, a geometric subsystem is specified, like a subsystem of lines, hyperplanes or spheres. The input of the problem is a set of n points in \mathbb{R}^d and a positive integer k. The objective is to find a subset of at least k input points such that this subset is in general position with respect to the specified subsystem. For example, a set of points is in general position with respect to a subsystem of hyperplanes in \mathbb{R}^d if no d+1 points lie on the same hyperplane. In this paper, we study the Hyperplane Subset General Position problem under two parameterizations. When parameterized by k, we exhibit a polynomial kernelization for the problem. When parameterized by h=n-k, the dual parameter, we exhibit polynomial kernels which are also tight, under standard complexity-theoretic assumptions. We can also exhibit similar kernelization results for d-Polynomial Subset General Position, where a vector space of polynomials of degree at most d is specified as the underlying subsystem, such that the size of the basis for this vector space is b. The objective is to find a set of at least k input points, or in the dual to delete at most h = n-k points, such that no b+1 points lie on the same polynomial. Notice that this is a generalization of many well-studied geometric variants of the Set Cover problem, such as Circle Subset General Position. We also study general projective variants of these problems. These problems are also related to other geometric problems like the Subset Delaunay Triangulation problem.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8086Fractal Intersections and Products via Algorithmic Dimension
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8087
Algorithmic dimensions quantify the algorithmic information density of individual points and may be defined in terms of Kolmogorov complexity. This work uses these dimensions to bound the classical Hausdorff and packing dimensions of intersections and Cartesian products of fractals in Euclidean spaces. This approach shows that a known intersection formula for Borel sets holds for arbitrary sets, and it significantly simplifies the proof of a known product formula. Both of these formulas are prominent, fundamental results in fractal geometry that are taught in typical undergraduate courses on the subject.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8087K4-free Graphs as a Free Algebra
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8088
Graphs of treewidth at most two are the ones excluding the clique with four vertices as a minor. Equivalently, they are the graphs whose biconnected components are series-parallel. We turn those graphs into a free algebra, answering positively a question by Courcelle and Engelfriet, in the case of treewidth two. First we propose a syntax for denoting them: in addition to series and parallel compositions, it suffices to consider the neutral elements of those operations and a unary transpose operation. Then we give a finite equational presentation and we prove it complete: two terms from the syntax are congruent if and only if they denote the same graph.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8088A Formal Semantics of Influence in Bayesian Reasoning
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8089
This paper proposes a formal definition of influence in Bayesian reasoning, based on the notions of state (as probability distribution), predicate, validity and conditioning. Our approach highlights how conditioning a joint entwined/entangled state with a predicate on one of its components has 'crossover' influence on the other components. We use the total variation metric on probability distributions to quantitatively measure such influence. These insights are applied to give a rigorous explanation of the fundamental concept of d-separation in Bayesian networks.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8089The Power of Programs over Monoids in DA
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8090
The program-over-monoid model of computation originates with Barrington's proof that it captures the complexity class NC^1. Here we make progress in understanding the subtleties of the model. First, we identify a new tameness condition on a class of monoids that entails a natural characterization of the regular languages recognizable by programs over monoids from the class. Second, we prove that the class known as DA satisfies tameness and hence that the regular languages recognized by programs over monoids in DA are precisely those recognizable in the classical sense by morphisms from QDA. Third, we show by contrast that the well-studied class of monoids called J is not tame, and we exhibit a regular language, recognized by a program over a monoid from J, yet not recognizable classically by morphisms from the class QJ. Finally, we exhibit a program-length-based hierarchy within the class of languages recognized by programs over monoids from DA.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8090Approximation and Parameterized Algorithms for Geometric Independent Set with Shrinking
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8091
Consider the Maximum Weight Independent Set problem for rectangles: given a family of weighted axis-parallel rectangles in the plane, find a maximum-weight subset of non-overlapping rectangles. The problem is notoriously hard both in the approximation and in the parameterized setting. The best known polynomial-time approximation algorithms achieve super-constant approximation ratios [Chalermsook & Chuzhoy, Proc. SODA 2009; Chan & Har-Peled, Discrete & Comp. Geometry, 2012], even though there is a (1+epsilon)-approximation running in quasi-polynomial time [Adamaszek & Wiese, Proc. FOCS 2013; Chuzhoy & Ene, Proc. FOCS 2016]. When parameterized by the target size of the solution, the problem is W[1]-hard even in the unweighted setting [Marx, ESA 2005]. To achieve tractability, we study the following shrinking model: one is allowed to shrink each input rectangle by a multiplicative factor 1-delta for some fixed delta > 0, but the performance is still compared against the optimal solution for the original, non-shrunk instance. We prove that in this regime, the problem admits an EPTAS with running time f(epsilon,delta) n^{O(1)}, and an FPT algorithm with running time f(k,delta) n^{O(1)}, in the setting where a maximum-weight solution of size at most k is to be computed. This improves and significantly simplifies a PTAS given earlier for this problem [Adamaszek, Chalermsook & Wiese, Proc. APPROX/RANDOM 2015], and provides the first parameterized results for the shrinking model. Furthermore, we explore kernelization in the shrinking model, by giving efficient kernelization procedures for several variants of the problem when the input rectangles are squares.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8091Hypercube LSH for Approximate near Neighbors
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8092
A celebrated technique for finding near neighbors for the angular distance involves using a set of random hyperplanes to partition the space into hash regions [Charikar, STOC 2002]. Experiments later showed that using a set of orthogonal hyperplanes, thereby partitioning the space into the Voronoi regions induced by a hypercube, leads to even better results [Terasawa and Tanaka, WADS 2007]. However, no theoretical explanation for this improvement was ever given, and it remained unclear how the resulting hypercube hash method scales in high dimensions. In this work, we provide explicit asymptotics for the collision probabilities when using hypercubes to partition the space. For instance, two near-orthogonal vectors are expected to collide with probability (1/pi)^d in dimension d, compared to (1/2)^d when using random hyperplanes. Vectors at angle pi/3 collide with probability (sqrt(3)/pi)^d, compared to (2/3)^d for random hyperplanes, and near-parallel vectors collide with similar asymptotic probabilities in both cases. For c-approximate nearest neighbor searching, this translates to a decrease in the exponent rho of locality-sensitive hashing (LSH) methods of a factor up to log2(pi) ~ 1.652 compared to hyperplane LSH. For c = 2, we obtain rho ~ 0.302 for hypercube LSH, improving upon the rho ~ 0.377 for hyperplane LSH. We further describe how to use hypercube LSH in practice, and we consider an example application in the area of lattice algorithms.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8092Smoothed and Average-Case Approximation Ratios of Mechanisms: Beyond the Worst-Case Analysis
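The asymptotics quoted above can be compared numerically; a small sketch (using only the figures stated in the abstract, with hypothetical function names):

```python
import math

def collision_prob(d, scheme):
    """Asymptotic collision probability of two near-orthogonal vectors
    in dimension d, per the figures quoted in the abstract: (1/pi)^d
    for hypercube partitions, (1/2)^d for random hyperplanes."""
    base = 1.0 / math.pi if scheme == "hypercube" else 0.5
    return base ** d

# The quoted improvement factor on the LSH exponent rho: log2(pi) ~ 1.652.
rho_improvement = math.log2(math.pi)
```

Since 1/pi < 1/2, the hypercube separates far-apart (near-orthogonal) vectors exponentially better as d grows, which is what drives the smaller rho.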
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8093
The approximation ratio has become one of the dominant measures in mechanism design problems. In the spirit of analysis of algorithms, we define the smoothed approximation ratio to compare the performance of the optimal mechanism and a truthful mechanism when the inputs are subject to random perturbations of the worst-case inputs, and define the average-case approximation ratio to compare the performance of these two mechanisms when the inputs follow a distribution. For the one-sided matching problem, Filos-Ratsikas et al. [2014] show that, amongst all truthful mechanisms, random priority achieves the tight approximation ratio bound of Theta(sqrt{n}). We prove that, despite this worst-case bound, random priority has a constant smoothed approximation ratio. This is, to the best of our knowledge, the first work that asymptotically differentiates the smoothed approximation ratio from the worst-case approximation ratio for mechanism design problems. For the average case, we show that our approximation ratio can be improved to 1+e. These results partially explain why random priority has been successfully used in practice, although in the worst case the optimal social welfare is Theta(sqrt{n}) times what random priority achieves. These results also pave the way for further studies of smoothed and average-case analysis for approximate mechanism design problems, beyond the worst-case analysis.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8093Regular Language Distance and Entropy
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8094
This paper addresses the problem of determining the distance between two regular languages. It shows how to extend the Jaccard distance, which works on finite sets, to potentially infinite regular languages. The entropy of a regular language plays a large role in the extension, and much of the paper is spent investigating it. This includes addressing issues that have required previous authors to rely on the upper limit of Shannon's traditional formulation of channel capacity, because its limit does not always exist. The paper also proposes a new limit-based formulation for the entropy of a regular language and proves that this formulation both exists and is equivalent to Shannon's original formulation (when the latter exists). Additionally, the proposed formulation is shown to equal an analogous but formally quite different notion of topological entropy from Symbolic Dynamics -- consequently also showing Shannon's original formulation to be equivalent to topological entropy. Surprisingly, the natural Jaccard-like entropy distance is trivial in most cases. Instead, the entropy sum distance metric is suggested, and shown to be granular in certain situations.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8094Lossy Kernels for Hitting Subgraphs
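The finite base case the paper starts from is the ordinary Jaccard distance; a minimal sketch on finite sets of words (the entropy-based extension to infinite regular languages is the paper's contribution and is not reproduced here):

```python
def jaccard_distance(a, b):
    """Jaccard distance 1 - |A intersect B| / |A union B| on finite
    sets of words; by convention the distance between two empty
    languages is 0."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)
```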
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8095
In this paper, we study the Connected H-Hitting Set and Dominating Set problems from the perspective of approximate kernelization, a framework recently introduced by Lokshtanov et al. [STOC 2017]. For the Connected H-Hitting Set problem, we obtain an \alpha-approximate kernel for every \alpha>1 and complement it with a lower bound for the natural weighted version. We then perform a refined analysis of the tradeoff between the approximation factor and kernel size for the Dominating Set problem on d-degenerate graphs, and provide an interpolation of approximate kernels between the known d^2-approximate kernel of constant size and the 1-approximate kernel of size k^{O(d^2)}.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8095Undecidable Problems for Probabilistic Network Programming
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8096
The software-defined networking language NetKAT is able to verify many useful properties of networks automatically via a PSPACE decision procedure for program equality. However, for its probabilistic extension ProbNetKAT, no such decision procedure is known. We show that several potentially useful properties of ProbNetKAT are in fact undecidable, including emptiness of support intersection and certain kinds of distribution bounds and program comparisons. We do so by embedding the Post Correspondence Problem in ProbNetKAT via direct product expressions, and by directly embedding probabilistic finite automata.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8096Does Looking Inside a Circuit Help?
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8097
The Black-Box Hypothesis states that any property of Boolean functions decided efficiently (e.g., in BPP) with inputs represented by circuits can also be decided efficiently in the black-box setting, where an algorithm is given oracle access to the input function and an upper bound on its circuit size. If this hypothesis is true, then P neq NP. We focus on the consequences of the hypothesis being false, showing that (under general conditions on the structure of a counterexample) it implies a non-trivial algorithm for CSAT. More specifically, we show that if there is a property F of Boolean functions such that F has high sensitivity on some input function f of subexponential circuit complexity (which is a sufficient condition for F being a counterexample to the Black-Box Hypothesis), then CSAT is solvable by a subexponential-size circuit family. Moreover, if such a counterexample F is symmetric, then CSAT is in P/poly. These results provide some evidence towards the conjecture (made in this paper) that the Black-Box Hypothesis is false if and only if CSAT is easy. Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8097On the Expressive Power of Quasiperiodic SFT
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8098
In this paper we study shifts, the shift-invariant and topologically closed sets of configurations over a finite alphabet in Z^d. The minimal shifts are those shifts in which all configurations contain exactly the same patterns. Two classes of shifts play a prominent role in symbolic dynamics, in language theory and in the theory of computability: the shifts of finite type (obtained by forbidding a finite number of finite patterns) and the effective shifts (obtained by forbidding a computably enumerable set of finite patterns). We prove that every effective minimal shift can be represented as a factor of a projective subdynamics of a minimal shift of finite type in a bigger (by 1) dimension. This result transfers to the class of minimal shifts a theorem by M. Hochman known for the class of all effective shifts, and thus answers an open question by E. Jeandel. We prove a similar result for quasiperiodic shifts and also show that there exists a quasiperiodic shift of finite type for which the Kolmogorov complexity of all patterns of size n\times n is \Omega(n). Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8098Fine-Grained Complexity of Rainbow Coloring and its Variants
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8099
Consider a graph G and an edge-coloring c_R:E(G) \rightarrow [k]. A rainbow path between u,v \in V(G) is a path P from u to v such that for all e,e' \in E(P) with e \neq e' we have c_R(e) \neq c_R(e'). In the Rainbow k-Coloring problem we are given a graph G, and the objective is to decide if there exists c_R: E(G) \rightarrow [k] such that for all u,v \in V(G) there is a rainbow path between u and v in G. Several variants of Rainbow k-Coloring have been studied, two of which are defined as follows. The Subset Rainbow k-Coloring problem takes as input a graph G and a set S \subseteq V(G) \times V(G), and the objective is to decide if there exists c_R: E(G) \rightarrow [k] such that for all (u,v) \in S there is a rainbow path between u and v in G. The problem Steiner Rainbow k-Coloring takes as input a graph G and a set S \subseteq V(G), and the objective is to decide if there exists c_R: E(G) \rightarrow [k] such that for all u,v \in S there is a rainbow path between u and v in G. In an attempt to resolve open problems posed by Kowalik et al. (ESA 2016), we obtain the following results. - For every k \geq 3, Rainbow k-Coloring does not admit an algorithm running in time 2^{o(|E(G)|)}n^{O(1)}, unless ETH fails. - For every k \geq 3, Steiner Rainbow k-Coloring does not admit an algorithm running in time 2^{o(|S|^2)}n^{O(1)}, unless ETH fails. - Subset Rainbow k-Coloring admits an algorithm running in time 2^{O(|S|)}n^{O(1)}. This also implies an algorithm running in time 2^{o(|S|^2)}n^{O(1)} for Steiner Rainbow k-Coloring, which matches the lower bound we obtain. Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8099Model Checking and Validity in Propositional and Modal Inclusion Logics
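The rainbow-path condition at the heart of these problems is easy to state in code; a minimal checker for a single path under a given edge-coloring (hypothetical names, edges keyed as frozensets; deciding whether a full rainbow k-coloring exists is the hard problem studied above):

```python
def is_rainbow_path(path, coloring):
    """A path (given as a vertex sequence) is rainbow under the
    edge-coloring iff all of its edges get pairwise distinct colors.
    Edges are keyed as frozensets, e.g. frozenset({u, v})."""
    colors = [coloring[frozenset(e)] for e in zip(path, path[1:])]
    return len(colors) == len(set(colors))
```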
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8100
Propositional and modal inclusion logic are formalisms that belong to the family of logics based on team semantics. This article investigates the model checking and validity problems of these logics. We identify complexity bounds for both problems, covering both lax and strict team semantics. By doing so, we come close to finalising the programme that ultimately aims to classify the complexities of the basic reasoning problems for modal and propositional dependence, independence, and inclusion logics.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8100The Complexity of Quantum Disjointness
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8101
We introduce the communication problem QNDISJ, short for Quantum (Unique) Non-Disjointness, and study its complexity under different modes of communication complexity. The main motivation for the problem is that it is a candidate for the separation of the quantum communication complexity classes QMA and QCMA. The problem generalizes the Vector-in-Subspace and Non-Disjointness problems. We give tight bounds for the QMA, quantum, randomized communication complexities of the problem. We show polynomially related upper and lower bounds for the MA complexity. We also show an upper bound for QCMA protocols, and show that the bound is tight for a natural class of QCMA protocols for the problem. The latter lower bound is based on a geometric lemma, that states that every subset of the n-dimensional sphere of measure 2^{-p} must contain an orthonormal set of points of size Omega(n/p). We also study a "small-spaces" version of the problem, and give upper and lower bounds for its randomized complexity that show that the QNDISJ problem is harder than Non-disjointness for randomized protocols. Interestingly, for quantum modes the complexity depends only on the dimension of the smaller space, whereas for classical modes the dimension of the larger space matters.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8101Small-Space LCE Data Structure with Constant-Time Queries
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8102
The longest common extension (LCE) problem is to preprocess a given string w of length n so that the length of the longest common prefix between suffixes of w that start at any two given positions is answered quickly. In this paper, we present a data structure of O(z \tau^2 + \frac{n}{\tau}) words of space which answers LCE queries in O(1) time and can be built in O(n \log \sigma) time, where 1 \leq \tau \leq \sqrt{n} is a parameter, z is the size of the Lempel-Ziv 77 factorization of w and \sigma is the alphabet size. The proposed LCE data structure does not access the input string w when answering queries, and thus w can be deleted after preprocessing. On top of this main result, we obtain further results using (variants of) our LCE data structure, which include the following:- For highly repetitive strings where the z\tau^2 term is dominated by \frac{n}{\tau}, we obtain a constant-time and sub-linear space LCE query data structure.- Even when the input string is not well compressible via Lempel-Ziv 77 factorization, we can still obtain a constant-time and sub-linear space LCE data structure for suitable \tau and for \sigma \leq 2^{o(\log n)}.- The time-space trade-off lower bounds for the LCE problem by Bille et al. [J. Discrete Algorithms, 25:42-50, 2014] and by Kosolobov [CoRR, abs/1611.02891, 2016] do not apply in some cases with our LCE data structure.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8102Eilenberg Theorems for Free
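As a baseline for what the paper's structure accelerates, an LCE query can always be answered by direct character comparison in O(n) time, provided the string itself is kept (a naive sketch, not the paper's data structure):

```python
def lce_naive(w, i, j):
    """Length of the longest common prefix of the suffixes w[i:] and w[j:],
    by direct comparison: O(n) time, but requires storing w itself."""
    n, k = len(w), 0
    while i + k < n and j + k < n and w[i + k] == w[j + k]:
        k += 1
    return k
```

The point of the paper's result is to answer the same query in O(1) time within O(z \tau^2 + n/\tau) words, without retaining w at all.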
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8103
Eilenberg-type correspondences, relating varieties of languages (e.g., of finite words, infinite words, or trees) to pseudovarieties of finite algebras, form the backbone of algebraic language theory. We show that they all arise from the same recipe: one models languages and the algebras recognizing them by monads on an algebraic category, and applies a Stone-type duality. Our main contribution is a variety theorem that covers e.g. Wilke's and Pin's work on infinity-languages, the variety theorem for cost functions of Daviaud, Kuperberg, and Pin, and unifies the two categorical approaches of Bojanczyk and of Adamek et al. In addition we derive new results, such as an extension of the local variety theorem of Gehrke, Grigorieff, and Pin from finite to infinite words.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8103The Shortest Identities for Max-Plus Automata with Two States
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8104
Max-plus automata are quantitative extensions of automata designed to associate an integer with every non-empty word. A pair of distinct words is said to be an identity for a class of max-plus automata if each of the automata in the class computes the same value on the two words. We give the shortest identities holding for the class of max-plus automata with two states. For this, we exhibit an interesting list of necessary conditions for an identity to hold. Moreover, this result provides a counter-example to a conjecture of Izhakian concerning the minimality of certain identities.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8104Comparison of Max-Plus Automata and Joint Spectral Radius of Tropical Matrices
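The value a max-plus automaton assigns to a word is the maximum, over accepting runs, of the sum of transition weights; equivalently, a product of the letters' transition matrices over the (max, +) semiring. A minimal sketch (illustrative encoding: one matrix per letter, -inf for absent transitions) of evaluating a word:

```python
NEG_INF = float('-inf')

def maxplus_matmul(A, B):
    """(max, +) matrix product: (A (x) B)[i][j] = max_k A[i][k] + B[k][j]."""
    return [[max((A[i][k] + B[k][j] for k in range(len(B))), default=NEG_INF)
             for j in range(len(B[0]))] for i in range(len(A))]

def automaton_value(word, matrices, init, final):
    """Value of a max-plus automaton on a word: the best accepting run,
    computed as a row-vector / matrix / column-vector (max, +) product."""
    vec = [list(init)]                         # 1 x n row vector of initial weights
    for letter in word:
        vec = maxplus_matmul(vec, matrices[letter])
    vec = maxplus_matmul(vec, [[f] for f in final])
    return vec[0][0]
```

An identity (u, v) holds for a class exactly when `automaton_value` returns the same number on u and v for every automaton in the class; checking the paper's shortest two-state identities amounts to such evaluations.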
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8105
Weighted automata over the tropical semiring Zmax are closely related to finitely generated semigroups of matrices over Zmax. In this paper, we use results in automata theory to study two quantities associated with sets of matrices: the joint spectral radius and the ultimate rank. We prove that these two quantities are not computable over the tropical semiring, i.e. there is no algorithm that takes as input a finite set of matrices S and provides as output the joint spectral radius (resp. the ultimate rank) of S. On the other hand, we prove that the joint spectral radius is nevertheless approximable and we exhibit restricted cases in which the joint spectral radius and the ultimate rank are computable. To reach this aim, we study the problem of comparing functions computed by weighted automata over the tropical semiring. This problem is known to be undecidable, and we prove that it remains undecidable in some specific subclasses of automata.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8105Parameterized Complexity of the List Coloring Reconfiguration Problem with Graph Parameters
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8106
Let G be a graph such that each vertex has its list of available colors, and assume that each list is a subset of the common set consisting of k colors. For two given list colorings of G, we study the problem of transforming one into the other by changing only one vertex color assignment at a time, while at all times maintaining a list coloring. This problem is known to be PSPACE-complete even for bounded bandwidth graphs and a fixed constant k. In this paper, we study the fixed-parameter tractability of the problem when parameterized by several graph parameters. We first give a fixed-parameter algorithm for the problem when parameterized by k and the modular-width of an input graph. We next give a fixed-parameter algorithm for the shortest variant which computes the length of a shortest transformation when parameterized by k and the size of a minimum vertex cover of an input graph. As corollaries, we show that the problem for cographs and the shortest variant for split graphs are fixed-parameter tractable even when only k is taken as a parameter. On the other hand, we prove that the problem is W[1]-hard when parameterized only by the size of a minimum vertex cover of an input graph.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8106Generalized Predecessor Existence Problems for Boolean Finite Dynamical Systems
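For tiny instances, the transformation question (and the shortest variant) can be decided by explicit breadth-first search over the reconfiguration graph, whose nodes are proper list colorings and whose edges recolor a single vertex. This exponential-time sketch is illustration only; the paper's fixed-parameter algorithms avoid exactly this blow-up:

```python
from collections import deque

def reconfigure(adj, lists, start, target):
    """Shortest number of single-vertex recolorings transforming one list
    coloring into another while always keeping a proper list coloring;
    returns None if no transformation exists. Exponential-time BFS."""
    verts = sorted(adj)
    key = lambda col: tuple(col[v] for v in verts)
    src, dst = key(start), key(target)
    dist = {src: 0}
    queue = deque([src])
    while queue:
        cur = queue.popleft()
        if cur == dst:
            return dist[cur]
        col = dict(zip(verts, cur))
        for v in verts:
            for c in lists[v]:
                # recolor v to c only if c is in v's list and conflicts with no neighbour
                if c != col[v] and all(c != col[u] for u in adj[v]):
                    nxt = key({**col, v: c})
                    if nxt not in dist:
                        dist[nxt] = dist[cur] + 1
                        queue.append(nxt)
    return None
```

On a single edge with lists {1,2} on both endpoints, the two proper colorings are frozen (every recoloring creates a conflict); enlarging one list to {1,2,3} makes a three-step transformation possible.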
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8107
A Boolean Finite Synchronous Dynamical System (BFDS, for short) consists of a finite number of objects that each maintains a boolean state, where after individually receiving state assignments, the objects update their state with respect to object-specific time-independent boolean functions synchronously in discrete time steps. The present paper studies the computational complexity of determining, given a boolean finite synchronous dynamical system, a configuration, which is a boolean vector representing the states of the objects, and a positive integer t, whether there exists another configuration from which the given configuration can be reached in t steps. It was previously shown that this problem, which we call the t-Predecessor Problem, is NP-complete even for t = 1 if the update function of an object is either the conjunction of arbitrary fan-in or the disjunction of arbitrary fan-in. This paper studies the computational complexity of the t-Predecessor Problem for a variety of sets of permissible update functions as well as for polynomially bounded t. It also studies the t-Garden-Of-Eden Problem, a variant of the t-Predecessor Problem that asks whether a configuration has a t-predecessor, which itself has no predecessor. The paper obtains complexity-theoretic characterizations of all but one of these problems.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8107TC^0 Circuits for Algorithmic Problems in Nilpotent Groups
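The synchronous update step and the t-Predecessor question are easy to state in code. A brute-force sketch (update functions modeled as arbitrary Python callables on the whole configuration; exponential enumeration, illustration only — the paper's point is classifying when this can be done efficiently):

```python
from itertools import product

def step(funcs, config):
    """One synchronous BFDS update: every object applies its own
    update function to the current global configuration."""
    return tuple(f(config) for f in funcs)

def t_predecessor_exists(funcs, config, t):
    """Brute-force t-Predecessor test: is there a configuration from
    which `config` is reached in exactly t synchronous steps?"""
    n = len(config)
    for cand in product((0, 1), repeat=n):
        cur = cand
        for _ in range(t):
            cur = step(funcs, cur)
        if cur == config:
            return True
    return False
```

For the two-object system x0' = x0 AND x1, x1' = x0 OR x1 (conjunction and disjunction updates, as in the NP-completeness result quoted above), (1, 0) has no predecessor while (0, 1) does.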
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8108
Recently, Macdonald et al. showed that many algorithmic problems for finitely generated nilpotent groups, including computation of normal forms, the subgroup membership problem, the conjugacy problem, and computation of subgroup presentations, can be done in LOGSPACE. Here we follow their approach and show that all these problems are complete for the uniform circuit class TC^0 - uniformly for all r-generated nilpotent groups of class at most c for fixed r and c. Moreover, if we allow a certain binary representation of the inputs, then the word problem and computation of normal forms is still in uniform TC^0, while all the other problems we examine are shown to be TC^0-Turing reducible to the problem of computing greatest common divisors and expressing them as linear combinations.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8108Lower Bounds and PIT for Non-Commutative Arithmetic Circuits with Restricted Parse Trees
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8109
We investigate the power of Non-commutative Arithmetic Circuits, which compute polynomials over the free non-commutative polynomial ring F&lt;x_1, ..., x_n&gt;, where variables do not commute. We consider circuits that are restricted in the ways in which they can compute monomials: this can be seen as restricting the families of parse trees that appear in the circuit. Such restrictions capture essentially all non-commutative circuit models for which lower bounds are known. We prove several results about such circuits.- We show explicit exponential lower bounds for circuits with up to an exponential number of parse trees, strengthening the work of Lagarde, Malod, and Perifel (ECCC 2016), who prove such a result for Unique Parse Tree (UPT) circuits which have a single parse tree.- We show explicit exponential lower bounds for circuits whose parse trees are rotations of a single tree. This simultaneously generalizes recent lower bounds of Limaye, Malod, and Srinivasan (Theory of Computing 2016) and the above lower bounds of Lagarde et al., which are known to be incomparable.- We make progress on a question of Nisan (STOC 1991) regarding separating the power of Algebraic Branching Programs (ABPs) and Formulas in the non-commutative setting by showing a tight lower bound of n^{Omega(log d)} for any UPT formula computing the product of d n*n matrices. When d <= log n, we can also prove superpolynomial lower bounds for formulas with up to 2^{o(d)} many parse trees (for computing the same polynomial). Improving this bound to allow for 2^{O(d)} trees would yield an unconditional separation between ABPs and Formulas.- We give deterministic white-box PIT algorithms for UPT circuits over any field (strengthening a result of Lagarde et al. (2016)) and also for sums of a constant number of UPT circuits with different parse trees.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8109An Improved FPT Algorithm for the Flip Distance Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8110
Given a set \cal P of points in the Euclidean plane and two triangulations of \cal P, the flip distance between these two triangulations is the minimum number of flips required to transform one triangulation into the other. The Parameterized Flip Distance problem is to decide if the flip distance between two given triangulations is equal to a given integer k. The previous best FPT algorithm runs in time O^*(k\cdot c^k) (c\leq 2\times 14^{11}), where each step has fourteen possible choices, and the length of the action sequence is bounded by 11k. By applying the backtracking strategy and analyzing the underlying property of the flip sequence, each step of our algorithm has only five possible choices. Based on an auxiliary graph G, we prove that the length of the action sequence for our algorithm is bounded by 2|G|. As a result, we present an FPT algorithm running in time O^*(k\cdot 32^k).Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8110Making Metric Temporal Logic Rational
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8111
We study an extension of MTL in pointwise time with regular expression guarded modality Reg_I(re) where re is a rational expression over subformulae. We study the decidability and expressiveness of this extension (MTL+Ureg+Reg), called RegMTL, as well as its fragment SfrMTL where only star-free rational expressions are allowed. Using the technique of temporal projections, we show that RegMTL has decidable satisfiability by giving an equisatisfiable reduction to MTL. We also identify a subclass MITL+UReg of RegMTL for which our equisatisfiable reduction gives rise to formulae of MITL, yielding elementary decidability. As our second main result, we show a tight automaton-logic connection between SfrMTL and partially ordered (or very weak) 1-clock alternating timed automata. Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8111Towards a Polynomial Kernel for Directed Feedback Vertex Set
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8112
In the Directed Feedback Vertex Set (DFVS) problem, the input is a directed graph D and an integer k. The objective is to determine whether there exists a set of at most k vertices intersecting every directed cycle of D. DFVS was shown to be fixed-parameter tractable when parameterized by solution size by Chen, Liu, Lu, O'Sullivan and Razgon [JACM 2008]; since then, the existence of a polynomial kernel for this problem has become one of the largest open problems in the area of parameterized algorithmics. In this paper, we study DFVS parameterized by the feedback vertex set number of the underlying undirected graph. We provide two main contributions: a polynomial kernel for this problem on general instances, and a linear kernel for the case where the input digraph is embeddable on a surface of bounded genus.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8112Monitor Logics for Quantitative Monitor Automata
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8113
We introduce a new logic called Monitor Logic and show that it is expressively equivalent to Quantitative Monitor Automata.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8113The Equivalence, Unambiguity and Sequentiality Problems of Finitely Ambiguous Max-Plus Tree Automata are Decidable
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8114
We show that the equivalence, unambiguity and sequentiality problems are decidable for finitely ambiguous max-plus tree automata.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8114Weighted Operator Precedence Languages
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8115
In recent years, renewed investigation of operator precedence languages (OPL) has led to the discovery of important properties thereof: OPL are closed with respect to all major operations, are characterized, besides the original grammar family, in terms of an automata family (OPA) and an MSO logic; furthermore they significantly generalize the well-known visibly pushdown languages (VPL). In another area of research, quantitative models of systems are also greatly in demand. In this paper, we lay the foundation to marry these two research fields. We introduce weighted operator precedence automata and show how they are both strict extensions of OPA and weighted visibly pushdown automata. We prove a Nivat-like result which shows that quantitative OPL can be described by unweighted OPA and very particular weighted OPA. In a Büchi-like theorem, we show that weighted OPA are expressively equivalent to a weighted MSO-logic for OPL.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8115The Power of Linear-Time Data Reduction for Maximum Matching
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8116
Finding maximum-cardinality matchings in undirected graphs is arguably one of the most central graph primitives. For m-edge and n-vertex graphs, it is well-known to be solvable in O(m\sqrt{n}) time; however, for several applications this running time is still too slow. We investigate how linear-time (and almost linear-time) data reduction (used as preprocessing) can alleviate the situation. More specifically, we focus on linear-time kernelization. We start a deeper and systematic study both for general graphs and for bipartite graphs. Our data reduction algorithms easily comply (in form of preprocessing) with every solution strategy (exact, approximate, heuristic), thus making them attractive in various settings.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8116ZX-Calculus: Cyclotomic Supplementarity and Incompleteness for Clifford+T Quantum Mechanics
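One classic linear-time reduction rule of the kind such preprocessing builds on (illustrative only; not necessarily the paper's exact rule set) is the degree-1 rule: a vertex with a single neighbour can always be matched to that neighbour in some maximum matching, so the pair can be removed and the match recorded. A sketch with a work queue so the whole cascade runs in linear time:

```python
def degree_one_reduction(adj):
    """Exhaustively apply the degree-1 rule for maximum matching.
    Returns (forced matching edges, reduced residual graph).
    adj: dict mapping each vertex to a set of neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    forced = []
    queue = [v for v in adj if len(adj[v]) == 1]
    while queue:
        v = queue.pop()
        if v not in adj or len(adj[v]) != 1:      # already removed or degree changed
            continue
        u = next(iter(adj[v]))
        forced.append((v, u))                     # safe: some maximum matching uses (v, u)
        for w in adj[u]:
            if w != v:
                adj[w].discard(u)
                if len(adj[w]) == 1:              # removal may create new degree-1 vertices
                    queue.append(w)
        del adj[v], adj[u]
    return forced, adj
```

On a path a-b-c-d the rule alone finds a maximum matching of size 2 and empties the graph; on a triangle it does nothing, leaving the residual instance for the actual matching algorithm.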
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8117
The ZX-Calculus is a powerful graphical language for quantum mechanics and quantum information processing. The completeness of the language - i.e. the ability to derive any true equation - is a crucial question. In the quest of a complete ZX-calculus, supplementarity has been recently proved to be necessary for quantum diagram reasoning (MFCS 2016). Roughly speaking, supplementarity consists in merging two subdiagrams when they are parameterized by antipodal angles. We introduce a generalised supplementarity - called cyclotomic supplementarity - which consists in merging n subdiagrams at once, when the n angles divide the circle into equal parts. We show that when n is an odd prime number, the cyclotomic supplementarity cannot be derived, leading to a countable family of new axioms for diagrammatic quantum reasoning. We exhibit another new simple axiom that cannot be derived from the existing rules of the ZX-Calculus, implying in particular the incompleteness of the language for the so-called Clifford+T quantum mechanics. We end up with a new axiomatisation of an extended ZX-Calculus, including an axiom schema for the cyclotomic supplementarity.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8117Variations on Inductive-Recursive Definitions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8118
Dybjer and Setzer introduced the definitional principle of inductive-recursively defined families - i.e. of families (U : Set, T : U -> D) such that the inductive definition of U may depend on the recursively defined T - by defining a type DS D E of codes. Each c : DS D E defines a functor [c] : Fam D -> Fam E, and (U, T) = \mu [c] : Fam D is exhibited as the initial algebra of [c]. This paper considers the composition of DS-definable functors: Given F : Fam C -> Fam D and G : Fam D -> Fam E, is G \circ F : Fam C -> Fam E DS-definable, if F and G are? We show that this is the case if and only if powers of families are DS-definable, which seems unlikely. To construct composition, we present two new systems UF and PN of codes for inductive-recursive definitions, with UF a subsystem of DS, which is in turn a subsystem of PN. Both UF and PN are closed under composition. Since PN defines a potentially larger class of functors, we show that there is a model where initial algebras of PN-functors exist by adapting Dybjer-Setzer's proof for DS.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8118Walrasian Pricing in Multi-Unit Auctions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8119
Multi-unit auctions are a paradigmatic model, where a seller brings multiple units of a good, while several buyers bring monetary endowments. It is well known that Walrasian equilibria do not always exist in this model; however, compelling relaxations such as Walrasian envy-free pricing do. In this paper we design an optimal envy-free mechanism for multi-unit auctions with budgets. When the market is even mildly competitive, the approximation ratios of this mechanism are small constants for both the revenue and welfare objectives, and in fact for welfare the approximation converges to 1 as the market becomes fully competitive. We also give an impossibility theorem, showing that truthfulness requires discarding resources, and in particular, is incompatible with (Pareto) efficiency.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8119Strategy Complexity of Concurrent Safety Games
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8120
We consider two player, zero-sum, finite-state concurrent reachability games, played for an infinite number of rounds, where in every round, each player simultaneously and independently of the other players chooses an action, whereafter the successor state is determined by a probability distribution given by the current state and the chosen actions. Player 1 wins iff a designated goal state is eventually visited. We are interested in the complexity of stationary strategies measured by their patience, which is defined as the inverse of the smallest non-zero probability employed. Our main results are as follows: We show that: (i) the optimal bound on the patience of optimal and epsilon-optimal strategies, for both players is doubly exponential; and (ii) even in games with a single non-absorbing state exponential (in the number of actions) patience is necessary. Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8120Faster Monte-Carlo Algorithms for Fixation Probability of the Moran Process on Undirected Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8121
Evolutionary graph theory studies the evolutionary dynamics in a population structure given as a connected graph. Each node of the graph represents an individual of the population, and edges determine how offspring are placed. We consider the classical birth-death Moran process where there are two types of individuals, namely, the residents with fitness 1 and mutants with fitness r. The fitness indicates the reproductive strength. The evolutionary dynamics happens as follows: in the initial step, in a population of all resident individuals a mutant is introduced, and then at each step, an individual is chosen proportional to the fitness of its type to reproduce, and the offspring replaces a neighbor uniformly at random. The process stops when all individuals are either residents or mutants. The probability that all individuals in the end are mutants is called the fixation probability, which is a key factor in the rate of evolution. We consider the problem of approximating the fixation probability. The class of algorithms that is extremely relevant for approximation of the fixation probabilities is the Monte-Carlo simulation of the process. Previous results present a polynomial-time Monte-Carlo algorithm for undirected graphs when r is given in unary. First, we present a simple modification: instead of simulating each step, we discard ineffective steps, where no node changes type (i.e., either residents replace residents, or mutants replace mutants). Using the above simple modification and our result that the number of effective steps is concentrated around the expected number of effective steps, we present faster polynomial-time Monte-Carlo algorithms for undirected graphs. Our algorithms are always at least a factor O(n^2/log n) faster compared to the previous algorithms, where n is the number of nodes, and their running time is polynomial even if r is given in binary. 
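The process described above is straightforward to simulate step by step. A naive Monte-Carlo sketch (illustrative names; this version simulates every step, including exactly the ineffective ones the paper's speedup discards):

```python
import random

def moran_fixation_estimate(adj, r, trials=2000, rng=None):
    """Naive Monte-Carlo estimate of the fixation probability of a single
    mutant (fitness r) among residents (fitness 1) under birth-death
    Moran dynamics on a graph given as an adjacency dict."""
    rng = rng or random.Random(0)
    nodes = list(adj)
    n = len(nodes)
    fixations = 0
    for _ in range(trials):
        mutants = {rng.choice(nodes)}             # one mutant placed at random
        while 0 < len(mutants) < n:
            # reproducer chosen proportional to the fitness of its type
            weights = [r if v in mutants else 1.0 for v in nodes]
            v = rng.choices(nodes, weights=weights)[0]
            w = rng.choice(adj[v])                # offspring replaces a random neighbour
            if v in mutants:
                mutants.add(w)
            else:
                mutants.discard(w)
        fixations += (len(mutants) == n)
    return fixations / trials
```

On regular graphs the estimate should approach the well-mixed Moran value (1 - 1/r) / (1 - 1/r^n); on K_3 with r = 2 that is 4/7.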
We also present lower bounds showing that the upper bound on the expected number of effective steps we present is asymptotically tight for undirected graphs.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8121Attainable Values of Reset Thresholds
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8122
An automaton is synchronizing if there exists a word that sends all states of the automaton to a single state. The reset threshold is the length of the shortest such word. We study the set RT_n of attainable reset thresholds by automata with n states. Relying on constructions of digraphs with known local exponents we show that the intervals [1, (n^2-3n+4)/2] and [(p-1)(q-1), p(q-2)+n-q+1], where 2 <= p < q <= n, p+q > n, gcd(p,q)=1, belong to RT_n, even if we restrict our attention to strongly connected automata. Moreover, we prove that in this case the smallest value that does not belong to RT_n is at least n^2 - O(n^{1.7625} log n / log log n). This value is increased further assuming certain conjectures about the gaps between consecutive prime numbers. We also show that any value smaller than n(n-1)/2 is attainable by an automaton with a sink state and any value smaller than n^2-O(n^{1.5}) is attainable in the general case. Furthermore, we solve the problem of existence of slowly synchronizing automata over an arbitrarily large alphabet, by presenting for every fixed size of the alphabet an infinite series of irreducibly synchronizing automata with the reset threshold n^2-O(n).Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8122The Hardness of Solving Simple Word Equations
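For a small automaton, the reset threshold can be computed exactly by breadth-first search over subsets of states: a word is a reset word precisely when its subset image is a singleton. A sketch (exponential in the number of states, so illustration only):

```python
from collections import deque

def shortest_reset_word(delta, states, alphabet):
    """BFS on the subset construction: returns a shortest word sending the
    full state set to a single state, or None if the automaton is not
    synchronizing. delta maps (state, letter) -> state."""
    start = frozenset(states)
    dist = {start: ""}
    queue = deque([start])
    while queue:
        S = queue.popleft()
        if len(S) == 1:
            return dist[S]                # first singleton reached = shortest reset word
        for a in alphabet:
            T = frozenset(delta[(q, a)] for q in S)
            if T not in dist:
                dist[T] = dist[S] + a
                queue.append(T)
    return None
```

On the 3-state Černý automaton (letter a cycles the states, letter b maps state 0 to 1 and fixes the rest) the BFS returns a word of length 4 = (3-1)^2, the known reset threshold.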
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8123
We investigate the class of regular-ordered word equations. In such equations, each variable occurs at most once in each side and the order of the variables occurring in both left and right hand sides is preserved (the variables can be, however, separated by potentially distinct constant factors). Surprisingly, we obtain that solving such simple equations, even when the sides contain exactly the same variables, is NP-hard. By considerations regarding the combinatorial structure of the minimal solutions of the more general quadratic equations we obtain that the satisfiability problem for regular-ordered equations is in NP. The complexity of solving such word equations under regular constraints is also settled. Finally, we show that a related class of simple word equations, that generalises one-variable equations, is in P.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8123Parameterized Algorithms and Kernels for Rainbow Matching
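The satisfiability question can always be attacked by naive enumeration of bounded-length substitutions; the NP-hardness result above says that, even for regular-ordered equations, something essentially better than this exponential baseline is unlikely. A sketch (illustrative encoding: uppercase symbols are variables, lowercase are constants):

```python
from itertools import product

def substitute(side, assignment):
    """Replace each variable (uppercase symbol) by its assigned word."""
    return "".join(assignment.get(s, s) for s in side)

def solvable(lhs, rhs, alphabet="ab", max_len=3):
    """Brute-force search for a solution of the word equation lhs = rhs
    with every variable mapped to a word of length <= max_len."""
    variables = sorted({s for s in lhs + rhs if s.isupper()})
    words = ["".join(w) for n in range(max_len + 1)
             for w in product(alphabet, repeat=n)]
    for values in product(words, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if substitute(lhs, assignment) == substitute(rhs, assignment):
            return assignment
    return None
```

For example, X a = a X is solved by X = "" (indeed by any power of a), X a = b X has no solution, and X a Y = a X Y is a regular-ordered equation (X before Y on both sides) with a solution.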
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8124
In this paper, we study the NP-complete colorful variant of the classical Matching problem, namely, the Rainbow Matching problem. Given an edge-colored graph G and a positive integer k, this problem asks whether there exists a matching of size at least k such that all the edges in the matching have distinct colors. We first develop a deterministic algorithm that solves Rainbow Matching on paths in time O*(((1+\sqrt{5})/2)^k) and polynomial space. This algorithm is based on a curious combination of the method of bounded search trees and a "divide-and-conquer-like" approach, where the branching process is guided by the maintenance of an auxiliary bipartite graph where one side captures "divided-and-conquered" pieces of the path. Our second result is a randomized algorithm that solves Rainbow Matching on general graphs in time O*(2^k) and polynomial space. Here, we show how a result by Björklund et al. [JCSS, 2017] can be invoked as a black box, wrapped by a probability-based analysis tailored to our problem. We also complement our two main results by designing kernels for Rainbow Matching on general and bounded-degree graphs.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8124Being Even Slightly Shallow Makes Life Hard
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8125
We study the computational complexity of identifying dense substructures, namely r/2-shallow topological minors and r-subdivisions. Of particular interest is the case r = 1, when these substructures correspond to very localized relaxations of subgraphs. Since Densest Subgraph can be solved in polynomial time, we ask whether these slight relaxations also admit efficient algorithms. In the following, we provide a negative answer: Dense r/2-Shallow Topological Minor and Dense r-Subdivision are already NP-hard for r = 1 in very sparse graphs. Further, they do not admit algorithms with running time 2^(o(tw^2)) n^O(1) when parameterized by the treewidth of the input graph for r > 2 unless ETH fails.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8125Compositional Weak Metrics for Group Key Update
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8126
We investigate the compositionality of both weak bisimilarity metric and weak similarity quasi-metric semantics with respect to a variety of standard operators, in the context of probabilistic process algebra. We show how compositionality with respect to nondeterministic and probabilistic choice requires resorting to rooted semantics. As a main application, we demonstrate how our results can be successfully used to conduct compositional reasoning to estimate the performance of group key update protocols in a multicast setting.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8126On the Upward/Downward Closures of Petri Nets
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8127
We study the size and the complexity of computing finite state automata (FSA) representing and approximating the downward and the upward closure of Petri net languages with coverability as the acceptance condition. We show how to construct an FSA recognizing the upward closure of a Petri net language in doubly-exponential time, and therefore the size is at most doubly exponential. For downward closures, we prove that the size of the minimal automata can be non-primitive recursive. In the case of BPP nets, a well-known subclass of Petri nets, we show that an FSA accepting the downward/upward closure can be constructed in exponential time. Furthermore, we consider the problem of checking whether a simple regular language is included in the downward/upward closure of a Petri net/BPP net language. We show that this problem is EXPSPACE-complete (resp. NP-complete) in the case of Petri nets (resp. BPP nets). Finally, we show that it is decidable whether a Petri net language is upward/downward closed.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8127Induced Embeddings into Hamming Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8128
Let d be a positive integer. Can a given graph G be realized in R^d so that vertices are mapped to distinct points, two vertices being adjacent if and only if the corresponding points lie on a common line that is parallel to some axis? Graphs admitting such realizations have been studied in the literature for decades under different names. Peterson asked in [Discrete Appl. Math., 2003] about the complexity of the recognition problem. While the two-dimensional case corresponds to the class of line graphs of bipartite graphs and is well-understood, the complexity question has remained open for all higher dimensions. In this paper, we answer this question. We establish the NP-completeness of the recognition problem for any fixed dimension, even in the class of bipartite graphs. To do this, we strengthen a characterization of induced subgraphs of 3-dimensional Hamming graphs due to Klavžar and Peterin. We complement the hardness result by showing that for some important classes of perfect graphs (including chordal graphs and distance-hereditary graphs) the minimum dimension of the Euclidean space in which the graph can be realized, or the impossibility of doing so, can be determined in linear time.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8128The Quantum Monad on Relational Structures
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8129
Homomorphisms between relational structures play a central role in finite model theory, constraint satisfaction, and database theory. A central theme in quantum computation is to show how quantum resources can be used to gain advantage in information processing tasks. In particular, non-local games have been used to exhibit quantum advantage in boolean constraint satisfaction, and to obtain quantum versions of graph invariants such as the chromatic number. We show how quantum strategies for homomorphism games between relational structures can be viewed as Kleisli morphisms for a quantum monad on the (classical) category of relational structures and homomorphisms. We use these results to exhibit a wide range of examples of contextuality-powered quantum advantage, and to unify several apparently diverse strands of previous work.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8129Complexity of Restricted Variants of Skolem and Related Problems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8130
Given a linear recurrence sequence (LRS), the Skolem problem asks whether it ever becomes zero. The decidability of this problem has been open for several decades. Currently, decidability is known only for LRS of order up to 4. For arbitrary orders (i.e., the number of previous terms on which the n-th term depends), the only known complexity result is NP-hardness, by a result of Blondel and Portier from 2002. In this paper, we give a different proof of this hardness result, which is arguably simpler and pinpoints the source of hardness. To demonstrate this, we identify a subclass of LRS for which the Skolem problem is in fact NP-complete. We show the generic nature of our lower-bound technique by adapting it to prove stronger lower bounds for a related problem that encompasses many known decision problems on linear recurrence sequences.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8130Distributed Strategies Made Easy
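As a quick illustration of the problem above (not from the paper), the following sketch unfolds a prefix of an LRS and scans it for a zero. Since no general bound on the position of the first zero is known, this only semi-decides the Skolem problem: finding a zero certifies "yes," but an empty prefix proves nothing.

```python
def lrs_prefix(coeffs, initial, length):
    """Generate the first `length` terms of the LRS
    u(n) = coeffs[0]*u(n-1) + ... + coeffs[k-1]*u(n-k)."""
    terms = list(initial)
    while len(terms) < length:
        k = len(coeffs)
        terms.append(sum(c * t for c, t in zip(coeffs, reversed(terms[-k:]))))
    return terms[:length]

def first_zero_index(coeffs, initial, bound):
    """Return the index of the first zero among the first `bound` terms, or None.
    This is only a semi-decision procedure for the Skolem problem."""
    for i, t in enumerate(lrs_prefix(coeffs, initial, bound)):
        if t == 0:
            return i
    return None

# u(n) = u(n-1) + u(n-2) with u(0)=2, u(1)=-1 runs 2, -1, 1, 0, 1, 1, 2, ...
print(first_zero_index([1, 1], [2, -1], 20))  # -> 3
```

The Fibonacci sequence (same recurrence, initial terms 1, 1) never hits zero, and the sketch duly returns None for any finite bound, without of course proving that no zero exists further out.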
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8131
Distributed/concurrent strategies have been introduced as special maps of event structures. As such they factor through their "rigid images," themselves strategies. By concentrating on such "rigid image" strategies we are able to give an elementary account of distributed strategies and their composition, resulting in a category of games and strategies. This is in contrast to the usual development, where composition involves the pullback of event structures explicitly and results in a bicategory. We show how, in this simpler setting, strategies extend to probabilistic strategies, and indicate how, through probability, we can track nondeterministic branching behaviour that one might otherwise think irrevocably lost in restricting attention to "rigid image" strategies.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8131Automata in the Category of Glued Vector Spaces
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8132
In this paper we adopt a category-theoretic approach to the conception of automata classes enjoying minimization by design. The main instantiation of our construction is a new class of automata that are hybrid between deterministic automata and automata weighted over a field.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8132Reversible Kleene lattices
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8133
We investigate the equational theory of reversible Kleene lattices, that is, algebras of languages with the regular operations (union, composition and Kleene star), together with intersection and mirror image. Building on results by Andréka, Mikulás and Németi from 2011, we construct the free representation of this algebra. We then provide an automaton model to compare representations. These automata are adapted from Petri automata, which we introduced with Pous in 2015 to tackle a similar problem for algebras of binary relations. This allows us to show that testing the validity of equations in this algebra is decidable, and in fact ExpSpace-complete.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8133The 2CNF Boolean Formula Satisfiability Problem and the Linear Space Hypothesis
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8134
We investigate the solvability/insolvability of nondeterministic logarithmic-space (NL) decision, search, and optimization problems parameterized by size parameters, using simultaneously polynomial time and sub-linear space on multi-tape deterministic Turing machines. We are particularly focused on a special NL-complete problem, 2SAT (the 2CNF Boolean formula satisfiability problem) parameterized by the number of Boolean variables. It is shown that 2SAT with n variables and m clauses can be solved simultaneously in polynomial time and (n/2^{c sqrt{log(n)}}) polylog(m+n) space for an absolute constant c>0. This fact inspires us to propose a new, practical working hypothesis, called the linear space hypothesis (LSH), which states that 2SAT_3 (a restricted variant of 2SAT in which each variable of a given 2CNF formula appears as a literal in at most 3 clauses) cannot be solved simultaneously in polynomial time using strictly "sub-linear" (i.e., n^{epsilon} polylog(n) for a certain constant epsilon in (0,1)) space. An immediate consequence of this working hypothesis is L neq NL. Moreover, we use our hypothesis as a plausible basis to derive the insolvability of various NL search problems as well as the nonapproximability of NL optimization problems. For our investigation, since standard logarithmic-space reductions may no longer preserve polynomial-time sub-linear-space complexity, we need to introduce a new, practical notion of "short reduction." It turns out that overline{2SAT}_3 is complete for a restricted version of NL, called Syntactic NL or simply SNL, under such short reductions. This fact supports the legitimacy of our working hypothesis.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8134On the Exact Amount of Missing Information that Makes Finding Possible Winners Hard
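For context, here is the classical implication-graph algorithm for 2SAT (linear time, but also linear space, so it is a baseline rather than the sub-linear-space algorithm of the abstract): a 2CNF formula is unsatisfiable iff some variable and its negation lie in the same strongly connected component. A minimal sketch, assuming clauses are given as pairs of nonzero integers with -v denoting the negation of variable v:

```python
def solve_2sat(n, clauses):
    """Decide satisfiability of a 2CNF formula over variables 1..n.
    Each clause is a pair (a, b) of nonzero ints; -v means "not v"."""
    N = 2 * n
    node = lambda lit: 2 * (lit - 1) if lit > 0 else 2 * (-lit - 1) + 1
    adj = [[] for _ in range(N)]   # implication graph
    radj = [[] for _ in range(N)]  # reversed edges, for Kosaraju's 2nd pass
    for a, b in clauses:
        # (a or b) yields implications (not a -> b) and (not b -> a).
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            adj[u].append(v)
            radj[v].append(u)
    # Kosaraju pass 1: iterative DFS recording nodes in order of finish time.
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            u, i = stack.pop()
            if i < len(adj[u]):
                stack.append((u, i + 1))
                v = adj[u][i]
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, 0))
            else:
                order.append(u)
    # Pass 2: label SCCs on the reversed graph in reverse finish order.
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s], stack = c, [s]
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1
    # Unsatisfiable iff some variable shares an SCC with its negation.
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n))

print(solve_2sat(2, [(1, 2), (-1, 2), (1, -2)]))              # -> True
print(solve_2sat(2, [(1, 2), (1, -2), (-1, 2), (-1, -2)]))    # -> False
```

The point of the abstract is precisely that this textbook approach stores the whole implication graph, i.e., uses linear space, whereas the paper trades time for strictly sub-linear space.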
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8135
We consider election scenarios with incomplete information, a situation that often arises in practice. There are several models of incomplete information and, accordingly, different notions of outcomes of such elections. In one well-studied model of incompleteness, the votes are given as partial orders over the candidates. In this context, the possible winner problem asks whether a given candidate wins in at least one completion of a given set of partial votes under a specific voting rule. The Possible Winner problem is well known to be NP-complete in general, and it remains NP-complete for several voting rules even when the number of undetermined pairs in every vote is bounded by a constant. In this paper, we determine precisely the smallest number of undetermined pairs for which the Possible Winner problem remains NP-complete. In particular, we find the exact values of t for which the Possible Winner problem transitions from being in P to being NP-complete, where t is the maximum number of undetermined pairs in every vote. We demonstrate tight results for a broad subclass of scoring rules which includes all the commonly used scoring rules (such as plurality, veto, Borda, and k-approval), Copeland^\alpha for every \alpha in [0,1], maximin, and Bucklin voting rules. A somewhat surprising aspect of our results is that for many of these rules, the Possible Winner problem turns out to be hard even if every vote has at most one undetermined pair of candidates.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8135Temporal Logics for Multi-Agent Systems (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8136
This is an overview of an invited talk delivered at the 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017).Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8136Computational Complexity of Graph Partition under Vertex-Compaction to an Irreflexive Hexagon
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8137
In this paper, we solve a long-standing graph partition problem under vertex-compaction that has been of interest since about 1999, especially after the author's results on related open compaction problems appeared in 1999. The problem is to decide whether the vertices of a graph can be partitioned into six distinct non-empty sets A, B, C, D, E, and F such that each set is independent (there is no edge within any set) and edges are allowed only between the pairs of sets A and B, B and C, C and D, D and E, E and F, and F and A, with no edge between any other pair of sets. We study the problem as the vertex-compaction problem for an irreflexive hexagon (6-cycle). We show in this paper that the vertex-compaction problem for an irreflexive hexagon is NP-complete. Our proof can be extended to larger even irreflexive cycles, showing that the vertex-compaction problem for an irreflexive even k-cycle is NP-complete for all even k \geq 6.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8137On Definable and Recognizable Properties of Graphs of Bounded Treewidth (Invited Talk)
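The partition described in the abstract can be checked directly on small instances: label the six sets 0..5 and require every edge to join cyclically consecutive labels, with all six labels used. The brute-force sketch below (exponential, so only for tiny graphs, and in no way the paper's reduction) makes the condition concrete:

```python
# Test whether a graph admits the partition from the abstract: a labelling of
# vertices by 0..5 (sets A..F) that uses every label, with each edge joining
# consecutive labels mod 6 (so every set is independent and only the pairs
# A-B, B-C, C-D, D-E, E-F, F-A may carry edges).
from itertools import product

def compacts_to_hexagon(vertices, edges):
    for labelling in product(range(6), repeat=len(vertices)):
        if set(labelling) != set(range(6)):
            continue  # all six sets must be non-empty
        lab = dict(zip(vertices, labelling))
        if all((lab[u] - lab[v]) % 6 in (1, 5) for u, v in edges):
            return True
    return False

# The hexagon itself trivially admits the partition; a triangle does not
# (it has only three vertices, and an odd cycle cannot even map
# homomorphically into an even cycle).
c6_edges = [(i, (i + 1) % 6) for i in range(6)]
print(compacts_to_hexagon(list(range(6)), c6_edges))            # -> True
print(compacts_to_hexagon([0, 1, 2], [(0, 1), (1, 2), (2, 0)])) # -> False
```

The surjectivity check (all six labels used) is what distinguishes vertex-compaction from an ordinary homomorphism to the 6-cycle; the NP-completeness result of the paper says no efficient replacement for this enumeration is likely.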
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8138
This is an overview of the invited talk delivered at the 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017).Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8138Ideal-Based Algorithms for the Symbolic Verification of Well-Structured Systems (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8139
We explain how the downward-closed subsets of a well-quasi-ordering (X,\leq) can be represented via the ideals of X and how this leads to simple and efficient algorithms for the verification of well-structured systems.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8139Domains for Higher-Order Games
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8140
We study two-player inclusion games played over word-generating higher-order recursion schemes. While inclusion checks are known to capture verification problems, two-player games generalize this relationship to program synthesis. In such games, non-terminals of the grammar are controlled by opposing players. The goal of the existential player is to avoid producing a word that lies outside of a regular language of safe words. We contribute a new domain that provides a representation of the winning region of such games. Our domain is based on (functions over) potentially infinite Boolean formulas with words as atomic propositions. We develop an abstract interpretation framework that we instantiate to abstract this domain into a domain where the propositions are replaced by states of a finite automaton. This second domain is therefore finite and we obtain, via standard fixed-point techniques, a direct algorithm for the analysis of two-player inclusion games. We show, via a second instantiation of the framework, that our finite domain can be optimized, leading to a (k+1)EXP algorithm for order-k recursion schemes. We give a matching lower bound, showing that our approach is optimal. Since our approach is based on standard Kleene iteration, existing techniques and tools for fixed-point computations can be applied.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8140Hardness and Approximation of High-Dimensional Search Problems (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8141
Hardness and Approximation of High-Dimensional Search Problems.Wed, 22 Nov 2017 10:06:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8141