Dagstuhl Publishing
http://www.dagstuhl.de
RSS feed announcing the latest publications published by Schloss Dagstuhl (2010-06-08 to 2018-06-06)
http://www.dagstuhl.de/dagpub-rss.php

Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9948
Tue, 27 Nov 2018 11:07:25 +0100

Going Beyond Traditional Characterizations in the Age of Big Data and Network Sciences (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9949
What are efficient algorithms? What are network models? Big Data and Network Sciences have fundamentally challenged the traditional polynomial-time characterization of efficiency and the conventional graph-theoretical characterization of networks. More than ever before, it is not just desirable, but essential, that efficient algorithms should be scalable. In other words, their complexity should be nearly linear or sub-linear with respect to the problem size. Thus, scalability, not just polynomial-time computability, should be elevated as the central complexity notion for characterizing efficient computation. For a long time, graphs have been widely used for defining the structure of social and information networks. However, real-world network data and phenomena are much richer and more complex than what can be captured by nodes and edges. Network data are multifaceted, and thus network science requires a new theory, going beyond traditional graph theory, to capture the multifaceted data. In this talk, I discuss some aspects of these challenges. Using basic tasks in network analysis, social influence modeling, and machine learning as examples, I highlight the role of scalable algorithms and axiomatization in shaping our understanding of "effective solution concepts" in data and network sciences, which need to be both mathematically meaningful and algorithmically efficient.

Approximate Matchings in Massive Graphs via Local Structure (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9950
Finding a maximum matching is a fundamental algorithmic problem and is fairly well understood in traditional sequential computing models. Some modern applications require that we handle massive graphs, and hence we need to consider algorithms in models that do not allow the entire input graph to be held in the memory of one computer, or models in which the graph is evolving over time. We introduce a new concept called an "Edge Degree Constrained Subgraph (EDCS)", which is a subgraph that is guaranteed to contain a large matching and which can be identified via local conditions. We then show how to use an EDCS to find 1.5-approximate matchings in several different models, including MapReduce, streaming, and distributed computing. We can also use an EDCS to maintain a 1.5-optimal matching in a dynamic graph. This is joint work with Sepehr Assadi, Aaron Bernstein, Mohammad Hossein Bateni and Vahab Mirrokni.

Exploiting Sparsity for Bipartite Hamiltonicity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9951
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9952
Colouring (P_r+P_s)-Free Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9953
The k-Colouring problem is to decide if the vertices of a graph can be coloured with at most k colours for a fixed integer k such that no two adjacent vertices are coloured alike. If each vertex u must be assigned a colour from a prescribed list L(u) subseteq {1,...,k}, then we obtain the List k-Colouring problem. A graph G is H-free if G does not contain H as an induced subgraph. We continue an extensive study into the complexity of these two problems for H-free graphs. We prove that List 3-Colouring is polynomial-time solvable for (P_2+P_5)-free graphs and for (P_3+P_4)-free graphs. Combining our results with known results yields complete complexity classifications of 3-Colouring and List 3-Colouring on H-free graphs for all graphs H up to seven vertices. We also prove that 5-Colouring is NP-complete for (P_3+P_5)-free graphs.

The Use of a Pruned Modular Decomposition for Maximum Matching Algorithms on Some Graph Classes
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9954
We address the following general question: given a graph class C on which we can solve Maximum Matching in (quasi-)linear time, does the same hold true for the class of graphs that can be modularly decomposed into C? As a way to answer this question for distance-hereditary graphs and some other superclasses of cographs, we study the combined effect of modular decomposition with a pruning process over the quotient subgraphs. We remove sequentially from all such subgraphs their so-called one-vertex extensions (i.e., pendant, anti-pendant, twin, universal and isolated vertices). Doing so, we obtain a "pruned modular decomposition" that can be computed in quasi-linear time. Our main result is that if all the pruned quotient subgraphs have bounded order, then a maximum matching can be computed in linear time. The latter result strictly extends a recent framework of (Coudert et al., SODA'18). Our work is the first to explain why the existence of some nice ordering over the modules of a graph, instead of just over its vertices, can help to speed up the computation of maximum matchings on some graph classes.

A Novel Algorithm for the All-Best-Swap-Edge Problem on Tree Spanners
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9955
Efficient Enumeration of Dominating Sets for Sparse Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9956
A dominating set D of a graph G is a set of vertices such that every vertex of G is in D or has a neighbor in D. Enumerating the minimal dominating sets of a graph is one of the central problems in enumeration, since it corresponds to enumerating minimal hypergraph transversals. However, the enumeration of dominating sets that are not necessarily minimal has received much less attention. In this paper, we address enumeration problems for dominating sets in sparse graphs, namely degenerate graphs and graphs with large girth, and we propose two algorithms for solving them. The first algorithm enumerates all the dominating sets of a k-degenerate graph in O(k) time per solution using O(n + m) space, where n and m are respectively the numbers of vertices and edges in the input graph. That is, the algorithm is optimal for graphs of constant degeneracy, such as trees, planar graphs, and H-minor-free graphs for some fixed H. The second algorithm enumerates all the dominating sets in constant time per solution for input graphs with girth at least nine.

Complexity of Unordered CNF Games
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9957
The classic TQBF problem is to determine who has a winning strategy in a game played on a given CNF formula, where the two players alternate turns picking truth values for the variables in a given order, and the winner is determined by whether the CNF gets satisfied. We study variants of this game in which the variables may be played in any order, and each turn consists of picking a remaining variable and a truth value for it.
- For the version where the set of variables is partitioned into two halves and each player may only pick variables from his/her half, we prove that the problem is PSPACE-complete for 5-CNFs and in P for 2-CNFs. Previously, it was known to be PSPACE-complete for unbounded-width CNFs (Schaefer, STOC 1976).
- For the general unordered version (where each variable can be picked by either player), we also prove that the problem is PSPACE-complete for 5-CNFs and in P for 2-CNFs. Previously, it was known to be PSPACE-complete for 6-CNFs (Ahlroth and Orponen, MFCS 2012) and for positive 11-CNFs (Schaefer, STOC 1976).

Half-Duplex Communication Complexity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9958
Suppose Alice and Bob are communicating in order to compute some function f, but instead of a classical communication channel they have a pair of walkie-talkie devices. They can use some classical communication protocol for f where in each round one player sends a bit and the other one receives it. The question is whether talking via walkie-talkies gives them more power. Using walkie-talkies instead of a classical communication channel allows the players two extra possibilities: to speak simultaneously (but in this case they do not hear each other) and to listen at the same time (but in this case they do not transfer any bits). The motivation for this kind of communication model comes from the study of the KRW conjecture. We show that for some definitions this non-classical communication model is, in fact, more powerful than the classical one, as it allows some functions to be computed in a smaller number of rounds. We also prove lower bounds for these models using both combinatorial and information-theoretic methods.

On the Complexity of Stable Fractional Hypergraph Matching
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9959
In this paper, we consider the complexity of the problem of finding a stable fractional matching in a hypergraphic preference system. Aharoni and Fleiner proved that there exists a stable fractional matching in every hypergraphic preference system. Furthermore, Kintali, Poplawski, Rajaraman, Sundaram, and Teng proved that the problem of finding a stable fractional matching in a hypergraphic preference system is PPAD-complete. In this paper, we consider the complexity of the problem of finding a stable fractional matching in a hypergraphic preference system whose maximum degree is bounded by some constant. The proof by Kintali, Poplawski, Rajaraman, Sundaram, and Teng implies the PPAD-completeness of the problem of finding a stable fractional matching in a hypergraphic preference system whose maximum degree is 5. In this paper, we prove that (i) this problem is PPAD-complete even if the maximum degree is 3, and (ii) if the maximum degree is 2, then this problem can be solved in polynomial time. Furthermore, we prove that the problem of finding an approximate stable fractional matching in a hypergraphic preference system is PPAD-complete.

Deciding the Closure of Inconsistent Rooted Triples Is NP-Complete
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9960
Interpreting three-leaf binary trees or rooted triples as constraints yields an entailment relation, whereby binary trees satisfying some rooted triples must also thus satisfy others, and thence a closure operator, which is known to be polynomial-time computable. This is extended to inconsistent triple sets by defining that a triple is entailed by such a set if it is entailed by any consistent subset of it. Determining whether the closure of an inconsistent rooted triple set can be computed in polynomial time was posed as an open problem in the Isaac Newton Institute's "Phylogenetics" program in 2007. It appears (as NC4) in a collection of such open problems maintained by Mike Steel, and it is the last of that collection's five problems concerning computational complexity to have remained open. We resolve the complexity of computing this closure, proving that its decision version is NP-complete. In the process, we also prove that detecting the existence of any acyclic B-hyperpath (from specified source to destination) is NP-complete, in a significantly narrower special case than the version whose minimization problem was recently proven NP-hard by Ritz et al. This implies it is NP-hard to approximate (our special case of) their minimization problem to within any factor.

Computing Vertex-Disjoint Paths in Large Graphs Using MAOs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9961
We consider the problem of computing k in N internally vertex-disjoint paths between special vertex pairs of simple connected graphs. For general vertex pairs, the best deterministic time bound has stood for 42 years at O(min{k,sqrt{n}}m) per pair, using traditional flow-based methods. The restriction on our vertex pairs comes from the machinery of maximal adjacency orderings (MAOs). Henzinger showed, for every MAO and every 1 <= k <= delta (where delta is the minimum degree of the graph), the existence of k internally vertex-disjoint paths between every pair of the last delta-k+2 vertices of this MAO. Later, Nagamochi generalized this result using the machinery of mixed connectivity. Both results are, however, inherently non-constructive. We present the first algorithm that computes these k internally vertex-disjoint paths in linear time O(m), which improves the previously best time O(min{k,sqrt{n}}m). Due to the linear running time, this algorithm is suitable for large graphs. The algorithm is simple, works directly on the MAO structure, and completes a long history of purely existential proofs with a constructive method. We extend our algorithm to compute several other path systems and discuss its impact on certifying algorithms.

An O(n^2 log^2 n) Time Algorithm for Minmax Regret Minsum Sink on Path Networks
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9962
We model evacuation in emergency situations by dynamic flow in a network. We want to minimize the aggregate evacuation time to an evacuation center (called a sink) on a path network with uniform edge capacities. The evacuees are initially located at the vertices, but their precise numbers are unknown and are given only by upper and lower bounds. Under this assumption, we compute a sink location that minimizes the maximum "regret." We present the first sub-cubic time algorithm in n to solve this problem, where n is the number of vertices. Although we cast our problem as evacuation, our result is exact if the "evacuees" are a fluid-like continuous material, and a good approximation for discrete evacuees.

Computing Optimal Shortcuts for Networks
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9963
We study augmenting a plane Euclidean network with a segment, called a shortcut, to minimize the largest distance between any two points along the edges of the resulting network. Questions of this type have received considerable attention recently, mostly for discrete variants of the problem. We study a fully continuous setting, where all points on the network and the inserted segment must be taken into account. We present the first results on the computation of optimal shortcuts for general networks in this model, together with several results for networks that are paths, restricted to two types of shortcuts: shortcuts with a fixed orientation and simple shortcuts.

Algorithmic Channel Design
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9964
Payment networks, also known as channels, are among the most promising solutions to the throughput problem of cryptocurrencies. In this paper we study the design of capital-efficient payment networks, in offline as well as online variants. We want to know how to compute an efficient payment network topology, how capital should be assigned to the individual edges, and how to decide which transactions to accept. Towards this end, we present a flurry of interesting results: basic but generally applicable insights on the one hand, and hardness results and approximation algorithms on the other.

Counting Connected Subgraphs with Maximum-Degree-Aware Sieving
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9965
Target Set Selection in Dense Graph Classes
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9966
In this paper we study the Target Set Selection problem from a parameterized complexity perspective. Here, for a given graph and a threshold for each vertex, the task is to find a set of vertices (called a target set) to activate at the beginning, which activates the whole graph during the following iterative process: a vertex outside the active set becomes active if the number of activated vertices in its neighborhood so far is at least its threshold. We give two parameterized algorithms for the special case where each vertex has its threshold set to half of its neighbors (the so-called Majority Target Set Selection problem), for parameterizations by the neighborhood diversity and the twin cover number of the input graph. We complement these results from the negative side. We give a hardness proof for the Majority Target Set Selection problem when parameterized by (a restriction of) the modular-width, a natural generalization of both previous structural parameters. We also show that the Target Set Selection problem parameterized by the neighborhood diversity is W[1]-hard when there is no restriction on the thresholds.

Counting Shortest Two Disjoint Paths in Cubic Planar Graphs with an NC Algorithm
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9967
Data-Compression for Parametrized Counting Problems on Sparse Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9968
We study the concept of a compactor, which may be seen as a counting analogue of kernelization in counting parameterized complexity. For a function F: Sigma^* -> N and a parameterization kappa: Sigma^* -> N, a compactor (P,M) consists of a polynomial-time computable function P, called the condenser, and a computable function M, called the extractor, such that F = M o P, and the condensed instance P(x) of x has length at most s(kappa(x)), for any input x in Sigma^*. If s is a polynomial function, then the compactor is said to be of polynomial size. Although the study of counting analogues of kernelization is not unprecedented, it has received little attention so far. We study a family of vertex-certified counting problems on graphs that are MSOL-expressible; that is, for an MSOL-formula phi with one free set variable to be interpreted as a vertex subset, we want to count all A subseteq V(G) where |A| = k and (G,A) models phi. In this paper, we prove that every vertex-certified counting problem on graphs that is MSOL-expressible and treewidth modulable, when parameterized by k, admits a polynomial-size compactor on H-topological-minor-free graphs, with condensing time O(k^2 n^2) and decoding time 2^{O(k)}. This implies the existence of an FPT-algorithm with running time O(n^2 k^2) + 2^{O(k)}. All aforementioned complexities are under the Uniform Cost Measure (UCM) model, where numbers can be stored in constant space and arithmetic operations can be done in constant time.

Planar Maximum Matching: Towards a Parallel Algorithm
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9969
Distributed Approximation Algorithms for the Minimum Dominating Set in K_h-Minor-Free Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9970
In this paper we give two distributed approximation algorithms (in the Local model) for the minimum dominating set problem. First, we give a distributed algorithm which finds a dominating set D of size O(gamma(G)) in a graph G which has no topological copy of K_h. The algorithm runs in L_h rounds, where L_h is a constant that depends on h only. This procedure can be used to obtain a distributed algorithm which, given epsilon > 0, finds in a graph G with no K_h-minor a dominating set D of size at most (1+epsilon)gamma(G). The second algorithm runs in O(log^*{|V(G)|}) rounds.

Proving the Turing Universality of Oritatami Co-Transcriptional Folding
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9971
We study the oritatami model of molecular co-transcriptional folding. In oritatami systems, the transcript (the "molecule") folds as it is synthesized (transcribed), according to a local energy optimisation process, similar to how actual biomolecules such as RNA fold into complex shapes and functions as they are transcribed. We prove that there is an oritatami system embedding universal computation in the folding process itself. Our result relies on the development of a generic toolbox which is easily reusable in future work to design complex functions in oritatami systems. We develop "low-level" tools that allow one to easily spread apart the encoding of different "functions" in the transcript, even if they are required to be applied at the same geometrical location in the folding. Upon these low-level tools we build a programming framework with increasing levels of abstraction, from the encoding of instructions into the transcript to logical analysis. This framework is similar to the hardware-to-algorithm levels of abstraction in standard algorithm theory. These various levels of abstraction allow us to separate the proof of correctness of the global behavior of our system from the proof of correctness of its implementation. Thanks to this framework, we were able to computerise the proof of correctness of the implementation and produce certificates, in the form of a relatively small number of proof trees that are compact and easily readable/checkable by humans, while encapsulating huge case enumerations. We believe this type of certificate can be generalised to other discrete dynamical systems where proofs involve large case enumerations as well.

Cluster Editing in Multi-Layer and Temporal Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9972
Motivated by the recent rapid growth of research on algorithms to cluster multi-layer and temporal graphs, we study extensions of the classical Cluster Editing problem. In Multi-Layer Cluster Editing we receive a set of graphs on the same vertex set, called layers, and aim to transform all layers into cluster graphs (disjoint unions of cliques) that differ only slightly. More specifically, we want to mark at most d vertices and to transform each layer into a cluster graph using at most k edge additions or deletions per layer so that, if we remove the marked vertices, we obtain the same cluster graph in all layers. In Temporal Cluster Editing we receive a sequence of layers and want to transform each layer into a cluster graph so that consecutive layers differ only slightly. That is, we want to transform each layer into a cluster graph with at most k edge additions or deletions and to mark a distinct set of d vertices in each layer, so that any two consecutive layers are the same after removing the vertices marked in the first of the two layers. We study the combinatorial structure of the two problems via their parameterized complexity with respect to the parameters d and k, among others. Despite the similar definitions, the two problems behave quite differently: in particular, Multi-Layer Cluster Editing is fixed-parameter tractable with running time k^{O(k + d)} s^{O(1)} for inputs of size s, whereas Temporal Cluster Editing is W[1]-hard with respect to k even if d = 3.

Parameterized Query Complexity of Hitting Set Using Stability of Sunflowers
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9973
In this paper, we study the query complexity of parameterized decision and optimization versions of Hitting-Set. We also investigate the query complexity of Packing. In doing so, we use generalizations to hypergraphs of an earlier query model, known as BIS, introduced by Beame et al. in ITCS'18. The query models considered are the GPIS and GPISE oracles, which are used for the decision and optimization versions of the problems, respectively. We use color coding and queries to the oracles to generate subsamples of the hypergraph that retain some structural properties of the original hypergraph. We use the stability of sunflowers in a non-trivial way to do so.

Approximate Minimum-Weight Matching with Outliers Under Translation
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9974
Our goal is to compare two planar point sets by finding subsets of a given size such that a minimum-weight matching between them has the smallest weight. This can be done by a translation of one set that minimizes the weight of the matching. We give efficient algorithms (a) for finding approximately optimal matchings, when the cost of a matching is the L_p-norm of the tuple of the Euclidean distances between the pairs of matched points, for any p in [1,infty], and (b) for constructing small-size approximate minimization (or matching) diagrams: partitions of the translation space into regions, together with an approximate optimal matching for each region.

New and Improved Algorithms for Unordered Tree Inclusion
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9975
The tree inclusion problem is, given two node-labeled trees P and T (the "pattern tree" and the "text tree"), to locate every minimal subtree in T (if any) that can be obtained by applying a sequence of node insertion operations to P. Although the ordered tree inclusion problem is solvable in polynomial time, the unordered tree inclusion problem is NP-hard. The currently fastest algorithm for the latter is from 1995 and runs in O(poly(m,n) * 2^{2d}) = O^*(2^{2d}) time, where m and n are the sizes of the pattern and text trees, respectively, and d is the maximum outdegree of the pattern tree. Here, we develop a new algorithm that improves the exponent 2d to d by considering a particular type of ancestor-descendant relationship and applying dynamic programming, thus reducing the time complexity to O^*(2^d). We then study restricted variants of the unordered tree inclusion problem where the number of occurrences of different node labels and/or the heights of the input trees are bounded. We show that although the problem remains NP-hard in many such cases, if the leaves of P are distinctly labeled and each label occurs at most c times in T, it can be solved in polynomial time for c = 2 and in O^*(1.8^d) time for c = 3. We also present a randomized O^*(1.883^d)-time algorithm for the case that the heights of P and T are one and two, respectively.
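The "node insertion" characterization above is equivalent to asking for an injective, label-preserving map from the nodes of P into the nodes of T that preserves the ancestor relation exactly. A tiny exhaustive checker along those lines can serve as a reference on very small trees; this sketch is not the paper's algorithm, and the dict-based tree encoding and function names are our own illustrative choices:

```python
from itertools import permutations

def ancestors(tree, root):
    """Map each node to the set of its strict ancestors.
    `tree` maps a node id to the list of its children."""
    anc = {root: set()}
    stack = [root]
    while stack:
        u = stack.pop()
        for c in tree.get(u, []):
            anc[c] = anc[u] | {u}
            stack.append(c)
    return anc

def included(p_tree, p_root, p_label, t_tree, t_root, t_label):
    """Brute-force unordered tree inclusion test: is there an injective,
    label-preserving map from P's nodes to T's nodes that preserves the
    ancestor relation exactly?  Exponential time; illustration only."""
    p_anc = ancestors(p_tree, p_root)
    t_anc = ancestors(t_tree, t_root)
    p_nodes = list(p_anc)
    t_nodes = list(t_anc)
    for image in permutations(t_nodes, len(p_nodes)):
        f = dict(zip(p_nodes, image))
        if all(p_label[u] == t_label[f[u]] for u in p_nodes) and all(
                (u in p_anc[v]) == (f[u] in t_anc[f[v]])
                for u in p_nodes for v in p_nodes):
            return True
    return False
```

For example, a pattern a-b (b a child of a) is included in the text a-x-b, since inserting x between a and b turns the pattern into the text.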
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9976
A Dichotomy Result for Cyclic-Order Traversing Games
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9977
A traversing game is a two-person game played on a connected undirected simple graph with a source node and a destination node. A pebble is placed on the source node initially and then moves autonomously according to some rules. Alice is the player who wants to set up rules, for each node, that determine where to forward the pebble when the pebble reaches that node, so that the pebble eventually reaches the destination node. Bob is the second player, who tries to deter Alice's effort by removing edges. Given access to Alice's rules, Bob can remove as many edges as he likes, as long as the source and destination nodes remain connected. If, under the guidance of Alice's rules, the pebble arrives at the destination node, then we say Alice wins the traversing game; otherwise, the pebble enters an endless loop without passing through the destination node, and Bob wins. We assume that Alice and Bob both play optimally. We study the question: when does Alice have a winning strategy? This models a routing recovery problem in Software Defined Networking in which some links may be broken. In this paper, we prove a dichotomy result for a class of traversing games, called cyclic-order traversing games. We also give a linear-time algorithm to find the corresponding winning strategy, if one exists.

The b-Matching Problem in Distance-Hereditary Graphs and Beyond
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9978
We make progress on the fine-grained complexity of Maximum-Cardinality Matching on graphs of bounded clique-width. Quasi-linear-time algorithms for this problem have recently been proposed for the important subclasses of bounded-treewidth graphs (Fomin et al., SODA'17) and graphs of bounded modular-width (Coudert et al., SODA'18). We present such an algorithm for bounded split-width graphs, a broad generalization of graphs of bounded modular-width, of which the distance-hereditary graphs are an interesting subclass. Specifically, we solve Maximum-Cardinality Matching in O((k log^2{k})*(m+n)*log{n}) time on graphs with split-width at most k. We stress that the existence of such an algorithm was not even known for distance-hereditary graphs until our work. Doing so, we improve the state of the art (Dragan, WG'97) and answer an open question of (Coudert et al., SODA'18). Our work brings more insight into the relationships between matchings and splits, a.k.a. join operations between two vertex subsets in different connected components. Furthermore, our analysis can be extended to the more general (unit-cost) b-Matching problem. Along the way, we introduce new tools for b-Matching and for dynamic programming over split decompositions that can be of independent interest.
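The objective being optimized here, a maximum-cardinality matching, is a largest set of pairwise vertex-disjoint edges. As a hedged aside (not related to the paper's split-decomposition technique), an exponential-time exhaustive search makes a convenient correctness baseline when testing a fast matching implementation on tiny graphs:

```python
from itertools import combinations

def max_matching_size(edges):
    """Size of a maximum-cardinality matching, by exhaustive search.
    `edges` is a list of 2-tuples of vertices.  Exponential time:
    intended only as a correctness baseline for tiny graphs."""
    for k in range(len(edges), 0, -1):      # try larger matchings first
        for sub in combinations(edges, k):
            endpoints = [v for e in sub for v in e]
            if len(endpoints) == len(set(endpoints)):  # pairwise disjoint
                return k
    return 0
```

For instance, the path 1-2-3-4 has a maximum matching of size 2 ({1,2} and {3,4}), while a triangle has a maximum matching of size 1.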
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9979
Computing Approximate Statistical Discrepancy
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9980
Consider a geometric range space (X,A) where X is comprised of the union of a red set R and a blue set B. Let Phi(A) denote the absolute difference between the fraction of red points and the fraction of blue points which fall in the range A. The maximum-discrepancy range is A^* = arg max_{A in (X,A)} Phi(A). Our goal is to find some hat{A} in (X,A) such that Phi(A^*) - Phi(hat{A}) <= epsilon. We develop general algorithms for this approximation problem for range spaces with bounded VC-dimension, as well as significant improvements for specific geometric range spaces defined by balls, halfspaces, and axis-aligned rectangles. This problem has direct applications in discrepancy evaluation and classification, and we also show an improved reduction to a class of problems in spatial scan statistics.

Diversity Maximization in Doubling Metrics
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9981
Diversity maximization is an important geometric optimization problem with many applications in recommender systems, machine learning, and search engines, among others. A typical diversification problem is as follows: given a finite metric space (X,d) and a parameter k in N, find a subset of k elements of X that has maximum diversity. There are many functions that measure diversity. One of the most popular measures, called remote-clique, is the sum of the pairwise distances of the chosen elements. In this paper, we present novel results on three widely used diversity measures: remote-clique, remote-star and remote-bipartition. Our main results are polynomial time approximation schemes for these three diversification problems under the assumption that the metric space is doubling. This setting has been discussed in the recent literature; the existence of such a PTAS, however, was left open. Our results also hold in the setting where the distances are raised to a fixed power q >= 1, giving rise to more variants of diversity functions, similar in spirit to the variations of clustering problems depending on the power applied to the pairwise distances. Finally, we provide a proof of NP-hardness for remote-clique with squared distances in doubling metric spaces.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9981On Polynomial Time Constructions of Minimum Height Decision Tree
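The remote-clique measure is easy to state concretely. This exhaustive-search sketch is our own illustration, feasible only for tiny inputs (the paper's contribution is a PTAS in doubling metrics, not this brute force):

```python
from itertools import combinations

def remote_clique(points, dist, k):
    """Return a k-subset maximizing remote-clique diversity,
    i.e. the sum of pairwise distances, by exhaustive search."""
    def diversity(subset):
        return sum(dist(u, v) for u, v in combinations(subset, 2))
    return max(combinations(points, k), key=diversity)

# e.g., points on a line with dist(u, v) = |u - v|
```
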
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9982
A decision tree T in B_m:={0,1}^m is a binary tree where each internal node is labeled with an integer in [m]={1,2,...,m}, each leaf is labeled with an assignment a in B_m, and each internal node has two outgoing edges labeled with 0 and 1, respectively. Let A subset {0,1}^m. We say that T is a decision tree for A if (1) for every a in A there is one leaf of T that is labeled with a, and (2) for every path from the root to a leaf with internal nodes labeled i_1,i_2,...,i_k in [m], a leaf labeled with a in A, and edges labeled xi_{i_1},...,xi_{i_k} in {0,1}, a is the only element in A that satisfies a_{i_j}=xi_{i_j} for all j=1,...,k. Our goal is to give a polynomial time (in n:=|A| and m) algorithm that, for an input A subseteq B_m, outputs a decision tree for A of minimum depth. This problem has many applications that include, to name a few, computer vision, group testing, exact learning from membership queries, and game theory. Arkin et al. and Moshkov [Esther M. Arkin et al., 1998; Mikhail Ju. Moshkov, 2004] gave a polynomial time (ln |A|)-approximation algorithm (for the depth). The result of Dinur and Steurer [Irit Dinur and David Steurer, 2014] for set cover implies that this problem cannot be approximated with ratio (1-o(1))* ln |A|, unless P=NP. Moshkov studied in [Mikhail Ju. Moshkov, 2004; Mikhail Ju. Moshkov, 1982; Mikhail Ju. Moshkov, 1982] the combinatorial measure of extended teaching dimension of A, ETD(A). He showed that ETD(A) is a lower bound for the depth of the decision tree for A, and then gave an exponential time ETD(A)/log(ETD(A))-approximation algorithm and a polynomial time 2(ln 2)ETD(A)-approximation algorithm. In this paper we further study the ETD(A) measure and a new combinatorial measure, DEN(A), which we call the density of the set A. We show that DEN(A) <= ETD(A)+1. We then give two results.
The first result is that Moshkov's lower bound ETD(A) on the depth of the decision tree for A is greater than the bounds obtained by the classical techniques used in the literature. The second result is a polynomial time (ln 2)DEN(A)-approximation (and therefore (ln 2)ETD(A)-approximation) algorithm for the depth of the decision tree of A. We then apply the above results to learning the class of disjunctions of predicates from membership queries [Nader H. Bshouty et al., 2017]. We show that the ETD of this class is bounded from above by the degree d of its Hasse diagram. We then show that Moshkov's algorithm can be run in polynomial time and is a (d/log d)-approximation algorithm. This gives optimal algorithms when the degree is constant, for example, for learning axis-parallel rays over a constant-dimensional space.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9982Improved Algorithms for the Shortest Vector Problem and the Closest Vector Problem in the Infinity Norm
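As a concrete baseline, a ln|A|-style greedy heuristic for this problem can be sketched as follows (our simplified illustration, not the paper's DEN-based algorithm): always split on a coordinate that divides the remaining assignments most evenly.

```python
def greedy_depth(A, m):
    """Depth of a decision tree for the distinct binary strings in A,
    built greedily by splitting as evenly as possible at each node."""
    if len(A) <= 1:
        return 0
    # coordinates on which A is not constant (one exists: strings are distinct)
    splits = [i for i in range(m) if 0 < sum(a[i] for a in A) < len(A)]
    i = min(splits, key=lambda c: abs(2 * sum(a[c] for a in A) - len(A)))
    A0 = [a for a in A if a[i] == 0]
    A1 = [a for a in A if a[i] == 1]
    return 1 + max(greedy_depth(A0, m), greedy_depth(A1, m))
```
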
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9983
Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9983An Adaptive Version of Brandes' Algorithm for Betweenness Centrality
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9984
Betweenness centrality - measuring how many shortest paths pass through a vertex - is one of the most important network analysis concepts for assessing the relative importance of a vertex. The well-known algorithm of Brandes [2001] computes, on an n-vertex and m-edge graph, the betweenness centrality of all vertices in O(nm) worst-case time. In follow-up work, significant empirical speedups were achieved by preprocessing degree-one vertices and by graph partitioning based on cut vertices. We further contribute an algorithmic treatment of degree-two vertices, which turns out to be much richer in mathematical structure than the case of degree-one vertices. Based on these three algorithmic ingredients, we provide a strengthened worst-case running time analysis for betweenness centrality algorithms. More specifically, we prove an adaptive running time bound O(kn), where k < m is the size of a minimum feedback edge set of the input graph.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9984Algorithms for Coloring Reconfiguration Under Recolorability Constraints
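For reference, betweenness centrality itself can be computed directly from BFS distances and shortest-path counts. The brute-force sketch below (ours; not Brandes' O(nm) algorithm or the paper's adaptive variant) sums, over ordered pairs (s,t), the fraction of shortest s-t paths passing through v.

```python
from collections import deque

def bfs_counts(adj, s):
    """BFS from s returning distances and numbers of shortest paths."""
    dist, sigma, q = {s: 0}, {s: 1}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w], sigma[w] = dist[u] + 1, 0
                q.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def betweenness(adj):
    """Betweenness of every vertex, summed over ordered pairs (s, t)."""
    info = {v: bfs_counts(adj, v) for v in adj}
    bc = {v: 0.0 for v in adj}
    for s in adj:
        ds, ss = info[s]
        for t in ds:
            if t == s:
                continue
            for v in adj:
                if v in (s, t):
                    continue
                dv, sv = info[v]
                # v lies on a shortest s-t path iff d(s,v)+d(v,t)=d(s,t)
                if v in ds and t in dv and ds[v] + dv[t] == ds[t]:
                    bc[v] += ss[v] * sv[t] / ss[t]
    return bc
```
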
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9985
Coloring reconfiguration is one of the most well-studied reconfiguration problems. In the problem, we are given two (vertex-)colorings of a graph using at most k colors, and asked to determine whether there exists a transformation between them by recoloring only a single vertex at a time, while maintaining a k-coloring throughout. It is known that this problem is solvable in linear time for any graph if k <= 3, while it is PSPACE-complete for any fixed k >= 4. In this paper, we further investigate the problem from the viewpoint of recolorability constraints, which forbid some pairs of colors to be recolored directly. More specifically, the recolorability constraint is given in terms of an undirected graph R such that each node in R corresponds to a color, and each edge in R represents a pair of colors that can be recolored directly. We give a linear-time algorithm to solve the problem under such a recolorability constraint if R is of maximum degree at most two. In addition, we show that the minimum number of recoloring steps required for a desired transformation can be computed in linear time for a yes-instance. We note that our results generalize the known positive ones for coloring reconfiguration.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9985A Cut Tree Representation for Pendant Pairs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9986
Two vertices v and w of a graph G are called a pendant pair if the maximum number of edge-disjoint paths in G between them is precisely min{d(v),d(w)}, where d denotes the degree function. The importance of pendant pairs stems from the fact that they are the key ingredient in one of the simplest and most widely used algorithms for the minimum cut problem today. Mader showed in 1974 that every simple graph with minimum degree delta contains Omega(delta^2) pendant pairs; this is the best bound known so far. We improve this result by showing that every simple graph G with minimum degree delta >= 5, or with edge-connectivity lambda >= 4, or with vertex-connectivity kappa >= 3, in fact contains Omega(delta |V|) pendant pairs. We prove that this bound is tight from several perspectives, and that Omega(delta |V|) pendant pairs can be computed efficiently, namely in linear time when a Gomory-Hu tree is given. Our method utilizes a new cut tree representation of graphs.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9986Polyline Drawings with Topological Constraints
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9987
Let G be a simple topological graph and let Gamma be a polyline drawing of G. We say that Gamma partially preserves the topology of G if it has the same external boundary, the same rotation system, and the same set of crossings as G. Drawing Gamma fully preserves the topology of G if the planarization of G and the planarization of Gamma have the same planar embedding. We show that if the set of crossing-free edges of G forms a connected spanning subgraph, then G admits a polyline drawing that partially preserves its topology and that has curve complexity at most three (i.e., at most three bends per edge). If, however, the set of crossing-free edges of G is not a connected spanning subgraph, the curve complexity may be Omega(sqrt{n}). Concerning drawings that fully preserve the topology, we show that if G has skewness k, it admits one such drawing with curve complexity at most 2k; for skewness-1 graphs, the curve complexity can be reduced to one, which is a tight bound. We also consider optimal 2-plane graphs and discuss trade-offs between curve complexity and crossing angle resolution of drawings that fully preserve the topology.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9987Almost Optimal Algorithms for Diameter-Optimally Augmenting Trees
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9988
Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9988Approximation Algorithms for Facial Cycles in Planar Embeddings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9989
Consider the following combinatorial problem: given a planar graph G and a set of simple cycles C in G, find a planar embedding E of G such that the number of cycles in C that bound a face in E is maximized. This problem, called Max Facial C-Cycles, was first studied by Mutzel and Weiskircher [IPCO '99, http://dx.doi.org/10.1007/3-540-48777-8_27] and then proved NP-hard by Woeginger [Oper. Res. Lett., 2002, http://dx.doi.org/10.1016/S0167-6377(02)00119-0]. We establish a tight border of tractability for Max Facial C-Cycles in biconnected planar graphs by giving conditions under which the problem is NP-hard and showing that strengthening any of these conditions makes the problem polynomial-time solvable. Our main results are approximation algorithms for Max Facial C-Cycles. Namely, we give a 2-approximation for series-parallel graphs and a (4+epsilon)-approximation for biconnected planar graphs. Remarkably, this provides one of the first approximation algorithms for constrained embedding problems.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9989An Algorithm for the Maximum Weight Strongly Stable Matching Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9990
An instance of the maximum weight strongly stable matching problem with incomplete lists and ties is an undirected bipartite graph G = (A cup B, E), in which each adjacency list is a linearly ordered list of ties, i.e., of vertices that are equally good for a given vertex. We are also given a weight function w on the set E. Given a matching M, an edge (x, y) in E setminus M is a blocking edge for M if by getting matched to each other neither of the vertices x and y would become worse off and at least one of them would become better off. A matching is strongly stable if there is no blocking edge with respect to it. The goal is to compute a strongly stable matching of maximum weight with respect to w. We give a polyhedral characterisation of the problem and prove that the strongly stable matching polytope is integral. This result implies that the maximum weight strongly stable matching problem can be solved in polynomial time, thereby answering an open question by Gusfield and Irving [Dan Gusfield and Robert W. Irving, 1989]. The main result of this paper is an efficient O(nm log{(Wn)}) time algorithm for computing a maximum weight strongly stable matching, where n = |V|, m = |E|, and W is the maximum weight of an edge in G. For small edge weights we show that the problem can be solved in O(nm) time. Note that the fastest known algorithm for the unweighted version of the problem has O(nm) runtime [Telikepalli Kavitha et al., 2007]. Our algorithm is based on the rotation structure that was developed for strongly stable matchings in [Adam Kunysz et al., 2016].Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9990Approximation Algorithm for Vertex Cover with Multiple Covering Constraints
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9991
We consider the vertex cover problem with multiple coverage constraints in hypergraphs. In this problem, we are given a hypergraph G=(V,E) with a maximum edge size f, a cost function w: V -> Z^+, and edge subsets P_1,P_2,...,P_r of E along with covering requirements k_1,k_2,...,k_r for each subset. The objective is to find a minimum cost subset S of V such that, for each edge subset P_i, at least k_i of its edges are covered by S. This problem is a basic yet general form of the classical vertex cover problem and a generalization of the edge-partitioned vertex cover problem considered by Bera et al. We present a primal-dual algorithm yielding an (f * H_r + H_r)-approximation for this problem, where H_r is the r^{th} harmonic number. This improves over the previous ratio of (3cf log r), where c is a large constant used to ensure a low failure probability for Monte-Carlo randomized algorithms. Compared to previous results, our algorithm is deterministic and purely combinatorial, meaning that no ellipsoid solver is required for this basic problem. Our result can be seen as a novel reinterpretation of a few classical tight results using the language of LP primal-duality.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9991Correlation Clustering Generalized
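To make the constraint structure concrete, a hypothetical feasibility check for a candidate cover S looks as follows (our illustration only; the paper's contribution is the primal-dual algorithm, not this check):

```python
def is_feasible(S, edge_subsets, requirements):
    """Check that, for each edge subset P_i, at least k_i of its
    (hyper)edges have some endpoint in the candidate cover S."""
    S = set(S)
    return all(
        sum(1 for e in P_i if S & set(e)) >= k_i
        for P_i, k_i in zip(edge_subsets, requirements)
    )
```
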
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9992
We present new results for LambdaCC and MotifCC, two recently introduced variants of the well-studied correlation clustering problem. Both variants are motivated by applications to network analysis and community detection, and have non-trivial approximation algorithms. We first show that the standard linear programming relaxation of LambdaCC has a Theta(log n) integrality gap for a certain choice of the parameter lambda. This sheds light on previous challenges encountered in obtaining parameter-independent approximation results for LambdaCC. We generalize a previous constant-factor algorithm to provide the best LP-rounding results known for an extended range of lambda. MotifCC generalizes correlation clustering to the hypergraph setting. In the case of hyperedges of degree 3 with weights satisfying probability constraints, we improve the best approximation factor from 9 to 8. We show that in general our algorithm gives a 4(k-1)-approximation when hyperedges have maximum degree k and probability weights. We additionally present approximation results for LambdaCC and MotifCC where we restrict to forming only two clusters.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9992Partitioning Vectors into Quadruples: Worst-Case Analysis of a Matching-Based Algorithm
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9993
Consider a problem where 4k given vectors need to be partitioned into k clusters of four vectors each. A cluster of four vectors is called a quad, and the cost of a quad is the sum of the component-wise maxima of the four vectors in the quad. The problem is to partition the given 4k vectors into k quads with minimum total cost. We analyze a straightforward matching-based algorithm and prove that this algorithm is a 3/2-approximation algorithm for this problem. We further analyze the performance of this algorithm on a hierarchy of special cases of the problem and prove that, in one particular case, the algorithm is a 5/4-approximation algorithm. Our analysis is tight in all cases except one.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9993Coresets for Fuzzy K-Means with Applications
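One natural reading of a matching-based approach can be sketched as follows (a hedged illustration under our own assumptions, using exhaustive matching and therefore only suitable for tiny inputs; the paper analyzes its own matching-based algorithm): pair up the 4k vectors by minimum-cost perfect matching under the pair cost, merge each pair into its componentwise maximum, then match the merged vectors to form quads.

```python
def pair_cost(u, v):
    # cost of putting u and v together: sum of componentwise maxima
    return sum(max(a, b) for a, b in zip(u, v))

def min_matching(items, cost):
    """Exact minimum-cost perfect matching by recursion (tiny inputs only)."""
    if not items:
        return 0, []
    first, rest = items[0], items[1:]
    best = (float('inf'), [])
    for i, other in enumerate(rest):
        c, pairs = min_matching(rest[:i] + rest[i + 1:], cost)
        if cost(first, other) + c < best[0]:
            best = (cost(first, other) + c, [(first, other)] + pairs)
    return best

def matching_heuristic(vectors):
    # stage 1: pair the 4k vectors; stage 2: pair the k merged pairs into quads
    _, pairs = min_matching(list(vectors), pair_cost)
    merged = [tuple(max(a, b) for a, b in zip(u, v)) for u, v in pairs]
    total, _ = min_matching(merged, pair_cost)
    return total
```

Note that the pair cost of two merged vectors equals the quad cost of the four underlying vectors, since the componentwise maximum is associative.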
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9994
The fuzzy K-means problem is a popular generalization of the well-known K-means problem to soft clusterings. We present the first coresets for fuzzy K-means with size linear in the dimension, polynomial in the number of clusters, and poly-logarithmic in the number of points. We show that these coresets can be employed in the computation of a (1+epsilon)-approximation for fuzzy K-means, improving previously presented results. We further show that our coresets can be maintained in an insertion-only streaming setting, where data points arrive one-by-one.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9994Streaming Algorithms for Planar Convex Hulls
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9995
Many classical algorithms are known for computing the convex hull of a set of n points in R^2 using O(n) space. For large point sets, whose size exceeds the size of the working space, these algorithms cannot be directly used. The current best streaming algorithm for computing the convex hull is computationally expensive, because it needs to solve a set of linear programs. In this paper, we propose simpler and faster streaming and W-stream algorithms for computing the convex hull. Our streaming algorithm has small pass complexity, which is roughly the square root of the current best bound, and it is simpler in the sense that it mainly relies on computing the convex hulls of smaller point sets. Our W-stream algorithms, one deterministic and one randomized, have a nearly optimal tradeoff between pass complexity and space usage, as we establish via a new unconditional lower bound.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9995Deterministic Treasure Hunt in the Plane with Angular Hints
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9996
A mobile agent equipped with a compass and a measure of length has to find an inert treasure in the Euclidean plane. Both the agent and the treasure are modeled as points. In the beginning, the agent is at a distance at most D>0 from the treasure, but knows neither the distance nor any bound on it. Finding the treasure means getting to within distance at most 1 from it. The agent makes a series of moves, each of which consists of moving straight in a chosen direction for a chosen distance. In the beginning and after each move the agent gets a hint consisting of a positive angle smaller than 2 pi whose vertex is at the current position of the agent and within which the treasure is contained. We investigate the problem of how these hints permit the agent to lower the cost of finding the treasure, using a deterministic algorithm, where the cost is the worst-case total length of the agent's trajectory. It is well known that without any hint the optimal (worst case) cost is Theta(D^2). We show that if all angles given as hints are at most pi, then the cost can be lowered to O(D), which is optimal. If all angles are at most beta, where beta<2 pi is a constant unknown to the agent, then the cost is at most O(D^{2-epsilon}), for some epsilon>0. For both these positive results we present deterministic algorithms achieving the above costs. Finally, if angles given as hints can be arbitrary, smaller than 2 pi, then we show that cost Theta(D^2) cannot be beaten.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9996Competitive Searching for a Line on a Line Arrangement
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9997
We discuss the problem of searching for an unknown line in a known or unknown line arrangement by a searcher S, and we show that a search strategy exists that finds the line competitively, that is, with a detour factor of at most a constant compared to the situation where S has full knowledge. In the case where S knows all lines but not which one is sought, the strategy is 79-competitive. We also show that it may be necessary to travel on Omega(n) lines to realize a constant competitive ratio. In the case where S initially does not know any line but learns about the ones it encounters during the search, we give a 414.2-competitive search strategy.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9997Stabbing Pairwise Intersecting Disks by Five Points
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9998
Suppose we are given a set D of n pairwise intersecting disks in the plane. A planar point set P stabs D if and only if each disk in D contains at least one point from P. We present a deterministic algorithm that takes O(n) time to find five points that stab D. Furthermore, we give a simple example of 13 pairwise intersecting disks that cannot be stabbed by three points. This provides a simple - albeit slightly weaker - algorithmic version of a classical result by Danzer that such a set D can always be stabbed by four points.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9998Point Location in Incremental Planar Subdivisions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9999
We study the point location problem in incremental (possibly disconnected) planar subdivisions, that is, dynamic subdivisions allowing insertions of edges and vertices only. Specifically, we present an O(n log n)-space data structure for this problem that supports queries in O(log^2 n) time and updates in O(log n log log n) amortized time. This is the first result that achieves polylogarithmic query and update times simultaneously in incremental planar subdivisions. Its update time is significantly faster than the update time of the best known data structure for fully-dynamic (possibly disconnected) planar subdivisions.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9999Convex Partial Transversals of Planar Regions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10000
We consider the problem of testing, for a given set of planar regions R and an integer k, whether there exists a convex shape whose boundary intersects at least k regions of R. We provide polynomial-time algorithms for the case where the regions are disjoint axis-aligned rectangles or disjoint line segments with a constant number of orientations. On the other hand, we show that the problem is NP-hard when the regions are intersecting axis-aligned rectangles or 3-oriented line segments. For several natural intermediate classes of shapes (arbitrary disjoint segments, intersecting 2-oriented segments) the problem remains open.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10000Extending the Centerpoint Theorem to Multiple Points
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10001
The centerpoint theorem is a well-known and widely used result in discrete geometry. It states that for any point set P of n points in R^d, there is a point c, not necessarily from P, such that each halfspace containing c contains at least n/(d+1) points of P. Such a point c is called a centerpoint, and it can be viewed as a generalization of a median to higher dimensions. In other words, a centerpoint can be interpreted as a good representative for the point set P. But what if we allow more than one representative? For example, in one-dimensional data sets, certain quantiles are often chosen as representatives instead of the median. We present a possible extension of the concept of quantiles to higher dimensions. The idea is to find a set Q of (few) points such that every halfspace that contains one point of Q contains a large fraction of the points of P and every halfspace that contains more of Q contains an even larger fraction of P. This setting is comparable to the well-studied concepts of weak epsilon-nets and weak epsilon-approximations, where it is stronger than the former but weaker than the latter. We show that for any point set of size n in R^d and for any positive alpha_1,...,alpha_k where alpha_1 <= alpha_2 <= ... <= alpha_k and for every i,j with i+j <= k+1 we have that (d-1)alpha_k+alpha_i+alpha_j <= 1, we can find Q of size k such that each halfspace containing j points of Q contains at least alpha_j n points of P. For two-dimensional point sets we further show that for every alpha and beta with alpha <= beta and alpha+beta <= 2/3 we can find Q with |Q|=3 such that each halfplane containing one point of Q contains at least alpha n of the points of P and each halfplane containing all of Q contains at least beta n points of P. All these results generalize to the setting where P is any mass distribution.
For the case where P is a point set in R^2 and |Q|=2, we provide algorithms to find such points in time O(n log^3 n).Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10001Approximate Query Processing over Static Sets and Sliding Windows
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10002
Indexing of static and dynamic sets is fundamental to a large set of applications such as information retrieval and caching. Denoting the characteristic vector of the set by B, we consider the problem of encoding sets and multisets to support approximate versions of the rank(i) (i.e., computing sum_{j <= i} B[j]) and select(i) (i.e., finding min{p|rank(p) >= i}) queries. We study multiple types of approximations (allowing an error in the query or the result) and present lower bounds and succinct data structures for several variants of the problem. We also extend our model to sliding windows, in which we process a stream of elements and compute suffix sums. This is a generalization of the window summation problem that allows the user to specify the window size at query time. Here, we provide an algorithm that supports updates and queries in constant time while requiring just a (1+o(1)) factor more space than the fixed-window summation algorithms.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10002Multi-Finger Binary Search Trees
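For context, the exact (non-approximate) versions of these queries have a standard prefix-sum implementation, sketched below with 0-indexed positions (our baseline illustration; the paper's point is how to approximate these answers in succinct space):

```python
import bisect

class RankSelect:
    def __init__(self, bits):
        # prefix[i] = number of ones among bits[0..i-1]
        self.prefix = [0]
        for b in bits:
            self.prefix.append(self.prefix[-1] + b)

    def rank(self, i):
        """sum_{j <= i} B[j]: number of ones among positions 0..i."""
        return self.prefix[i + 1]

    def select(self, i):
        """Minimum position p with rank(p) >= i, i.e. the i-th one (1-indexed)."""
        return bisect.bisect_left(self.prefix, i) - 1
```
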
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10003
We study multi-finger binary search trees (BSTs), a far-reaching extension of the classical BST model, with connections to the well-studied k-server problem. Finger search is a popular technique for speeding up BST operations when a query sequence has locality of reference. BSTs with multiple fingers can exploit more general regularities in the input. In this paper we consider the cost of serving a sequence of queries in an optimal (offline) BST with k fingers, a powerful benchmark against which other algorithms can be measured. We show that the k-finger optimum can be matched by a standard dynamic BST (having a single root-finger) with an O(log{k}) factor overhead. This result is tight for all k, improving the O(k) factor implicit in earlier work. Furthermore, we describe new online BSTs that match this bound up to a (log{k})^{O(1)} factor. Previously only the "one-finger" special case was known to hold for an online BST (Iacono, Langerman, 2016; Cole et al., 2000). Splay trees, assuming their conjectured optimality (Sleator and Tarjan, 1983), would have to match our bounds for all k. Our online algorithms are randomized and combine techniques developed for the k-server problem with a multiplicative-weights scheme for learning tree metrics. To our knowledge, this is the first time that tools developed for the k-server problem are used in BSTs. As an application of our k-finger results, we show that BSTs can efficiently serve queries that are close to some recently accessed item. This is a (restricted) form of the unified property (Iacono, 2001) that was previously not known to hold for any BST algorithm, online or offline.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10003On Counting Oracles for Path Problems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10004
We initiate the study of counting oracles for various path problems in graphs. Distance oracles have gained a lot of attention in recent years, with studies of the underlying space and time tradeoffs. For a given graph G, a distance oracle is a data structure which can be used to answer distance queries for pairs of vertices s,t in V(G). In this work, we extend the setup to answering counting queries: for a pair of vertices s,t, the oracle needs to provide the number of (shortest or all) paths from s to t. We present O(n^{1.5}) preprocessing time, O(n^{1.5}) space, and O(sqrt{n}) query time algorithms for oracles counting shortest paths in planar graphs and for counting all paths in planar directed acyclic graphs. We extend our results to other graphs which admit small balanced separators and present applications where our oracle improves the currently best known running times.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10004Reconstructing Phylogenetic Tree From Multipartite Quartet System
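Without any space constraints, the all-paths counting query for a DAG reduces to a short memoized recursion (our baseline illustration; the paper's contribution is achieving sublinear query time with O(n^{1.5}) space via balanced separators):

```python
from functools import lru_cache

def count_paths_dag(adj, s, t):
    """Number of distinct s->t paths in a DAG, via memoized DP:
    paths(u) = 1 if u = t, else the sum of paths(w) over successors w."""
    @lru_cache(maxsize=None)
    def paths_from(u):
        if u == t:
            return 1
        return sum(paths_from(w) for w in adj.get(u, ()))
    return paths_from(s)
```
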
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10005
A phylogenetic tree is a graphical representation of the evolutionary history of a set of taxa, in which the leaves correspond to taxa and the non-leaves correspond to speciations. One of the important problems in phylogenetic analysis is to assemble a global phylogenetic tree from smaller pieces of phylogenetic trees, particularly quartet trees. Quartet Compatibility asks to decide whether there is a phylogenetic tree inducing a given collection of quartet trees, and to construct such a phylogenetic tree if it exists. It is known that Quartet Compatibility is NP-hard, and only a few polynomial-time solvable subclasses are known. In this paper, we introduce two novel classes of quartet systems, called complete multipartite quartet systems and full multipartite quartet systems, and present polynomial time algorithms for Quartet Compatibility for these systems. We also observe that complete/full multipartite quartet systems naturally arise from a limited situation of block-restricted measurement.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10005Rectilinear Link Diameter and Radius in a Rectilinear Polygonal Domain
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10006
We study the computation of the diameter and radius under the rectilinear link distance within a rectilinear polygonal domain of n vertices and h holes. We introduce a graph of oriented distances to encode the distance between pairs of points of the domain. This helps us transform the problem so that we can search through the candidates more efficiently. Our algorithm computes both the diameter and the radius in O(min(n^omega, n^2 + nh log h + chi^2)) time, where omega<2.373 denotes the matrix multiplication exponent and chi in Omega(n) cap O(n^2) is the number of edges of the graph of oriented distances. We also provide an alternative algorithm for computing the diameter that runs in O(n^2 log n) time.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10006Minimizing Distance-to-Sight in Polygonal Domains
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10007
In this paper, we consider the quickest pair-visibility problem in polygonal domains. Given two points in a polygonal domain with h holes of total complexity n, we want to minimize the maximum distance that the two points travel in order to see each other in the polygonal domain. We present an O(n log^2 n+h^2 log^4 h)-time algorithm for this problem. We show that this running time is almost optimal unless the 3SUM problem can be solved in O(n^{2-epsilon}) time for some epsilon>0.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10007Partially Walking a Polygon
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10008
Deciding two-guard walkability of an n-sided polygon is a well-understood problem. We study the following more general question: How far can two guards reach from a given source vertex while staying mutually visible, in the (more realistic) case that the polygon is not entirely walkable? There can be Theta(n) such maximal walks, and we show how to find all of them in O(n log n) time.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10008Stabbing Rectangles by Line Segments - How Decomposition Reduces the Shallow-Cell Complexity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10009
We initiate the study of the following natural geometric optimization problem. The input is a set of axis-aligned rectangles in the plane. The objective is to find a set of horizontal line segments of minimum total length so that every rectangle is stabbed by some line segment. A line segment stabs a rectangle if it intersects its left and its right boundary. The problem, which we call Stabbing, can be motivated by a resource allocation problem and has applications in geometric network design. To the best of our knowledge, only special cases of this problem have been considered so far. Stabbing is a weighted geometric set cover problem, which we show to be NP-hard. While for general set cover the best possible approximation ratio is Theta(log n), obtaining better ratios for geometric set cover problems is an important theme in geometric approximation algorithms. Chan et al. [SODA'12] generalize earlier results by Varadarajan [STOC'10] to obtain sub-logarithmic approximation ratios for a broad class of weighted geometric set cover instances that are characterized by having low shallow-cell complexity. The shallow-cell complexity of Stabbing instances, however, can be high, so a direct application of the framework of Chan et al. gives only logarithmic bounds. We nevertheless achieve a constant-factor approximation by decomposing general instances into what we call laminar instances, which have low enough complexity. Our decomposition technique also yields constant-factor approximations for the variant where rectangles can be stabbed by horizontal and vertical segments and for two further geometric set cover problems.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10009Impatient Online Matching
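The stabbing condition from the "Stabbing Rectangles by Line Segments" abstract above is easy to state concretely. Below is a minimal brute-force sketch; the rectangle and segment encodings are hypothetical choices for illustration, and the exhaustive search is not the authors' constant-factor algorithm.

```python
from itertools import combinations

def stabs(seg, rect):
    """A horizontal segment (y, x1, x2) stabs an axis-aligned rectangle
    (left, right, bottom, top) iff it crosses both vertical boundaries."""
    y, x1, x2 = seg
    left, right, bottom, top = rect
    return bottom <= y <= top and x1 <= left and x2 >= right

def min_stabbing_length(rects, candidates):
    """Cheapest subset of candidate segments stabbing every rectangle,
    found exhaustively (exponential time; illustration only)."""
    best = None
    for r in range(1, len(candidates) + 1):
        for subset in combinations(candidates, r):
            if all(any(stabs(s, rc) for s in subset) for rc in rects):
                total = sum(x2 - x1 for _, x1, x2 in subset)
                if best is None or total < best:
                    best = total
    return best
```

On two rectangles with disjoint vertical extents, no single horizontal segment can stab both, so the optimum must combine one segment per rectangle.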
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10010
We investigate the problem of Min-cost Perfect Matching with Delays (MPMD), in which requests are pairwise matched in an online fashion with the objective to minimize the sum of space cost and time cost. Though linear-MPMD (i.e., time cost is linear in delay) has been thoroughly studied in the literature, it does not model well the impatient requests that are common in practice. Thus, we propose convex-MPMD, where time cost functions are convex, capturing the situation where time cost increases faster and faster. Since the existing algorithms for linear-MPMD are no longer competitive, we devise a new deterministic algorithm for convex-MPMD problems. For a large class of convex time cost functions, our algorithm achieves a competitive ratio of O(k) on any k-point uniform metric space. Moreover, our deterministic algorithm is asymptotically optimal, which uncovers a substantial difference between convex-MPMD and linear-MPMD: the latter allows a deterministic algorithm with constant competitive ratio on any uniform metric space.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10010Extensions of Self-Improving Sorters
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10011
Ailon et al. (SICOMP 2011) proposed a self-improving sorter that tunes its performance to the unknown input distribution in a training phase. The distribution of the input numbers x_1,x_2,...,x_n must be of the product type, that is, each x_i is drawn independently from an arbitrary distribution D_i, and the D_i's are independent of each other. We study two extensions that relax this requirement. The first extension models hidden classes in the input. We consider the case that numbers in the same class are governed by linear functions of the same hidden random parameter. The second extension considers a hidden mixture of product distributions.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10011Online Scheduling of Car-Sharing Requests Between Two Locations with Many Cars and Flexible Advance Bookings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10012
We study an online scheduling problem that is motivated by applications such as car-sharing, in which users submit ride requests, and the scheduler aims to accept requests of maximum total profit using k servers (cars). Each ride request specifies the pick-up time and the pick-up location (among two locations, with the other location being the destination). The scheduler has to decide whether or not to accept a request immediately at the time when the request is submitted (booking time). We consider two variants of the problem with respect to constraints on the booking time: In the fixed booking time variant, a request must be submitted a fixed amount of time before the pick-up time. In the variable booking time variant, a request can be submitted at any time during a certain time interval (called the booking horizon) that precedes the pick-up time. We present lower bounds on the competitive ratio for both variants and propose a balanced greedy algorithm (BGA) that achieves the best possible competitive ratio. We prove that, for the fixed booking time variant, BGA is 1.5-competitive if k=3i (i in N) and the fixed booking length is not less than the travel time between the two locations; for the variable booking time variant, BGA is 1.5-competitive if k=3i (i in N) and the length of the booking horizon is less than the travel time between the two locations, and BGA is 5/3-competitive if k=5i (i in N) and the length of the booking horizon is not less than the travel time between the two locations.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10012Packing Returning Secretaries
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10013
We study online secretary problems with returns in combinatorial packing domains with n candidates that arrive sequentially over time in random order. The goal is to accept a feasible packing of candidates of maximum total value. In the first variant, each candidate arrives exactly twice. All 2n arrivals occur in random order. We propose a simple 0.5-competitive algorithm that can be combined with arbitrary approximation algorithms for the packing domain, even when the total value of candidates is a subadditive function. For bipartite matching, we obtain an algorithm with competitive ratio at least 0.5721 - o(1) for growing n, and an algorithm with ratio at least 0.5459 for all n >= 1. We extend all algorithms and ratios to k >= 2 arrivals per candidate.In the second variant, there is a pool of undecided candidates. In each round, a random candidate from the pool arrives. Upon arrival a candidate can be either decided (accept/reject) or postponed (returned into the pool). We mainly focus on minimizing the expected number of postponements when computing an optimal solution. An expected number of Theta(n log n) is always sufficient. For matroids, we show that the expected number can be reduced to O(r log (n/r)), where r <=n/2 is the minimum of the ranks of matroid and dual matroid. For bipartite matching, we show a bound of O(r log n), where r is the size of the optimum matching. For general packing, we show a lower bound of Omega(n log log n), even when the size of the optimum is r = Theta(log n).Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10013Simple 2^f-Color Choice Dictionaries
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10014
A c-color choice dictionary of size n in N is a fundamental data structure in the development of space-efficient algorithms that stores the colors of n elements and that supports operations to get and change the color of an element as well as an operation choice that returns an arbitrary element of that color. For an integer f>0 and a constant c=2^f, we present a word-RAM algorithm for a c-color choice dictionary of size n that supports all operations above in constant time and uses only nf+1 bits, which is optimal if all operations have to run in o(n/w) time where w is the word size.In addition, we extend our choice dictionary by an operation union without using more space.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10014Succinct Data Structures for Chordal Graphs
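The operation interface of the c-color choice dictionary described above (get and change a color, plus choice) can be sketched as follows. This toy version uses plain Python sets and makes no attempt at the nf+1-bit space bound of the paper; it only illustrates the contract of the three operations.

```python
class ChoiceDictionary:
    """Interface sketch of a c-color choice dictionary: stores a color
    in [0, c) for each of n elements and supports get/set/choice.
    Backed by Python sets, not the paper's nf+1-bit representation."""

    def __init__(self, n, c):
        self.n, self.c = n, c
        # All elements start with color 0.
        self.buckets = [set(range(n))] + [set() for _ in range(c - 1)]
        self.color_of = [0] * n

    def get_color(self, i):
        return self.color_of[i]

    def set_color(self, i, col):
        self.buckets[self.color_of[i]].discard(i)
        self.buckets[col].add(i)
        self.color_of[i] = col

    def choice(self, col):
        """Return an arbitrary element of the given color, or None."""
        bucket = self.buckets[col]
        return next(iter(bucket)) if bucket else None
```

All three operations here run in expected constant time, but the space is far above nf+1 bits; achieving both simultaneously is exactly the contribution of the paper.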
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10015
We study the problem of succinctly storing a static chordal graph to answer adjacency, degree, neighbourhood and shortest path queries. Let G be a chordal graph with n vertices. We design a data structure using the information-theoretically minimal n^2/4 + o(n^2) bits of space to support the following queries: whether two vertices u,v are adjacent, in time f(n) for any f(n) in omega(1); the degree of a vertex, in O(1) time; the vertices adjacent to u, in (f(n))^2 time per neighbour; and the length of the shortest path from u to v, in O(n f(n)) time. We also study approximate shortest path queries in chordal graphs and give an n log n + o(n log n) bit data structure that answers approximate distance queries to within an additive constant of 1 in O(1) time.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10015Tree Path Majority Data Structures
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10016
We present the first solution to tau-majorities on tree paths. Given a tree of n nodes, each with a label from [1..sigma], and a fixed threshold 0<tau<1, such a query gives two nodes u and v and asks for all the labels that appear more than tau * |P_{uv}| times in the path P_{uv} from u to v, where |P_{uv}| denotes the number of nodes in P_{uv}. Note that the answer to any query is of size up to 1/tau. On a w-bit RAM, we obtain a linear-space data structure with O((1/tau)lg^* n lg lg_w sigma) query time. For any kappa > 1, we can also build a structure that uses O(n lg^{[kappa]} n) space, where lg^{[kappa]} n denotes the function that applies logarithm kappa times to n, and answers queries in time O((1/tau)lg lg_w sigma). The construction time of both structures is O(n lg n). We also describe two succinct-space solutions with the same query time of the linear-space structure. One uses 2nH + 4n + o(n)(H+1) bits, where H <=lg sigma is the entropy of the label distribution, and can be built in O(n lg n) time. The other uses nH + O(n) + o(nH) bits and is built in O(n lg n) time w.h.p.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10016Encoding Two-Dimensional Range Top-k Queries Revisited
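The tau-majority query of the "Tree Path Majority Data Structures" abstract above can be checked naively by walking the u-v path and counting labels. A sketch assuming the tree is given by parent pointers and depths (a hypothetical input encoding; none of the paper's succinct machinery):

```python
from collections import Counter

def path_labels(parent, depth, label, u, v):
    """Collect the labels on the u-v path of a rooted tree,
    finding the LCA naively by climbing parent pointers."""
    labels = []
    while depth[u] > depth[v]:
        labels.append(label[u]); u = parent[u]
    while depth[v] > depth[u]:
        labels.append(label[v]); v = parent[v]
    while u != v:
        labels.append(label[u]); labels.append(label[v])
        u, v = parent[u], parent[v]
    labels.append(label[u])          # the LCA itself
    return labels

def tau_majorities(parent, depth, label, u, v, tau):
    """All labels occurring more than tau * |P_uv| times on the path."""
    labels = path_labels(parent, depth, label, u, v)
    threshold = tau * len(labels)
    return {x for x, cnt in Counter(labels).items() if cnt > threshold}
```

This takes time linear in the path length per query, which is exactly what the paper's O((1/tau) lg^* n lg lg_w sigma)-time structures avoid.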
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10017
We consider the problem of encoding two-dimensional arrays, whose elements come from a total order, for answering Top-k queries. The aim is to obtain encodings that use space close to the information-theoretic lower bound and that can be constructed efficiently. For 2 x n arrays, we first give upper and lower bounds on space for answering sorted and unsorted 3-sided Top-k queries. For m x n arrays, with m <= n and k <= mn, we obtain an (m lg{(k+1)n choose n} + 4nm(m-1) + o(n))-bit encoding for answering sorted 4-sided Top-k queries. This improves the min{O(mn lg n), m^2 lg{(k+1)n choose n} + m lg m + o(n)}-bit encoding of Jo et al. [CPM, 2016] when m = o(lg{n}). This is a consequence of a new encoding that encodes a 2 x n array to support sorted 4-sided Top-k queries on it using an additional 4n bits, in addition to the encodings to support the Top-k queries on individual rows. This new encoding is a non-trivial generalization of the encoding of Jo et al. [CPM, 2016] that supports sorted 4-sided Top-2 queries on it using an additional 3n bits. We also give almost optimal space encodings for 3-sided Top-k queries, and show lower bounds on encodings for 3-sided and 4-sided Top-k queries.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10017Longest Unbordered Factor in Quasilinear Time
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10018
A border u of a word w is a proper factor of w occurring both as a prefix and as a suffix. The maximal unbordered factor of w is the longest factor of w which does not have a border. We present an algorithm that computes the Longest Unbordered Factor Array of w for general alphabets in O(n log n) time with high probability (or in O(n log n log^2 log n) deterministic time), where n is the length of w. This array specifies the length of the maximal unbordered factor starting at each position of w. This is a major improvement on the running time of the previously best worst-case algorithm, which works in O(n^{1.5}) time for integer alphabets [Gawrychowski et al., 2015].Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10018Packing Sporadic Real-Time Tasks on Identical Multiprocessor Systems
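The definitions of borders and unbordered factors above admit a direct, naive implementation. A cubic-time sketch (far from the quasilinear algorithm of the paper) that computes the Longest Unbordered Factor Array:

```python
def longest_border(w):
    """Length of the longest proper border of w (prefix == suffix)."""
    for length in range(len(w) - 1, 0, -1):
        if w[:length] == w[-length:]:
            return length
    return 0

def unbordered_factor_array(w):
    """Naive Longest Unbordered Factor Array: entry i is the length of
    the longest unbordered factor starting at position i of w."""
    n = len(w)
    result = []
    for i in range(n):
        best = 1                      # single letters are unbordered
        for j in range(i + 1, n + 1):
            if longest_border(w[i:j]) == 0:
                best = j - i
        result.append(best)
    return result
```

For example, on w = "abaab" the factor "aba" has border "a", while "baa" has no border; the maximal unbordered factor of the whole word has length 3.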
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10019
In real-time systems, in addition to functional correctness, recurrent tasks must fulfill timing constraints to ensure the correct behavior of the system. Partitioned scheduling is widely used in real-time systems, i.e., the tasks are statically assigned to processors while ensuring that all timing constraints are met. The decision version of the problem, which is to check whether the deadline constraints of tasks can be satisfied on a given number of identical processors, is known to be NP-complete in the strong sense. Several studies on this problem are based on approximations involving resource augmentation, i.e., speeding up individual processors. This paper studies another type of resource augmentation, namely allocating additional processors, a topic that has not been explored until recently. We provide polynomial-time algorithms and analysis in which the approximation factors depend upon the input instances. Specifically, the factors are related to the maximum ratio of the period to the relative deadline of a task in the given task set. We also show that these algorithms unfortunately cannot achieve a constant approximation factor for general cases. Furthermore, we prove that the problem does not admit any asymptotic polynomial-time approximation scheme (APTAS) unless P=NP when the task set has constrained deadlines, i.e., the relative deadline of a task is no more than the period of the task.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10019A Relaxed FPTAS for Chance-Constrained Knapsack
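For context on the partitioned scheduling described above, here is a common first-fit partitioning heuristic, using the sufficient (not necessary) EDF test that the total density sum of C_i/min(D_i, T_i) on each processor stays at most 1. This is a generic textbook heuristic sketched under that assumption, not the approximation algorithm of the paper.

```python
def density(task):
    """Density of a sporadic task given as (WCET, deadline, period)."""
    c, d, t = task
    return c / min(d, t)

def first_fit_partition(tasks, m):
    """Assign sporadic tasks to m processors first-fit, accepting a
    task on a processor only if total density stays <= 1 (a sufficient
    schedulability test for EDF).  Returns the per-processor
    assignment, or None if first-fit fails."""
    load = [0.0] * m
    assignment = [[] for _ in range(m)]
    for task in tasks:
        for p in range(m):
            if load[p] + density(task) <= 1.0:
                load[p] += density(task)
                assignment[p].append(task)
                break
        else:
            return None
    return assignment
```

Because the density test is only sufficient, a None result does not prove infeasibility; this gap is one reason the exact partitioning problem is strongly NP-complete.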
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10020
The stochastic knapsack problem is a stochastic version of the well-known deterministic knapsack problem, in which some of the input values are random variables. There are several variants of the stochastic problem. In this paper we concentrate on the chance-constrained variant, where item values are deterministic and item sizes are stochastic. The goal is to find a maximum-value allocation subject to the constraint that the overflow probability is at most a given value. Previous work showed a PTAS for the problem for various distributions (Poisson, exponential, Bernoulli and normal). Some of these algorithms strictly respect the constraint and some relax it by a factor of (1+epsilon); all use Omega(n^{1/epsilon}) time. A very recent work showed an "almost FPTAS" algorithm for Bernoulli distributions with O(poly(n) * quasipoly(1/epsilon)) time. In this paper we present an FPTAS for normal distributions with a solution that satisfies the chance constraint in a relaxed sense. The normal distribution is particularly important because, by the Berry-Esseen theorem, an algorithm solving the normal distribution also solves, under mild conditions, arbitrary independent distributions. To the best of our knowledge, this is the first (relaxed or non-relaxed) FPTAS for the problem. In fact, our algorithm runs in poly(n/epsilon) time. We achieve the FPTAS by a delicate combination of previous techniques plus a new alternative solution for the non-heavy elements that is based on a non-convex program with a simple structure and an O(n^2 log(n/epsilon)) running time. We believe this part is also interesting in its own right.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10020Covering Clients with Types and Budgets
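With independent normally distributed item sizes as above, the chance constraint itself is easy to evaluate exactly: the total size is again normal, so Pr[overflow] <= p reduces to a deterministic inequality on the mean and standard deviation. A sketch, where the (mu, sigma) item encoding is an assumption made for illustration:

```python
from math import sqrt
from statistics import NormalDist

def satisfies_chance_constraint(items, capacity, p):
    """Check Pr[total size > capacity] <= p for independent normal item
    sizes given as (mu, sigma) pairs.  The total is normal with mean
    sum(mu) and variance sum(sigma^2), so the constraint is equivalent
    to mu_tot + z_{1-p} * sigma_tot <= capacity."""
    mu = sum(m for m, _ in items)
    sigma = sqrt(sum(s * s for _, s in items))
    z = NormalDist().inv_cdf(1 - p)
    return mu + z * sigma <= capacity
```

The algorithmic difficulty in the paper is not this feasibility check but selecting a maximum-value subset under it, since the standard deviation term makes the constraint non-linear in the chosen set.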
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10021
In this paper, we consider a variant of the facility location problem. Imagine a scenario where facilities are categorized into multiple types, such as schools, hospitals, post offices, etc., and the cost of connecting a client to a facility is realized by the distance between them. Each client has a total budget on the distance she/he is willing to travel. The goal is to open the minimum number of facilities such that the aggregate distance of each client to multiple types is within her/his budget. This problem closely resembles the set cover and r-domination problems. Here, we study this problem in different settings. Specifically, we present some positive and negative results in the general setting, where no assumption is made on the distance values. Then we show that better results can be achieved when clients and facilities lie in a metric space.Tue, 27 Nov 2018 11:07:25 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10021Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9899
Front Matter, Table of Contents, Preface, Conference OrganizationFri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9899Random Testing for Distributed Systems with Theoretical Guarantees (Invited Paper)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9900
Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9900Model Checking Randomized Security Protocols (Invited Paper)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9901
The design of security protocols is extremely subtle and is prone to serious faults. Many tools for automatic analysis of such protocols have been developed. However, none of them have the ability to model protocols that use explicit randomization. Such randomized protocols are being increasingly used in systems to provide privacy and anonymity guarantees. In this talk we consider the problem of automatic verification of randomized security protocols. We consider verification of secrecy and indistinguishability properties under the powerful threat model of a Dolev-Yao adversary. We present some complexity bounds on verification of these properties. We also describe practical algorithms for checking indistinguishability. These algorithms have been implemented in the tool SPAN and have been experimentally evaluated. The talk concludes with future challenges.(Joint work with: Matt Bauer, Rohit Chadha and Mahesh Viswanathan)Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9901Algorithms for the Asymmetric Traveling Salesman Problem (Invited Paper)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9902
Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9902Continuous Algorithms (Invited Paper)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9903
While the design of algorithms is traditionally a discrete endeavour, in recent years many advances have come from continuous perspectives. Typically, a continuous process, deterministic or randomized, is designed and shown to have desirable properties, such as approaching an optimal solution or a target distribution, and an algorithm is derived from this by appropriate discretization. We will discuss examples of this for optimization (gradient descent, interior-point method) and sampling (Brownian motion, Hamiltonian Monte Carlo), with applications to learning. In some interesting and rather general settings, the current fastest methods have been obtained via this approach.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9903On the Probabilistic Degree of OR over the Reals
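As a concrete instance of the talk's theme, gradient descent is the Euler discretization of the continuous gradient flow x'(t) = -grad f(x(t)). A minimal sketch on a toy quadratic (the step size and iteration count are illustrative choices):

```python
def gradient_descent(grad, x0, step=0.1, iters=1000):
    """Discretize the gradient flow x'(t) = -grad f(x(t)) with Euler
    steps x_{k+1} = x_k - step * grad f(x_k)."""
    x = x0
    for _ in range(iters):
        x = [xi - step * gi for xi, gi in zip(x, grad(x))]
    return x

# Minimize f(x, y) = (x - 3)^2 + 2*(y + 1)^2, whose gradient is
# (2(x - 3), 4(y + 1)); the unique minimizer is (3, -1).
minimum = gradient_descent(lambda v: [2 * (v[0] - 3), 4 * (v[1] + 1)],
                           [0.0, 0.0])
```

Shrinking the step size makes the iterates track the continuous flow more closely, which is exactly the design-then-discretize pattern the talk describes.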
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9904
We study the probabilistic degree over R of the OR function on n variables. For epsilon in (0,1/3), the epsilon-error probabilistic degree of any Boolean function f:{0,1}^n -> {0,1} over R is the smallest non-negative integer d such that the following holds: there exists a distribution of polynomials Pol in R[x_1,...,x_n] entirely supported on polynomials of degree at most d such that for all z in {0,1}^n, we have Pr_{P ~ Pol}[P(z) = f(z)] >= 1- epsilon. It is known from the works of Tarui (Theoret. Comput. Sci. 1993) and Beigel, Reingold, and Spielman (Proc. 6th CCC 1991) that the epsilon-error probabilistic degree of the OR function is at most O(log n * log(1/epsilon)). Our first observation is that this can be improved to O(log {n choose <= log(1/epsilon)}), which is better for small values of epsilon.In all known constructions of probabilistic polynomials for the OR function (including the above improvement), the polynomials P in the support of the distribution Pol have the following special structure: P(x_1,...,x_n) = 1 - prod_{i in [t]} (1- L_i(x_1,...,x_n)), where each L_i(x_1,..., x_n) is a linear form in the variables x_1,...,x_n, i.e., the polynomial 1-P(bar{x}) is a product of affine forms. We show that the epsilon-error probabilistic degree of OR when restricted to polynomials of the above form is Omega(log {n choose <= log(1/epsilon)} / log^2 (log {n choose <= log(1/epsilon)})), thus matching the above upper bound (up to polylogarithmic factors).Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9904Quasipolynomial Hitting Sets for Circuits with Restricted Parse Trees
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9905
We study the class of non-commutative Unambiguous circuits or Unique-Parse-Tree (UPT) circuits, and a related model of Few-Parse-Trees (FewPT) circuits (which were recently introduced by Lagarde, Malod and Perifel [Guillaume Lagarde et al., 2016] and Lagarde, Limaye and Srinivasan [Guillaume Lagarde et al., 2017]) and give the following constructions: - An explicit hitting set of quasipolynomial size for UPT circuits, - An explicit hitting set of quasipolynomial size for FewPT circuits (circuits with constantly many parse tree shapes), - An explicit hitting set of polynomial size for UPT circuits (of known parse tree shape), when a parameter of preimage-width is bounded by a constant. The above three results are extensions of the results of [Manindra Agrawal et al., 2015], [Rohit Gurjar et al., 2015] and [Rohit Gurjar et al., 2016] to the setting of UPT circuits, and hence also generalize their results in the commutative world from read-once oblivious algebraic branching programs (ROABPs) to UPT-set-multilinear circuits.The main idea is to study shufflings of non-commutative polynomials, which can then be used to prove suitable depth reduction results for UPT circuits and thereby allow a careful translation of the ideas in [Manindra Agrawal et al., 2015], [Rohit Gurjar et al., 2015] and [Rohit Gurjar et al., 2016].Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9905Univariate Ideal Membership Parameterized by Rank, Degree, and Number of Generators
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9906
Let F[X] be the polynomial ring over the variables X={x_1,x_2, ..., x_n}. An ideal I= <p_1(x_1), ..., p_n(x_n)> generated by univariate polynomials {p_i(x_i)}_{i=1}^n is a univariate ideal. We study the ideal membership problem for univariate ideals and show the following results.- Let f(X) in F[l_1, ..., l_r] be a (low rank) polynomial given by an arithmetic circuit where l_i : 1 <= i <= r are linear forms, and I=<p_1(x_1), ..., p_n(x_n)> be a univariate ideal. Given alpha in F^n, the (unique) remainder f(X) mod I can be evaluated at alpha in deterministic time d^{O(r)} * poly(n), where d=max {deg(f),deg(p_1)...,deg(p_n)}. This yields a randomized n^{O(r)} algorithm for minimum vertex cover in graphs with rank-r adjacency matrices. It also yields an n^{O(r)} algorithm for evaluating the permanent of an n x n matrix of rank r, over any field F. Over Q, an algorithm of similar running time for the low-rank permanent is due to Barvinok [Barvinok, 1996] via a different technique.- Let f(X)in F[X] be given by an arithmetic circuit of degree k (k treated as fixed parameter) and I=<p_1(x_1), ..., p_n(x_n)>. We show that in the special case when I=<x_1^{e_1}, ..., x_n^{e_n}>, we obtain a randomized O^*(4.08^k) algorithm that uses poly(n,k) space.- Given f(X)in F[X] by an arithmetic circuit and I=<p_1(x_1), ..., p_k(x_k)>, membership testing is W[1]-hard, parameterized by k. The problem is MINI[1]-hard in the special case when I=<x_1^{e_1}, ..., x_k^{e_k}>.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9906Verification of Timed Asynchronous Programs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9907
In this paper, we address the verification problem for timed asynchronous programs. We associate with each task a deadline for its execution. We first show that the control state reachability problem for this class of systems is decidable, while the configuration reachability problem is undecidable. Then, we consider the subclass of timed asynchronous programs where tasks are always executed from the same state. For this subclass, we show that the control state reachability problem is PSPACE-complete. Furthermore, we show that the configuration reachability problem is decidable for this subclass.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9907The Cayley-Graph of the Queue Monoid: Logic and Decidability
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9908
We investigate the decidability of logical aspects of graphs that arise as Cayley-graphs of the so-called queue monoids. These monoids model the behavior of the classical (reliable) fifo-queues. We answer a question raised by Huschenbett, Kuske, and Zetzsche and prove the decidability of the first-order theory of these graphs with the help of an - at least for the authors - new combination of the well-known method of Ferrante and Rackoff and an automata-based approach. On the other hand, we prove that the monadic second-order theory of the queue monoid's Cayley-graph is undecidable.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9908Uniformly Automatic Classes of Finite Structures
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9909
We investigate the recently introduced concept of uniformly tree-automatic classes in the realm of parameterized complexity theory. Roughly speaking, a class of finite structures is uniformly tree-automatic if it can be presented by a set of finite trees and a tuple of automata. A tree t encodes a structure and an element of this structure is encoded by a labeling of t. The automata are used to present the relations of the structure. We use this formalism to obtain algorithmic meta-theorems for first-order logic and in some cases also monadic second-order logic on classes of finite Boolean algebras, finite groups, and graphs of bounded tree-depth. Our main concern is the efficiency of this approach with respect to the hidden parameter dependence (size of the formula). We develop a method to analyze the complexity of uniformly tree-automatic presentations, which allows us to give upper bounds for the runtime of the automata-based model checking algorithm on the presented class. It turns out that the parameter dependence is elementary for all the above mentioned classes. Additionally we show that one can lift the FPT results, which are obtained by our method, from a class C to the closure of C under direct products with only a singly exponential blow-up in the parameter dependence.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9909Towards a General Direct Product Testing Theorem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9910
The Direct Product encoding of a string a in {0,1}^n on an underlying domain V subseteq ([n] choose k) is a function DP_V(a) which gets as input a set S in V and outputs a restricted to S. In the Direct Product Testing Problem, we are given a function F:V -> {0,1}^k, and our goal is to test whether F is close to a direct product encoding, i.e., whether there exists some a in {0,1}^n such that on most sets S, we have F(S)=DP_V(a)(S). A natural test is as follows: select a pair (S,S') in V according to some underlying distribution over V x V, query F on this pair, and check for consistency on their intersection. Note that the above distribution may be viewed as a weighted graph over the vertex set V and is referred to as a test graph.The testability of direct products was studied over various domains and test graphs: Dinur and Steurer (CCC '14) analyzed it when V equals the k-th slice of the Boolean hypercube and the test graph is a member of the Johnson graph family. Dinur and Kaufman (FOCS '17) analyzed it for the case where V is the set of faces of a Ramanujan complex, where in this case |V|=O_k(n). In this paper, we study the testability of direct products in a general setting, addressing the question: what properties of the domain and the test graph allow one to prove a direct product testing theorem?Towards this goal we introduce the notion of coordinate expansion of a test graph. Roughly speaking, a test graph is a coordinate expander if it has global and local expansion, and has certain nice intersection properties on sampling. We show that whenever the test graph has coordinate expansion then it admits a direct product testing theorem. 
Additionally, for every k and n we provide a direct product domain V subseteq (n choose k) of size n, called the Sliding Window domain for which we prove direct product testability.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9910Space Complexity of Two Adaptive Bitprobe Schemes Storing Three Elements
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9911
We consider the following set membership problem in the bitprobe model - that of storing subsets of size at most three from a universe of size m, and answering membership queries using two adaptive bitprobes. Baig and Kesh [Mirza Galib Anwarul Husain Baig and Deepanjan Kesh, 2018] proposed a scheme for the problem which takes O(m^{2/3}) space. In this paper, we present a proof which shows that any scheme for the problem requires Omega(m^{2/3}) amount of space. These two results together settle the space complexity issue for this particular problem.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9911New Constructions with Quadratic Separation between Sensitivity and Block Sensitivity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9912
Nisan and Szegedy [Nisan and Szegedy, 1994] conjectured that block sensitivity is at most polynomial in sensitivity for any Boolean function. There is a huge gap between the best known upper bound on block sensitivity in terms of sensitivity - which is exponential, and the best known separating examples - which give only a quadratic separation between block sensitivity and sensitivity.In this paper we give various new constructions of families of Boolean functions that exhibit quadratic separation between sensitivity and block sensitivity. Our constructions have several novel aspects. For example, we give the first direct constructions of families of Boolean functions that have both 0-block sensitivity and 1-block sensitivity quadratically larger than sensitivity.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9912Lambda-Definable Order-3 Tree Functions are Well-Quasi-Ordered
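The two measures contrasted above can be computed by brute force on tiny functions. A sketch (exponential time, illustration only) that, for instance, verifies s(OR_n) = bs(OR_n) = n on small n:

```python
from itertools import combinations, product

def flip(x, block):
    """Flip the bits of x indexed by block."""
    return tuple(1 - b if i in block else b for i, b in enumerate(x))

def sensitivity(f, n):
    """s(f): max over inputs x of the number of single-bit flips
    that change f(x)."""
    return max(sum(f(flip(x, {i})) != f(x) for i in range(n))
               for x in product((0, 1), repeat=n))

def block_sensitivity(f, n):
    """bs(f): max over x of the size of a largest family of pairwise
    disjoint blocks B with f(x^B) != f(x).  Brute force, tiny n only."""
    blocks = [frozenset(b) for m in range(1, n + 1)
              for b in combinations(range(n), m)]

    def best(x, remaining, used):
        score = 0
        for i, b in enumerate(remaining):
            if not (b & used) and f(flip(x, b)) != f(x):
                score = max(score, 1 + best(x, remaining[i + 1:], used | b))
        return score

    return max(best(x, blocks, frozenset())
               for x in product((0, 1), repeat=n))
```

Since sensitive blocks may be singletons, s(f) <= bs(f) always holds; the open question above concerns how much larger bs(f) can be.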
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9913
Asada and Kobayashi [ICALP 2017] conjectured a higher-order version of Kruskal's tree theorem, and proved a pumping lemma for higher-order languages modulo the conjecture. The conjecture has been proved up to order-2, which implies that Asada and Kobayashi's pumping lemma holds for order-2 tree languages, but remains open for order-3 or higher. In this paper, we prove a variation of the conjecture for order-3. This is sufficient for proving that a variation of the pumping lemma holds for order-3 tree languages (equivalently, for order-4 word languages).Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9913A Hypersequent Calculus with Clusters for Tense Logic over Ordinals
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9914
Prior's tense logic forms the core of linear temporal logic, with both past- and future-looking modalities. We present a sound and complete proof system for tense logic over ordinals. Technically, this is a hypersequent system, enriched with an ordering, clusters, and annotations. The system is designed with proof search algorithms in mind, and yields an optimal coNP complexity for the validity problem. It entails a small model property for tense logic over ordinals: every satisfiable formula has a model of order type at most omega^2. It also makes it possible to decide validity over ordinals below, or exactly equal to, a given one.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9914
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9915
Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9915Popular Matchings in Complete Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9916
Our input is a complete graph G = (V,E) on n vertices where each vertex has a strict ranking of all other vertices in G. The goal is to construct a matching in G that is "globally stable" or popular. A matching M is popular if M does not lose a head-to-head election against any matching M': here each vertex casts a vote for the matching in {M,M'} where it gets a better assignment. Popular matchings need not exist in the given instance G and the popular matching problem is to decide whether one exists or not. The popular matching problem in G is easy to solve for odd n. Surprisingly, the problem becomes NP-hard for even n, as we show here.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9916Graph Pattern Polynomials
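The head-to-head election in this definition can be made concrete with a brute-force check on a toy instance; the instance, names, and helper functions below are illustrative sketches, not constructions from the paper:

```python
# Toy instance: 4 vertices, each with a strict ranking of the other
# vertices; rank[v][u] = position of u in v's preference list (lower is better).
prefs = {
    0: [1, 2, 3],
    1: [0, 2, 3],
    2: [0, 1, 3],
    3: [0, 1, 2],
}
rank = {v: {u: i for i, u in enumerate(lst)} for v, lst in prefs.items()}

def partner(M, v):
    """Partner of v in matching M, or None if v is unmatched."""
    for a, b in M:
        if a == v:
            return b
        if b == v:
            return a
    return None

def net_votes(M, Mp):
    """Net number of votes for M in the head-to-head election M vs M'."""
    net = 0
    for v in prefs:
        def score(u):
            # being unmatched is worse than any partner
            return rank[v][u] if u is not None else len(prefs)
        a, b = score(partner(M, v)), score(partner(Mp, v))
        net += (a < b) - (a > b)
    return net

def all_matchings(vs):
    """Enumerate all matchings (sets of disjoint pairs) on the vertex list vs."""
    if not vs:
        yield frozenset()
        return
    v, rest = vs[0], vs[1:]
    yield from all_matchings(rest)          # leave v unmatched
    for u in rest:
        others = [w for w in rest if w != u]
        for M in all_matchings(others):
            yield M | {(v, u)}

def is_popular(M):
    """M is popular iff it never loses a head-to-head election."""
    return all(net_votes(M, Mp) >= 0 for Mp in all_matchings(list(prefs)))
```

On this instance the matching {(0,1), (2,3)} never loses an election, while the empty matching loses to any single edge; the exhaustive check is of course exponential and only meant to illustrate the definition.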
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9917
Given a host graph G and a pattern graph H, the induced subgraph isomorphism problem is to decide whether G contains an induced subgraph that is isomorphic to H. We study the time complexity of induced subgraph isomorphism problems when the pattern graph is fixed. Nesetril and Poljak gave an O(n^{k omega}) time algorithm that decides the induced subgraph isomorphism problem for any 3k-vertex pattern graph (the universal algorithm), where omega is the matrix multiplication exponent. Improvements are not known for any infinite pattern family. Algorithms faster than the universal algorithm are known only for a finite number of pattern graphs. In this paper, we show that there exist infinitely many pattern graphs for which the induced subgraph isomorphism problem has algorithms faster than the universal algorithm. Our algorithm works by reducing the pattern detection problem to a multilinear term detection problem on special classes of polynomials called graph pattern polynomials. We show that many of the existing algorithms, including the universal algorithm, can also be described in terms of such a reduction. We formalize this class of algorithms by defining graph pattern polynomial families and defining a notion of reduction between these polynomial families. The reduction also allows us to argue about the relative hardness of various graph pattern detection problems within this framework. We show that solving the induced subgraph isomorphism problem for any pattern graph that contains a k-clique is at least as hard as detecting k-cliques. An equivalent theorem is not known in the general case. In the full version of this paper, we obtain new algorithms for P_5 and C_5 that are optimal under reasonable hardness assumptions. We also use this method to derive new combinatorial algorithms - algorithms that do not use fast matrix multiplication - for paths and cycles. We also show why graph homomorphisms play a major role in algorithms for subgraph isomorphism problems.
Using this, we show that the arithmetic circuit complexity of the graph homomorphism polynomial for K_k - e (the k-clique with one edge removed) is related to the complexity of many subgraph isomorphism problems. This generalizes and unifies many existing results.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9917Shortest k-Disjoint Paths via Determinants
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9918
Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9918Hyper Partial Order Logic
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9919
We define HyPOL, a local hyper logic for partial order models, expressing properties of sets of runs. These properties depict shapes of causal dependencies in sets of partially ordered executions, with similarity relations defined as isomorphisms of past observations. Unsurprisingly, since comparisons of projections are included, satisfiability of this logic is undecidable. We then address model checking of HyPOL and show that, already for safe Petri nets, the problem is undecidable. Fortunately, sensible restrictions of observations and nets allow us to bring model checking of HyPOL back to a decidable problem, namely model checking of MSO on graphs of bounded treewidth.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9919On the Way to Alternating Weak Automata
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9920
Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9920Origin-Equivalence of Two-Way Word Transducers Is in PSPACE
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9921
We consider equivalence and containment problems for word transductions. These problems are known to be undecidable when the transductions are relations between words realized by non-deterministic transducers, and become decidable when restricting to functions from words to words. Here we prove that decidability can also be recovered in the origin semantics, which was introduced by Bojanczyk in 2014. We prove that the equivalence and containment problems for two-way word transducers in the origin semantics are PSPACE-complete. We also consider a variant of the containment problem where two-way transducers are compared under the origin semantics, but in a more relaxed way, by allowing distortions of the origins. The possible distortions are described by means of a resynchronization relation. We propose MSO-definable resynchronizers and show that they preserve the decidability of the containment problem under resynchronizations.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9921Constant Factor Approximation Algorithm for Uniform Hard Capacitated Knapsack Median Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9922
In this paper, we give the first constant factor approximation algorithm for the capacitated knapsack median problem (CKnM) for hard uniform capacities, violating the budget by a factor of 1+epsilon and capacities by a 2+epsilon factor. To the best of our knowledge, no constant factor approximation is known for the problem even with capacity/budget/both violations. Even for the uncapacitated variant of the problem, the natural LP is known to have an unbounded integrality gap even after adding the covering inequalities to strengthen the LP. Our techniques for CKnM provide two types of results for the capacitated k-facility location problem. We present an O(1/epsilon^2) factor approximation for the problem, violating capacities by a factor of (2+epsilon). Another result is an O(1/epsilon) factor approximation, violating the capacities by a factor of at most (1 + epsilon) using at most 2k facilities for a fixed epsilon>0. As a by-product, a constant factor approximation algorithm for the capacitated facility location problem with uniform capacities is presented, violating the capacities by a (1 + epsilon) factor. Though constant factor results are known for the problem without violating the capacities, the result is interesting as it is obtained by rounding the solution to the natural LP, which is known to have an unbounded integrality gap without violating the capacities. Thus, we achieve the best possible result from the natural LP for the problem. The result shows that the natural LP is not too bad.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9922A 5-Approximation for Universal Facility Location
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9923
In this paper, we propose and analyze a local search algorithm for the universal facility location problem. Our algorithm improves the approximation ratio of this problem from 5.83, given by Angel et al., to 5. A second major contribution of the paper is that it gets rid of the expensive multi operation that was a mainstay of all previous local search algorithms for the capacitated and universal facility location problems. The only operations that we require to prove the 5-approximation are add, open, and close. A multi operation is basically a combination of the open and close operations. The 5-approximation algorithm for the capacitated facility location problem, given by Bansal et al., also uses the multi operation. However, on careful observation, it turns out that the add, open, and close operations suffice to prove a 5-factor for the problem. This results in an improved algorithm, with an improved factor, for the universal facility location problem.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9923On Fair Division for Indivisible Items
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9924
We consider the task of assigning indivisible goods to a set of agents in a fair manner. Our notion of fairness is Nash social welfare, i.e., the goal is to maximize the geometric mean of the utilities of the agents. Each good comes in multiple items or copies, and the utility of an agent diminishes as it receives more items of the same good. The utility of a bundle of items for an agent is the sum of the utilities of the items in the bundle. Each agent has a utility cap beyond which it does not value additional items. We give a polynomial time approximation algorithm that maximizes Nash social welfare up to a factor of e^{1/{e}} ~~ 1.445. The computed allocation is Pareto-optimal and approximates envy-freeness up to one item up to a factor of 2 + epsilon.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9924Combinatorial Algorithms for General Linear Arrow-Debreu Markets
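The geometric-mean objective can be illustrated with a tiny hypothetical example (the utility numbers below are made up): of two allocations with the same total utility, Nash social welfare prefers the more balanced one.

```python
from math import prod

def nash_welfare(utilities):
    """Nash social welfare: the geometric mean of the agents' utilities."""
    return prod(utilities) ** (1.0 / len(utilities))

# Two allocations with the same utilitarian welfare 4 + 4 = 7 + 1 = 8;
# the geometric mean rewards the balanced one.
balanced, skewed = [4, 4], [7, 1]
assert nash_welfare(balanced) > nash_welfare(skewed)
```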
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9925
We present a combinatorial algorithm for determining the market clearing prices of a general linear Arrow-Debreu market, where every agent can own multiple goods. The existing combinatorial algorithms for linear Arrow-Debreu markets consider the case where each agent can own all of one good only. We present an O~((n+m)^7 log^3(UW)) algorithm where n, m, U and W refer to the number of agents, the number of goods, the maximal integral utility and the maximum quantity of any good in the market respectively. The algorithm refines the iterative algorithm of Duan, Garg and Mehlhorn using several new ideas. We also identify the hard instances for existing combinatorial algorithms for linear Arrow-Debreu markets. In particular we find instances where the ratio of the maximum to the minimum equilibrium price of a good is U^{Omega(n)} and the number of iterations required by the existing iterative combinatorial algorithms of Duan and Mehlhorn, and of Duan, Garg, and Mehlhorn is high. Our instances also separate the two algorithms.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9925On the Welfare of Cardinal Voting Mechanisms
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9926
A voting mechanism is a method for preference aggregation that takes as input preferences over alternatives from voters, and selects an alternative, or a distribution over alternatives. While preferences of voters are generally assumed to be cardinal utility functions that map each alternative to a real value, mechanisms typically studied assume coarser inputs, such as rankings of the alternatives (called ordinal mechanisms). We study cardinal mechanisms, which take as input the cardinal utilities of the voters, with the objective of minimizing the distortion - the worst-case ratio of the best social welfare to that obtained by the mechanism. For truthful cardinal mechanisms with m alternatives and n voters, we show bounds of Theta(mn), Omega(m), and Omega(sqrt{m}) for deterministic, unanimous, and randomized mechanisms respectively. This shows, somewhat surprisingly, that even mechanisms that allow cardinal inputs have large distortion. There exist ordinal (and hence, cardinal) mechanisms with distortion O(sqrt{m log m}), and hence our lower bound for randomized mechanisms is nearly tight. In an effort to close this gap, we give a class of truthful cardinal mechanisms that we call randomized hyperspherical mechanisms that have O(sqrt{m log m}) distortion. These are interesting because they violate two properties - localization and non-perversity - that characterize truthful ordinal mechanisms, demonstrating non-trivial mechanisms that differ significantly from ordinal mechanisms. Given the strong lower bounds for truthful mechanisms, we then consider approximately truthful mechanisms. We give a mechanism that is delta-truthful for a given delta in (0,1), and has distortion close to 1. Finally, we consider the simple mechanism that selects the alternative that maximizes social welfare. This mechanism is not truthful, and we study the distortion at equilibria for the voters (equivalent to the Price of Anarchy, or PoA). 
While in general, the PoA is unbounded, we show that for equilibria obtained from natural dynamics, the PoA is close to 1. Thus relaxing the notion of truthfulness in both cases allows us to obtain near-optimal distortion.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9926Symbolic Approximation of Weighted Timed Games
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9927
Weighted timed games are zero-sum games played by two players on a timed automaton equipped with weights, where one player wants to minimise the accumulated weight while reaching a target. Weighted timed games are notoriously difficult and quickly undecidable, even when restricted to non-negative weights. For non-negative weights, the largest class that can be analysed has been introduced by Bouyer, Jaziri and Markey in 2015. Though the value problem is undecidable, the authors show how to approximate the value by considering regions with a refined granularity. In this work, we extend this class to incorporate negative weights, allowing one to model energy for instance, and prove that the value can still be approximated, with the same complexity. In addition, we show that a symbolic algorithm, relying on the paradigm of value iteration, can be used as an approximation schema on this class.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9927A Symbolic Framework to Analyse Physical Proximity in Security Protocols
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9928
For many modern applications, e.g., contactless payment and keyless systems, ensuring physical proximity is a security goal of paramount importance. Formal methods have proved their usefulness when analysing standard security protocols. However, existing results and tools do not apply to, e.g., distance bounding protocols that aim to ensure physical proximity between two entities. This is due in particular to the fact that existing models do not faithfully represent the locations of the participants, or the fact that message transmission takes time. In this paper, we propose several reduction results: when looking for an attack, it is actually sufficient to consider a simple scenario involving at most four participants located at some specific locations. These reduction results allow one to use verification tools (e.g. ProVerif, Tamarin) developed for analysing more classical security properties. As an application, we analyse several distance bounding protocols, as well as a contactless payment protocol.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9928On Canonical Models for Rational Functions over Infinite Words
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9929
Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9929Reachability for Two-Counter Machines with One Test and One Reset
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9930
We prove that the reachability relation of two-counter machines with one zero-test and one reset is Presburger-definable and effectively computable. Our proof is based on the introduction of two classes of Presburger-definable relations that are effectively stable under transitive closure. This approach generalizes and simplifies the existing proofs and solves an open problem posed by Finkel and Sutre in 2000.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9930The Parikh Property for Weighted Context-Free Grammars
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9931
Parikh's Theorem states that every context-free grammar (CFG) is equivalent to some regular CFG when the ordering of symbols in the words is ignored. The same is not true for the so-called weighted CFGs, which additionally assign a weight to each grammar rule. If the result holds for a given weighted CFG G, we say that G satisfies the Parikh property. We prove constructively that the Parikh property holds for every weighted nonexpansive CFG. We also give a decision procedure for the property when the weights are over the rationals.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9931Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network
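In the unweighted setting, Parikh's theorem can be made concrete with a minimal sketch (helper names below are illustrative): the context-free language {a^n b^n} and the regular language (ab)^* contain words with exactly the same Parikh images, i.e. the same symbol counts once ordering is ignored.

```python
from collections import Counter

def parikh(word):
    """Parikh image of a word: its symbol counts, ignoring order."""
    return Counter(word)

# The context-free language {a^n b^n : n >= 0} ...
context_free = ["a" * n + "b" * n for n in range(6)]
# ... and the regular language (ab)^n have identical Parikh images.
regular = ["ab" * n for n in range(6)]

assert [parikh(w) for w in context_free] == [parikh(w) for w in regular]
```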
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9932
We consider the Shallow-Light Steiner Network problem from a fixed-parameter perspective. Given a graph G, a distance bound L, and p pairs of vertices (s_1,t_1),...,(s_p,t_p), the objective is to find a minimum-cost subgraph G' such that s_i and t_i have distance at most L in G' (for every i in [p]). Our main result is on the fixed-parameter tractability of this problem for parameter p. We exactly characterize the demand structures that make the problem "easy", and give FPT algorithms for those cases. In all other cases, we show that the problem is W[1]-hard. We also extend our results to handle general edge lengths and costs, precisely characterizing which demands allow for good FPT approximation algorithms and which demands remain W[1]-hard even to approximate.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9932On the Parameterized Complexity of [1,j]-Domination Problems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9933
For a graph G, a set D subseteq V(G) is called a [1,j]-dominating set if every vertex in V(G) setminus D has at least one and at most j neighbors in D. A set D subseteq V(G) is called a [1,j]-total dominating set if every vertex in V(G) has at least one and at most j neighbors in D. In the [1,j]-(Total) Dominating Set problem we are given a graph G and a positive integer k. The objective is to test whether there exists a [1,j]-(total) dominating set of size at most k. The [1,j]-Dominating Set problem is known to be NP-complete, even for restricted classes of graphs such as chordal and planar graphs, but polynomial-time solvable on split graphs. The [1,2]-Total Dominating Set problem is known to be NP-complete, even for bipartite graphs. As both problems generalize the Dominating Set problem, both are W[1]-hard when parameterized by solution size. In this work, we study [1,j]-Dominating Set on sparse graph classes from the perspective of parameterized complexity and prove the following results when the problem is parameterized by solution size: (i) [1,j]-Dominating Set is W[1]-hard on d-degenerate graphs for d = j + 1; (ii) [1,j]-Dominating Set is FPT on nowhere dense graphs. We also prove that the known algorithm for [1,j]-Dominating Set on split graphs is optimal under the Strong Exponential Time Hypothesis (SETH). Finally, assuming SETH, we provide a lower bound for the running time of any algorithm solving the [1,2]-Total Dominating Set problem parameterized by pathwidth.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9933Sub-Exponential Time Parameterized Algorithms for Graph Layout Problems on Digraphs with Bounded Independence Number
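The [1,j]-domination condition is easy to check directly on small graphs; a minimal sketch of the definition (the graph and the candidate set below are illustrative, not from the paper):

```python
def is_1j_dominating(G, D, j):
    """True iff every vertex outside D has between 1 and j neighbours in D.

    G is an adjacency dict {vertex: set of neighbours}; D is a set of vertices.
    """
    return all(1 <= len(G[v] & D) <= j for v in G if v not in D)

# A path on five vertices: 0 - 1 - 2 - 3 - 4
P5 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}

assert is_1j_dominating(P5, {1, 3}, 2)      # vertex 2 has two neighbours in D
assert not is_1j_dominating(P5, {1, 3}, 1)  # ... which exceeds j = 1
```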
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9934
Fradkin and Seymour [Journal of Combinatorial Theory, Series B, 2015] defined the class of digraphs of bounded independence number as a generalization of the class of tournaments. They argued that the class of digraphs of bounded independence number is structured enough to be exploited algorithmically. In this paper, we further strengthen this belief by showing that several cut problems that admit sub-exponential time parameterized algorithms (a trait uncommon to parameterized algorithms) on tournaments, including Directed Feedback Arc Set, Directed Cutwidth and Optimal Linear Arrangement, also admit such algorithms on digraphs of bounded independence number. Towards this, we rely on the generic approach of Fomin and Pilipczuk [ESA, 2013], where to get the desired algorithms, it is enough to bound the number of k-cuts in digraphs of bounded independence number by a sub-exponential FPT function (Fomin and Pilipczuk bounded the number of k-cuts in transitive tournaments). Specifically, our main technical contribution is that the yes-instances of the problems above have a sub-exponential number of k-cuts. We prove this bound by using a combination of chromatic coding, an inductive argument and structural properties of the digraphs.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9934Safe and Optimal Scheduling for Hard and Soft Tasks
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9935
We consider a stochastic scheduling problem with both hard and soft tasks on a single machine. Each task is described by a discrete probability distribution over its possible execution times, a distribution over its possible inter-arrival times, and a fixed deadline. Soft tasks also carry a penalty cost to be paid when they miss a deadline. We seek to compute an online, non-clairvoyant scheduler (i.e. one that must take decisions without knowing the future evolution of the system) that is safe and efficient. Safety requires that deadlines of hard tasks are never violated, while efficiency means that we want to minimise the mean cost of deadlines missed by soft tasks. First, we show that the dynamics of such a system can be modelled as a finite Markov Decision Process (MDP). Second, we show that our scheduling problem is PP-hard and in EXPTime. Third, we report on a prototype tool that solves our scheduling problem by relying on the Storm tool to analyse the corresponding MDP. We show how antichain techniques can be used as a potential heuristic.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9935The Delta-Framework
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9936
We introduce the Delta-framework, LF_Delta, a dependent type theory based on the Edinburgh Logical Framework LF, extended with the strong proof-functional connectives, i.e. strong intersection, minimal relevant implication and strong union. Strong proof-functional connectives take into account the shape of logical proofs, thus reflecting polymorphic features of proofs in formulae. This is in contrast to classical or intuitionistic connectives where the meaning of a compound formula depends only on the truth value or the provability of its subformulae. Our framework encompasses a wide range of type disciplines. Moreover, since relevant implication makes it possible to express subtyping, LF_Delta also subsumes Pfenning's refinement types. We discuss the design decisions which have led us to the formulation of LF_Delta, study its metatheory, and provide various examples of applications. Our strong proof-functional type theory can be plugged into existing, commonly used proof assistants.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9936Extending Finite-Memory Determinacy by Boolean Combination of Winning Conditions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9937
We study finite-memory (FM) determinacy in games on finite graphs, a central question for applications in controller synthesis, as FM strategies correspond to implementable controllers. We establish general conditions under which FM strategies suffice to play optimally, even in a broad multi-objective setting. We show that our framework encompasses important classes of games from the literature, and allows us to go further, using a unified approach. While such an approach cannot match ad-hoc proofs with regard to tightness of memory bounds, it has two advantages: first, it gives a widely-applicable criterion for FM determinacy; second, it helps to understand the cornerstones of FM determinacy, which are often hidden but common in proofs for specific (combinations of) winning conditions.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9937Deterministic Algorithms for Maximum Matching on General Graphs in the Semi-Streaming Model
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9938
We present an improved deterministic algorithm for Maximum Cardinality Matching on general graphs in the Semi-Streaming Model. In the Semi-Streaming Model, a graph is presented as a sequence of edges, and an algorithm must access the edges in the given sequence. It can only use O(n polylog n) space to perform computations, where n is the number of vertices of the graph. If the algorithm goes over the stream k times, it is called a k-pass algorithm. In this model, McGregor [McGregor, 2005] gave the currently best known randomized (1+epsilon)-approximation algorithm for maximum cardinality matching on general graphs, that uses (1/epsilon)^{O(1/epsilon)} passes. Ahn and Guha [Ahn and Guha, 2013] later gave the currently best known deterministic (1+epsilon)-approximation algorithms for maximum cardinality matching: one on bipartite graphs that uses O(log log(1/epsilon)/epsilon^2) passes, and the other on general graphs that uses O(log n *poly(1/epsilon)) passes (note that, for general graphs, the number of passes is dependent on the size of the input). We present the first deterministic algorithm that achieves a (1+epsilon)-approximation on general graphs in only a constant number ((1/epsilon)^{O(1/epsilon)}) of passes.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9938Sketching, Streaming, and Fine-Grained Complexity of (Weighted) LCS
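For contrast, the classical one-pass baseline in this model is the folklore greedy maximal matching, which uses O(n) space but guarantees only a 2-approximation; the sketch below illustrates that baseline, not the multi-pass (1+epsilon)-approximation scheme of the paper:

```python
def greedy_one_pass_matching(edge_stream):
    """One-pass greedy matching: keep an edge iff both endpoints are free.

    Produces a maximal matching, hence a 2-approximation to maximum matching,
    using O(n) space (the set of matched vertices plus the matching itself).
    """
    matched, M = set(), []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            M.append((u, v))
            matched.update((u, v))
    return M

# A path 0-1-2-3 seen edge by edge: greedy keeps (0,1), must skip (1,2),
# then keeps (2,3) -- here it happens to find a maximum matching.
assert greedy_one_pass_matching([(0, 1), (1, 2), (2, 3)]) == [(0, 1), (2, 3)]
```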
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9939
We study sketching and streaming algorithms for the Longest Common Subsequence problem (LCS) on strings of small alphabet size |Sigma|. For the problem of deciding whether the LCS of strings x,y has length at least L, we obtain a sketch size and streaming space usage of O(L^{|Sigma| - 1} log L). We also prove matching unconditional lower bounds. As an application, we study a variant of LCS where each alphabet symbol is equipped with a weight that is given as input, and the task is to compute a common subsequence of maximum total weight. Using our sketching algorithm, we obtain an O(min{nm, n + m^{|Sigma|}})-time algorithm for this problem, on strings x,y of length n,m, with n >= m. We prove optimality of this running time up to lower order factors, assuming the Strong Exponential Time Hypothesis.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9939On the Inner Product Predicate and a Generalization of Matching Vector Families
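For reference, the exact alphabet-independent baseline is the textbook quadratic-time dynamic program for LCS, which the small-alphabet sketches trade against space; a minimal sketch (and in the weighted variant one would replace the +1 by the symbol's weight):

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of x and y (O(nm) textbook DP)."""
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, a in enumerate(x):
        for j, b in enumerate(y):
            # extend a match, or carry over the best of dropping one symbol
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a == b
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[len(x)][len(y)]

assert lcs_length("abcabba", "cbabac") == 4   # e.g. "caba"
```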
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9940
Motivated by cryptographic applications such as predicate encryption, we consider the problem of representing an arbitrary predicate as the inner product predicate on two vectors. Concretely, fix a Boolean function P and some modulus q. We are interested in encoding x to x_vector and y to y_vector so that P(x,y) = 1 <=> <x_vector,y_vector> = 0 mod q, where the vectors should be as short as possible. This problem can also be viewed as a generalization of matching vector families, which corresponds to the equality predicate. Matching vector families have been used in the constructions of Ramsey graphs, private information retrieval (PIR) protocols, and more recently, secret sharing. Our main result is a simple lower bound that allows us to show that known encodings for many predicates considered in the cryptographic literature such as greater than and threshold are essentially optimal for prime modulus q. Using this approach, we also prove lower bounds on encodings for composite q, and then show tight upper bounds for such predicates as greater than, index and disjointness.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9940Extending Propositional Separation Logic for Robustness Properties
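For the equality predicate, a classical two-dimensional encoding suffices and illustrates the framework; the modulus and attribute ranges below are arbitrary toy choices:

```python
q = 101  # a prime modulus larger than any attribute value in this toy example

def encode_x(x):
    return (x % q, 1)

def encode_y(y):
    return (1, (-y) % q)

def inner(u, v):
    """Inner product mod q."""
    return sum(a * b for a, b in zip(u, v)) % q

# P(x, y) = [x == y]  <=>  <encode_x(x), encode_y(y)> = x - y = 0 (mod q),
# which holds exactly when x == y as long as 0 <= x, y < q.
assert all((inner(encode_x(x), encode_y(y)) == 0) == (x == y)
           for x in range(20) for y in range(20))
```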
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9941
We study an extension of propositional separation logic that can specify robustness properties, such as acyclicity and garbage freedom, for automatic verification of stateful programs with singly-linked lists. We show that its satisfiability problem is PSpace-complete, whereas modest extensions of the logic are shown to be Tower-hard. Since separating implication, reachability predicates (under some syntactic restrictions), and a single quantified variable are allowed, this logic subsumes several PSpace-complete separation logics considered in previous works.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9942Bundled Fragments of First-Order Modal Logic: (Un)Decidability
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9942
Quantified modal logic is notorious for being undecidable, with very few known decidable fragments such as the monodic ones. For instance, even the two-variable fragment over unary predicates is undecidable. In this paper, we study a particular fragment, namely the bundled fragment, where a first-order quantifier is always followed by a modality when occurring in the formula, inspired by the proposal of [Yanjing Wang, 2017] in the context of non-standard epistemic logics of know-what, know-how, know-why, and so on. As always with quantified modal logics, it makes a significant difference whether the domain stays the same across possible worlds. In particular, we show that the predicate logic with the bundle "forall Box" alone is undecidable over constant domain interpretations, even with only monadic predicates, whereas having the "exists Box" bundle instead gives us a decidable logic. On the other hand, over increasing domain interpretations, we get decidability with both "forall Box" and "exists Box" bundles with unrestricted predicates, where we obtain tableau-based procedures that run in PSPACE. We further show that the "exists Box" bundle cannot distinguish between constant domain and variable domain interpretations.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9942On the Boundedness Problem for Higher-Order Pushdown Vector Addition Systems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9943
Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9943Stronger Tradeoffs for Orthogonal Range Querying in the Semigroup Model
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9944
In this paper, we focus on lower bounds for data structures supporting orthogonal range querying on m points in n dimensions in the semigroup model. Such a data structure usually maintains a family of "canonical subsets" of the given set of points and, on a range query, outputs a disjoint union of the appropriate subsets. Fredman showed that in order to prove lower bounds in the semigroup model, it suffices to prove a lower bound on a certain combinatorial tradeoff between two parameters: (a) the total sizes of the canonical subsets, and (b) the total number of canonical subsets required to cover all query ranges. In particular, he showed that the arithmetic mean of these two parameters is Omega(m log^n m). We strengthen this tradeoff by showing that the geometric mean of the same two parameters is Omega(m log^n m). Our second result is an alternate proof of Fredman's tradeoff in the one-dimensional setting. The problem of answering range queries using canonical subsets can be formulated as factoring a specific boolean matrix as a product of two boolean matrices, one representing the canonical sets and the other capturing the appropriate disjoint unions of the former to output all possible range queries. In this formulation, we can ask what an optimal data structure is, i.e., one that minimizes the sum of the two parameters mentioned above, and how a balanced binary search tree compares with this optimal data structure in the two parameters. The problem of finding an optimal data structure is a non-linear optimization problem. In one dimension, Fredman's result implies that the minimum value of the objective function is Omega(m log m), which means that at least one of the parameters has to be Omega(m log m). We show that both parameters in an optimal solution have to be Omega(m log m). This implies that balanced binary search trees are near-optimal data structures for range querying in one dimension.
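The one-dimensional setting above can be made concrete with a small, hypothetical sketch (not taken from the paper): a segment tree, a standard balanced-binary-search-tree realization of canonical subsets. Parameter (a) is the total size of all node intervals, which is O(m log m), and parameter (b) reflects the fact that every query range is covered by a disjoint union of O(log m) canonical intervals.

```python
from itertools import combinations  # (not needed here; stdlib only below)

def build_canonical_subsets(m):
    """Canonical subsets of a segment tree over points 0..m-1.

    Each node of the tree contributes its whole interval as one canonical
    subset; the total size of all subsets is O(m log m)."""
    subsets = []

    def rec(lo, hi):  # half-open interval [lo, hi)
        subsets.append(list(range(lo, hi)))
        if hi - lo > 1:
            mid = (lo + hi) // 2
            rec(lo, mid)
            rec(mid, hi)

    rec(0, m)
    return subsets

def cover_query(lo_q, hi_q, lo, hi):
    """Cover the query range [lo_q, hi_q) by disjoint canonical intervals
    of the segment tree over [lo, hi); returns O(log m) intervals."""
    if hi_q <= lo or hi <= lo_q:          # no overlap
        return []
    if lo_q <= lo and hi <= hi_q:         # node fully inside the query
        return [(lo, hi)]
    mid = (lo + hi) // 2
    return (cover_query(lo_q, hi_q, lo, mid)
            + cover_query(lo_q, hi_q, mid, hi))

m = 16
subsets = build_canonical_subsets(m)
total_size = sum(len(s) for s in subsets)            # parameter (a): m * (log2 m + 1) = 80
worst_cover = max(len(cover_query(l, r, 0, m))       # parameter (b) per query: O(log m)
                  for l in range(m) for r in range(l + 1, m + 1))
print(total_size, worst_cover)  # → 80 6
```

The sketch only illustrates the upper-bound side of the tradeoff; the paper's contribution is the matching lower bound showing that no choice of canonical subsets can do substantially better in both parameters at once.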
We derive intermediate results on factoring matrices, not necessarily boolean, while trying to minimize the norms of the factors; these results may be of independent interest.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9944Parameterized Dynamic Cluster Editing
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9945
We introduce a dynamic version of the NP-hard Cluster Editing problem. The essential point here is to take into account dynamically evolving input graphs: Having a cluster graph (that is, a disjoint union of cliques) that represents a solution for a first input graph, can we cost-efficiently transform it into a "similar" cluster graph that is a solution for a second ("subsequent") input graph? This model is motivated by several application scenarios, including incremental clustering, the search for compromise clusterings, and local search in graph-based data clustering. We thoroughly study six problem variants (edge editing, edge deletion, edge insertion; each combined with two distance measures between cluster graphs). We obtain both fixed-parameter tractability and parameterized hardness results, thus (except for two open questions) providing a fairly complete picture of the parameterized computational complexity landscape under the perhaps two most natural parameterizations: the distance of the new "similar" cluster graph to (i) the second input graph and (ii) the input cluster graph.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9945The Complexity of Separation for Levels in Concatenation Hierarchies
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9946
Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9946Reducing Transducer Equivalence to Register Automata Problems Solved by "Hilbert Method"
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9947
In the past decades, classical results from algebra, including Hilbert's Basis Theorem, have had various applications in formal languages, including a proof of the Ehrenfeucht Conjecture, decidability of HDT0L sequence equivalence, and decidability of the equivalence problem for functional tree-to-string transducers. In this paper, we study the scope of the algebraic methods mentioned above, particularly as applied to the functionality problem for register automata and to equivalence for functional register automata. We provide two results, one positive, one negative. The positive result is that functionality and equivalence are decidable for MSO transformations on unordered forests. The negative result comes from an attempt to extend this method to decide functionality and equivalence for macro tree transducers. We reduce macro tree transducer equivalence to an equivalence problem for a class of register automata naturally relevant to our method. We then prove this latter problem to be undecidable.Fri, 23 Nov 2018 13:01:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9947From Evaluating to Forecasting Performance: How to Turn Information Retrieval, Natural Language Processing and Recommender Systems into Predictive Sciences (Dagstuhl Perspectives Workshop 17442)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9898
We describe the state-of-the-art in performance modeling and prediction for Information Retrieval (IR), Natural Language Processing (NLP) and Recommender Systems (RecSys) along with its shortcomings and strengths. We present a framework for further research, identifying five major problem areas: understanding measures, performance analysis, making underlying assumptions explicit, identifying application features determining performance, and the development of prediction models describing the relationship between assumptions, features and resulting performance. Wed, 21 Nov 2018 08:18:59 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9898OASIcs, Volume 64, ICLP'18, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9897
OASIcs, Volume 64, ICLP'18, Complete VolumeTue, 20 Nov 2018 09:54:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9897On-Body Interaction: Embodied Cognition Meets Sensor/Actuator Engineering to Design New Interfaces (Dagstuhl Seminar 18212)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9894
On-body technologies are emerging as a new paradigm in human-computer interaction. Instead of moving a mouse or tapping a touch surface, people can use whole-body movements to navigate in games, gesture in mid-air to interact with large displays, or touch their forearm to control a mobile phone. The first promising applications have been demonstrated or are being investigated in mobile computing, healthcare, and sports. Two areas of research have been contributing to this paradigm. Research on embodied cognition suggests that the body should no longer be treated as a passive actuator of input devices but as something that needs to be carefully designed for and as something that offers unique new possibilities in interaction. Embodied cognition has become a prominent candidate for outlining what we can and cannot do in on-body interaction. Research on interactive technologies for the body is opening up new avenues for human-computer interaction, by contributing body-based sensing input and output modalities with more body-compatible form factors. Together, these areas allow the design and implementation of new user interfaces; however, they are rarely in direct contact with each other. The intended outcome of the seminar was a research agenda for on-body technologies based on synergies between these two views. We therefore brought together a group of researchers from embodied cognition (including psychology, robotics, human-computer interaction, and sociology) as well as sensor/actuator engineering (including computer science, materials science, and electrical engineering). These groups worked together toward outlining a research agenda for on-body technologies, in part using a bottom-up process at the seminar, in part using structured answers to questions in advance of the seminar.
Key topics for discussion included (1) advances in on-body sensors and actuators, in particular how to drive the technical development from work on embodied cognition and the body, (2) cognitive consequences of on-body technologies, (3) how to take the peculiarities and possibilities of the body into consideration, (4) how to evaluate on-body technology, and (5) application areas of on-body technologies. Mon, 19 Nov 2018 09:28:38 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9894Formal Methods and Fault-Tolerant Distributed Comp.: Forging an Alliance (Dagstuhl Seminar 18211)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9893
The Dagstuhl Seminar "Formal Methods and Fault-Tolerant Distributed Computing: Forging an Alliance" took place May 22-25, 2018. Its goal was to strengthen the interaction between researchers from formal methods and from distributed computing, and help the two communities to better identify common research challenges.Mon, 19 Nov 2018 09:27:57 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9893Inter-Vehicular Communication Towards Cooperative Driving (Dagstuhl Seminar 18202)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9892
Looking back at the last decade, one can observe enormous progress in the domain of vehicular networking. In this growing community, many ongoing activities focus on the design of communication protocols to support safety applications, intelligent navigation, multi-player gaming, and others. This seminar shifted the focus from basic networking principles to networked control applications. We were particularly interested in eSafety applications and traffic efficiency applications that are thought to yield substantial benefits for the emerging "cooperative automated driving" domain. The seminar brought together experts from several fields, including classical computer science (computer networking, simulation and modeling, operating system design), electrical engineering (digital signal processing, communication networks), and automated driving (mechanical engineering, image processing, control theory), to discuss the most challenging issues related to inter-vehicular communication and cooperative driving.Mon, 19 Nov 2018 09:27:22 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9892Secure Compilation (Dagstuhl Seminar 18201)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9891
Secure compilation is an emerging field that puts together advances in security, programming languages, verification, systems, and hardware architectures in order to devise secure compilation chains that eliminate many of today's vulnerabilities. Secure compilation aims to protect a source language's abstractions in compiled code, even against low-level attacks. For a concrete example, all modern languages provide a notion of structured control flow, and an invoked procedure is expected to return to the right place. However, today's compilation chains (compilers, linkers, loaders, runtime systems, hardware) cannot efficiently enforce this abstraction: linked low-level code can call and return to arbitrary instructions or smash the stack, blatantly violating the high-level abstraction. The emerging secure compilation community aims to address such problems by devising formal security criteria, efficient enforcement mechanisms, and effective proof techniques. This seminar strived to take a broad and inclusive view of secure compilation and to provide a forum for discussion on the topic. The goal was to identify interesting research directions and open challenges by bringing together people working on building secure compilation chains, on developing proof techniques and verification tools, and on designing security mechanisms.Mon, 19 Nov 2018 09:26:53 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9891Present and Future of Formal Argumentation (Dagstuhl Perspectives Workshop 15362)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9895
Formal Argumentation is emerging as a key reasoning paradigm building bridges among knowledge representation and reasoning in artificial intelligence, informal argumentation in philosophy and linguistics, legal and ethical argumentation, mathematical and logical reasoning, and graph-theoretic reasoning. It aims to capture diverse kinds of reasoning and dialogue activities in the presence of uncertainty and conflicting information in a formal and intuitive way, with potential applications ranging from argumentation mining, via LegalTech and machine ethics, to therapy in clinical psychology. The turning point for the modern stage of formal argumentation theory, much like the introduction of possible worlds semantics for the theory of modality, is the framework and language of Dung's abstract argumentation theory, introduced in 1995. This means that nothing could remain the same as before 1995: it should be a focal point of reference for any study of argumentation, even one that is critical of it. In modal logic, the introduction of the possible worlds semantics led to a complete paradigm shift, both in tools and in new subjects of study. This is not yet fully true of argumentation theory. The Dagstuhl workshop led to the first volume of a handbook series in formal argumentation, reflecting the new stage of the development of argumentation theory.Fri, 16 Nov 2018 09:53:39 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9895