Dagstuhl Publishing
http://www.dagstuhl.de
RSS feed announcing the latest publications published by Schloss Dagstuhl (language: en-en; published 2010-06-08)
http://www.dagstuhl.de/rss.php

LIPIcs, Volume 69, TYPES'15, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8622
LIPIcs, Volume 69, TYPES'15, Complete Volume
Fri, 16 Mar 2018 08:41:51 +0100

Body-Centric Computing (Dagstuhl Reports 17392)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8592
The rise of technology that can support the active human body – in contrast to the previously prevalent paradigm of interacting with computers while sitting still – such as wearables, quantified-self systems and mobile computing highlights an opportunity for a new era of "body-centric computing". However, most work in this area has taken a rather instrumental perspective, focusing on achieving extrinsic performance objectives. Phenomenology, however, highlights that it is also important to support the experiential perspective of living an active life; that is, technology should also help people focus on their lived experiences and personal growth to deepen their understanding of and engagement with their own bodies.

We find that despite the work on embodiment, the use of technology to support the corporeal, pulsating and felt body has been notably absent. We believe the reason for this is limited knowledge about how to understand, analyse and correlate the vast amounts of data from the various sensors worn by individuals and populations in real time, so that we can present it in a way that supports people's felt experience. In order to drive such an agenda that supports both instrumental and experiential perspectives of the active human body, this seminar brings together leading experts, including those who are central to the development of products and ideas relating to wearables, mobile computing, quantified self, data analysis and visualization, exertion games and computer sports science. The goal is to address key questions around the use of sensor data to support both instrumental and experiential perspectives of the active human body and to jump-start collaborations between people from different backgrounds to pioneer new approaches for a body-centric computing future.
Fri, 16 Mar 2018 07:56:18 +0100

LIPIcs, Volume 73, TQC'17, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8620
LIPIcs, Volume 73, TQC'17, Complete Volume
Wed, 14 Mar 2018 09:32:32 +0100

LIPIcs, Volume 89, IPEC'17, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8616
LIPIcs, Volume 89, IPEC'17, Complete Volume
Tue, 13 Mar 2018 13:56:32 +0100

LIPIcs, Volume 96, STACS'18, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8617
LIPIcs, Volume 96, STACS'18, Complete Volume
Tue, 13 Mar 2018 13:53:58 +0100

Deep Learning for Computer Vision (Dagstuhl Seminar 17391)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8591
The field of computer vision pursues the goal of enabling and enhancing a machine's ability to infer knowledge and information from spatial and visual input data. Recent advances in data-driven learning approaches, accelerated by increasing parallel computing power and the ubiquitous availability of large amounts of data, have pushed the boundaries of almost every vision-related subdomain. The most prominent example of these machine learning approaches is the so-called deep neural network (DNN), which works as a general function approximator and can be trained to learn a mapping between given input and target output data. Research on and with these DNNs is generally referred to as deep learning. Despite its high-dimensional and complex input space, research in the field of computer vision was and still is one of the main driving forces for new developments in machine and deep learning, and vice versa. This seminar aims to discuss recent work on theoretical and practical advances in the field of deep learning itself, as well as state-of-the-art results achieved by applying learning-based approaches to various vision problems. Our diverse spectrum of topics includes theoretical and mathematical insights focusing on a better understanding of the fundamental concepts behind deep learning, and a multitude of specific research topics facilitating an exchange of knowledge between peers of the research community.
Wed, 07 Mar 2018 14:52:02 +0100

Approaches and Applications of Inductive Programming (Dagstuhl Seminar 17382)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8590
This report documents the program and the outcomes of Dagstuhl Seminar 17382 "Approaches and Applications of Inductive Programming". After a short introduction to the state of the art in inductive programming research, an overview of the introductory tutorials, the talks, program demonstrations, and the outcomes of the discussion groups is given.
Wed, 07 Mar 2018 14:51:30 +0100

Recent Trends in Knowledge Compilation (Dagstuhl Seminar 17381)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8589
Knowledge compilation (KC) is a research area that investigates the possibility of circumventing the computational intractability of hard tasks by preprocessing the part of the available information that is common to a number of instances. Pioneered almost three decades ago, KC is nowadays a very active research field, transversal to several areas within computer science. Among others, KC intersects knowledge representation, constraint satisfaction, algorithms, complexity theory, machine learning, and databases.

The results obtained so far take various forms, from theory (compilability settings, definitions of target languages for KC, complexity results, succinctness results, etc.) to more practical results (development and evaluation of compilers and other preprocessors; applications to diagnosis, planning, automatic configuration, etc.). Recently, KC has been positioned as providing a systematic method for solving problems beyond NP, and it has also found applications in machine learning.

The goal of this Dagstuhl Seminar was to advance both aspects of KC and to pave the way for a fruitful cross-fertilization between the topics, from theory to practice. The program included a mixture of long and short presentations, with discussions. Several long talks with a tutorial flavor introduced the participants to the variety of aspects of knowledge compilation and the diversity of techniques used. System presentations as well as an open problem session were also included in the program.
Wed, 07 Mar 2018 14:50:47 +0100

Cybersafety in Modern Online Social Networks (Dagstuhl Reports 17372)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8588
This report documents the program and the outcomes of Dagstuhl Seminar 17372 "Cybersafety in Modern Online Social Networks." The main motivation behind the seminar stems from the increased relevance of threats and challenges in the context of cybersafety, especially in modern online social networks, where the range of malicious activities perpetrated by malevolent actors is regrettably wide. These include spreading malware and spam, controlling and operating fake or compromised accounts, artificially manipulating the reputation of accounts and pages, and spreading false information as well as terrorist propaganda.

The reasons for the success of such attacks are manifold. The users of social networking services tend to extend their trust in the services and in the profiles of their acquaintances to unknown users and other third parties: despite the service providers' attempts at keeping their audiences identifiable and accountable, creating a fake profile, even in another person's name, is very simple. Even partially or fully taking over a profile is comparatively easy, as many credentials are easy to acquire, and comes with the benefit of the trust this profile has accrued over time. Further, even seemingly innocuous issues such as the design and presentation of user interfaces can have implications for cybersafety. The failure to understand the interfaces and ramifications of certain online actions can lead to extensive over-sharing. Even the limited information of partial profiles may be sufficient for abuse by inference on specific features alone. This is especially worrisome for new or younger users of a system, who might unknowingly expose information or have unwanted interactions simply because they do not fully understand the platform they are using.
Unfortunately, research in cybersafety has looked at the various sub-problems in isolation, relying almost exclusively on algorithms aimed at detecting malicious accounts that act similarly, or on analyzing specific lingual patterns. This ultimately yields a cat-and-mouse game, mostly played on economic grounds, in which social network operators attempt to make it ever more costly for fraudsters to evade detection; this approach, however, tends to fail to measure and address the impact of safety threats from the point of view of regular individuals.

This prompts the need for a multi-faceted, multi-disciplinary, holistic approach to advancing the state of knowledge on cybersafety in online social networks, and on the ways in which it can be researched and protected. Ultimately, we want to work towards the development of a cutting-edge research agenda and technical roadmap that will allow the community to develop and embed tools to detect malice within the systems themselves, and to design effective ways to enhance users' safety online. This seminar was intended to bring together researchers from synergistic research communities, including experts working on information and system security on the one hand, and those with expertise in human, economic, and sociological factors of security on the other. More specifically, in the field of cybersafety there exist a number of interconnected, complex issues that cannot be addressed in isolation, but have to be tackled and countered together. Moreover, it is necessary for these challenges to be studied under a multi-disciplinary light. Consequently, we identified and focused on the most relevant issues in cybersafety, and explored both current and emerging solutions.
Specifically, we discussed four problems that are the most pressing both in terms of negative impact and potential danger to individuals and society, and that are challenging open research problems requiring a multi-disciplinary approach: Cyberbullying & Hate Speech, CyberFraud & Scams, Reputation Manipulation & Fake Activities, and Propaganda. Overall, the seminar was organized to include a number of long talks from senior experts in the field, covering the four main topics above, followed by a series of short talks from the participants about work in progress and recent results, and finally working groups to foster collaborations, brainstorming, and the setting of a research agenda going forward.
Wed, 07 Mar 2018 14:50:22 +0100

Deduction Beyond First-Order Logic (Dagstuhl Seminar 17371)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8587
This report documents the program and the outcomes of Dagstuhl Seminar 17371 "Deduction Beyond First-Order Logic." Much research in the past two decades was dedicated to automating first-order logic with equality. However, applications often need reasoning beyond this logic. This includes genuinely higher-order reasoning, reasoning in theories that are not finitely axiomatisable in first-order logic (such as those including transitive closure operators or standard arithmetic on integers or reals), or reasoning by mathematical induction. Other practical problems need a mixture of first-order proof search and some more advanced reasoning (for instance, about higher-order formulas), or simply higher-level reasoning steps. The aim of the seminar was to bring together first-order automated reasoning experts and researchers working on deduction methods and tools that go beyond first-order logic. The seminar was dedicated to the exchange of ideas to facilitate the transition from first-order to more expressive settings.
Wed, 07 Mar 2018 14:49:27 +0100

Finite and Algorithmic Model Theory (Dagstuhl Seminar 17361)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8586
This report documents the program and the outcomes of Dagstuhl Seminar 17361 "Finite and Algorithmic Model Theory".
Wed, 07 Mar 2018 14:48:29 +0100

Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8593
Front Matter, Table of Contents, Preface, Conference Organization
Wed, 28 Feb 2018 15:29:17 +0100

Evaluation and Enumeration Problems for Regular Path Queries
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8594
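The abstract below contrasts arbitrary-path with simple-path semantics for RPQs. As a rough illustration (my own sketch, not the paper's algorithm), simple-path evaluation can be done by backtracking over the product of the graph and a DFA for the regular expression; the tiny graph and DFA here are made up for the example, and the exponential worst case of this backtracking is exactly why simple-path semantics is the hard case.

```python
# Hypothetical example: enumerate simple paths from src to dst whose edge
# labels match the regular expression a*b, given as a DFA with start state
# q0 and accepting state q1.
edges = {                                    # labeled directed graph
    0: [(1, 'a')],
    1: [(2, 'a'), (3, 'b')],
    2: [(3, 'b'), (1, 'a')],
    3: [],
}
dfa = {('q0', 'a'): 'q0', ('q0', 'b'): 'q1'}  # transitions of the DFA for a*b

def simple_paths(src, dst, state='q0', path=None):
    """Backtracking over the product of graph and DFA; 'simple' means
    no node may be visited twice on a path."""
    path = path or [src]
    if src == dst and state == 'q1':
        yield list(path)
    for nxt, lbl in edges[src]:
        nstate = dfa.get((state, lbl))
        if nstate is not None and nxt not in path:
            path.append(nxt)
            yield from simple_paths(nxt, dst, nstate, path)
            path.pop()

print(list(simple_paths(0, 3)))  # [[0, 1, 2, 3], [0, 1, 3]]
```

Note that under arbitrary-path semantics the same question (is there *some* matching path?) needs only a reachability check on the product graph, which is why that case admits polynomial-delay enumeration.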
Regular path queries (RPQs) are a central component of graph databases. We investigate decision and enumeration problems concerning the evaluation of RPQs under several semantics that have recently been considered: arbitrary paths, shortest paths, and simple paths. Whereas arbitrary and shortest paths can be enumerated with polynomial delay, the situation is much more intricate for simple paths. For instance, already the question of whether a given graph contains a simple path of a certain length has cases with highly non-trivial solutions and cases that are long-standing open problems. We study RPQ evaluation for simple paths from a parameterized complexity perspective and define a class of simple transitive expressions that is prominent in practice and for which we can prove a dichotomy for the evaluation problem. We observe that, even though simple-path semantics is intractable for RPQs in general, it is feasible for the vast majority of RPQs that are used in practice. At the heart of our study of simple paths is a result of independent interest: the two disjoint paths problem in directed graphs is W[1]-hard when parameterized by the length of one of the two paths.
Wed, 28 Feb 2018 15:29:17 +0100

Rewriting Guarded Existential Rules into Small Datalog Programs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8595
The goal of this paper is to understand the relative expressiveness of the query language in which queries are specified by a set of guarded (disjunctive) tuple-generating dependencies (TGDs) and an output (or 'answer') predicate. Our main result shows that every such query can be translated into a polynomially sized (disjunctive) Datalog program if the maximal number of variables in the (disjunctive) TGDs is bounded by a constant. To overcome the challenge that Datalog has no direct means to express the existential quantification present in TGDs, we define a two-player game that characterizes the satisfaction of the dependencies, and design a Datalog query that can decide the existence of a winning strategy for the game. For guarded disjunctive TGDs, we can obtain Datalog rules with disjunction in the heads. However, the use of disjunction is limited, and the resulting rules fall into a fragment that can be evaluated in deterministic single exponential time. We proceed quite differently for the case when the TGDs are not disjunctive, and we show that we can then obtain a plain Datalog query. Notably, unlike previous translations for related fragments, our translation requires only polynomial time if the maximal number of variables in the (disjunctive) TGDs is bounded by a constant.
Wed, 28 Feb 2018 15:29:17 +0100

Satisfiability for SCULPT-Schemas for CSV-Like Data
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8596
SCULPT is a simple schema language inspired by the recent effort towards a World Wide Web Consortium (W3C) recommendation for tabular data and metadata on the Web. At its core, a SCULPT schema consists of a set of rules whose left-hand sides select sets of regions in the tabular data and whose right-hand sides describe the contents of these regions. A document (divided into cells by row and column delimiters) then satisfies a schema if it satisfies every rule. In this paper, we study the satisfiability problem for SCULPT schemas. As SCULPT describes grid-like structures, satisfiability becomes undecidable rather quickly even for very restricted schemas. We define a schema language called L-SCULPT (Lego SCULPT) that restricts the navigational power of SCULPT by selecting rectangular-shaped areas and only considering tables for which selected regions do not intersect. Depending on the axes used by L-SCULPT, we show that satisfiability is PTIME-complete or undecidable. One of the tractable fragments is practically useful as it extends the structural core of the current W3C proposal for schemas over tabular data. We thereby show how the navigational power of the W3C proposal can be extended while still retaining tractable satisfiability tests.
Wed, 28 Feb 2018 15:29:17 +0100

Querying the Unary Negation Fragment with Regular Path Expressions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8597
The unary negation fragment of first-order logic (UNFO) has recently been proposed as a generalization of modal logic that shares many of its good computational and model-theoretic properties. It is attractive from the perspective of database theory because it can express conjunctive queries (CQs) and ontologies formulated in many description logics (DLs). Both are relevant for ontology-mediated querying and, in fact, CQ evaluation under UNFO ontologies (and thus also under DL ontologies) can be 'expressed' in UNFO as a satisfiability problem. In this paper, we consider the natural extension of UNFO with regular expressions on binary relations. The resulting logic UNFOreg can express (unions of) conjunctive two-way regular path queries (C2RPQs) and ontologies formulated in DLs that include transitive roles and regular expressions on roles. Our main results are that evaluating C2RPQs under UNFOreg ontologies is decidable, 2ExpTime-complete in combined complexity, and coNP-complete in data complexity, and that satisfiability in UNFOreg is 2ExpTime-complete, thus not harder than in UNFO.
Wed, 28 Feb 2018 15:29:17 +0100

Enumeration Complexity of Conjunctive Queries with Functional Dependencies
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8598
We study the complexity of enumerating the answers of conjunctive queries (CQs) in the presence of functional dependencies (FDs). Our focus is on the ability to list output tuples with a constant delay in between, following a linear-time preprocessing. A known dichotomy classifies the acyclic self-join-free CQs into those that admit such enumeration and those that do not. However, this classification no longer holds in the common case where the database exhibits dependencies among attributes. That is, some queries that are classified as hard are in fact tractable if dependencies are accounted for. We establish a generalization of the dichotomy to accommodate FDs; hence, our classification determines which combinations of a CQ and a set of FDs admit constant-delay enumeration with linear-time preprocessing.

In addition, we generalize a hardness result for cyclic CQs to accommodate a common type of FDs. Further conclusions of our development include a dichotomy for enumeration with linear delay, and a dichotomy for CQs with disequalities. Finally, we show that all our results apply to the known class of "cardinality dependencies" that generalize FDs (e.g., by stating an upper bound on the number of genres per movie, or friends per person).
Wed, 28 Feb 2018 15:29:17 +0100

Answering UCQs under Updates and in the Presence of Integrity Constraints
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8599
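The abstract below concerns answering a fixed query under tuple insertions and deletions with constant update time. As a simplified, hedged sketch (my illustration, not the paper's construction), the flavor of such a data structure can be seen on the very simple Boolean query q() :- R(x), S(x): per-value counters give O(1) updates and O(1) testing.

```python
from collections import defaultdict

class DynIntersect:
    """Answers the Boolean query q() :- R(x), S(x) under updates.
    Each update and each test takes O(1) time (assuming deletes only
    remove previously inserted tuples)."""
    def __init__(self):
        self.r = defaultdict(int)   # multiplicity of each value in R
        self.s = defaultdict(int)   # multiplicity of each value in S
        self.witnesses = 0          # number of x with R(x) and S(x)

    def _delta(self, mine, other, x, d):
        before = mine[x] > 0
        mine[x] += d
        after = mine[x] > 0
        if other[x] > 0:            # x only matters if present on the other side
            if after and not before: self.witnesses += 1
            if before and not after: self.witnesses -= 1

    def insert_r(self, x): self._delta(self.r, self.s, x, +1)
    def delete_r(self, x): self._delta(self.r, self.s, x, -1)
    def insert_s(self, x): self._delta(self.s, self.r, x, +1)
    def delete_s(self, x): self._delta(self.s, self.r, x, -1)

    def answer(self):
        return self.witnesses > 0

d = DynIntersect()
d.insert_r(1); d.insert_s(2)
print(d.answer())   # False: no shared value yet
d.insert_s(1)
print(d.answer())   # True
d.delete_r(1)
print(d.answer())   # False again
```

The paper's results classify exactly which UCQs admit such constant-time maintenance; this toy query is one of the easy (q-hierarchical) cases.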
We investigate the query evaluation problem for fixed queries over fully dynamic databases where tuples can be inserted or deleted. The task is to design a dynamic data structure that can immediately report the new result of a fixed query after every database update. We consider unions of conjunctive queries (UCQs) and focus on the query evaluation tasks testing (decide whether an input tuple belongs to the query result), enumeration (enumerate, without repetition, all tuples in the query result), and counting (output the number of tuples in the query result). We identify three increasingly restrictive classes of UCQs which we call t-hierarchical, q-hierarchical, and exhaustively q-hierarchical UCQs. Our main results provide the following dichotomies: if the query's homomorphic core is t-hierarchical (q-hierarchical, exhaustively q-hierarchical), then the testing (enumeration, counting) problem can be solved with constant update time and constant testing time (delay, counting time). Otherwise, it cannot be solved with sublinear update time and sublinear testing time (delay, counting time), unless the OV-conjecture and/or the OMv-conjecture fails. We also study the complexity of query evaluation in the dynamic setting in the presence of integrity constraints, and we obtain similar dichotomy results for the special case of small-domain constraints (i.e., constraints which state that all values in a particular column of a relation belong to a fixed domain of constant size).
Wed, 28 Feb 2018 15:29:17 +0100

On the Expressive Power of Query Languages for Matrices
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8600
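The abstract below notes that adding matrix inversion to MATLANG makes the transitive closure of a directed graph expressible. One standard construction behind such results (shown here as my own hedged sketch, not necessarily the paper's proof) is that for an n-node adjacency matrix A, the series (I - A/(n+1))^(-1) = Σ_k (A/(n+1))^k converges, and its entry (i, j) is positive exactly when j is reachable from i (including the trivial length-0 path):

```python
# Sketch: reflexive-transitive closure of a directed graph via one matrix
# inversion, computed here in pure Python with Gauss-Jordan elimination.
def invert(m):
    """Invert a square matrix of floats by Gauss-Jordan elimination."""
    n = len(m)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def transitive_closure(adj):
    """closure[i][j] is True iff j is reachable from i (length >= 0)."""
    n = len(adj)
    m = [[(1.0 if i == j else 0.0) - adj[i][j] / (n + 1)
          for j in range(n)] for i in range(n)]
    b = invert(m)                      # b = sum of powers of A/(n+1)
    return [[b[i][j] > 1e-9 for j in range(n)] for i in range(n)]

# Path 0 -> 1 -> 2, node 3 isolated:
adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(transitive_closure(adj)[0][2])   # True: 2 is reachable from 0
print(transitive_closure(adj)[2][0])   # False
```

The scaling by n+1 keeps the spectral radius below 1 so the geometric series (and hence the inverse) exists; all terms are nonnegative, so positivity of an entry witnesses reachability.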
We investigate the expressive power of MATLANG, a formal language for matrix manipulation based on common matrix operations and linear algebra. The language can be extended with the operation inv for inverting a matrix. In MATLANG + inv we can compute the transitive closure of directed graphs, whereas we show that this is not possible without inversion. Indeed, we show that the basic language can be simulated in the relational algebra with arithmetic operations, grouping, and summation. We also consider an operation eigen for diagonalizing a matrix, defined so that different eigenvectors returned for the same eigenvalue are orthogonal. We show that inv can be expressed in MATLANG + eigen. We put forward the open question whether there are Boolean queries about matrices, or generic queries about graphs, expressible in MATLANG + eigen but not in MATLANG + inv. The evaluation problem for MATLANG + eigen is shown to be complete for the complexity class ∃R (the existential theory of the reals).
Wed, 28 Feb 2018 15:29:17 +0100

Preserving Constraints with the Stable Chase
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8601
Conjunctive query answering over databases with constraints – also known as (tuple-generating) dependencies – is considered a central database task. To this end, several versions of a construction called the chase have been described. Given a set Sigma of dependencies, it is interesting to ask which constraints not contained in Sigma that are initially satisfied in a given database instance are preserved when computing a chase over Sigma. Such constraints are an example of the more general class of incidental constraints, which, when added to Sigma as new dependencies, do not affect certain answers and might even speed up query answering. After formally introducing incidental constraints, we show that deciding incidentality is undecidable for tuple-generating dependencies, even in cases for which query entailment is decidable. For dependency sets with a finite universal model, the core chase can be used to decide incidentality. For the infinite case, we propose the stable chase, which generalises the core chase, and study its relation to incidental constraints.
Wed, 28 Feb 2018 15:29:17 +0100

A More General Theory of Static Approximations for Conjunctive Queries
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8602
Conjunctive query (CQ) evaluation is NP-complete, but becomes tractable for fragments of bounded hypertreewidth. If a CQ is hard to evaluate, it is thus useful to evaluate an approximation of it in such fragments. While underapproximations (i.e., those that return correct answers only) are well understood, the dual notion of overapproximations that return complete (but not necessarily sound) answers, and also a more general notion of approximation based on the symmetric difference of query results, are almost unexplored. In fact, the decidability of the basic problems of evaluation, identification, and existence of those approximations is open. We develop a connection with existential pebble game tools that allows the systematic study of such problems. In particular, we show that the evaluation and identification of overapproximations can be solved in polynomial time. We also make progress on the problem of existence of overapproximations, showing it to be decidable in 2EXPTIME over the class of acyclic CQs. Furthermore, we look at when overapproximations do not exist, suggesting that this can be alleviated by using a more liberal notion of overapproximation. We also show how to extend our tools to study symmetric-difference approximations. We observe that such approximations properly extend under- and over-approximations, settle the complexity of the associated identification problem, and provide several results on existence and evaluation.
Wed, 28 Feb 2018 15:29:17 +0100

Distribution Policies for Datalog
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8603
Modern data management systems extensively use parallelism to speed up query processing over massive volumes of data. This trend has inspired a rich line of research on how to formally reason about the parallel complexity of join computation. In this paper, we go beyond joins and study the parallel evaluation of recursive queries. We introduce a novel framework to reason about multi-round evaluation of Datalog programs, which combines implicit predicate restriction with distribution policies to allow expressing a combination of data-parallel and query-parallel evaluation strategies. Using our framework, we reason about key properties of distributed Datalog evaluation, including parallel-correctness of the evaluation strategy, disjointness of the computation effort, and bounds on the number of communication rounds.
Wed, 28 Feb 2018 15:29:17 +0100

Parallel-Correctness and Transferability for Conjunctive Queries under Bag Semantics
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8604
Single-round multiway join algorithms first reshuffle data over many servers and then evaluate the query at hand in a parallel and communication-free way. A key question is whether a given distribution policy for the reshuffle is adequate for computing a given query. This property is referred to as parallel-correctness. Another key problem is to detect whether the data reshuffle step can be avoided when evaluating subsequent queries. The latter problem is referred to as transfer of parallel-correctness. This paper extends the study of parallel-correctness and transfer of parallel-correctness of conjunctive queries to incorporate bag semantics. We provide semantical characterizations for both problems, obtain complexity bounds, and discuss the relationship with their set semantics counterparts. Finally, we revisit both problems under a modified distribution model that takes advantage of a linear order on compute nodes, and obtain tight complexity bounds.
Wed, 28 Feb 2018 15:29:17 +0100

Massively Parallel Entity Matching with Linear Classification in Low Dimensional Space
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8605
In entity matching classification, we are given two sets R and S of objects where, for each pair (r, s) in R x S, it is known whether r and s form a match. If R and S are subsets of domains D(R) and D(S) respectively, the goal is to discover a classifier function f: D(R) x D(S) -> {0, 1} from a certain class satisfying the property that, for every (r, s) in R x S, f(r, s) = 1 if and only if r and s are a match. Past research typically runs a learning algorithm directly on all the labeled (i.e., match or not) pairs in R x S. This, however, suffers from the drawback that even reading through the input incurs a quadratic cost. We pursue a direction towards removing the quadratic barrier. Denote by T the set of matching pairs in R x S. We propose to accept R, S, and T as the input, and aim to solve the problem with cost proportional to |R|+|S|+|T|, thereby achieving a large performance gain in the (typical) scenario where |T| << |R||S|. This paper provides evidence of the feasibility of the new direction, by showing how to accomplish the aforementioned purpose for entity matching with linear classification, where a classifier is a linear multi-dimensional plane separating the matching and non-matching pairs. We actually do so in the MPC model, echoing the trend of deploying massively parallel computing systems for large-scale learning. As a side product, we obtain new MPC algorithms for three geometric problems: linear programming, batched range counting, and dominance join.
Wed, 28 Feb 2018 15:29:17 +0100

Enumeration on Trees under Relabelings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8606
We study how to evaluate MSO queries with free variables on trees, within the framework of enumeration algorithms. Previous work has shown how to enumerate answers with linear-time preprocessing and delay linear in the size of each output, i.e., constant delay for free first-order variables. We extend this result to support relabelings, a restricted kind of update operation on trees which allows us to change the node labels. Our main result shows that we can enumerate the answers of MSO queries on trees with linear-time preprocessing and delay linear in each answer, while supporting node relabelings in logarithmic time. To prove this, we reuse the circuit-based enumeration structure from our earlier work, and develop techniques to maintain its index under node relabelings. We also show how enumeration under relabelings can be applied to evaluate practical query languages, such as aggregate, group-by, and parameterized queries.
Wed, 28 Feb 2018 15:29:17 +0100

Expressivity and Complexity of MongoDB Queries
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8607
In this paper, we consider MongoDB, a widely adopted but not formally understood database system managing JSON documents and equipped with a powerful query mechanism, called the aggregation framework. We provide a clean formal abstraction of this query language, which we call MQuery. We study the expressivity of MQuery, showing the equivalence of its well-typed fragment with nested relational algebra. We further investigate the computational complexity of significant fragments of it, obtaining several (tight) bounds in combined complexity, which range from LogSpace to alternating exponential time with a polynomial number of alternations.
Wed, 28 Feb 2018 15:29:17 +0100

Connecting Width and Structure in Knowledge Compilation
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8608
Several query evaluation tasks can be done via knowledge compilation: the query result is compiled as a lineage circuit from which the answer can be determined. For such tasks, it is important to leverage some width parameters of the circuit, such as bounded treewidth or pathwidth, to convert the circuit to structured classes, e.g., deterministic structured NNFs (d-SDNNFs) or OBDDs. In this work, we show how to connect the width of circuits to the size of their structured representation, through upper and lower bounds. For the upper bound, we show how bounded-treewidth circuits can be converted to a d-SDNNF, in time linear in the circuit size. Our bound, unlike existing results, is constructive and only singly exponential in the treewidth. We show a related lower bound on monotone DNF or CNF formulas, assuming a constant bound on the arity (size of clauses) and degree (number of occurrences of each variable). Specifically, any d-SDNNF (resp., SDNNF) for such a DNF (resp., CNF) must be of exponential size in its treewidth; and the same holds for pathwidth when compiling to OBDDs. Our lower bounds, in contrast with most previous work, apply to any formula of this class, not just a well-chosen family. Hence, for our language of DNF and CNF, pathwidth and treewidth respectively characterize the efficiency of compiling to OBDDs and (d-)SDNNFs, that is, compilation is singly exponential in the width parameter. We conclude by applying our lower bound results to the task of query evaluation.Wed, 28 Feb 2018 15:29:17 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8608Fast Sketch-based Recovery of Correlation Outliers
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8609
Many data sources can be interpreted as time-series, and a key problem is to identify which pairs out of a large collection of signals are highly correlated. We expect that there will be few, large, interesting correlations, while most signal pairs do not have any strong correlation. We abstract this as the problem of identifying the highly correlated pairs in a collection of n mostly pairwise uncorrelated random variables, where observations of the variables arrive as a stream. Dimensionality reduction can remove dependence on the number of observations, but further techniques are required to tame the quadratic (in n) cost of a search through all possible pairs. We develop a new algorithm for rapidly finding large correlations based on sketch techniques with an added twist: we quickly generate sketches of random combinations of signals, and use these in concert with ideas from coding theory to decode the identity of correlated pairs. We prove correctness and compare performance and effectiveness with the best LSH (locality sensitive hashing) based approach.Wed, 28 Feb 2018 15:29:17 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8609Covers of Query Results
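The sketching idea underlying the abstract above can be shown in miniature (all parameters hypothetical; the paper's actual algorithm layers random signal combinations and coding-theoretic decoding on top of this): a shared random +/-1 projection compresses each signal so that inner products, and hence correlations, can be estimated from the short sketches alone.

```python
import random

random.seed(0)
d, n_obs = 400, 64          # sketch width and observations per signal (invented)

# Shared random +/-1 projection; inner products survive sketching in expectation.
proj = [[random.choice((-1.0, 1.0)) for _ in range(n_obs)] for _ in range(d)]

def sketch(signal):
    """Compress a length-n_obs signal to d coordinates via a scaled projection."""
    return [sum(p * x for p, x in zip(row, signal)) / d ** 0.5 for row in proj]

def est_inner(sa, sb):
    """Estimate the inner product of two signals from their sketches alone."""
    return sum(a * b for a, b in zip(sa, sb))

# A perfectly correlated pair versus an independent signal:
a = [random.gauss(0, 1) for _ in range(n_obs)]
b = list(a)                                     # correlated copy of a
c = [random.gauss(0, 1) for _ in range(n_obs)]  # independent signal
sa, sb, sc = sketch(a), sketch(b), sketch(c)
```

The estimator is unbiased, with error shrinking like 1/sqrt(d), so the correlated pair stands out against the uncorrelated one.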
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8610
We introduce succinct lossless representations of query results called covers. They are subsets of the query results that correspond to minimal edge covers in the hypergraphs of these results. We first study covers whose structures are given by fractional hypertree decompositions of join queries. For any decomposition of a query, we give asymptotically tight size bounds for the covers of the query result over that decomposition and show that such covers can be computed in worst-case optimal time up to a logarithmic factor in the database size. For acyclic join queries, we can compute covers compositionally using query plans with a new operator called cover-join. The tuples in the query result can be enumerated from any of its covers with linearithmic pre-computation time and constant delay. We then generalize covers from joins to functional aggregate queries that express a host of computational problems such as aggregate-join queries, in-database optimization, matrix chain multiplication, and inference in probabilistic graphical models.Wed, 28 Feb 2018 15:29:17 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8610An Update on Dynamic Complexity Theory
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8611
In many modern data management scenarios, data is subject to frequent changes. In order to avoid costly re-computation of query answers from scratch after each small update, one can try to use auxiliary relations that have been computed before. Of course, the auxiliary relations need to be updated dynamically whenever the data changes. Dynamic complexity theory studies which queries and auxiliary relations can be updated in a highly parallel fashion, that is, by constant-depth circuits or, equivalently, by first-order formulas or the relational algebra. After gently introducing dynamic complexity theory, I will discuss recent results of the area with a focus on the dynamic complexity of the reachability query.Wed, 28 Feb 2018 15:29:17 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8611Join Algorithms: From External Memory to the BSP
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8612
Database systems have traditionally been disk-based, which motivated the extensive study of external memory (EM) algorithms. However, as RAMs continue to get larger and cheaper, modern distributed data systems are increasingly adopting a main-memory-based, shared-nothing architecture, exemplified by systems like Spark and Flink. These systems can be abstracted by the BSP model (with variants like the MPC model and the MapReduce model), and there has been a strong revived interest in designing BSP algorithms for handling large amounts of data. With hard disks starting to fade away from the picture, EM algorithms may now seem less relevant. However, we observe that many of the recently developed join algorithms under the BSP model have a high degree of resemblance with their counterparts in the EM model. In this talk, I will present some recent results on join algorithms in the EM and BSP models, examine their relationships, and discuss a general theoretical framework for converting EM algorithms to the BSP.Wed, 28 Feb 2018 15:29:17 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8612Fine-grained Algorithms and Complexity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8613
A central goal of algorithmic research is to determine how fast computational problems can be solved in the worst case. Theorems from complexity theory state that there are problems that, on inputs of size n, can be solved in t(n) time but not in O(t(n)^{1-epsilon}) time for epsilon>0. The main challenge is to determine where in this hierarchy various natural and important problems lie. Throughout the years, many ingenious algorithmic techniques have been developed and applied to obtain blazingly fast algorithms for many problems. Nevertheless, for many other central problems, the best known running times are essentially those of their classical algorithms from the 1950s and 1960s. Unconditional lower bounds seem very difficult to obtain, and so practically all known time lower bounds are conditional. For years, the main tool for proving hardness of computational problems has been the NP-hardness reduction, basing hardness on P neq NP. However, when we care about the exact running time (as opposed to merely polynomial vs non-polynomial), NP-hardness is not applicable, especially if the problem is already solvable in polynomial time. In recent years, a new theory has been developed, based on "fine-grained reductions" that focus on exact running times. Mimicking NP-hardness, the approach is to (1) select a key problem X that is conjectured to require essentially t(n) time for some t, and (2) reduce X in a fine-grained way to many important problems. This approach has led to the discovery of many meaningful relationships between problems, and even sometimes to equivalence classes. The main key problems used to base hardness on have been: the 3SUM problem, the CNF-SAT problem (based on the Strong Exponential Time Hypothesis (SETH)) and the All Pairs Shortest Paths Problem. 
Research on SETH-based lower bounds has flourished in particular in recent years, showing that the classical algorithms are optimal for problems such as Approximate Diameter, Edit Distance, Frechet Distance, Longest Common Subsequence, etc. In this talk I will give an overview of the current progress in this area of study, and will highlight some exciting new developments and their relationship to database theory.Wed, 28 Feb 2018 15:29:17 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8613Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8574
Front Matter, Table of Contents, Preface, Conference OrganizationMon, 26 Feb 2018 07:53:33 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8574Approximate Reversal of Quantum Gaussian Dynamics
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8575
Recently, there has been focus on determining the conditions under which the data processing inequality for quantum relative entropy is satisfied with approximate equality. The solution of the exact equality case is due to Petz, who showed that the quantum relative entropy between two quantum states stays the same after the action of a quantum channel if and only if there is a reversal channel that recovers the original states after the channel acts. Furthermore, this reversal channel can be constructed explicitly and is now called the Petz recovery map. Recent developments have shown that a variation of the Petz recovery map works well for recovery in the case of approximate equality of the data processing inequality. Our main contribution here is a proof that bosonic Gaussian states and channels possess a particular closure property, namely, that the Petz recovery map associated to a bosonic Gaussian state \sigma and a bosonic Gaussian channel N is itself a bosonic Gaussian channel. We furthermore give an explicit construction of the Petz recovery map in this case, in terms of the mean vector and covariance matrix of the state \sigma and the Gaussian specification of the channel N.Mon, 26 Feb 2018 07:53:33 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8575Quantum Coin Hedging, and a Counter Measure
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8576
A quantum board game is a multi-round protocol between a single quantum player and the quantum board. Molina and Watrous discovered quantum hedging. They gave an example of perfect quantum hedging: a board game with winning probability < 1, such that the player can win with certainty at least 1-out-of-2 quantum board games played in parallel. Here we show that perfect quantum hedging occurs in a cryptographic protocol – quantum coin flipping. For this reason, when cryptographic protocols are composed in parallel, hedging may introduce serious challenges into their analysis. We also show that hedging cannot occur when playing two-outcome board games in sequence. This is done by showing a formula for the value of sequential two-outcome board games, which depends only on the optimal value of a single board game; this formula applies in a more general setting of possible target functions, in which hedging is only a special case.Mon, 26 Feb 2018 07:53:33 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8576The Complexity of Simulating Local Measurements on Quantum Systems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8577
An important task in quantum physics is the estimation of local quantities for ground states of local Hamiltonians. Recently, Ambainis defined the complexity class P^QMA[log], and motivated its study by showing that the physical task of estimating the expectation value of a local observable against the ground state of a local Hamiltonian is P^QMA[log]-complete. In this paper, we continue the study of P^QMA[log], obtaining the following results. The P^QMA[log]-completeness result of Ambainis requires O(log n)-local observables and Hamiltonians. We show that simulating even a single-qubit measurement on ground states of 5-local Hamiltonians is P^QMA[log]-complete, resolving an open question of Ambainis. We formalize the complexity theoretic study of estimating two-point correlation functions against ground states, and show that this task is similarly P^QMA[log]-complete. P^QMA[log] is thought of as "slightly harder" than QMA. We justify this formally by exploiting the hierarchical voting technique of Beigel, Hemachandra, and Wechsung to show P^QMA[log] \subseteq PP. This improves the containment QMA \subseteq PP from Kitaev and Watrous. A central theme of this work is the subtlety involved in the study of oracle classes in which the oracle solves a promise problem. In this vein, we identify a flaw in Ambainis' prior work regarding a P^UQMA[log]-hardness proof for estimating spectral gaps of local Hamiltonians. By introducing a "query validation" technique, we build on his prior work to obtain P^UQMA[log]-hardness for estimating spectral gaps under polynomial-time Turing reductions.Mon, 26 Feb 2018 07:53:33 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8577 Quantum Hedging in Two-Round Prover-Verifier Interactions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8578
We consider the problem of a particular kind of quantum correlation that arises in some two-party games. In these games, one player is presented with a question they must answer, yielding an outcome of either "win" or "lose". Molina and Watrous previously studied such a game that exhibited a perfect form of hedging, where the risk of losing a first game can completely offset the corresponding risk for a second game. This is a non-classical quantum phenomenon, and establishes the impossibility of performing strong error-reduction for quantum interactive proof systems by parallel repetition, unlike for classical interactive proof systems. We take a step in this article towards a better understanding of the hedging phenomenon by giving a complete characterization of when perfect hedging is possible for a natural generalization of the game in the prior work of Molina and Watrous. Exploring in a different direction the subject of quantum hedging, and motivated by implementation concerns regarding loss-tolerance, we also consider a variation of the protocol where the player who receives the question can choose to restart the game rather than return an answer. We show that in this setting there is no possible hedging for any game played with state spaces corresponding to finite-dimensional complex Euclidean spaces.Mon, 26 Feb 2018 07:53:33 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8578Multiparty Quantum Communication Complexity of Triangle Finding
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8579
Triangle finding (deciding if a graph contains a triangle or not) is a central problem in quantum query complexity. The quantum communication complexity of this problem, where the edges of the graph are distributed among the players, was considered recently by Ivanyos et al. in the two-party setting. In this paper we consider its k-party quantum communication complexity with k >= 3. Our main result is a ~O(m^(7/12))-qubit protocol, for any constant number of players k, deciding with high probability if a graph with m edges contains a triangle or not. Our approach makes connections between the multiparty quantum communication complexity of triangle finding and the quantum query complexity of graph collision, a well-studied problem in quantum query complexity.Mon, 26 Feb 2018 07:53:33 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8579A Single Entangled System Is an Unbounded Source of Nonlocal Correlations and of Certified Random Numbers
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8580
The outcomes of local measurements made on entangled systems can be certified to be random provided that the generated statistics violate a Bell inequality. This way of producing randomness relies only on a minimal set of assumptions because it is independent of the internal functioning of the devices generating the random outcomes. In this context it is crucial to understand both qualitatively and quantitatively how the three fundamental quantities – entanglement, non-locality and randomness – relate to each other. To explore these relationships, we consider the case where repeated (non-projective) measurements are made on the physical systems, each measurement being made on the post-measurement state of the previous measurement. In this work, we focus on the following questions: Given a single entangled system, how many nonlocal correlations in a sequence can we obtain? And from this single entangled system, how many certified random numbers is it possible to generate? In the standard scenario with a single measurement in the sequence, it is possible to generate non-local correlations only between two distant observers, and the amount of random numbers is very limited. Here we show that we can overcome these limitations and obtain any amount of certified random numbers from a single entangled pair of qubits in a pure state by making sequences of measurements on it. Moreover, the state can be arbitrarily weakly entangled. In addition, this certification is achieved by near-maximal violation of a particular Bell inequality for each measurement in the sequence. We also present numerical results giving insight on the resistance to imperfections and on the importance of the strength of the measurements in our scheme.Mon, 26 Feb 2018 07:53:33 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8580Provably Secure Key Establishment Against Quantum Adversaries
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8581
At Crypto 2011, some of us had proposed a family of cryptographic protocols for key establishment capable of protecting quantum and classical legitimate parties unconditionally against a quantum eavesdropper in the query complexity model. Unfortunately, our security proofs were unsatisfactory from a cryptographically meaningful perspective because they were sound only in a worst-case scenario. Here, we extend our results and prove that for any \eps > 0, there is a classical protocol that allows the legitimate parties to establish a common key after O(N) expected queries to a random oracle, yet any quantum eavesdropper will have a vanishing probability of learning their key after O(N^(1.5-\eps)) queries to the same oracle. The vanishing probability applies to a typical run of the protocol. If we allow the legitimate parties to use a quantum computer as well, their advantage over the quantum eavesdropper becomes arbitrarily close to the quadratic advantage that classical legitimate parties enjoyed over classical eavesdroppers in the seminal 1974 work of Ralph Merkle. Along the way, we develop new tools to give lower bounds on the number of quantum queries required to distinguish two probability distributions. This method in itself could have multiple applications in cryptography. We use it here to study average-case quantum query complexity, for which we develop a new composition theorem of independent interest.Mon, 26 Feb 2018 07:53:33 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8581Minimum Quantum Resources for Strong Non-Locality
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8582
We analyse the minimum quantum resources needed to realise strong non-locality, as exemplified e.g. by the classical GHZ construction. It was already known that no two-qubit system, with any finite number of local measurements, can realise strong non-locality. For three-qubit systems, we show that strong non-locality can only be realised in the GHZ SLOCC class, and with equatorial measurements. However, we show that in this class there is an infinite family of pairwise non-LU-equivalent states that realise strong non-locality with finitely many measurements. These states have decreasing entanglement between one qubit and the other two, necessitating an increasing number of local measurements on the latter.Mon, 26 Feb 2018 07:53:33 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8582Fidelity of Quantum Strategies with Applications to Cryptography
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8583
We introduce a definition of the fidelity function for multi-round quantum strategies, which we call the strategy fidelity, that is a generalization of the fidelity function for quantum states. We provide many interesting properties of the strategy fidelity including a Fuchs-van de Graaf relationship with the strategy norm. We illustrate an operational interpretation of the strategy fidelity in the spirit of Uhlmann's Theorem and discuss its application to the security analysis of quantum protocols for interactive cryptographic tasks such as bit-commitment and oblivious string transfer. Our analysis is very general in the sense that the actions of the protocol need not be fully specified, which is in stark contrast to most other security proofs. Lastly, we provide a semidefinite programming formulation of the strategy fidelity.Mon, 26 Feb 2018 07:53:33 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8583Improved reversible and quantum circuits for Karatsuba-based integer multiplication
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8584
Integer arithmetic is the underpinning of many quantum algorithms, with applications ranging from Shor's algorithm, through HHL for matrix inversion, to Hamiltonian simulation algorithms. A basic objective is to keep the resources required to implement arithmetic as low as possible. This applies in particular to the number of qubits required in the implementation, as for the foreseeable future this number is expected to be small. We present a reversible circuit for integer multiplication that is inspired by Karatsuba's recursive method. The main improvement over circuits previously reported in the literature is an asymptotic reduction of the amount of space required from O(n^1.585) to O(n^1.427). This improvement is obtained in exchange for a small constant increase in the number of operations by a factor less than 2 and a small asymptotic increase in depth for the parallel version. The asymptotic improvements are obtained from analyzing pebble games on complete ternary trees.Mon, 26 Feb 2018 07:53:33 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8584Front Matter, Table of Contents, Preface, Conference Organization
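The classical recursion that inspires the circuit above is Karatsuba's method, which replaces four half-size products with three at the cost of some additions. A minimal (classical, non-reversible) version for intuition; the base-case cutoff of 16 is an arbitrary choice:

```python
def karatsuba(x, y):
    """Karatsuba multiplication: three recursive products instead of four."""
    if x < 16 or y < 16:               # small operands: multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)   # split x = hi_x * 2^m + lo_x
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)
    z0 = karatsuba(lo_x, lo_y)
    z2 = karatsuba(hi_x, hi_y)
    # The middle term reuses z0 and z2, saving the fourth product.
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2
    return (z2 << (2 * m)) + (z1 << m) + z0
```

The three-way recursion is what yields the O(n^1.585) = O(n^{log2 3}) operation count; the paper's contribution is reducing the *space* of a reversible implementation of this recursion.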
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8543
Front Matter, Table of Contents, Preface, Conference OrganizationFri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8543On the Parameterized Complexity of Contraction to Generalization of Trees
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8544
For a family of graphs F, the F-Contraction problem takes as an input a graph G and an integer k, and the goal is to decide if there exists S \subseteq E(G) of size at most k such that G/S belongs to F. Here, G/S is the graph obtained from G by contracting all the edges in S. Heggernes et al. [Algorithmica (2014)] were the first to study edge contraction problems in the realm of Parameterized Complexity. They studied F-Contraction when F is a simple family of graphs such as trees and paths. In this paper, we study the F-Contraction problem, where F generalizes the family of trees. In particular, we define this generalization in a "parameterized way". Let T_\ell be the family of graphs such that each graph in T_\ell can be made into a tree by deleting at most \ell edges. Thus, the problem we study is T_\ell-Contraction. We design an FPT algorithm for T_\ell-Contraction running in time O((\ncol)^{O(k + \ell)} * n^{O(1)}). Furthermore, we show that the problem does not admit a polynomial kernel when parameterized by k. Inspired by the negative result for the kernelization, we design a lossy kernel for T_\ell-Contraction of size O([k(k + 2\ell)]^{\lceil \frac{\alpha}{\alpha-1} \rceil + 1}).Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8544Solving and Sampling with Many Solutions: Satisfiability and Other Hard Problems
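For intuition, verifying a candidate solution of the \ell = 0 case (contraction to a tree) is easy: merge the endpoints of the chosen edge set S with a union-find, then test whether the contracted graph, with parallel edges merged as in the usual simple-graph contraction, is connected and has exactly one edge fewer than it has vertices. A hypothetical sketch (the hardness lies in *finding* S, not in this check):

```python
def contracts_to_tree(n, edges, S):
    """Does G/S form a tree? Vertices are 0..n-1; edges and S are pairs."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in S:                          # contract: merge endpoints of S
        parent[find(u)] = find(v)
    comps = {find(v) for v in range(n)}
    # Distinct edges between different components (parallel edges merged).
    cross = {frozenset((find(u), find(v))) for u, v in edges
             if find(u) != find(v)}

    # Connectivity of the contracted graph via a second union-find.
    p2 = {c: c for c in comps}

    def find2(x):
        while p2[x] != x:
            p2[x] = p2[p2[x]]
            x = p2[x]
        return x

    for e in cross:
        a, b = tuple(e)
        p2[find2(a)] = find2(b)
    connected = len({find2(c) for c in comps}) == 1
    return connected and len(cross) == len(comps) - 1   # tree: connected, |V|-1 edges
```

For example, contracting one edge of a triangle leaves two vertices joined by a (merged) edge, which is a tree, while the uncontracted triangle is not.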
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8545
We investigate parameterizing hard combinatorial problems by the size of the solution set compared to all solution candidates. Our main result is a uniform sampling algorithm for satisfying assignments of 2-CNF formulas that runs in expected time O^*(eps^{-0.617}) where eps is the fraction of assignments that are satisfying. This improves significantly over the trivial sampling bound of expected Theta^*(eps^{-1}), and on all previous algorithms whenever eps = Omega(0.708^n). We also consider algorithms for 3-SAT with an eps fraction of satisfying assignments, and prove that it can be solved in O^*(eps^{-2.27}) deterministic time, and in O^*(eps^{-0.936}) randomized time. Finally, to further demonstrate the applicability of this framework, we also explore how similar techniques can be used for vertex cover problems.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8545Merging Nodes in Search Trees: an Exact Exponential Algorithm for the Single Machine Total Tardiness Scheduling Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8546
This paper proposes an exact exponential algorithm for the problem of minimizing the total tardiness of jobs on a single machine. It exploits the structure of a basic branch-and-reduce framework based on Lawler's well-known decomposition property. The proposed algorithm, called branch-and-merge, is an improvement of the branch-and-reduce technique with the embedding of a node merging operation. Its time complexity is O*(2.247^n) while keeping the space complexity polynomial. The branch-and-merge technique can likely be generalized to other sequencing problems with similar decomposition properties.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8546Odd Multiway Cut in Directed Acyclic Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8547
We investigate the odd multiway node (edge) cut problem where the input is a graph with a specified collection of terminal nodes and the goal is to find a smallest subset of non-terminal nodes (edges) to delete so that the terminal nodes do not have an odd length path between them. In an earlier work, Lokshtanov and Ramanujan showed that both odd multiway node cut and odd multiway edge cut are fixed-parameter tractable (FPT) when parameterized by the size of the solution in undirected graphs. In this work, we focus on directed acyclic graphs (DAGs) and design a fixed-parameter algorithm. Our main contribution is an extension of the shadow-removal framework for parity problems in DAGs. We complement our FPT results with tight approximability as well as polyhedral results for 2 terminals in DAGs. Additionally, we show inapproximability results for odd multiway edge cut in undirected graphs even for 2 terminals.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8547Contraction-Bidimensionality of Geometric Intersection Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8548
Given a graph G, we define bcg(G) as the minimum k for which G can be contracted to the uniformly triangulated grid Gamma_k. A graph class G has the SQGC property if every graph G in G has treewidth O(bcg(G)^c) for some 1 <= c < 2. The SQGC property is important for algorithm design as it defines the applicability horizon of a series of meta-algorithmic results, in the framework of bidimensionality theory, related to fast parameterized algorithms, kernelization, and approximation schemes. These results apply to a wide family of problems, namely problems that are contraction-bidimensional. Our main combinatorial result reveals a general family of graph classes that satisfy the SQGC property and includes bounded-degree string graphs. This considerably extends the applicability of bidimensionality theory for several intersection graph classes of 2-dimensional geometrical objects.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8548K-Best Solutions of MSO Problems on Tree-Decomposable Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8549
We show that, for any graph optimization problem in which the feasible solutions can be expressed by a formula in monadic second-order logic describing sets of vertices or edges and in which the goal is to minimize the sum of the weights in the selected sets, we can find the k best solution values for n-vertex graphs of bounded treewidth in time O(n + k log n). In particular, this applies to finding the k shortest simple paths between given vertices in directed graphs of bounded treewidth, giving an exponential speedup in the per-path cost over previous algorithms.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8549Optimal Data Reduction for Graph Coloring Using Low-Degree Polynomials
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8550
The theory of kernelization can be used to rigorously analyze data reduction for graph coloring problems. Here, the aim is to reduce a q-Coloring input to an equivalent but smaller input whose size is provably bounded in terms of structural properties, such as the size of a minimum vertex cover. In this paper we settle two open problems about data reduction for q-Coloring. First, we use a recent technique of finding redundant constraints by representing them as low-degree polynomials, to obtain a kernel of bitsize O(k^(q-1) log k) for q-Coloring parameterized by Vertex Cover for any q >= 3. This size bound is optimal up to k^o(1) factors assuming NP is not a subset of coNP/poly, and improves on the previous-best kernel of size O(k^q). Our second result shows that 3-Coloring does not admit non-trivial sparsification: assuming NP is not a subset of coNP/poly, the parameterization by the number of vertices n admits no (generalized) kernel of size O(n^(2-e)) for any e > 0. Previously, such a lower bound was only known for coloring with q >= 4 colors.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8550Treewidth with a Quantifier Alternation Revisited
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8551
In this paper we take a closer look at the parameterized complexity of \exists\forall SAT, the prototypical complete problem of the class Sigma_2^p, the second level of the polynomial hierarchy. We provide a number of tight fine-grained bounds on the complexity of this problem and its variants with respect to the most important structural graph parameters. Specifically, we show the following lower bounds (assuming the ETH):- It is impossible to decide \exists\forall SAT in time less than double-exponential in the input formula's treewidth. More strongly, we establish the same bound with respect to the formula's primal vertex cover, a much more restrictive measure. This lower bound, which matches the performance of known algorithms, shows that the degeneration of the performance of treewidth-based algorithms to a tower of exponentials already begins in problems with one quantifier alternation.- For the more general \exists\forall CSP problem over a non-boolean domain of size B, there is no algorithm running in time 2^{B^{o(vc)}}, where vc is the input's primal vertex cover.- \exists\forall SAT is already NP-hard even when the input formula has constant modular treewidth (or clique-width), indicating that dense graph parameters are less useful for problems in Sigma_2^p.- For the two weighted versions of \exists\forall SAT recently introduced by de Haan and Szeider, called \exists_k\forall SAT and \exists\forall_k SAT, we give tight upper and lower bounds parameterized by treewidth (or primal vertex cover) and the weight k. 
Interestingly, the complexity of these two problems turns out to be quite different: one is double-exponential in treewidth, while the other is double-exponential in k. We complement the above negative results with a double-exponential FPT algorithm for QBF parameterized by vertex cover, showing that for this parameter the complexity never goes beyond double-exponential, for any number of quantifier alternations.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8551An Output Sensitive Algorithm for Maximal Clique Enumeration in Sparse Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8552
The degeneracy of a graph G is the smallest integer k such that every subgraph of G contains a vertex of degree at most k. Given a k-degenerate graph G on n vertices, we present an algorithm for enumerating all its maximal cliques. Assuming that c is the number of maximal cliques of G, our algorithm has setup time O(n(k^2+s(k+1))) and enumeration time c*O((k+1)f(k+1)), where s(k+1) (resp. f(k+1)) is the preprocessing time (resp. enumeration time) for maximal clique enumeration in a general graph on k+1 vertices. This is the first output sensitive algorithm whose enumeration time depends only on the degeneracy of the graph.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8552The Dominating Set Problem in Geometric Intersection Graphs
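The peeling characterization of degeneracy used above is easy to make concrete. A minimal Python sketch (illustrative only, not the paper's enumeration algorithm; the name `degeneracy` and the adjacency representation are assumptions, and the standard linear-time bucket-queue variant is omitted for brevity): repeatedly delete a minimum-degree vertex and report the largest degree seen at deletion time.

```python
def degeneracy(adj):
    """Degeneracy of an undirected graph given as {vertex: set(neighbors)}:
    repeatedly remove a vertex of minimum current degree; the answer is the
    largest degree observed at removal time."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    k = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # current min-degree vertex
        k = max(k, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return k
```

For example, a cycle has degeneracy 2 and a nonempty forest has degeneracy 1.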
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8553
We study the parameterized complexity of dominating sets in geometric intersection graphs. In one dimension, we investigate intersection graphs induced by translates of a fixed pattern Q that consists of a finite number of intervals and a finite number of isolated points. We prove that Dominating Set on such intersection graphs is polynomially solvable whenever Q contains at least one interval, and whenever Q contains no intervals and for any two point pairs in Q the distance ratio is rational. The remaining case where Q contains no intervals but does contain an irrational distance ratio is shown to be NP-complete and contained in FPT (when parameterized by the solution size). In two and higher dimensions, we prove that Dominating Set is contained in W[1] for intersection graphs of semi-algebraic sets with constant description complexity. This generalizes known results from the literature. Finally, we establish W[1]-hardness for a large class of intersection graphs.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8553Parameterized Complexity of Finding a Spanning Tree with Minimum Reload Cost Diameter
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8554
We study the minimum diameter spanning tree problem under the reload cost model (DIAMETER-TREE for short) introduced by Wirth and Steffan (2001). In this problem, given an undirected edge-colored graph G, reload costs on a path arise at a node where the path uses consecutive edges of different colors. The objective is to find a spanning tree of G of minimum diameter with respect to the reload costs. We initiate a systematic study of the parameterized complexity of the DIAMETER-TREE problem by considering the following parameters: the cost of a solution, and the treewidth and the maximum degree Delta of the input graph. We prove that DIAMETER-TREE is para-NP-hard for any combination of two of these three parameters, and that it is FPT parameterized by all three of them. We also prove that the problem can be solved in polynomial time on cactus graphs. This result is somewhat surprising, since we prove DIAMETER-TREE to be NP-hard on graphs of treewidth two, which is best possible as the problem can be trivially solved on forests. When the reload costs satisfy the triangle inequality, Wirth and Steffan (2001) proved that the problem can be solved in polynomial time on graphs with Delta=3, and Galbiati (2008) proved that it is NP-hard if Delta=4. Our results show, in particular, that without the requirement of the triangle inequality, the problem is NP-hard if Delta=3, which is also best possible. Finally, in the case where the reload costs are polynomially bounded by the size of the input graph, we prove that DIAMETER-TREE is in XP and W[1]-hard parameterized by the treewidth plus Delta.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8554Optimal Algorithms for Hitting (Topological) Minors on Graphs of Bounded Treewidth
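To make the reload cost model concrete, here is a small sketch (illustrative only, not from the paper; the function name and the representation of the edge coloring and cost table are assumptions) that evaluates the reload cost along a single path: a cost is paid at each internal vertex where the two consecutive edges have different colors.

```python
def reload_cost_of_path(path, color, cost):
    """Reload cost of a path in an edge-colored graph.
    path: sequence of vertices; color: maps an edge (frozenset of its two
    endpoints) to its color; cost[(c1, c2)]: reload cost for switching from
    color c1 to color c2 (no cost is paid when the colors coincide)."""
    total = 0
    for i in range(1, len(path) - 1):  # internal vertices only
        c1 = color[frozenset((path[i - 1], path[i]))]
        c2 = color[frozenset((path[i], path[i + 1]))]
        if c1 != c2:
            total += cost[(c1, c2)]
    return total
```

The diameter of a spanning tree under this model is then the maximum such cost over all tree paths, which is the quantity DIAMETER-TREE minimizes.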
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8555
For a fixed collection of graphs F, the F-M-DELETION problem consists in, given a graph G and an integer k, deciding whether there exists a subset S of V(G) of size at most k such that G-S does not contain any of the graphs in F as a minor. We are interested in the parameterized complexity of F-M-DELETION when the parameter is the treewidth of G, denoted by tw. Our objective is to determine, for a fixed F, the smallest function f_F such that F-M-DELETION can be solved in time f_F(tw)n^{O(1)} on n-vertex graphs. Using and enhancing the machinery of boundaried graphs and small sets of representatives introduced by Bodlaender et al. [J ACM, 2016], we prove that when all the graphs in F are connected and at least one of them is planar, then f_F(w) = 2^{O(w log w)}. When F is a singleton containing a clique, a cycle, or a path on i vertices, we prove the following asymptotically tight bounds:
- f_{K_4}(w) = 2^{Theta(w log w)}.
- f_{C_i}(w) = 2^{Theta(w)} for every i<5, and f_{C_i}(w) = 2^{Theta(w log w)} for every i>4.
- f_{P_i}(w) = 2^{Theta(w)} for every i<5, and f_{P_i}(w) = 2^{Theta(w log w)} for every i>5.
The lower bounds hold unless the Exponential Time Hypothesis fails, and the superexponential ones are inspired by a reduction of Marcin Pilipczuk [Discrete Appl Math, 2016]. The single-exponential algorithms use, in particular, the rank-based approach introduced by Bodlaender et al. [Inform Comput, 2015]. We also consider the version of the problem where the graphs in F are forbidden as topological minors, and prove that essentially the same set of results holds.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8555How Much Does a Treedepth Modulator Help to Obtain Polynomial Kernels Beyond Sparse Graphs?
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8556
In recent years, kernelization with structural parameters has been an active area of research within the field of parameterized complexity. As a relevant example, Gajarsky et al. [ESA 2013] proved that every graph problem satisfying a property called finite integer index admits a linear kernel on graphs of bounded expansion and an almost linear kernel on nowhere dense graphs, parameterized by the size of a c-treedepth modulator, which is a vertex set whose removal results in a graph of treedepth at most c for a fixed integer c>0. The authors left it as further research to investigate this parameter on general graphs, and in particular to find problems that, while admitting polynomial kernels on sparse graphs, behave differently on general graphs. In this article we answer this question by finding two very natural such problems: we prove that VERTEX COVER admits a polynomial kernel on general graphs for any integer c>0, and that DOMINATING SET does not for any integer c>1 even on degenerate graphs, unless NP is a subset of coNP/poly. For the positive result, we build on the techniques of Jansen and Bodlaender [STACS 2011], and for the negative result we use a polynomial parameter transformation for c>2 and an OR-cross-composition for c=2. As existing results imply that DOMINATING SET admits a polynomial kernel on degenerate graphs for c=1, our result provides a dichotomy about the existence of polynomial kernels for DOMINATING SET on degenerate graphs with this parameter.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8556Turing Kernelization for Finding Long Paths in Graphs Excluding a Topological Minor
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8557
The notion of Turing kernelization investigates whether a polynomial-time algorithm can solve an NP-hard problem, when it is aided by an oracle that can be queried for the answers to bounded-size subproblems. One of the main open problems in this direction is whether k-PATH admits a polynomial Turing kernel: can a polynomial-time algorithm determine whether an undirected graph has a simple path of length k, using an oracle that answers queries of size k^{O(1)}? We show this can be done when the input graph avoids a fixed graph H as a topological minor, thereby significantly generalizing an earlier result for bounded-degree and K_{3,t}-minor-free graphs. Moreover, we show that k-PATH even admits a polynomial Turing kernel when the input graph is not H-topological-minor-free itself, but contains a known vertex modulator of size bounded polynomially in the parameter, whose deletion makes it so. To obtain our results, we build on the graph minors decomposition to show that any H-topological-minor-free graph that does not contain a k-path has a separation that can safely be reduced after communication with the oracle.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8557The PACE 2017 Parameterized Algorithms and Computational Experiments Challenge: The Second Iteration
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8558
In this article, the Program Committee of the Second Parameterized Algorithms and Computational Experiments challenge (PACE 2017) reports on the second iteration of the PACE challenge. Track A featured the Treewidth problem and Track B the Minimum Fill-In problem. Over 44 participants on 17 teams from 11 countries submitted their implementations to the competition.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8558FO Model Checking of Geometric Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8559
Over the past two decades the main focus of research into first-order (FO) model checking algorithms has been on sparse relational structures – culminating in the FPT algorithm by Grohe, Kreutzer and Siebertz for FO model checking of nowhere dense classes of graphs. In contrast, except for the case of locally bounded clique-width, little is currently known about FO model checking of dense classes of graphs or other structures. We study the FO model checking problem for dense graph classes definable by geometric means (intersection and visibility graphs). We obtain new nontrivial FPT results, e.g., for restricted subclasses of circular-arc, circle, box, disk, and polygon-visibility graphs. These results use the FPT algorithm by Gajarský et al. for FO model checking of posets of bounded width. We also complement the tractability results by related hardness reductions.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8559An Exponential Lower Bound for Cut Sparsifiers in Planar Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8560
Given an edge-weighted graph G with a set Q of k terminals, a mimicking network is a graph with the same set of terminals that exactly preserves the sizes of minimum cuts between any partition of the terminals. A natural question in the area of graph compression is to provide mimicking networks that are as small as possible for an input graph G that is either an arbitrary graph or comes from a specific graph class. In this note we show an exponential lower bound for cut mimicking networks in planar graphs: there are edge-weighted planar graphs with k terminals that require 2^(k-2) edges in any mimicking network. This nearly matches an upper bound of O(k * 2^(2k)) of Krauthgamer and Rika [SODA 2013, arXiv:1702.05951] and is in sharp contrast with the O(k^2) upper bound under the assumption that all terminals lie on a single face [Goranci, Henzinger, Peng, arXiv:1702.01136]. As a side result we show a hard instance for the double-exponential upper bounds given by Hagerup, Katajainen, Nishimura, and Ragde [JCSS 1998], Khan and Raghavendra [IPL 2014], and Chambers and Eppstein [JGAA 2013].Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8560A Fixed-Parameter Perspective on #BIS
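The mimicking-network definition can be checked by brute force on tiny instances. A hedged sketch (exponential-time, purely to illustrate the definition; all names are hypothetical and the papers cited above of course use far more sophisticated machinery): H mimics G when, for every bipartition of the terminals, the minimum weight of a cut extending that bipartition is the same in both graphs.

```python
from itertools import product

def min_cut_value(edges, n, side):
    """Minimum weight of edges crossing any vertex bipartition that extends
    the terminal assignment `side` (vertex -> 0/1). Brute force over the
    2^(number of free vertices) completions; edges are (u, v, weight)."""
    free = [v for v in range(n) if v not in side]
    best = float("inf")
    for bits in product((0, 1), repeat=len(free)):
        s = {**side, **dict(zip(free, bits))}
        best = min(best, sum(w for u, v, w in edges if s[u] != s[v]))
    return best

def is_mimicking(edges_g, n_g, edges_h, n_h, terminals):
    """Check that H preserves every minimum terminal cut of G exactly."""
    for bits in product((0, 1), repeat=len(terminals)):
        side = dict(zip(terminals, bits))
        if min_cut_value(edges_g, n_g, side) != min_cut_value(edges_h, n_h, side):
            return False
    return True
```

For instance, the weighted path 0-1-2 with terminals {0, 2} and edge weights 2 and 3 is mimicked by a single terminal-to-terminal edge of weight 2, the bottleneck of the path.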
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8561
The problem of (approximately) counting the independent sets of a bipartite graph (#BIS) is the canonical approximate counting problem that is complete in the intermediate complexity class #RHPi_1. It is believed that #BIS does not have an efficient approximation algorithm but also that it is not NP-hard. We study the robustness of the intermediate complexity of #BIS by considering variants of the problem parameterised by the size of the independent set. We map the complexity landscape for three problems, with respect to exact computation and approximation and with respect to conventional and parameterised complexity. The three problems are counting independent sets of a given size, counting independent sets with a given number of vertices in one vertex class and counting maximum independent sets amongst those with a given number of vertices in one vertex class. Among other things, we show that all of these problems are NP-hard to approximate within any polynomial ratio. (This is surprising because the corresponding problems without the size parameter are complete in #RHPi_1, and hence are not believed to be NP-hard.) We also show that the first problem is #W[1]-hard to solve exactly but admits an FPTRAS, whereas the other two are W[1]-hard to approximate even within any polynomial ratio. Finally, we show that, when restricted to graphs of bounded degree, all three problems have efficient exact fixed-parameter algorithms. Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8561Finding Connected Secluded Subgraphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8562
Problems related to finding induced subgraphs satisfying given properties form one of the most studied areas within graph algorithms. Such problems have given rise to breakthrough results and led to development of new techniques both within the traditional P vs NP dichotomy and within parameterized complexity. The Pi-Subgraph problem asks whether an input graph contains an induced subgraph on at least k vertices satisfying graph property Pi. For many applications, it is desirable that the found subgraph has as few connections to the rest of the graph as possible, which gives rise to the Secluded Pi-Subgraph problem. Here, input k is the size of the desired subgraph, and input t is a limit on the number of neighbors this subgraph has in the rest of the graph. This problem has been studied from a parameterized perspective, and unfortunately it turns out to be W[1]-hard for many graph properties Pi, even when parameterized by k+t. We show that the situation changes when we are looking for a connected induced subgraph satisfying Pi. In particular, we show that the Connected Secluded Pi-Subgraph problem is FPT when parameterized by just t for many important graph properties Pi. Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8562Smaller Parameters for Vertex Cover Kernelization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8563
We revisit the topic of polynomial kernels for Vertex Cover relative to structural parameters. Our starting point is a recent paper due to Fomin and Strømme [WG 2016] who gave a kernel with O(|X|^{12}) vertices when X is a vertex set such that each connected component of G-X contains at most one cycle, i.e., X is a modulator to a pseudoforest. We strongly generalize this result by using modulators to d-quasi-forests, i.e., graphs where each connected component has a feedback vertex set of size at most d, and obtain kernels with O(|X|^{3d+9}) vertices. Our result relies on proving that minimal blocking sets in a d-quasi-forest have size at most d+2. This bound is tight and there is a related lower bound of O(|X|^{d+2-epsilon}) on the bit size of kernels. In fact, we also get bounds for minimal blocking sets of more general graph classes: for d-quasi-bipartite graphs, where each connected component can be made bipartite by deleting at most d vertices, we get the same tight bound of d+2 vertices. For graphs whose connected components each have a vertex cover of cost at most d more than the best fractional vertex cover, which we call d-quasi-integral, we show that minimal blocking sets have size at most 2d+2, which is also tight. Combined with existing randomized polynomial kernelizations this leads to randomized polynomial kernelizations for modulators to d-quasi-bipartite and d-quasi-integral graphs. There are lower bounds of O(|X|^{d+2-epsilon}) and O(|X|^{2d+2-epsilon}) for the bit size of such kernels.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8563Polynomial-Time Algorithms for the Longest Induced Path and Induced Disjoint Paths Problems on Graphs of Bounded Mim-Width
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8564
We give the first polynomial-time algorithms on graphs of bounded maximum induced matching width (mim-width) for problems that are not locally checkable. In particular, we give n^O(w)-time algorithms on graphs of mim-width at most w, when given a decomposition, for the following problems: Longest Induced Path, Induced Disjoint Paths and H-Induced Topological Minor for fixed H. Our results imply that the following graph classes have polynomial-time algorithms for these three problems: Interval and Bi-Interval graphs, Circular Arc, Permutation and Circular Permutation graphs, Convex graphs, k-Trapezoid, Circular k-Trapezoid, k-Polygon, Dilworth-k and Co-k-Degenerate graphs for fixed k.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8564Generalized Feedback Vertex Set Problems on Bounded-Treewidth Graphs: Chordality Is the Key to Single-Exponential Parameterized Algorithms
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8565
It has long been known that Feedback Vertex Set can be solved in time 2^{O(w log w)} n^{O(1)} on graphs of treewidth w, but it was only recently that this running time was improved to 2^{O(w)} n^{O(1)}, that is, to single-exponential parameterized by treewidth. We investigate which generalizations of Feedback Vertex Set can be solved in a similar running time. Formally, for a class of graphs P, Bounded P-Block Vertex Deletion asks, given a graph G on n vertices and positive integers k and d, whether G contains a set S of at most k vertices such that each block of G-S has at most d vertices and is in P. Assuming that P is recognizable in polynomial time and satisfies a certain natural hereditary condition, we give a sharp characterization of when single-exponential parameterized algorithms are possible for fixed values of d:
- if P consists only of chordal graphs, then the problem can be solved in time 2^{O(wd^2)} n^{O(1)},
- if P contains a graph with an induced cycle of length ell >= 4, then the problem is not solvable in time 2^{o(w log w)} n^{O(1)} even for fixed d=ell, unless the ETH fails.
We also study a similar problem, called Bounded P-Component Vertex Deletion, where the target graphs have connected components of small size instead of having blocks of small size, and present analogous results.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8565An Improved Fixed-Parameter Algorithm for One-Page Crossing Minimization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8566
Book embedding is one of the most well-known graph drawing models and is extensively studied in the literature. The special case where the number of pages is one is of particular interest: an embedding in this case has a natural circular representation useful for visualization and graphs that can be embedded in one page without crossings form an important graph class, namely that of outerplanar graphs. In this paper, we consider the problem of minimizing the number of crossings in a one-page book embedding, which we call one-page crossing minimization. Here, we are given a graph G with n vertices together with a non-negative integer k and are asked whether G can be embedded into a single page with at most k crossings. Bannister and Eppstein (GD 2014) showed that this problem is fixed-parameter tractable. Their algorithm is derived through the application of Courcelle's theorem (on graph properties definable in the monadic second-order logic of graphs) and runs in f(L)n time, where L = 2^{O(k^2)} is the length of the formula defining the property that the one-page crossing number is at most k and f is a computable function without any known upper bound expressible as an elementary function. We give an explicit dynamic programming algorithm with a drastically improved running time of 2^{O(k log k)}n.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8566Computing Treewidth on the GPU
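The objective being minimized has a simple combinatorial form: with the vertices placed on a circle in a fixed order and every edge drawn inside the circle, two edges cross exactly when their endpoints interleave along the circle. A small counting sketch (illustrative only, not the paper's algorithm; names are assumptions):

```python
from itertools import combinations

def one_page_crossings(order, edges):
    """Number of crossings of a one-page drawing with the given circular
    vertex order: edges {a,b} and {c,d} cross iff their endpoints
    interleave, i.e. exactly one of c, d lies between a and b."""
    pos = {v: i for i, v in enumerate(order)}
    def interleave(e, f):
        a, b = sorted((pos[e[0]], pos[e[1]]))
        c, d = sorted((pos[f[0]], pos[f[1]]))
        return a < c < b < d or c < a < d < b
    return sum(interleave(e, f) for e, f in combinations(edges, 2))
```

One-page crossing minimization asks for the vertex order minimizing this count; the graph is outerplanar exactly when some order achieves zero.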
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8567
We present a parallel algorithm for computing the treewidth of a graph on a GPU. We implement this algorithm in OpenCL, and experimentally evaluate its performance. Our algorithm is based on an O*(2^n)-time algorithm that explores the elimination orderings of the graph using a Held-Karp like dynamic programming approach. We use Bloom filters to detect duplicate solutions. GPU programming presents unique challenges and constraints, such as constraints on the use of memory and the need to limit branch divergence. We experiment with various optimizations to see if it is possible to work around these issues. We achieve a very large speedup (up to 77x) compared to running the same algorithm on the CPU.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8567On the Parameterized Complexity of Red-Blue Points Separation
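A Bloom filter, as used above for duplicate detection, answers "possibly seen before" or "definitely new" for DP states without storing the states themselves. A minimal CPU-side sketch (the class name, parameters, and hashing scheme are illustrative assumptions; the paper's OpenCL implementation necessarily differs):

```python
import hashlib

class BloomFilter:
    """Probabilistic set membership: add() sets `hashes` bit positions per
    item; __contains__ reports True only if all positions are set, so it can
    give false positives but never false negatives."""
    def __init__(self, bits=1 << 20, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)
    def _positions(self, item):
        # Derive `hashes` independent positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.bits
    def add(self, item):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)
    def __contains__(self, item):
        return all(self.array[p // 8] >> (p % 8) & 1 for p in self._positions(item))
```

The appeal for a GPU dynamic program is that the filter is a fixed-size bit array with lock-free-friendly writes, avoiding the memory cost of an exact visited-set.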
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8568
We study the following geometric separation problem: Given a set R of red points and a set B of blue points in the plane, find a minimum-size set of lines that separate R from B. We show that, in its full generality, parameterized by the number of lines k in the solution, the problem is unlikely to be solvable significantly faster than the brute-force n^{O(k)}-time algorithm, where n is the total number of points. Indeed, we show that an algorithm running in time f(k)n^{o(k/log k)}, for any computable function f, would disprove the ETH. Our reduction crucially relies on selecting lines from a set with a large number of different slopes (i.e., this number is not a function of k). Conjecturing that the problem variant where the lines are required to be axis-parallel is FPT in the number of lines, we show the following preliminary result. Separating R from B with a minimum-size set of axis-parallel lines is FPT in the size of either set, and can be solved in time O^*(9^{|B|}) (assuming that B is the smallest set).Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8568Finding Small Weight Isomorphisms with Additional Constraints is Fixed-Parameter Tractable
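Whether a candidate set of lines separates R from B is easy to test: two points remain unseparated exactly when they lie on the same side of every line, i.e. in the same cell of the line arrangement. A sketch under the assumption that no point lies on a line (the names and the (a, b, c) line representation are illustrative, not from the paper):

```python
def separates(lines, red, blue):
    """True iff no red point shares an arrangement cell with a blue point.
    Each line is (a, b, c) for ax + by + c = 0; a point's cell is identified
    by its vector of sides with respect to all lines."""
    def signature(p):
        x, y = p
        return tuple(a * x + b * y + c > 0 for a, b, c in lines)
    red_cells = {signature(p) for p in red}
    return all(signature(p) not in red_cells for p in blue)
```

The brute-force n^{O(k)} algorithm mentioned above amounts to trying candidate k-subsets of combinatorially distinct lines and running such a check on each.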
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8569
Lubiw showed that several variants of Graph Isomorphism are NP-complete, where the solutions are required to satisfy certain additional constraints [SICOMP 10, 1981]. One of these, called Isomorphism With Restrictions, is to decide for two given graphs X_1=(V,E_1) and X_2=(V,E_2) and a subset R\subseteq V\times V of forbidden pairs whether there is an isomorphism \pi from X_1 to X_2 such that i^\pi\ne j for all (i,j)\in R. We prove that this problem and several of its generalizations are in fact in FPT:
- The problem of deciding whether there is an isomorphism between two graphs that moves k vertices and satisfies Lubiw-style constraints is in FPT, with k and the size of R as parameters. The problem remains in FPT even if a conjunction of disjunctions of such constraints is allowed. As a consequence of the main result it follows that the problem of deciding whether there is an isomorphism that moves exactly k vertices is in FPT. This solves a question left open in our article on exact weight automorphisms [STACS 2017].
- When the number of moved vertices is unrestricted, finding isomorphisms that satisfy a CNF of Lubiw-style constraints can be solved in FPT with access to a GI oracle.
- Checking if there is an isomorphism \pi between two graphs with complexity t is also in FPT with t as parameter, where the complexity of a permutation \pi is the Cayley measure, defined as the minimum number t such that \pi can be expressed as a product of t transpositions.
- We consider a more general problem in which the vertex set of a graph X is partitioned into Red and Blue, and we are interested in an automorphism that stabilizes Red and Blue and moves exactly k vertices in Blue, where k is the parameter. This problem was introduced by [Downey and Fellows 1999], and we showed [STACS 2017] that it is W[1]-hard even with color classes of size 4 inside Red.
Now, for color classes of size at most 3 inside Red, we show the problem is in FPT. In the non-parameterized setting, all these problems are NP-complete. Also, they all generalize in several ways the problem of deciding whether there is an isomorphism between two graphs that moves at most k vertices, shown to be in FPT by Schweitzer [ESA 2011].Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8569Tight Conditional Lower Bounds for Longest Common Increasing Subsequence
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8570
We consider the canonical generalization of the well-studied Longest Increasing Subsequence problem to multiple sequences, called k-LCIS: Given k integer sequences X_1,...,X_k of length at most n, the task is to determine the length of the longest common subsequence of X_1,...,X_k that is also strictly increasing. Especially for the case of k=2 (called LCIS for short), several algorithms have been proposed that require quadratic time in the worst case. Assuming the Strong Exponential Time Hypothesis (SETH), we prove a tight lower bound, specifically, that no algorithm solves LCIS in (strongly) subquadratic time. Interestingly, the proof makes no use of normalization tricks common to hardness proofs for similar problems such as LCS. We further strengthen this lower bound to rule out O((nL)^{1-epsilon}) time algorithms for LCIS, where L denotes the solution size, and to rule out O(n^{k-epsilon}) time algorithms for k-LCIS. We obtain the same conditional lower bounds for the related Longest Common Weakly Increasing Subsequence problem.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8570Relativization and Interactive Proof Systems in Parameterized Complexity Theory
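For reference, the classic quadratic-time dynamic program for the k=2 case (LCIS), whose near-optimality the SETH lower bound above establishes, can be sketched as follows (an illustrative implementation, not taken from the paper):

```python
def lcis_length(x, y):
    """Length of the longest common strictly increasing subsequence of two
    integer sequences, in O(|x|*|y|) time: dp[j] is the best LCIS length
    ending with y[j], updated once per element of x."""
    dp = [0] * len(y)
    for a in x:
        best = 0  # best LCIS among y[0..j-1] ending with a value < a
        for j, b in enumerate(y):
            if a == b:
                dp[j] = max(dp[j], best + 1)
            elif b < a:
                best = max(best, dp[j])
    return max(dp, default=0)
```

The lower bound says that, under SETH, no algorithm improves on this by a polynomial factor.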
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8571
We introduce some classical complexity-theoretic techniques to Parameterized Complexity. First, we study relativization for the machine models that were used by Chen, Flum, and Grohe (2005) to characterize a number of parameterized complexity classes. Here we obtain a new and non-trivial characterization of the A-Hierarchy in terms of oracle machines, and parameterize a famous result of Baker, Gill, and Solovay (1975) by proving that, relative to specific oracles, FPT and A[1] can either coincide or differ (a similar statement holds for FPT and W[P]). Second, we initiate the study of interactive proof systems in the parameterized setting, and show that every problem in the class AW[SAT] has a proof system with "short" interactions, in the sense that the number of rounds is upper-bounded in terms of the parameter value alone.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8571Generalized Kakeya Sets for Polynomial Evaluation and Faster Computation of Fermionants
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8572
We present two new data structures for computing values of an n-variate polynomial P of degree at most d over a finite field of q elements. Assuming that d divides q-1, our first data structure relies on (d+1)^{n+2} tabulated values of P to produce the value of P at any of the q^n points using O(nqd^2) arithmetic operations in the finite field. Assuming that s divides d and d/s divides q-1, our second data structure assumes that P satisfies a degree-separability condition and relies on (d/s+1)^{n+s} tabulated values to produce the value of P at any point using O(nq^ssq) arithmetic operations. Our data structures are based on generalizing upper-bound constructions due to Mockenhaupt and Tao (2004), Saraf and Sudan (2008), and Dvir (2009) for Kakeya sets in finite vector spaces from linear to higher-degree polynomial curves. As an application we show that the new data structures enable a faster algorithm for computing integer-valued fermionants, a family of self-reducible polynomial functions introduced by Chandrasekharan and Wiese (2011) that captures numerous fundamental algebraic and combinatorial invariants such as the determinant, the permanent, the number of Hamiltonian cycles in a directed multigraph, as well as certain partition functions of strongly correlated electron systems in statistical physics. In particular, a corollary of our main theorem for fermionants is that the permanent of an m-by-m integer matrix with entries bounded in absolute value by a constant can be computed in time 2^{m-Omega(sqrt(m/log log m))}, improving an earlier algorithm of Bjorklund (2016) that runs in time 2^{m-Omega(sqrt(m/log m))}.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8572DynASP2.5: Dynamic Programming on Tree Decompositions in Action
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8573
Efficient, exact parameterized algorithms are a vibrant theoretical research area. Recent solving competitions, such as the PACE challenge, show that there is also increasing practical interest in the parameterized algorithms community. An important research question is whether such algorithms can be built to efficiently solve specific problems in practice, that is, to be competitive with established solving systems. In this paper, we consider Answer Set Programming (ASP), a logic-based declarative modeling and problem solving framework. State-of-the-art ASP solvers generally rely on SAT-based algorithms. In addition, DynASP2, an ASP solver that is based on classical dynamic programming on tree decompositions, has recently been introduced. DynASP2 outperforms modern ASP solvers when the goal is to count the number of solutions of programs that have small treewidth. However, for quickly finding one solution, DynASP2 proved uncompetitive. In this paper, we present a new algorithm and implementation, called DynASP2.5, that shows competitive behavior compared to state-of-the-art ASP solvers on problems like Steiner tree for low-treewidth graphs, even when the task is to find just one solution. Our implementation is based on a novel approach that we call multi-pass dynamic programming.Fri, 23 Feb 2018 14:22:47 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8573OASIcs, Volume 60, ICCSW'17, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8542
OASIcs, Volume 60, ICCSW'17, Complete VolumeWed, 21 Feb 2018 11:53:40 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8542OASIcs, Volume 58, ICLP'17, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8540
OASIcs, Volume 58, ICLP'17, Complete VolumeWed, 21 Feb 2018 11:19:20 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8540LIPIcs, Volume 93, FSTTCS'17, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8539
LIPIcs, Volume 93, FSTTCS'17, Complete VolumeWed, 21 Feb 2018 08:11:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8539Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8480
Front Matter, Table of Contents, Preface, Conference OrganizationTue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8480Erdös-Pósa Property of Obstructions to Interval Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8481
The duality between packing and covering problems lies at the heart of fundamental combinatorial proofs, as well as of well-known algorithmic methods such as the primal-dual method for approximation and the win/win approach for parameterized analysis. The very essence of this duality is encompassed by a well-known property called the Erdös-Pósa property, which has been extensively studied for over five decades. Informally, we say that a class of graphs F admits the Erdös-Pósa property if there exists a function f such that for any graph G and any integer k, either G has k vertex-disjoint "copies" of the graphs in F, or there is a set S \subseteq V(G) of f(k) vertices that intersects all copies of the graphs in F. In the context of any graph class G, the most natural question that arises in this regard is as follows - do obstructions to G have the Erdös-Pósa property? With this view in mind, we focus on the class of interval graphs. Structural properties of interval graphs are intensively studied, not least because they lead to the design of polynomial-time algorithms for classic problems that are NP-hard on general graphs. Nevertheless, nothing is known about one of the most basic properties of such graphs, namely the Erdös-Pósa property. In this paper, we settle this anomaly: we prove that the family of obstructions to interval graphs - namely, the family of chordless cycles and ATs (asteroidal triples) - admits the Erdös-Pósa property. Our main theorem immediately yields an algorithm to decide whether an input graph G has k vertex-disjoint ATs and chordless cycles, or there exists a set of O(k^2 log k) vertices in G that hits all ATs and chordless cycles.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8481Computing the Longest Common Prefix of a Context-free Language in Polynomial Time
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8482
We present two structural results concerning the longest common prefixes of non-empty languages. First, we show that the longest common prefix of the language generated by a context-free grammar of size N equals the longest common prefix of the same grammar where the heights of the derivation trees are bounded by 4N. Second, we show that each non-empty language L has a representative subset of at most three elements which behaves like L w.r.t. the longest common prefix, as well as w.r.t. longest common prefixes of L after unions or concatenations with arbitrary other languages. From that, we conclude that the longest common prefix, and thus the longest common suffix, of a context-free language can be computed in polynomial time.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8482Rational, Recognizable, and Aperiodic Sets in the Partially Lossy Queue Monoid
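The representative-subset result above concerns longest common prefixes of possibly infinite languages; as a point of reference, the lcp of any finite set of strings is computable by a direct scan. A minimal sketch (the helper name lcp is ours, not from the paper):

```python
def lcp(strings):
    """Longest common prefix of a non-empty collection of strings:
    start from the shortest string and trim until it prefixes all others."""
    prefix = min(strings, key=len)
    for s in strings:
        while not s.startswith(prefix):
            prefix = prefix[:-1]  # shrink until it matches
    return prefix
```

By the paper's second result, running such a scan on a well-chosen three-element subset already determines the lcp of the whole (possibly infinite) language.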
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8483
Partially lossy queue monoids (or plq monoids) model the behavior of queues that can forget arbitrary parts of their content. While many decision problems on recognizable subsets in the plq monoid are decidable, most of them are undecidable if the sets are rational. In particular, in this monoid the classes of rational and recognizable subsets do not coincide. By restricting multiplication and iteration in the construction of rational sets and by allowing complementation we obtain precisely the class of recognizable sets. From these special rational expressions we can obtain an MSO logic describing the recognizable subsets. Moreover, we provide similar results for the class of aperiodic subsets in the plq monoid.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8483On the Containment Problem for Linear Sets
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8484
It is well known that the containment problem (as well as the equivalence problem) for semilinear sets is log-complete at the second level of the polynomial hierarchy (where hardness even holds in dimension 1). It had been shown quite recently that already the containment problem for multi-dimensional linear sets is log-complete at the same level of the hierarchy (where hardness even holds when numbers are encoded in unary). In this paper, we show that already the containment problem for 1-dimensional linear sets (with binary encoding of the numerical input parameters) is log-hard (and therefore also log-complete) at this level. However, combining both restrictions (dimension 1 and unary encoding), the problem becomes solvable in polynomial time.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8484Automata Theory on Sliding Windows
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8485
In a recent paper we analyzed the space complexity of streaming algorithms whose goal is to decide membership of a sliding window in a fixed language. For the class of regular languages we proved a space trichotomy theorem: for every regular language the optimal space bound is either constant, logarithmic or linear. In this paper we continue this line of research: we present natural characterizations for the constant and logarithmic space classes and establish tight relationships to the concept of language growth. We also analyze the space complexity with respect to automata size and prove almost matching lower and upper bounds. Finally, we consider the decision problem whether a language given by a DFA/NFA admits a sliding window algorithm using logarithmic/constant space.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8485Finding List Homomorphisms from Bounded-treewidth Graphs to Reflexive Graphs: a Complete Complexity Characterization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8486
In the list homomorphism problem, the input consists of two graphs G and H, together with a list L(v) \subseteq V(H) for every vertex v \in V(G). The task is to find a homomorphism phi: V(G) -> V(H) respecting the lists, that is, phi(v) \in L(v) for every v \in V(G), and if u and v are adjacent in G, then phi(u) and phi(v) are adjacent in H. If H is a fixed graph, then the problem is denoted LHom(H). We consider the reflexive version of the problem, where we assume that every vertex in H has a self-loop. It is known that reflexive LHom(H) is polynomial-time solvable if H is an interval graph and NP-complete otherwise [Feder and Hell, JCTB 1998]. We explore the complexity of the problem parameterized by the treewidth tw(G) of the input graph G. If a tree decomposition of G of width tw(G) is given in the input, then the problem can be solved in time |V(H)|^{tw(G)} n^{O(1)} by naive dynamic programming. Our main result completely reveals when and by exactly how much this naive algorithm can be improved. We introduce a simple combinatorial invariant i^*(H), which is based on the existence of decompositions and incomparable sets, and show that this number should appear as the base of the exponent in the best possible running time. Specifically, we prove for every fixed non-interval graph H that (1) if a tree decomposition of width tw(G) is given in the input, then the problem can be solved in time i^*(H)^{tw(G)} n^{O(1)}, and (2) assuming the Strong Exponential-Time Hypothesis (SETH), the problem cannot be solved in time (i^*(H)-epsilon)^{tw(G)} n^{O(1)} for any epsilon > 0. Thus, by matching upper and lower bounds, our result exactly characterizes for every fixed H the complexity of reflexive LHom(H) parameterized by treewidth. Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8486On Singleton Arc Consistency for CSPs Defined by Monotone Patterns
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8487
Singleton arc consistency is an important type of local consistency which has been recently shown to solve all constraint satisfaction problems (CSPs) over constraint languages of bounded width. We aim to characterise all classes of CSPs defined by a forbidden pattern that are solved by singleton arc consistency and closed under removing constraints. We identify five new patterns whose absence ensures solvability by singleton arc consistency, four of which are provably maximal and three of which generalise 2-SAT. Combined with simple counter-examples for other patterns, we make significant progress towards a complete classification.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8487Surjective H-Colouring over Reflexive Digraphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8488
The Surjective H-Colouring problem is to test if a given graph allows a vertex-surjective homomorphism to a fixed graph H. The complexity of this problem has been well studied for undirected (partially) reflexive graphs. We introduce endo-triviality, the property of a structure that all of its endomorphisms that do not have range of size 1 are automorphisms, as a means to obtain complexity-theoretic classifications of Surjective H-Colouring in the case of reflexive digraphs. Chen [2014] proved, in the setting of constraint satisfaction problems, that Surjective H-Colouring is NP-complete if H has the property that all of its polymorphisms are essentially unary. We give the first concrete application of his result by showing that every endo-trivial reflexive digraph H has this property. We then use the concept of endo-triviality to prove, as our main result, a dichotomy for Surjective H-Colouring when H is a reflexive tournament: if H is transitive, then Surjective H-Colouring is in NL, otherwise it is NP-complete. By combining this result with some known and new results we obtain a complexity classification for Surjective H-Colouring when H is a partially reflexive digraph of size at most 3.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8488Relations Between Greedy and Bit-Optimal LZ77 Encodings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8489
This paper investigates the size in bits of the LZ77 encoding, which is the most popular and efficient variant of the Lempel-Ziv encodings used in data compression. We prove that, for a wide natural class of variable-length encoders for LZ77 phrases, the size of the greedily constructed LZ77 encoding on constant alphabets is within a factor O(log n / log log log n) of the optimal LZ77 encoding, where n is the length of the processed string. We describe a series of examples showing that, surprisingly, this bound is tight, thus improving both the previously known upper and lower bounds. Further, we obtain a more detailed bound O(min{z, log n / log log z}), which uses the number z of phrases in the greedy LZ77 encoding as a parameter, and construct a series of examples showing that this bound is tight even for binary alphabet. We then investigate the problem on non-constant alphabets: we show that the known O(log n) bound is tight even for alphabets of logarithmic size, and provide tight bounds for some other important cases.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8489Power of Uninitialized Qubits in Shallow Quantum Circuits
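For readers unfamiliar with the greedy construction analyzed above, a naive quadratic-time sketch of greedy LZ77 factorization (names and output format are our own illustration; practical parsers use suffix structures rather than this scan):

```python
def greedy_lz77(s):
    """Greedy LZ77 factorization: each phrase is the longest prefix of the
    remaining suffix that occurs starting at an earlier position (matches may
    overlap the current position, as in classic LZ77), or a single fresh symbol."""
    phrases, i, n = [], 0, len(s)
    while i < n:
        best_len, best_pos = 0, -1
        for j in range(i):
            l = 0
            while i + l < n and s[j + l] == s[i + l]:
                l += 1
            if l > best_len:
                best_len, best_pos = l, j
        if best_len == 0:
            phrases.append(s[i])                 # literal phrase
            i += 1
        else:
            phrases.append((best_pos, best_len))  # (source position, length)
            i += best_len
    return phrases
```

On "aabaab" this yields ['a', (0, 1), 'b', (0, 3)]; the paper studies how the encoded size in bits of such a greedy parse compares to the bit-optimal one.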
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8490
We study the computational power of shallow quantum circuitswith O(log n) initialized and n^{O(1)} uninitialized ancillaryqubits, where n is the input length and the initial state ofthe uninitialized ancillary qubits is arbitrary. First, we showthat such a circuit can compute any symmetric function on n bitsthat is classically computable in polynomial time. Then, weregard such a circuit as an oracle and show that apolynomial-time classical algorithm with the oracle can estimatethe elements of any unitary matrix corresponding to aconstant-depth quantum circuit on n qubits. Since it seems unlikelythat these tasks can be done with only O(log n) initializedancillary qubits, our results give evidences that addinguninitialized ancillary qubits increases the computational powerof shallow quantum circuits with only O(log n) initializedancillary qubits. Lastly, to understand the limitations ofuninitialized ancillary qubits, we focus onnear-logarithmic-depth quantum circuits with them and showthe impossibility of computing the parity function on n bits.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8490Space-Efficient Algorithms for Longest Increasing Subsequence
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8491
Given a sequence of integers, we want to find a longest increasing subsequence of the sequence. It is known that this problem can be solved in O(n log n) time and space. Our goal in this paper is to reduce the space consumption while keeping the time complexity small. For sqrt(n) <= s <= n, we present algorithms that use O(s log n) bits and O(1/s n^2 log n) time for computing the length of a longest increasing subsequence, and O(1/s n^2 log^2 n) time for finding an actual subsequence. We also show that the time complexity of our algorithms is optimal up to polylogarithmic factors in the framework of sequential access algorithms with the prescribed amount of space.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8491Colouring Square-Free Graphs without Long Induced Paths
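The O(n log n)-time baseline mentioned above is the classic patience-sorting computation of the LIS length via a tails array; a sketch of that baseline (not the paper's space-efficient algorithm):

```python
import bisect

def lis_length(a):
    """Length of a longest strictly increasing subsequence in O(n log n) time.
    tails[k] holds the smallest possible tail value of an increasing
    subsequence of length k+1; each element either extends the longest
    subsequence or improves some tail."""
    tails = []
    for x in a:
        k = bisect.bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(tails)
```

The tails array uses Theta(n log n) bits in the worst case; the paper's contribution is trading running time for space below this, down to O(sqrt(n) log n) bits.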
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8492
The Colouring problem is to decide if the vertices of a graph can be coloured with at most k colours for a given integer k such that no two adjacent vertices are coloured alike. The complexity of Colouring is fully understood for graph classes characterized by one forbidden induced subgraph H. Despite a huge body of existing work, there are still major complexity gaps if two induced subgraphs H_1 and H_2 are forbidden. We let H_1 be the s-vertex cycle C_s and H_2 be the t-vertex path P_t. We show that Colouring is polynomial-time solvable for s=4 and t<=6, which unifies several known results for Colouring on (H_1,H_2)-free graphs. Our algorithm is based on a novel decomposition theorem for (C_4,P_6)-free graphs without clique cutsets into homogeneous pairs of sets and a new framework for bounding the clique-width of a graph by the clique-width of its subgraphs induced by homogeneous pairs of sets. To apply this framework, we also need to use divide-and-conquer to bound the clique-width of subgraphs induced by homogeneous pairs of sets. To complement our positive result we also prove that Colouring is NP-complete for s=4 and t>=9, which is the first hardness result on Colouring for (C_4,P_t)-free graphs.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8492Closure of Resource-Bounded Randomness Notions Under Polynomial-Time Permutations
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8493
An infinite bit sequence is called recursively random if no computable strategy betting along the sequence has unbounded capital. It is well-known that the property of recursive randomness is closed under computable permutations. We investigate analogous statements for randomness notions defined by betting strategies that are computable within resource bounds. Suppose that S is a polynomial time computable permutation of the set of strings over the unary alphabet (identified with the set of natural numbers). If the inverse of S is not polynomially bounded, it is not hard to build a polynomial time random bit sequence Z such that Z o S is not polynomial time random. So one should only consider permutations S satisfying the extra condition that the inverse is polynomially bounded. Now the closure depends on additional assumptions in complexity theory. Our first main result, Theorem 4, shows that if BPP contains a superpolynomial deterministic time class, then polynomial time randomness is not preserved by some permutation S such that in fact both S and its inverse are in P. Our second result, Theorem 11, shows that polynomial space randomness is preserved by polynomial time permutations with polynomially bounded inverse, so if P = PSPACE then polynomial time randomness is preserved. Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8493On Approximating the Stationary Distribution of Time-reversible Markov Chains
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8494
Approximating the stationary probability of a state in a Markov chain through Markov chain Monte Carlo techniques is, in general, inefficient. Standard random walk approaches require tilde{O}(tau/pi(v)) operations to approximate the probability pi(v) of a state v in a chain with mixing time tau, and even the best available techniques still have complexity tilde{O}(tau^1.5 / pi(v)^0.5); and since these complexities depend inversely on pi(v), they can grow beyond any bound in the size of the chain or in its mixing time. In this paper we show that, for time-reversible Markov chains, there exists a simple randomized approximation algorithm that breaks this "small-pi(v) barrier".Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8494All Classical Adversary Methods are Equivalent for Total Functions
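For intuition about the quantity being approximated, the stationary distribution of a small chain can be computed by plain power iteration on the transition matrix (an illustrative baseline with names of our choosing, unrelated to the paper's randomized algorithm):

```python
def stationary(P, iters=200):
    """Approximate the stationary distribution pi of a finite ergodic Markov
    chain by power iteration: repeatedly apply pi <- pi P until convergence.
    P is a row-stochastic matrix given as nested lists; every 2-state chain,
    like the example below, is automatically time-reversible."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

This explicit iteration touches every state; the point of the paper is estimating a single pi(v) without paying costs that blow up as pi(v) shrinks.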
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8495
We show that all known classical adversary lower bounds on randomized query complexity are equivalent for total functions, and are equal to the fractional block sensitivity fbs(f). That includes the Kolmogorov complexity bound of Laplante and Magniez and the earlier relational adversary bound of Aaronson. For partial functions, we show unbounded separations between fbs(f) and other adversary bounds, as well as between the relational and Kolmogorov complexity bounds. We also show that, for partial functions, fractional block sensitivity cannot give lower bounds larger than sqrt(n * bs(f)), where n is the number of variables and bs(f) is the block sensitivity. Then we exhibit a partial function f that matches this upper bound, fbs(f) = Omega(sqrt(n * bs(f))).Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8495Width of Non-deterministic Automata
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8496
We introduce a measure called width, quantifying the amount of nondeterminism in automata. Width generalises the notion of good-for-games (GFG) automata, that correspond to NFAs of width 1, and where an accepting run can be built on-the-fly on any accepted input. We describe an incremental determinisation construction on NFAs, which can be more efficient than the full powerset determinisation, depending on the width of the input NFA. This construction can be generalised to infinite words, and is particularly well-suited to coBüchi automata in this context. For coBüchi automata, this procedure can be used to compute either a deterministic automaton or a GFG one, and it is algorithmically more efficient in this last case. We show this fact by proving that checking whether a coBüchi automaton is determinisable by pruning is NP-complete. On finite or infinite words, we show that computing the width of an automaton is PSPACE-hard.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8496Sums of Palindromes: an Approach via Automata
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8497
Recently, Cilleruelo, Luca, & Baxter proved, for all bases b >= 5, that every natural number is the sum of at most 3 natural numbers whose base-b representation is a palindrome. However, the cases b = 2, 3, 4 were left unresolved. We prove, using a decision procedure based on automata, that every natural number is the sum of at most 4 natural numbers whose base-2 representation is a palindrome. Here the constant 4 is optimal. We obtain similar results for bases 3 and 4, thus completely resolving the problem.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8497Pumping Lemmas for Weighted Automata
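The base-2 statement above can be checked by brute force on small inputs; a sketch (helper names are ours) that searches for a decomposition into at most 4 binary palindromes:

```python
def is_bin_pal(n):
    """Is the base-2 representation of n a palindrome? E.g. 9 = 1001 is."""
    b = bin(n)[2:]
    return b == b[::-1]

def min_pal_summands(n, limit=4):
    """Smallest k <= limit such that n is a sum of k binary palindromes,
    or None if no such decomposition exists. Brute-force verification of
    the theorem on small inputs; 0 needs zero summands."""
    if n == 0:
        return 0
    pals = [p for p in range(1, n + 1) if is_bin_pal(p)]
    frontier = {0}  # sums reachable with the summands used so far
    for k in range(1, limit + 1):
        frontier = {s + p for s in frontier for p in pals if s + p <= n}
        if n in frontier:
            return k
    return None
```

Per the theorem, min_pal_summands(n) should never return None, and the constant 4 is optimal, i.e. some n genuinely need 4 summands.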
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8498
We present three pumping lemmas for three classes of functions definable by fragments of weighted automata over the min-plus semiring and the semiring of natural numbers. As a corollary we show that the hierarchy of functions definable by unambiguous, finitely-ambiguous, polynomially-ambiguous weighted automata, and the full class of weighted automata is strict for the min-plus semiring.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8498Computing Hitting Set Kernels By AC^0-Circuits
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8499
Given a hypergraph H = (V,E), what is the smallest subset X of V such that e and X are not disjoint for all e in E? This problem, known as the hitting set problem, is a basic problem in parameterized complexity theory. There are well-known kernelization algorithms for it, which get a hypergraph H and a number k as input and output a hypergraph H' such that (1) H has a hitting set of size k if, and only if, H' has such a hitting set and (2) the size of H' depends only on k and on the maximum cardinality d of edges in H. The algorithms run in polynomial time, but are highly sequential. Recently, it has been shown that one of them can be parallelized to a certain degree: one can compute hitting set kernels in parallel time O(d) - but it was conjectured that this is the best parallel algorithm possible. We refute this conjecture and show how hitting set kernels can be computed in constant parallel time. For our proof, we introduce a new, generalized notion of hypergraph sunflowers and show how iterated applications of the color coding technique can sometimes be collapsed into a single application.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8499Lower Bounds on Black-Box Reductions of Hitting to Density Estimation
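As a baseline for the kernelization discussed above, hitting set itself can be decided by exhaustive search once a kernel has shrunk the instance; an illustrative sketch (not the paper's AC^0 construction; names are ours):

```python
from itertools import combinations

def has_hitting_set(edges, k):
    """Does the hypergraph, given as a list of edge sets, have a hitting set
    of size at most k? Brute force over vertex subsets of size 0..k; this is
    what one runs on the small kernel, not on the original instance."""
    vertices = set().union(*edges) if edges else set()
    for size in range(k + 1):
        for cand in combinations(sorted(vertices), size):
            c = set(cand)
            if all(e & c for e in edges):  # every edge is hit
                return True
    return False
```

The kernelization guarantees that the instance handed to such a search has size depending only on k and the maximum edge cardinality d, not on the original hypergraph.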
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8500
Consider a deterministic algorithm that tries to find a string in an unknown set S \subseteq {0,1}^n, under the promise that S has large density. The only information that the algorithm can obtain about S is estimates of the density of S in adaptively chosen subsets of {0,1}^n, up to an additive error of mu > 0. This problem is appealing as a derandomization problem, when S is the set of satisfying inputs for a circuit C: {0,1}^n -> {0,1} that accepts many inputs: in this context, an algorithm as above constitutes a deterministic black-box reduction of the problem of hitting C (i.e., finding a satisfying input for C) to the problem of approximately counting the number of satisfying inputs for C on subsets of {0,1}^n. We prove tight lower bounds for this problem, demonstrating that naive approaches to solve the problem cannot be improved upon, in general. First, we show a tight trade-off between the estimation error mu and the required number of queries to solve the problem: when mu = O(log(n)/n) a polynomial number of queries suffices, and when mu >= 4log(n)/n the required number of queries is 2^{Theta(mu * n)}. Secondly, we show that the problem "resists" parallelization: any algorithm that works in iterations, and can obtain p = p(n) density estimates "in parallel" in each iteration, still requires Omega(n / (log(p) + log(1/mu))) iterations to solve the problem. This work extends the well-known work of Karp, Upfal, and Wigderson (1988), who studied the setting in which S is only guaranteed to be non-empty (rather than dense), and the algorithm can only probe subsets for the existence of a solution in them. In addition, our lower bound on parallel algorithms affirms a weak version of a conjecture of Motwani, Naor, and Naor (1994); we also make progress on a stronger version of their conjecture.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8500Succinct Oblivious RAM
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8501
As online storage services become increasingly common, it is important that users' private information is protected from database access pattern analyses. Oblivious RAM (ORAM) is a cryptographic primitive that enables users to perform arbitrary database accesses without revealing any information about the access pattern to the server. Previous ORAM studies focused mostly on reducing the access overhead. Consequently, the access overhead of the state-of-the-art ORAM constructions is almost at practical levels in certain application scenarios such as secure processors. However, we expect that server space usage could become a new important issue in the coming big-data era. To enable large-scale computation in security-aware settings, it is necessary to rethink the ORAM server space cost using big-data standards. In this paper, we introduce "succinctness" as a theoretically tractable and practically relevant criterion of the ORAM server space efficiency in the big-data era. We then propose two succinct ORAM constructions that also exhibit state-of-the-art performance in terms of the bandwidth blowup and the user space. We also give non-asymptotic analyses and simulation results which indicate that the proposed ORAM constructions are practically effective.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8501Lossy Kernels for Connected Dominating Set on Sparse Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8502
For alpha > 1, an alpha-approximate (bi-)kernel for a problem Q is a polynomial-time algorithm that takes as input an instance (I, k) of Q and outputs an instance (I',k') (of a problem Q') of size bounded by a function of k such that, for every c >= 1, a c-approximate solution for the new instance can be turned into a (c alpha)-approximate solution of the original instance in polynomial time. This framework of lossy kernelization was recently introduced by Lokshtanov et al. We study Connected Dominating Set (and its distance-r variant) parameterized by solution size on sparse graph classes like biclique-free graphs, classes of bounded expansion, and nowhere dense classes. We prove that for every alpha > 1, Connected Dominating Set admits a polynomial-size alpha-approximate (bi-)kernel on all the aforementioned classes. Our results are in sharp contrast to the kernelization complexity of Connected Dominating Set, which is known to not admit a polynomial kernel even on 2-degenerate graphs and graphs of bounded expansion, unless NP \subseteq coNP/poly. We complement our results by the following conditional lower bound. We show that if a class C is somewhere dense and closed under taking subgraphs, then for some value of r \in N there cannot exist an alpha-approximate bi-kernel for the (Connected) Distance-r Dominating Set problem on C for any alpha > 1 (assuming the Gap Exponential Time Hypothesis). Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8502Generalizing the Kawaguchi-Kyan Bound to Stochastic Parallel Machine Scheduling
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8503
Minimizing the sum of weighted completion times on m identical parallel machines is one of the most important and classical scheduling problems. For the stochastic variant where processing times of jobs are random variables, Möhring, Schulz, and Uetz (1999) presented the first and still best known approximation result, achieving, for arbitrarily many machines, performance ratio 1+1/2(1+Delta), where Delta is an upper bound on the squared coefficient of variation of the processing times. We prove performance ratio 1+1/2(sqrt(2)-1)(1+Delta) for the same underlying algorithm, the Weighted Shortest Expected Processing Time (WSEPT) rule. For the special case of deterministic scheduling (i.e., Delta=0), our bound matches the tight performance ratio 1/2(1+sqrt(2)) of this algorithm (WSPT rule), derived by Kawaguchi and Kyan in a 1986 landmark paper. We present several further improvements for WSEPT's performance ratio, one of them relying on a carefully refined analysis of WSPT yielding, for every fixed number of machines m, WSPT's exact performance ratio of order 1/2(1+sqrt(2))-O(1/m^2).Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8503Beyond JWP: A Tractable Class of Binary VCSPs via M-Convex Intersection
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8504
A binary VCSP is a general framework for the minimization problem of a function represented as the sum of unary and binary cost functions. An important line of VCSP research is to investigate what functions can be solved in polynomial time. Cooper and Zivny classified the tractability of binary VCSP instances according to the concept of "triangle," and showed that the only interesting tractable case is the one induced by the joint winner property (JWP). Recently, Iwamasa, Murota, and Zivny made a link between VCSP and discrete convex analysis, showing that a function satisfying the JWP can be transformed into a function represented as the sum of two M-convex functions, which can be minimized in polynomial time via an M-convex intersection algorithm if the value oracle of each M-convex function is given. In this paper, we give an algorithmic answer to a natural question: what binary finite-valued CSP instances can be solved in polynomial time via an M-convex intersection algorithm? We solve this problem by devising a polynomial-time algorithm for obtaining a concrete form of the representation in the representable case. Our result presents a larger tractable class of binary finite-valued CSPs, which properly contains the JWP class.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8504Lower Bounds for Combinatorial Algorithms for Boolean Matrix Multiplication
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8505
In this paper we propose models of combinatorial algorithms for Boolean Matrix Multiplication (BMM), and prove lower bounds on computing BMM in these models. First, we give a relatively relaxed combinatorial model which is an extension of the model by Angluin (1976), and we prove that the time required by any algorithm for the BMM is at least Omega(n^3 / 2^{O(sqrt{log n})}). Subsequently, we propose a more general model capable of simulating the "Four Russians" algorithm. We prove a lower bound of Omega(n^{7/3} / 2^{O(sqrt{log n})}) for the BMM under this model. We use a special class of graphs, called (r,t)-graphs, originally discovered by Ruzsa and Szemerédi (1978), along with randomization, to construct matrices that are hard instances for our combinatorial models.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8505String Periods in the Order-Preserving Model
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8506
The order-preserving model (op-model, in short) was introduced quite recently but has already attracted significant attention because of its applications in data analysis. We introduce several types of periods in this setting (op-periods). Then we give algorithms to compute these periods in time O(n), O(n log log n), O(n log^2 log n/log log log n), O(n log n) depending on the type of periodicity. In the most general variant the number of different periods can be as big as Omega(n^2), and a compact representation is needed. Our algorithms require novel combinatorial insight into the properties of such periods.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8506The Intersection Problem for Finite Monoids
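In the op-model, two factors "match" when they are order-isomorphic, i.e. their relative order of values agrees position by position. A naive O(n^2) check of order-isomorphism (for illustration only; the paper's period algorithms rely on much more refined machinery):

```python
def order_isomorphic(u, v):
    """Two equal-length sequences match in the order-preserving model iff
    every pair of positions compares the same way in both (naive check:
    all pairwise <-comparisons and equalities must agree)."""
    if len(u) != len(v):
        return False
    n = len(u)
    return all(
        (u[i] < u[j]) == (v[i] < v[j]) and (u[i] == u[j]) == (v[i] == v[j])
        for i in range(n) for j in range(n)
    )
```

For example, (1, 2, 2, 5) and (3, 7, 7, 9) have the same shape, while (1, 2, 2, 5) and (3, 7, 8, 9) do not; op-periods are defined via such order-isomorphic factors rather than equal ones.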
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8507
We investigate the intersection problem for finite monoids, which asks for a given set of regular languages, represented by recognizing morphisms to finite monoids from a variety V, whether there exists a word contained in their intersection. Our main result is that the problem is PSPACE-complete if V is contained in DS and NP-complete if V is non-trivial and contained in DO. Our NP-algorithm for the case that V is contained in DO uses novel methods, based on compression techniques and combinatorial properties of DO. We also show that the problem is log-space reducible to the intersection problem for deterministic finite automata (DFA) and that a variant of the problem is log-space reducible to the membership problem for transformation monoids. In light of these reductions, our hardness results can be seen as a generalization of both a classical result by Kozen and a theorem by Beaudry, McKenzie and Thérien.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8507Upper and Lower Bounds for Dynamic Data Structures on Strings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8508
We consider a range of simply stated dynamic data structure problems on strings. An update changes one symbol in the input and a query asks us to compute some function of the pattern of length m and a substring of a longer text. We give both conditional and unconditional lower bounds for variants of exact matching with wildcards, inner product, and Hamming distance computation via a sequence of reductions. As an example, we show that there does not exist an O(m^{1/2-epsilon}) time algorithm for a large range of these problems unless the online Boolean matrix-vector multiplication conjecture is false. We also provide nearly matching upper bounds for most of the problems we consider. Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8508On the Tree Conjecture for the Network Creation Game
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8509
Selfish Network Creation focuses on modeling real world networks from a game-theoretic point of view. One of the classic models by Fabrikant et al. [PODC'03] is the network creation game, where agents correspond to nodes in a network which buy incident edges for the price of alpha per edge to minimize their total distance to all other nodes. The model is well-studied but still has intriguing open problems. The most famous conjectures state that the price of anarchy is constant for all alpha and that for alpha >= n all equilibrium networks are trees. We introduce a novel technique for analyzing stable networks for high edge-price alpha and employ it to improve on the best known bounds for both conjectures. In particular we show that for alpha > 4n-13 all equilibrium networks must be trees, which implies a constant price of anarchy for this range of alpha. Moreover, we also improve the constant upper bound on the price of anarchy for equilibrium trees.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8509An Improved Bound for Random Binary Search Trees with Concurrent Insertions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8510
Recently, Aspnes and Ruppert (DISC 2016) defined the following simple random experiment to determine the impact of concurrency on the performance of binary search trees: n randomly permuted keys arrive one at a time. When a new key arrives, it is first placed into a buffer of size c. Whenever the buffer is full, or when all keys have arrived, an adversary chooses one key from the buffer and inserts it into the binary search tree. The ability of the adversary to choose the next key to insert among c buffered keys models a distributed system, where up to c processes try to insert keys concurrently. Aspnes and Ruppert showed that the expected average depth of nodes in the resulting tree is O(log(n) + c) for a comparison-based adversary, which can only take the relative order of arrived keys into account. We generalize and strengthen this result. In particular, we allow an adversary that knows the actual values of all keys that have arrived, and show that the resulting expected average node depth is D_{avg}(n) + O(c), where D_{avg}(n) = 2 ln(n) - Theta(1) is the expected average node depth of a random tree obtained in the standard unbuffered version of this experiment. Extending the bound by Aspnes and Ruppert to this stronger adversary model answers one of their open questions.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8510Large Flocks of Small Birds: on the Minimal Size of Population Protocols
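The buffered-insertion experiment described in the abstract above can be simulated directly. The sketch below is illustrative only, not the paper's analysis; the function name and the choice of adversary (always picking the largest buffered key, one possible value-aware adversary) are assumptions of this sketch.

```python
import random

def avg_depth_buffered(n, c, adversary=max, seed=0):
    """Keys arrive in random order; whenever the buffer holds c keys
    (or all keys have arrived), the adversary picks one buffered key
    and it is inserted into an unbalanced BST.
    Returns the average node depth of the resulting tree."""
    rng = random.Random(seed)
    arrivals = list(range(n))
    rng.shuffle(arrivals)

    root = None
    depths = []

    def insert(key):
        # Standard (unbalanced) BST insertion, recording the node's depth.
        nonlocal root
        if root is None:
            root = {'key': key, 'l': None, 'r': None}
            depths.append(0)
            return
        node, depth = root, 0
        while True:
            side = 'l' if key < node['key'] else 'r'
            if node[side] is None:
                node[side] = {'key': key, 'l': None, 'r': None}
                depths.append(depth + 1)
                return
            node, depth = node[side], depth + 1

    buf = []
    for key in arrivals:
        buf.append(key)
        if len(buf) == c:            # buffer full: adversary inserts one key
            k = adversary(buf)
            buf.remove(k)
            insert(k)
    while buf:                       # all keys arrived: drain the buffer
        k = adversary(buf)
        buf.remove(k)
        insert(k)
    return sum(depths) / n
```

With c = 1 this reduces to the standard unbuffered random-BST experiment; with c = n and the max-picking adversary the keys are inserted in decreasing order, so the tree degenerates into a path of average depth (n-1)/2.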
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8511
Population protocols are a well established model of distributed computation by mobile finite-state agents with very limited storage. A classical result establishes that population protocols compute exactly predicates definable in Presburger arithmetic. We initiate the study of the minimal amount of memory required to compute a given predicate as a function of its size. We present results on the predicates x >= n for n \in N, and more generally on the predicates corresponding to systems of linear inequalities. We show that they can be computed by protocols with O(log n) states (or, more generally, logarithmic in the coefficients of the predicate), and that, surprisingly, some families of predicates can be computed by protocols with O(log log n) states. We give essentially matching lower bounds for the class of 1-aware protocols.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8511Recursion Schemes and the WMSO+U Logic
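For context, the classical "flock of birds" protocol decides the predicate x >= k with Theta(k) states, which is exactly the baseline that logarithmic-state constructions such as those above improve upon. The simulation below sketches that classical baseline under a uniformly random scheduler; the function name and the scheduler are assumptions of this sketch, and it is not the paper's O(log n)-state protocol.

```python
import random

def flock_of_birds(values, k, steps, seed=0):
    """Classical population protocol for 'sum of inputs >= k'.
    States are 0..k; when two agents with counts a, b meet, they merge
    into (a+b, 0) if a+b < k, and both enter the absorbing state k
    otherwise (an agent in state k converts everyone it meets, since
    k + b >= k). Output of an agent: True iff its state is k."""
    rng = random.Random(seed)
    agents = list(values)
    n = len(agents)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)       # scheduler picks a random pair
        a, b = agents[i], agents[j]
        if a + b >= k:
            agents[i] = agents[j] = k        # threshold reached: 'yes' spreads
        else:
            agents[i], agents[j] = a + b, 0  # merge counts into one agent
    return [s == k for s in agents]
```

Since counts only merge, the total input sum is conserved until some pair reaches k; if the sum is below k, no agent can ever output True, which is why the protocol is correct.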
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8512
We study the weak MSO logic extended by the unbounding quantifier (WMSO+U), expressing the fact that there exist arbitrarily large finite sets satisfying a given property. We prove that it is decidable whether the tree generated by a given higher-order recursion scheme satisfies a given sentence of WMSO+U.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8512Small Resolution Proofs for QBF using Dependency Treewidth
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8513
In spite of the close connection between the evaluation of quantified Boolean formulas (QBF) and propositional satisfiability (SAT), tools and techniques which exploit structural properties of SAT instances are known to fail for QBF. This is especially true for the structural parameter treewidth, which has allowed the design of successful algorithms for SAT but cannot be straightforwardly applied to QBF since it does not take into account the interdependencies between quantified variables. In this work we introduce and develop dependency treewidth, a new structural parameter based on treewidth which allows the efficient solution of QBF instances. Dependency treewidth pushes the frontiers of tractability for QBF by overcoming the limitations of previously introduced variants of treewidth for QBF. We augment our results by developing algorithms for computing the decompositions that are required to use the parameter.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8513On Structural Parameterizations of the Bounded-Degree Vertex Deletion Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8514
We study the parameterized complexity of the Bounded-Degree Vertex Deletion problem (BDD), where the aim is to find a maximum induced subgraph whose maximum degree is below a given degree bound. Our focus lies on parameters that measure the structural properties of the input instance. We first show that the problem is W[1]-hard parameterized by a wide range of fairly restrictive structural parameters such as the feedback vertex set number, pathwidth, treedepth, and even the size of a minimum vertex deletion set into graphs of pathwidth and treedepth at most three. We thereby resolve the main open question stated in Betzler, Bredereck, Niedermeier and Uhlmann (2012) concerning the complexity of BDD parameterized by the feedback vertex set number. On the positive side, we obtain fixed-parameter algorithms for the problem with respect to the decompositional parameter treecut width and a novel problem-specific parameter called the core fracture number.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8514Parameterized Approximation Schemes for Steiner Trees with Small Number of Steiner Vertices
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8515
We study the Steiner Tree problem, in which a set of terminal vertices needs to be connected in the cheapest possible way in an edge-weighted graph. This problem has been extensively studied from the viewpoint of approximation and also of parameterization. In particular, Steiner Tree is known to be APX-hard on the one hand, and W[2]-hard on the other when parameterized by the number of non-terminals (Steiner vertices) in the optimum solution. In contrast to this we give an efficient parameterized approximation scheme (EPAS), which circumvents both hardness results. Moreover, our methods imply the existence of a polynomial size approximate kernelization scheme (PSAKS) for the considered parameter. We further study the parameterized approximability of other variants of Steiner Tree, such as Directed Steiner Tree and Steiner Forest. For neither of these is an EPAS likely to exist for the studied parameter: for Steiner Forest an easy observation shows that the problem is APX-hard, even if the input graph contains no Steiner vertices. For Directed Steiner Tree we prove that computing a constant approximation for this parameter is W[1]-hard. Nevertheless, we show that an EPAS exists for Unweighted Directed Steiner Tree. Also we prove that there is an EPAS and a PSAKS for Steiner Forest if, in addition to the number of Steiner vertices, the number of connected components of an optimal solution is considered to be a parameter.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8515Improving the Upper Bound on the Length of the Shortest Reset Word
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8516
We improve the best known upper bound on the length of the shortest reset words of synchronizing automata. The new bound is slightly better than 114 n^3 / 685 + O(n^2). The Cerny conjecture states that (n-1)^2 is an upper bound. So far, the best general upper bound was (n^3-n)/6-1, obtained by J.-E. Pin and P. Frankl in 1982. Despite a number of efforts, it remained unchanged for about 35 years. To obtain the new upper bound we utilize avoiding words. A word is avoiding for a state q if after reading the word the automaton cannot be in q. We obtain upper bounds on the length of the shortest avoiding words, and using the approach of Trahtman from 2011 combined with the well-known Frankl theorem from 1982, we improve the general upper bound on the length of the shortest reset words. For all the bounds, there exist polynomial-time algorithms finding a word of length not exceeding the bound. Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8516Genuine Lower Bounds for QBF Expansion
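The two notions used in the abstract above (reset words and avoiding words) are easy to check directly. The checker below is a sketch; the 3-state automaton at the bottom is a hypothetical toy example, not from the paper, and the bound-improving machinery itself is not reproduced here.

```python
def image(delta, states, word):
    """Set of states reachable after reading `word`, starting from any
    state in `states`; delta maps (state, letter) -> state."""
    s = set(states)
    for a in word:
        s = {delta[(q, a)] for q in s}
    return s

def is_reset_word(delta, states, word):
    # A reset word sends every state to one and the same state.
    return len(image(delta, states, word)) == 1

def is_avoiding(delta, states, word, q):
    # A word is avoiding for q if the automaton cannot be in q
    # after reading it, regardless of the starting state.
    return q not in image(delta, states, word)

# Hypothetical toy automaton on states {0, 1, 2}:
# letter 'a' moves one step toward 0, letter 'b' cycles the states.
Q = {0, 1, 2}
delta = {(q, 'a'): max(q - 1, 0) for q in Q}
delta.update({(q, 'b'): (q + 1) % 3 for q in Q})
```

Here "aa" is a reset word (every state ends in 0), while the single letter "a" is already avoiding for state 2 even though it does not yet synchronize, illustrating how avoiding words are a weaker, cheaper stepping stone toward reset words.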
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8517
We propose the first general technique for proving genuine lower bounds in expansion-based QBF proof systems. We present the technique in a framework centred on natural properties of winning strategies in the 'evaluation game' interpretation of QBF semantics. As applications, we prove an exponential proof-size lower bound for a whole class of formula families, and demonstrate the power of our approach over existing methods by providing alternative short proofs of two known hardness results. We also use our technique to deduce a result with manifest practical import: in the absence of propositional hardness, formulas separating the two major QBF expansion systems must have unbounded quantifier alternations.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8517Approximating Airports and Railways
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8518
In this paper we consider the airport and railway problem (AR), which combines capacitated facility location with network design, both in the general metric and the two-dimensional Euclidean space. An instance of the airport and railway problem consists of a set of points in the corresponding metric, together with a non-negative weight for each point, and a parameter k. The points represent cities, the weights denote costs of opening an airport in the corresponding city, and the parameter k is a maximum capacity of an airport. The goal is to construct a minimum cost network of airports and railways connecting all the cities, where railways correspond to edges connecting pairs of points, and the cost of a railway is equal to the distance between the corresponding points. The network is partitioned into components, where each component contains an open airport, and spans at most k cities. For the Euclidean case, any points in the plane can be used as Steiner vertices of the network. We obtain the first bicriteria approximation algorithm for AR for the general metric case, which yields a 4-approximate solution with a resource augmentation of the airport capacity k by a factor of 2. More generally, for any parameter 0 < p <= 1 such that pk is an integer, we develop a (4/3)(2 + 1/p)-approximation algorithm for metric AR with a resource augmentation by a factor of 1 + p. Furthermore, we obtain the first constant factor approximation algorithm that does not resort to resource augmentation for AR in the Euclidean plane. Additionally, for the Euclidean setting we provide a quasi-polynomial time approximation scheme for the same problem with a resource augmentation by a factor of 1 + mu on the airport capacity, for any fixed mu > 0.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8518The Firing Squad Problem Revisited
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8519
In the classical firing squad problem, an unknown number of nodes represented by identical finite state machines is arranged on a line, and in each time unit each node may change its state according to its neighbors' states. Initially all nodes are passive, except one specific node located at an end of the line, which issues a fire command. This command needs to be propagated to all other nodes, so that eventually all nodes simultaneously enter some designated "firing" state. A natural extension of the firing squad problem, introduced in this paper, allows each node to postpone its participation in the squad for an arbitrary time, possibly forever, and firing is allowed only after all nodes decided to participate. This variant is highly relevant in the context of decentralized distributed computing, where processes have to coordinate for initiating various tasks simultaneously. The main goal of this paper is to study the above variant of the firing squad problem under the assumptions that the nodes are infinite state machines, and that the inter-node communication links can be changed arbitrarily in each time unit, i.e., are defined by a dynamic graph. In this setting, we study the following fundamental question: what connectivity requirements enable a solution to the firing squad problem? Our main result is an exact characterization of the dynamic graphs for which the firing squad problem can be solved. When restricted to static directed graphs, this characterization implies that the problem can be solved if and only if the graph is strongly connected. We also discuss how information on the number of nodes or on the diameter of the network, and the use of randomization, can improve the solutions to the problem. Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8519Knapsack Problems for Wreath Products
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8520
In recent years, knapsack problems for (in general non-commutative) groups have attracted attention. In this paper, the knapsack problem for wreath products is studied. It turns out that decidability of knapsack is not preserved under wreath product. On the other hand, the class of knapsack-semilinear groups, where solution sets of knapsack equations are effectively semilinear, is closed under wreath product. As a consequence, we obtain the decidability of knapsack for free solvable groups. Finally, it is shown that for every non-trivial abelian group G, knapsack (as well as the related subset sum problem) for the wreath product G \wr Z is NP-complete.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8520Nonuniform Reductions and NP-Completeness
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8521
Nonuniformity is a central concept in computational complexity with powerful connections to circuit complexity and randomness. Nonuniform reductions have been used to study the isomorphism conjecture for NP and completeness for larger complexity classes. We study the power of nonuniform reductions for NP-completeness, obtaining both separations and upper bounds for nonuniform completeness vs. uniform completeness in NP. Under various hypotheses, we obtain the following separations: 1. There is a set complete for NP under nonuniform many-one reductions, but not under uniform many-one reductions. This is true even with a single bit of nonuniform advice. 2. There is a set complete for NP under nonuniform many-one reductions with polynomial-size advice, but not under uniform Turing reductions. That is, polynomial nonuniformity is stronger than a polynomial number of queries. 3. For any fixed polynomial p(n), there is a set complete for NP under uniform 2-truth-table reductions, but not under nonuniform many-one reductions that use p(n) advice. That is, giving a uniform reduction a second query makes it more powerful than a nonuniform reduction with fixed polynomial advice. 4. There is a set complete for NP under nonuniform many-one reductions with polynomial advice, but not under nonuniform many-one reductions with logarithmic advice. This hierarchy theorem also holds for other reducibilities, such as truth-table and Turing. We also consider uniform upper bounds on nonuniform completeness. Hirahara (2015) showed that unconditionally every set that is complete for NP under nonuniform truth-table reductions that use logarithmic advice is also uniformly Turing-complete. We show that under a derandomization hypothesis, the same statement for truth-table reductions and truth-table completeness also holds.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8521On Low for Speed Oracles
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8522
Relativizing computations of Turing machines to an oracle is a central concept in the theory of computation, both in complexity theory and in computability theory. Inspired by lowness notions from computability theory, Allender introduced the concept of "low for speed" oracles. An oracle A is low for speed if relativizing to A has essentially no effect on computational complexity, meaning that if a decidable language can be decided in time f(n) with access to oracle A, then it can be decided in time poly(f(n)) without any oracle. The existence of non-computable such A's was later proven by Bayer and Slaman, who even constructed a computably enumerable one, and exhibited a number of properties of these oracles as well as interesting connections with computability theory. In this paper, we pursue this line of research, answering the questions left open by Bayer and Slaman and giving further evidence that the structure of the class of low for speed oracles is a very rich one. Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8522Small-depth Multilinear Formula Lower Bounds for Iterated Matrix Multiplication, with Applications
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8523
The complexity of Iterated Matrix Multiplication is a central theme in Computational Complexity theory, as the problem is closely related to the problem of separating various complexity classes within P. In this paper, we study the algebraic formula complexity of multiplying d many 2x2 matrices, denoted IMM_d, and show that the well-known divide-and-conquer algorithm cannot be significantly improved at any depth, as long as the formulas are multilinear. Formally, for each depth Delta <= log d, we show that any product-depth Delta multilinear formula for IMM_d must have size exp(Omega(Delta d^{1/Delta})). It also follows that any multilinear circuit of product-depth Delta for the same polynomial must have size exp(Omega(d^{1/Delta})). In particular, any polynomial-sized multilinear formula for IMM_d must have depth Omega(log d), and any polynomial-sized multilinear circuit for IMM_d must have depth Omega(log d/log log d). Both these bounds are tight up to constant factors. Our lower bound has the following consequences for multilinear formula complexity. Depth-reduction: A well-known result of Brent (JACM 1974) implies that any formula of size s can be converted to one of size s^{O(1)} and depth O(log s); further, this reduction continues to hold for multilinear formulas. On the other hand, our lower bound implies that any depth-reduction in the multilinear setting cannot reduce the depth to o(log s) without a superpolynomial blow-up in size. Separations from general formulas: Shpilka and Yehudayoff (FnTTCS 2010) asked whether general formulas can be more efficient than multilinear formulas for computing multilinear polynomials. 
Our result, along with a non-trivial upper bound for IMM_d implied by a result of Gupta, Kamath, Kayal and Saptharishi (SICOMP 2016), shows that for any size s and product-depth Delta = o(log s), general formulas of size s and product-depth Delta cannot be converted to multilinear formulas of size s^{O(1)} and product-depth Delta, when the underlying field has characteristic zero.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8523Efficient Oracles and Routing Schemes for Replacement Paths
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8524
Real life graphs and networks are prone to failure of nodes (vertices) and links (edges). In particular, for a pair of nodes s and t and a failing edge e in an n-vertex unweighted graph G=(V(G),E(G)), the replacement path pi_{G-e}(s,t) is a shortest s-t path that avoids e. In this paper we present several efficient constructions that, for every (s,t) \in S x T, where S, T \subseteq V(G), and every e \in E(G), maintain the collection of all pi_{G-e}(s,t), either implicitly (i.e., through compact data structures a.k.a. distance sensitivity oracles (DSO)), or explicitly (i.e., through sparse subgraphs a.k.a. fault-tolerant preservers (FTP)). More precisely, we provide the following results: (1) DSO: For every S,T \subseteq V(G), we construct a DSO for maintaining S x T distances under single edge (or vertex) faults. This DSO has size tilde{O}(n\sqrt{|S||T|}) and query time of O(\sqrt{|S||T|}). At the expense of having quasi-polynomial query time, the size of the oracle can be improved to tilde{O}(n|S|+|T|\sqrt{|S|n}), which is optimal for |T| = Omega(sqrt{n|S|}). When |T| = Omega(n^{3/4} |S|^{1/4}), the construction can be further refined in order to get a polynomial query time. We also consider the approximate additive setting, and show a family of DSOs that exhibits a tradeoff between the additive stretch and the size of the oracle. Finally, for the meaningful single-source case, the above result is complemented by a lower bound conditioned on the Set-Intersection conjecture. This lower bound establishes a separation between the oracle and the subgraph settings. (2) FTP: We show the construction of a path-reporting DSO of size tilde{O}(n^{4/3}(|S||T|)^{1/3}) reporting pi_{G-e}(s,t) in O(|pi_{G-e}(s,t)|+(n|S||T|)^{1/3}) time. Such a DSO can be transformed into a FTP having the same size, and moreover it can be elaborated in order to make it optimal (up to a poly-logarithmic factor) both in space and query time for the special case in which T=V(G). 
Our FTP improves over previous constructions when |T|=O(sqrt{|S|n}) (up to inverse poly-logarithmic factors). (3) Routing and Labeling Schemes: For the well-studied single-source setting, we present a novel routing scheme that allows messages to be routed on pi_{G-e}(s,t) by using edge labels and routing tables of size tilde{O}(\sqrt{n}), and a header message of poly-logarithmic size. We also present a labeling scheme for this setting which is optimal in space up to constant factors.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8524On the Power of Tree-Depth for Fully Polynomial FPT Algorithms
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8525
There are many classical problems in P whose time complexities have not been improved over the past decades. Recent studies of "Hardness in P" have revealed that, for several of such problems, the current fastest algorithm is the best possible under some complexity assumptions. To bypass this difficulty, the concept of "FPT inside P" has been introduced. For a problem with the current best time complexity O(n^c), the goal is to design an algorithm running in k^{O(1)}n^{c'} time for a parameter k and a constant c' < c.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8525Optimal Dislocation with Persistent Errors in Subquadratic Time
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8526
We study the problem of sorting N elements in the presence of persistent errors in comparisons: in this classical model, each comparison between two elements is wrong independently with some probability p, but repeating the same comparison always gives the same result. The best known algorithms for this problem have running time O(N^2) and achieve an optimal maximum dislocation of O(log N) for constant error probability. Note that no algorithm can achieve dislocation o(log N), regardless of its running time. In this work we present the first subquadratic time algorithm with optimal maximum dislocation: our algorithm runs in tilde{O}(N^{3/2}) time and guarantees O(log N) maximum dislocation with high probability. Though the first version of our algorithm is randomized, it can be derandomized by extracting the necessary random bits from the results of the comparisons (errors).Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8526The Relation between Polynomial Calculus, Sherali-Adams, and Sum-of-Squares Proofs
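The persistent-error model described above can be sketched as a memoized noisy comparator: each unordered pair's (possibly wrong) answer is drawn once and then fixed forever. This is a sketch of the model only, not of the paper's tilde{O}(N^{3/2}) algorithm; the function names are assumptions of this sketch.

```python
import random
from functools import cmp_to_key

def make_persistent_comparator(p, seed=0):
    """Comparator over integers that errs independently with probability p
    on each unordered pair, but persistently: repeating a comparison
    always returns the same (possibly wrong) answer."""
    rng = random.Random(seed)
    outcomes = {}
    def cmp(a, b):
        if a == b:
            return 0
        lo, hi = min(a, b), max(a, b)
        if (lo, hi) not in outcomes:
            # True answer is lo < hi (i.e. -1); flipped with probability p,
            # then fixed forever for this pair.
            outcomes[(lo, hi)] = -1 if rng.random() >= p else 1
        result = outcomes[(lo, hi)]
        return result if (a, b) == (lo, hi) else -result
    return cmp

def max_dislocation(perm):
    """Maximum distance of any element from its correct sorted position,
    assuming the elements are exactly 0..n-1."""
    return max(abs(i - x) for i, x in enumerate(perm))

# Example: sort with a persistent noisy comparator and measure dislocation.
elems = list(range(100))
random.Random(1).shuffle(elems)
noisy = make_persistent_comparator(p=0.05, seed=2)
result = sorted(elems, key=cmp_to_key(noisy))
```

Note that a plain comparison sort run against such a comparator gives no dislocation guarantee; the point of the paper's algorithms is to keep the maximum dislocation at O(log N) despite the persistent errors.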
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8527
We relate different approaches for proving the unsatisfiability of a system of real polynomial equations over Boolean variables. On the one hand, there are the static proof systems Sherali-Adams and sum-of-squares (a.k.a. Lasserre), which are based on linear and semi-definite programming relaxations. On the other hand, we consider polynomial calculus, which is a dynamic algebraic proof system that models Gröbner basis computations. Our first result is that sum-of-squares simulates polynomial calculus: any polynomial calculus refutation of degree d can be transformed into a sum-of-squares refutation of degree 2d with only a polynomial increase in size. In contrast, our second result shows that this is not the case for Sherali-Adams: there are systems of polynomial equations that have polynomial calculus refutations of degree 3 and polynomial size, but require Sherali-Adams refutations of large degree and exponential size. A corollary of our first result is that the proof systems Positivstellensatz and Positivstellensatz Calculus, which have been separated over non-Boolean polynomials, simulate each other in the presence of Boolean axioms.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8527Property Testing for Bounded Degree Databases
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8528
Aiming at extremely efficient algorithms for big data sets, we introduce property testing of relational databases of bounded degree. Our model generalises the bounded degree model for graphs (Goldreich and Ron, STOC 1997). We prove that in this model, if the databases have bounded tree-width, then every query definable in monadic second-order logic with modulo counting is testable with a constant number of oracle queries and polylogarithmic running time. This is the first logical meta-theorem in property testing of sparse models. Furthermore, we discuss conditions for the existence of uniform and non-uniform testers.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8528Communicating Finite-State Machines and Two-Variable Logic
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8529
Communicating finite-state machines are a fundamental, well-studied model of finite-state processes that communicate via unbounded first-in first-out channels. We show that they are expressively equivalent to existential MSO logic with two first-order variables and the order relation.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8529Parameterized (Approximate) Defective Coloring
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8530
In Defective Coloring we are given a graph G=(V,E) and two integers chi_d, Delta^* and are asked if we can partition V into chi_d color classes, so that each class induces a graph of maximum degree Delta^*. We investigate the complexity of this generalization of Coloring with respect to several well-studied graph parameters, and show that the problem is W-hard parameterized by treewidth, pathwidth, tree-depth, or feedback vertex set, if chi_d=2. As expected, this hardness can be extended to larger values of chi_d for most of these parameters, with one surprising exception: we show that the problem is FPT parameterized by feedback vertex set for any chi_d != 2, and hence 2-coloring is the only hard case for this parameter. In addition to the above, we give an ETH-based lower bound for treewidth and pathwidth, showing that no algorithm can solve the problem in time n^{o(pw)}, essentially matching the complexity of an algorithm obtained with standard techniques. We complement these results by considering the problem's approximability and show that, with respect to Delta^*, the problem admits an algorithm which for any epsilon>0 runs in time (tw/epsilon)^{O(tw)} and returns a solution with exactly the desired number of colors that approximates the optimal Delta^* within (1+epsilon). We also give a (tw)^{O(tw)} algorithm which achieves the desired Delta^* exactly while 2-approximating the minimum value of chi_d. We show that this is close to optimal, by establishing that no FPT algorithm can (under standard assumptions) achieve a better than 3/2-approximation to chi_d, even when an extra constant additive error is also allowed.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8530Approximation Algorithms for Scheduling with Resource and Precedence Constraints
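The Defective Coloring condition defined above is straightforward to verify for a candidate solution. The checker below is a sketch; the 4-cycle and the two colorings at the bottom are hypothetical examples, not instances from the paper.

```python
def is_defective_coloring(adj, coloring, chi_d, delta_star):
    """Check the Defective Coloring conditions: at most chi_d color
    classes, and every vertex has at most delta_star neighbours sharing
    its own color (i.e. each class induces max degree <= delta_star).
    adj: dict vertex -> iterable of neighbours; coloring: dict vertex -> color."""
    if len(set(coloring.values())) > chi_d:
        return False
    return all(
        sum(1 for u in adj[v] if coloring[u] == coloring[v]) <= delta_star
        for v in adj
    )

# Hypothetical example: the 4-cycle, colored with one class or two.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
one_color = {v: 0 for v in c4}          # every vertex has 2 same-colored neighbours
proper = {0: 0, 1: 1, 2: 0, 3: 1}       # a proper 2-coloring: 0 defects
```

With delta_star = 0 the problem specializes to ordinary proper coloring, which is why the hardness results above generalize those for Coloring.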
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8531
We study non-preemptive scheduling problems on identical parallel machines and uniformly related machines under both resource constraints and general precedence constraints between jobs. Our first result is an O(log n)-approximation algorithm for the objective of minimizing the makespan on parallel identical machines under resource and general precedence constraints. We then use this result as a subroutine to obtain an O(log n)-approximation algorithm for the more general objective of minimizing the total weighted completion time on parallel identical machines under both constraints. Finally, we present an O(log m log n)-approximation algorithm for scheduling under these constraints on uniformly related machines. We show that these results can all be generalized to include the case where each job has a release time. This is the first upper bound on the approximability of this class of scheduling problems where both resource and general precedence constraints must be satisfied simultaneously.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8531Dependences in Strategy Logic
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8532
Strategy Logic (SL) is a very expressive temporal logic for specifying and verifying properties of multi-agent systems: in SL, one can quantify over strategies, assign them to agents, and express LTL properties of the resulting plays. Such a powerful framework has two drawbacks: First, model checking SL has non-elementary complexity; second, the exact semantics of SL is rather intricate, and may not correspond to what is expected. In this paper, we focus on strategy dependences in SL, by tracking how existentially-quantified strategies in a formula may (or may not) depend on other strategies selected in the formula, revisiting the approach of [Mogavero et al., Reasoning about strategies: On the model-checking problem, 2014]. We explain why elementary dependences, as defined by Mogavero et al., do not exactly capture the intended concept of behavioral strategies. We address this discrepancy by introducing timeline dependences, and exhibit a large fragment of SL for which model checking can be performed in 2-EXPTIME under this new semantics.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8532Solving the Rubik's Cube Optimally is NP-complete
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8533
In this paper, we prove that optimally solving an n x n x n Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an n x n x n Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square--an n x n x 1 generalization of the Rubik's Cube--and then proceed with a similar but more complicated proof for the Rubik's Cube case. Our results hold both when the goal is to make the sides monochromatic and when the goal is to put each sticker into a specific location.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8533A Unified Polynomial-Time Algorithm for Feedback Vertex Set on Graphs of Bounded Mim-Width
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8534
We give the first polynomial-time algorithm for (Weighted) Feedback Vertex Set on graphs of bounded maximum induced matching width (mim-width). Explicitly, given a branch decomposition of mim-width w, we give an n^{O(w)}-time algorithm that solves Feedback Vertex Set. This provides a unified algorithm for many well-known classes, such as Interval graphs and Permutation graphs, and furthermore, it gives the first polynomial-time algorithms for other classes of bounded mim-width, such as Circular Permutation and Circular k-Trapezoid graphs for fixed k. In all these classes the decomposition is computable in polynomial time, as shown by Belmonte and Vatshelle [Theor. Comput. Sci. 2013]. We show that powers of graphs of tree-width w-1 or path-width w and powers of graphs of clique-width w have mim-width at most w. These results provide many new graph classes of bounded mim-width. We prove a slight strengthening of the first statement which implies that, surprisingly, Leaf Power graphs, which are of importance in the field of phylogenetic studies, have mim-width at most 1. Given a tree decomposition of width w-1, a path decomposition of width w, or a clique-width w-expression of a graph G, one can for any value of k find a mim-width decomposition of its k-power in polynomial time, and apply our algorithm to solve Feedback Vertex Set on the k-power in time n^{O(w)}. In contrast to Feedback Vertex Set, we show that Hamiltonian Cycle is NP-complete even on graphs of linear mim-width 1, which further hints at the expressive power of the mim-width parameter.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8534Recursive Combinatorial Structures: Enumeration, Probabilistic Analysis and Random Generation
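The object the mim-width abstract above computes — a feedback vertex set, i.e. a vertex set whose removal leaves a forest — admits a very short verifier, using the fact that a graph is a forest iff its edge count equals its vertex count minus its number of connected components. A minimal sketch (representation and names are my own):

```python
def is_feedback_vertex_set(adj, fvs):
    """True iff deleting the vertices in `fvs` from the undirected graph
    `adj` (a dict of adjacency lists) leaves a forest."""
    fvs = set(fvs)
    rem = {v: [u for u in nbrs if u not in fvs]
           for v, nbrs in adj.items() if v not in fvs}
    edges = sum(len(nbrs) for nbrs in rem.values()) // 2
    # count connected components of the remaining graph by DFS
    seen, comps = set(), 0
    for s in rem:
        if s in seen:
            continue
        comps += 1
        stack = [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            for u in rem[v]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
    return edges == len(rem) - comps

# Deleting one vertex of a triangle leaves a single edge (a forest).
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(is_feedback_vertex_set(tri, {0}))    # True
print(is_feedback_vertex_set(tri, set()))  # False: the cycle survives
```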
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8535
In a probabilistic context, the main data structures of computer science are viewed as random combinatorial objects. Analytic Combinatorics, as described in the book by Flajolet and Sedgewick, provides a set of high-level tools for their probabilistic analysis. Recursive combinatorial definitions lead to generating function equations from which efficient algorithms can be designed for enumeration, random generation and, to some extent, asymptotic analysis. With a focus on random generation, this tutorial first covers the basics of Analytic Combinatorics and then describes the idea of Boltzmann sampling and its realisation. The tutorial addresses a broad TCS audience and no prior knowledge of analytic combinatorics is assumed.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8535Lower Bound Techniques for QBF Proof Systems
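The idea of Boltzmann sampling mentioned in the tutorial abstract above can be sketched on the textbook example of binary trees, specified by B = 1 + z·B^2 (a tree is a leaf or a root with two subtrees; size counts internal nodes): at each step, output a leaf with probability 1/B(x) and an internal node with the complementary probability x·B(x). This is a minimal sketch under my own conventions, not code from the tutorial; the control parameter x must stay below the singularity 1/4:

```python
import random

def boltzmann_binary_tree(x):
    """Boltzmann sampler for binary trees (B = 1 + z*B^2), with 0 < x < 1/4.
    B(x) = (1 - sqrt(1 - 4x)) / (2x); all trees of the same size are
    returned with the same probability. Leaves are None, internal nodes
    are pairs of subtrees."""
    B = (1 - (1 - 4 * x) ** 0.5) / (2 * x)
    def gen():
        if random.random() < 1 / B:   # leaf, with probability 1/B(x)
            return None
        return (gen(), gen())          # internal node, with probability x*B(x)
    return gen()

random.seed(0)
t = boltzmann_binary_tree(0.23)  # a random tree; size varies per call
```

The two branch probabilities sum to 1 because B = 1 + xB^2 gives 1/B + xB = (1 + xB^2)/B = 1; for x < 1/4 the branching process is subcritical, so the recursion terminates with probability 1 and expected tree size is finite.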
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8536
How do we prove that a false QBF is indeed false? How big a proof is needed? The special case when all quantifiers are existential is the well-studied setting of propositional proof complexity. As expected, universal quantifiers change the game significantly. Several proof systems have been designed in the last couple of decades to handle QBFs. Lower bound paradigms from propositional proof complexity cannot always be extended - in most cases feasible interpolation and the consequent transfer of circuit lower bounds work, but obtaining lower bounds on size by providing lower bounds on width fails dramatically. A new paradigm with no analogue in the propositional world has emerged in the form of strategy extraction, allowing for transfer of circuit lower bounds, as well as obtaining independent, genuine QBF lower bounds based on a semantic cost measure. This talk will provide a broad overview of some of these developments.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8536The Open Shop Scheduling Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8537
We discuss the computational complexity, the approximability, the algorithmics and the combinatorics of the open shop scheduling problem. We summarize the most important results from the literature and explain their main ideas, we sketch the most beautiful proofs, and we also list a number of open problems.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8537On the Positive Calculus of Relations with Transitive Closure
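One of the basic combinatorial facts behind the open shop survey above is the trivial makespan lower bound: any schedule takes at least as long as the busiest machine and at least as long as the longest job (for two machines this bound is classically known to be achievable). A minimal sketch, with my own representation of the input:

```python
def open_shop_lower_bound(p):
    """p[i][j] = processing time of job j on machine i.
    Every open shop schedule has makespan >= max machine load
    and >= max total job length, so their maximum is a lower bound."""
    machine_loads = [sum(row) for row in p]
    job_lengths = [sum(row[j] for row in p) for j in range(len(p[0]))]
    return max(max(machine_loads), max(job_lengths))

# 2 machines, 3 jobs: both machine loads are 6, job lengths are 5, 3, 4.
p = [[3, 1, 2],
     [2, 2, 2]]
print(open_shop_lower_bound(p))  # 6
```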
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8538
Binary relations are such a basic object that they appear in many places in mathematics and computer science. For instance, when dealing with graphs, program semantics, or termination guarantees, binary relations are always used at some point. In this survey, we focus on the relations themselves, and we consider algebraic and algorithmic questions. On the algebraic side, we want to understand and characterise the laws governing the behaviour of the following standard operations on relations: union, intersection, composition, converse, and reflexive-transitive closure. On the algorithmic side, we look for decision procedures for equality or inequality of relations. After having formally defined the calculus of relations, we recall the existing results about two well-studied fragments of particular importance: Kleene algebras and allegories. Unifying those fragments yields a decidable theory whose axiomatisability remains an open problem.Tue, 20 Feb 2018 15:04:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8538
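Among the operations listed in the survey abstract above, reflexive-transitive closure is the one with the most algorithmic flavour; on a finite universe it can be computed by Warshall-style saturation. A minimal sketch, representing a relation as a set of pairs (names are my own):

```python
def star(r, universe):
    """Reflexive-transitive closure of the binary relation `r`
    (a set of pairs) over the finite `universe`, by Warshall-style
    saturation: add (i, j) whenever (i, k) and (k, j) are present."""
    closure = set(r) | {(x, x) for x in universe}  # reflexive part
    for k in universe:
        for i in universe:
            if (i, k) in closure:
                for j in universe:
                    if (k, j) in closure:
                        closure.add((i, j))
    return closure

r = {(0, 1), (1, 2)}
print((0, 2) in star(r, [0, 1, 2]))  # True: 0 -> 1 -> 2
print((2, 0) in star(r, [0, 1, 2]))  # False: no path back
```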