Dagstuhl Publishing
http://www.dagstuhl.de
RSS feed announcing the latest publications published by Schloss Dagstuhl
en-en
2010-06-08
2018-06-06
http://www.dagstuhl.de/dagpub-rss.php

Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10507
Front Matter, Table of Contents, Preface, Conference Organization
Fri, 21 Jun 2019 00:00:00 +0200

A Fresh Look at the lambda-Calculus (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10508
The (untyped) lambda-calculus is almost 90 years old. And yet, we argue here, its study is far from over. The paper is a bird's-eye view of the questions the author worked on in the last few years: how to measure the complexity of lambda-terms, how to decompose their evaluation, how to implement it, and how all this varies according to the evaluation strategy. The paper aims at inducing a new way of looking at an old topic, focussing on high-level issues and perspectives.
Fri, 21 Jun 2019 00:00:00 +0200

A Linear Logical Framework in Hybrid (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10509
Fri, 21 Jun 2019 00:00:00 +0200

Extending Maximal Completion (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10510
Maximal completion (Klein and Hirokawa 2011) is an elegantly simple yet powerful variant of Knuth-Bendix completion. This paper extends the approach to ordered completion and theorem proving as well as normalized completion. An implementation of the different procedures is described, and its practicality is demonstrated by various examples.
Fri, 21 Jun 2019 00:00:00 +0200

Some Semantic Issues in Probabilistic Programming Languages (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10511
This is a slightly extended abstract of my talk at FSCD'19 about probabilistic programming and a few semantic issues arising in it. The main purpose of this abstract is to provide keywords and references for the work mentioned in my talk, and to help interested readers do follow-up study.
Fri, 21 Jun 2019 00:00:00 +0200

Bicategories in Univalent Foundations
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10512
We develop bicategory theory in univalent foundations. Guided by the notion of univalence for (1-)categories studied by Ahrens, Kapulkin, and Shulman, we define and study univalent bicategories. To construct examples of these, we develop the notion of "displayed bicategories", an analog of the displayed 1-categories introduced by Ahrens and Lumsdaine. Displayed bicategories allow us to construct univalent bicategories in a modular fashion. To demonstrate the applicability of this notion, we prove that several bicategories are univalent. Among these are the bicategory of univalent categories with families and the bicategory of pseudofunctors between univalent bicategories. Our work is formalized in the UniMath library of univalent mathematics.
Fri, 21 Jun 2019 00:00:00 +0200

Modular Specification of Monads Through Higher-Order Presentations
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10513
In their work on second-order equational logic, Fiore and Hur have studied presentations of simply typed languages by generating binding constructions and equations among them. To each pair consisting of a binding signature and a set of equations, they associate a category of "models", and they give a monadicity result which implies that this category has an initial object, which is the language presented by the pair.

In the present work, we propose, for the untyped setting, a variant of their approach where monads and modules over them are the central notions. More precisely, we study, for monads over sets, presentations by generating ("higher-order") operations and equations among them. We consider a notion of 2-signature which allows us to specify a monad with a family of binding operations subject to a family of equations, as is the case for the paradigmatic example of the lambda calculus, specified by its two standard constructions (application and abstraction) subject to beta- and eta-equalities. Such a 2-signature is hence a pair (Sigma,E) of a binding signature Sigma and a family E of equations for Sigma. This notion of 2-signature was introduced earlier by Ahrens in a slightly different context. We associate, to each 2-signature (Sigma,E), a category of "models of (Sigma,E)", and we say that a 2-signature is "effective" if this category has an initial object; the monad underlying this (essentially unique) object is the "monad specified by the 2-signature". Not every 2-signature is effective; we identify a class of 2-signatures, which we call "algebraic", that are effective. Importantly, our 2-signatures together with their models enjoy "modularity": when we glue (algebraic) 2-signatures together, their initial models are glued accordingly. We provide a computer formalization of our main results.
Fri, 21 Jun 2019 00:00:00 +0200

Towards the Average-Case Analysis of Substitution Resolution in Lambda-Calculus
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10514
Substitution resolution supports the computational character of beta-reduction, complementing its execution with a capture-avoiding exchange of terms for bound variables. Alas, the meta-level definition of substitution, masking a non-trivial computation, turns beta-reduction into an atomic rewriting rule, despite its varying operational complexity. In the current paper we propose a somewhat indirect average-case analysis of substitution resolution in the classic lambda-calculus, based on the quantitative analysis of substitution in lambda-upsilon, an extension of lambda-calculus internalising the upsilon-calculus of explicit substitutions. Within this framework, we show that for any fixed n >= 0, the probability that a uniformly random lambda-upsilon-term (conditioned on its size) upsilon-normalises in n normal-order (i.e. leftmost-outermost) reduction steps tends to a computable limit as the term size tends to infinity. For that purpose, we establish an effective hierarchy (G_n)_n of regular tree grammars partitioning upsilon-normalisable terms into classes of terms normalising in n normal-order rewriting steps. The main technical ingredient in our construction is an inductive approach to the construction of G_{n+1} out of G_n based, in turn, on the algorithmic construction of finite intersection partitions, inspired by Robinson's unification algorithm. Finally, we briefly discuss applications of our approach to other term rewriting systems, focusing on two closely related formalisms, i.e. the full lambda-upsilon-calculus and combinatory logic.
Fri, 21 Jun 2019 00:00:00 +0200

Deriving an Abstract Machine for Strong Call by Need
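As a concrete aside on the terminology in the substitution-resolution abstract above: a normal-order (leftmost-outermost) step always contracts the leftmost-outermost beta-redex, and capture-avoiding substitution is exactly the meta-level operation the paper refers to. The following Python sketch is our own minimal illustration of counting normal-order steps; the paper itself works in lambda-upsilon with explicit substitutions, not with this meta-level substitution.

```python
from dataclasses import dataclass

# Lambda terms: variables, abstractions, applications.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def free_vars(t):
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.param}
    return free_vars(t.fun) | free_vars(t.arg)

fresh_counter = 0
def fresh(base):
    global fresh_counter
    fresh_counter += 1
    return f"{base}_{fresh_counter}"

def subst(t, x, s):
    """Capture-avoiding substitution t[x := s]."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, App):
        return App(subst(t.fun, x, s), subst(t.arg, x, s))
    if t.param == x:                   # x is shadowed below this binder
        return t
    if t.param in free_vars(s):        # alpha-rename to avoid capture
        new = fresh(t.param)
        return Lam(new, subst(subst(t.body, t.param, Var(new)), x, s))
    return Lam(t.param, subst(t.body, x, s))

def step(t):
    """One leftmost-outermost beta step, or None if t is in normal form."""
    if isinstance(t, App) and isinstance(t.fun, Lam):
        return subst(t.fun.body, t.fun.param, t.arg)
    if isinstance(t, App):
        r = step(t.fun)
        if r is not None:
            return App(r, t.arg)
        r = step(t.arg)
        return App(t.fun, r) if r is not None else None
    if isinstance(t, Lam):
        r = step(t.body)
        return Lam(t.param, r) if r is not None else None
    return None

def normalise(t):
    """Reduce to normal form, returning (normal_form, number_of_steps)."""
    n = 0
    while (r := step(t)) is not None:
        t, n = r, n + 1
    return t, n

# (\x.\y.x) a b  reduces to  a  in two normal-order steps.
K = Lam("x", Lam("y", Var("x")))
nf, steps = normalise(App(App(K, Var("a")), Var("b")))
assert nf == Var("a") and steps == 2
```

The step counter here plays the role of the quantity whose limiting distribution the paper analyses, except that the paper counts steps of the lambda-upsilon rewriting system rather than full beta steps.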
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10515
Strong call by need is a reduction strategy for computing strong normal forms in the lambda calculus, where terms are fully normalized inside the bodies of lambda abstractions and open terms are allowed. As is typical for a call-by-need strategy, the arguments of a function call are evaluated at most once, and only when they are needed. This strategy was introduced recently by Balabonski et al., who proved it complete with respect to full beta-reduction and conservative over weak call by need.

We show a novel reduction semantics and the first abstract machine for the strong call-by-need strategy. The reduction semantics incorporates a syntactic distinction between strict and non-strict let constructs and is geared towards an efficient implementation. It has been defined within the framework of generalized refocusing, i.e., a generic method that allows one to go from a reduction semantics instrumented with context kinds to the corresponding abstract machine; the machine is thus correct by construction. The format of the semantics that we use makes it explicit that strong call by need is an example of a hybrid strategy with an infinite number of substrategies.
Fri, 21 Jun 2019 00:00:00 +0200

Dependency Pairs Termination in Dependent Type Theory Modulo Rewriting
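The "evaluated at most once, only when needed" behaviour described in the abstract above can be illustrated with memoised thunks. This Python sketch is only an analogy for the weak call-by-need discipline; it does not model the paper's contribution of normalising under binders. All names here are our own.

```python
class Thunk:
    """A suspended computation that is forced at most once (call by need)."""
    def __init__(self, compute):
        self._compute = compute
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:
            self._value = self._compute()
            self._compute = None   # drop the closure; the result is memoised
            self._forced = True
        return self._value

evaluations = []

def const_zero(x):
    # Never forces its argument: under call by need, the argument
    # is never evaluated.
    return 0

def double(x):
    # Forces its argument twice, but the underlying computation
    # runs only once thanks to memoisation.
    return x.force() + x.force()

arg = Thunk(lambda: evaluations.append("evaluated") or 42)
assert const_zero(arg) == 0 and evaluations == []
assert double(arg) == 84 and evaluations == ["evaluated"]
```

Under call by name the argument would be recomputed at every use; under call by value it would be computed even by `const_zero`. Call by need sits in between, which is what the paper's machine implements, extended to strong reduction.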
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10516
Dependency pairs are a key concept at the core of modern automated termination provers for first-order term rewriting systems. In this paper, we introduce an extension of this technique for a large class of dependently-typed higher-order rewriting systems. This extends previous results by Wahlstedt on the one hand and the first author on the other hand to strong normalization and non-orthogonal rewriting systems. This new criterion is implemented in the type-checker Dedukti.
Fri, 21 Jun 2019 00:00:00 +0200

A Generic Framework for Higher-Order Generalizations
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10517
We consider a generic framework for anti-unification of simply typed lambda terms. It helps to compute generalizations which contain a maximally common top part of the input expressions, without nesting generalization variables. The rules of the corresponding anti-unification algorithm are formulated, and their soundness and termination are proved. The algorithm depends on a parameter which decides how to choose terms under generalization variables. By varying the values of the parameter, we obtain four new unitary variants of higher-order anti-unification and also show how the already known pattern generalization fits into the schema.
Fri, 21 Jun 2019 00:00:00 +0200

Homotopy Canonicity for Cubical Type Theory
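For intuition about anti-unification as such, here is the classic first-order case (Plotkin's least general generalization), sketched in Python; the paper above works on simply typed lambda terms and is parameterised differently, so this is only a first-order analogue with names of our own choosing.

```python
def lgg(s, t, table=None):
    """Plotkin's least general generalization of two first-order terms.
    Terms are tuples (functor, arg1, ...) or constant strings.
    Mismatched subterm pairs are mapped consistently to fresh variables."""
    if table is None:
        table = {}
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # Same functor and arity: keep the common top part, recurse on args.
        return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
    if s == t:
        return s
    # Disagreement: generalize with a variable, reusing the same variable
    # for the same pair of subterms (this is what makes the result "least").
    if (s, t) not in table:
        table[(s, t)] = f"X{len(table)}"
    return table[(s, t)]

# lgg(f(a, b, a), f(c, b, c)) = f(X0, b, X0): the repeated disagreement
# a/c is captured by a single variable.
assert lgg(("f", "a", "b", "a"), ("f", "c", "b", "c")) == ("f", "X0", "b", "X0")
```

The higher-order setting of the paper must additionally decide how to generalize under binders, which is exactly the role of the parameter mentioned in the abstract.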
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10518
Cubical type theory provides a constructive justification of homotopy type theory and satisfies canonicity: every natural number is convertible to a numeral. A crucial ingredient of cubical type theory is a path lifting operation, which is explained computationally by induction on the type and involves several non-canonical choices. In this paper we show by a sconing argument that if we remove the defining equations for the path lifting operation from the system, we still retain homotopy canonicity: every natural number is path equal to a numeral.
Fri, 21 Jun 2019 00:00:00 +0200

Polymorphic Higher-Order Termination
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10519
We generalise the termination method of higher-order polynomial interpretations to a setting with impredicative polymorphism. Instead of using weakly monotonic functionals, we interpret terms in a suitable extension of System F_omega. This enables a direct interpretation of rewrite rules which make essential use of impredicative polymorphism. In addition, our generalisation eases the applicability of the method in the non-polymorphic setting by allowing for the encoding of inductive data types. As an illustration of the potential of our method, we prove termination of a substantial fragment of full intuitionistic second-order propositional logic with permutative conversions.
Fri, 21 Jun 2019 00:00:00 +0200

On the Taylor Expansion of Probabilistic lambda-terms
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10520
Fri, 21 Jun 2019 00:00:00 +0200

Proof Normalisation in a Logic Identifying Isomorphic Propositions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10521
We define a fragment of propositional logic where isomorphic propositions, such as A ∧ B and B ∧ A, or A ==> (B ∧ C) and (A ==> B) ∧ (A ==> C), are identified. We define System I, a proof language for this logic, and prove its normalisation and consistency.
Fri, 21 Jun 2019 00:00:00 +0200

lambda!-calculus, Intersection Types, and Involutions
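Under the Curry-Howard correspondence, the second isomorphism mentioned above, A ==> (B ∧ C) and (A ==> B) ∧ (A ==> C), says that a pair-valued function is the same thing as a pair of functions. A small Python sketch of the two directions (function names are ours, purely illustrative):

```python
def split(f):
    """A -> (B x C)   into   (A -> B) x (A -> C)."""
    return (lambda a: f(a)[0], lambda a: f(a)[1])

def join(g, h):
    """(A -> B) x (A -> C)   into   A -> (B x C)."""
    return lambda a: (g(a), h(a))

# The two directions are mutually inverse (up to extensionality),
# which is what makes the propositions isomorphic rather than
# merely inter-derivable.
f = lambda n: (n + 1, n * 2)
g, h = split(f)
assert (g(3), h(3)) == (4, 6)
assert join(g, h)(3) == f(3)
```

System I goes further than this observation: it identifies such propositions at the level of the type system itself, so no explicit coercions like `split`/`join` are needed.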
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10522
Abramsky's affine combinatory algebras are models of affine combinatory logic, which refines standard combinatory logic in the direction of Linear Logic. Abramsky introduced various universal models of computation based on affine combinatory algebras, consisting of partial involutions over a suitable formal language of moves, in order to discuss reversible computation in a Geometry of Interaction setting. We investigate partial involutions from the point of view of the model theory of lambda!-calculus. The latter is a refinement of the standard lambda-calculus, corresponding to affine combinatory logic. We introduce intersection type systems for the lambda!-calculus, by extending standard intersection types with a !_u-operator. These induce affine combinatory algebras, and, via suitable quotients, models of the lambda!-calculus. In particular, we introduce an intersection type system for assigning principal types to lambda!-terms, and we state a correspondence between the partial involution interpreting a combinator and the principal type of the corresponding lambda!-term. This analogy allows us to explain the somewhat awkward linear application of involutions arising from Geometry of Interaction as unification between principal types.
Fri, 21 Jun 2019 00:00:00 +0200

Template Games, Simple Games, and Day Convolution
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10523
Fri, 21 Jun 2019 00:00:00 +0200

Differentials and Distances in Probabilistic Coherence Spaces
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10524
In probabilistic coherence spaces, a denotational model of probabilistic functional languages, morphisms are analytic and therefore smooth. We explore two related applications of the corresponding derivatives. First we show how derivatives allow us to compute the expectation of execution time in the weak head reduction of probabilistic PCF (pPCF). Next we apply a general notion of "local" differential of morphisms to the proof of a Lipschitz property of these morphisms, which in turn allows us to relate the observational distance on pPCF terms to a distance the model is naturally equipped with. This suggests that extending probabilistic programming languages with derivatives, in the spirit of the differential lambda-calculus, could be quite meaningful.
Fri, 21 Jun 2019 00:00:00 +0200

Modal Embeddings and Calling Paradigms
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10525
Fri, 21 Jun 2019 00:00:00 +0200

Probabilistic Rewriting: Normalization, Termination, and Unique Normal Forms
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10526
While a mature body of work supports the study of rewriting systems, abstract tools for probabilistic rewriting are still limited. In this setting, we study questions such as uniqueness of the result (unique limit distribution) and normalizing strategies (is there a strategy that finds a result with greatest probability?). The goal is to have tools to analyse the operational properties of probabilistic calculi (such as probabilistic lambda-calculi) whose evaluation is also non-deterministic, in the sense that different reductions are possible.
Fri, 21 Jun 2019 00:00:00 +0200

A Linear-Logical Reconstruction of Intuitionistic Modal Logic S4
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10527
We propose a modal linear logic to reformulate intuitionistic modal logic S4 (IS4) in terms of linear logic, establishing an S4-version of the Girard translation from IS4 to it. While the Girard translation from intuitionistic logic to linear logic is well-known, its extension to modal logic is non-trivial, since a naive combination of the S4 modality and the exponential modality causes an undesirable interaction between the two. To solve the problem, we introduce an extension of intuitionistic multiplicative exponential linear logic with a modality combining the S4 modality and the exponential modality, and show that it admits a sound translation from IS4. Through the Curry-Howard correspondence we further obtain a Geometry of Interaction Machine semantics of the modal lambda-calculus by Pfenning and Davies for staged computation.
Fri, 21 Jun 2019 00:00:00 +0200

Sparse Tiling Through Overlap Closures for Termination of String Rewriting
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10528
A strictly locally testable language is characterized by its set of admissible factors, prefixes and suffixes, called tiles. We over-approximate reachability sets in string rewriting by languages defined by sparse sets of tiles, containing only those that are reachable in derivations. Using the partial algebra defined by a tiling for semantic labeling, we obtain a transformational method for proving local termination. These algebras can be represented efficiently as finite automata of a certain shape. Using a known result on forward closures, and a new characterisation of overlap closures, we can automatically prove termination and relative termination, respectively. We report on experiments showing the strength of the method.
Fri, 21 Jun 2019 00:00:00 +0200

Proof Nets for First-Order Additive Linear Logic
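To make the notion of tiles in the abstract above concrete: membership in a strictly k-locally testable language depends only on the word's length-(k-1) prefix and suffix and its set of length-k factors. A small Python sketch (the convention for words shorter than k is our own assumption, chosen for illustration):

```python
def factors(w, k):
    """All length-k factors (the 'tiles') occurring in the word w."""
    return {w[i:i + k] for i in range(len(w) - k + 1)}

def slt_member(w, k, prefixes, suffixes, tiles):
    """Membership in a strictly k-locally testable language, given the
    admissible length-(k-1) prefixes/suffixes and length-k factors.
    Words shorter than k are handled by a convention assumed here."""
    if len(w) < k:
        return w in prefixes and w in suffixes
    return (w[:k - 1] in prefixes
            and w[len(w) - (k - 1):] in suffixes
            and factors(w, k) <= tiles)

# Example: the language (ab)+ is strictly 2-locally testable.
assert slt_member("abab", 2, {"a"}, {"b"}, {"ab", "ba"})
assert not slt_member("abba", 2, {"a"}, {"b"}, {"ab", "ba"})   # "bb" is not a tile
```

The paper's method over-approximates the set of strings reachable by rewriting with such a language, keeping the tile sets sparse by including only tiles that actually occur in derivations.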
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10529
We present canonical proof nets for first-order additive linear logic, the fragment of linear logic with sum, product, and first-order universal and existential quantification. We present two versions of our proof nets. One, witness nets, retains explicit witnessing information for existential quantification. For the other, unification nets, this information is absent but can be reconstructed through unification. Unification nets embody a central contribution of the paper: first-order witness information can be left implicit, and reconstructed as needed. Witness nets are canonical for first-order additive sequent calculus. Unification nets in addition factor out any inessential choice of existential witnesses. Both notions of proof net are defined through coalescence, an additive counterpart to multiplicative contractibility, and for witness nets an additional geometric correctness criterion is provided. Both capture sequent calculus cut-elimination as a one-step global composition operation.
Fri, 21 Jun 2019 00:00:00 +0200

The Sub-Additives: A Proof Theory for Probabilistic Choice extending Linear Logic
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10530
Probabilistic choice, where each branch of a choice is weighted according to a probability distribution, is an established approach for modelling processes, quantifying uncertainty in the environment and other sources of randomness. This paper uncovers a new insight: probabilistic choice has a purely logical interpretation as an operator in an extension of linear logic. By forbidding projection and injection, we reveal additive operators between the standard with and plus operators of linear logic. We call these operators the sub-additives. The attention of the reader is drawn to two sub-additive operators: the first is sound with respect to probabilistic choice, while the second arises because probabilistic choice cannot be self-dual and hence has a de Morgan dual counterpart. The proof-theoretic justification for the sub-additives is a cut elimination result, employing a technique called decomposition. The justification from the perspective of modelling probabilistic concurrent processes is that implication is sound with respect to established notions of probabilistic refinement, and is fully compositional.
Fri, 21 Jun 2019 00:00:00 +0200

A Lower Bound of the Number of Rewrite Rules Obtained by Homological Methods
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10531
It is well-known that some equational theories, such as groups or boolean algebras, can be defined by fewer equational axioms than the original ones. However, it is not easy to determine whether a given set of axioms is the smallest possible. Malbos and Mimram investigated a general method to find a lower bound on the cardinality of any set of equational axioms (or rewrite rules) that is equivalent to a given equational theory (or term rewriting system), using homological algebra. Their method is an analog of Squier's homology theory of string rewriting systems. In this paper, we develop the homology theory for term rewriting systems further and provide a better lower bound under a stronger notion of equivalence than theirs. The author also implemented a program to compute the lower bounds.
Fri, 21 Jun 2019 00:00:00 +0200

Gluing for Type Theory
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10532
Fri, 21 Jun 2019 00:00:00 +0200

The Discriminating Power of the Let-In Operator in the Lazy Call-by-Name Probabilistic lambda-Calculus
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10533
We consider the notion of probabilistic applicative bisimilarity (PAB), recently introduced as a behavioural equivalence over a probabilistic extension of the untyped lambda-calculus. Alberti, Dal Lago and Sangiorgi have shown that PAB is not fully abstract with respect to the context equivalence induced by the lazy call-by-name evaluation strategy. We prove that extending this calculus with a let-in operator allows for achieving full abstraction. In particular, we recall Larsen and Skou's testing language, which is known to correspond with PAB, and we prove that every test is representable by a context of our calculus.
Fri, 21 Jun 2019 00:00:00 +0200

Hilbert's Tenth Problem in Coq
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10534
We formalise the undecidability of solvability of Diophantine equations, i.e. polynomial equations over natural numbers, in Coq's constructive type theory. To do so, we give the first full mechanisation of the Davis-Putnam-Robinson-Matiyasevich theorem, stating that every recursively enumerable problem (given, in our case, by a Minsky machine) is Diophantine. We obtain an elegant and comprehensible proof by using a synthetic approach to computability and by introducing Conway's FRACTRAN language as an intermediate layer.
Fri, 21 Jun 2019 00:00:00 +0200

The Delta-calculus: Syntax and Types
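FRACTRAN, the intermediate layer mentioned in the abstract above, has a two-line semantics: a program is a list of fractions, and a step multiplies the current natural number by the first fraction that yields an integer, halting when none does. A minimal interpreter (the step budget is our own safeguard against non-termination, not part of the language):

```python
from fractions import Fraction

def fractran(program, n, max_steps=10_000):
    """Run a FRACTRAN program on input n: repeatedly multiply n by the
    first fraction in the program that yields an integer; halt and
    return n when no fraction applies."""
    n = Fraction(n)
    for _ in range(max_steps):
        for f in program:
            m = n * f
            if m.denominator == 1:   # integer result: take this step
                n = m
                break
        else:                        # no fraction applies: halt
            return int(n)
    raise RuntimeError("step budget exhausted")

# Addition: starting from 2^a * 3^b, the one-fraction program [3/2]
# halts at 3^(a+b). Here a = 3, b = 2, so 2^3 * 3^2 = 72 ends at 3^5 = 243.
assert fractran([Fraction(3, 2)], 72) == 243
```

Registers are encoded as exponents of distinct primes, which is exactly what makes FRACTRAN a convenient stepping stone between Minsky machines and Diophantine equations in the mechanised proof.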
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10535
Fri, 21 Jun 2019 00:00:00 +0200

Pointers in Recursion: Exploring the Tropics
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10536
We translate the usual class of partial/primitive recursive functions to a pointer recursion framework, accessing actual input values via a pointer reading unit-cost function. These pointer recursive function classes are proven equivalent to the usual partial/primitive recursive functions. Complexity-wise, this framework captures in a streamlined way most of the relevant sub-polynomial classes. Pointer recursion with the safe/normal tiering discipline of Bellantoni and Cook corresponds to polylogtime computation. We introduce a new, non-size-increasing tiering discipline, called tropical tiering. Tropical tiering and pointer recursion, used with some of the most common recursion schemes, capture the classes logspace, logspace/polylogtime, ptime, and NC. Finally, in a fashion reminiscent of the safe recursive functions, tropical tiering is expressed directly in the syntax of the function algebras, yielding the tropical recursive function algebras.
Fri, 21 Jun 2019 00:00:00 +0200

Typed Equivalence of Effect Handlers and Delimited Control
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10537
It is folklore that effect handlers and delimited control operators are closely related: recently, this relationship has been proved in an untyped setting for deep handlers and the shift_0 delimited control operator. We positively resolve the conjecture that in an appropriately polymorphic type system this relationship can be extended to the level of types, by identifying the necessary forms of polymorphism, thus extending the definability result to the typed context. In the process, we identify a novel and potentially interesting type system feature for delimited control operators. Moreover, we extend these results to substantiate the folklore connection between shallow handlers and the control_0 flavour of delimited control, in both untyped and typed settings.
Fri, 21 Jun 2019 00:00:00 +0200

Cubical Syntax for Reflection-Free Extensional Equality
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10538
Fri, 21 Jun 2019 00:00:00 +0200

Guarded Recursion in Agda via Sized Types
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10539
In type theory, programming and reasoning with possibly non-terminating programs and potentially infinite objects is achieved using coinductive types. Recursively defined programs of these types need to be productive to guarantee the consistency of the type system. Proof assistants such as Agda and Coq traditionally employ strict syntactic productivity checks, which often make programming with coinductive types convoluted. One way to overcome this issue is by encoding productivity at the level of types so that the type system forbids the implementation of non-productive corecursive programs. In this paper we compare two different approaches to type-based productivity: guarded recursion and sized types. More specifically, we show how to simulate guarded recursion in Agda using sized types. We formalize the syntax of a simple type theory for guarded recursion, which is a variant of Atkey and McBride's calculus for productive coprogramming. Then we give a denotational semantics using presheaves over the preorder of sizes. Sized types are fundamentally used to interpret the characteristic features of guarded recursion, notably the fixpoint combinator.
Fri, 21 Jun 2019 00:00:00 +0200

Sequence Types for Hereditary Permutators
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10540
The invertible terms in Scott's model D_infty are known as the hereditary permutators. Equivalently, they are terms which are invertible up to beta-eta-conversion with respect to the composition of lambda-terms. Finding a type-theoretic characterization of the set of hereditary permutators was problem #20 of the TLCA list of open problems. In 2008, Tatsuta proved that this is not possible with an inductive type system. Building on previous work, we use an infinitary intersection type system based on sequences (i.e., families of types indexed by integers) to characterize hereditary permutators with a unique type. This gives a positive answer to the problem in the coinductive case.
Fri, 21 Jun 2019 00:00:00 +0200

Model Checking Strategy-Controlled Rewriting Systems (System Description)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10541
Strategies are widespread in Computer Science. In the domain of reduction and rewriting systems, strategies are studied as recipes to restrict and control, in a derivation-global sense, reduction steps and rule applications, which are intrinsically local. This idea has been exploited by various tools and rewriting-based specification languages, where strategies are an additional specification layer. Systems so described need to be analyzed too. This article discusses model checking of systems controlled by strategies and presents a working strategy-aware LTL model checker for the Maude specification language, based on rewriting logic, and its strategy language.
Fri, 21 Jun 2019 00:00:00 +0200

Schloss Dagstuhl - Jahresbericht / Annual Report 2018
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10574
Wed, 19 Jun 2019 15:59:08 +0200

Computational Methods for Melody and Voice Processing in Music Recordings (Dagstuhl Seminar 19052)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10573
In our daily lives, we are constantly surrounded by music, and we are deeply influenced by music. Making music together can create strong ties between people, while fostering communication and creativity. This is demonstrated, for example, by the large community of singers active in choirs or by the fact that music constitutes an important part of our cultural heritage. The availability of music in digital formats and its distribution over the world wide web has changed the way we consume, create, enjoy, explore, and interact with music. To cope with the increasing amount of digital music, one requires computational methods and tools that allow users to find, organize, analyze, and interact with music - topics that are central to the research field known as Music Information Retrieval (MIR).

The Dagstuhl Seminar 19052 was devoted to a branch of MIR that is of particular importance: processing melodic voices (with a focus on singing voices) using computational methods. It is often the melody, a specific succession of musical tones, which constitutes the leading element in a piece of music. In the seminar we discussed how to detect, extract, and analyze melodic voices as they occur in recorded performances of a piece of music. Gathering researchers from different fields, we critically reviewed the state of the art of computational approaches to various MIR tasks related to melody processing, including pitch estimation, source separation, singing voice analysis and synthesis, and performance analysis (timbre, intonation, expression). This triggered interdisciplinary discussions that leveraged insights from fields as disparate as audio processing, machine learning, music perception, music theory, and information retrieval. In particular, we discussed current challenges in academic and industrial research in view of the recent advances in deep learning and data-driven models. Furthermore, we explored novel applications of these technologies in music and multimedia retrieval, content creation, musicology, education, and human-computer interaction.

In this report, we give an overview of the various contributions and results of the seminar. We start with an executive summary, which describes the main topics, goals, and group activities. Then, we present a more detailed overview of the participants' contributions (listed alphabetically by their last names) as well as of the ideas, results, and activities of the group meetings, the demo, and the music sessions.
Wed, 19 Jun 2019 14:51:00 +0200

Data Structures for the Cloud and External Memory Data (Dagstuhl Seminar 19051)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10572
This report documents the program and the outcomes of Dagstuhl Seminar 19051 "Data Structures for the Cloud and External Memory Data". In today's computing environment, vast amounts of data are processed, exchanged, and analyzed. The manner in which information is stored profoundly influences the efficiency of these operations over the data. In spite of the maturity of the field, many data structuring problems are still open, while new ones arise due to technological advances. The seminar covered both recent advances in "classical" data structuring topics and new models of computation adapted to modern architectures, scientific studies that reveal the need for such models, applications where large data sets play a central role, modern computing platforms for very large data, and new data structures for large data in modern architectures. The extended abstracts included in this report both present recent state-of-the-art advances and lay the foundation for new directions within data structures research.Wed, 19 Jun 2019 14:50:44 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10572Practical Yet Composably Secure Cryptographic Protocols (Dagstuhl Seminar 19042)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10571
This report documents the program and the outcomes of Dagstuhl Seminar 19042 "Practical Yet Composably Secure Cryptographic Protocols". The workshop's main aim was to enhance the community's understanding of (1) what a good model is for how various protocols and systems coexist in a larger system; (2) how to model important tasks and security protocols in such a model; and (3) how to prove security of protocols in such a model.Wed, 19 Jun 2019 14:50:32 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10571New Horizons in Parameterized Complexity (Dagstuhl Seminar 19041)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10570
This report documents the program and the outcomes of Dagstuhl Seminar 19041 "New Horizons in Parameterized Complexity". Parameterized Complexity is celebrating its 30th birthday in 2019. In these three decades, there has been tremendous progress in developing the area. The central vision of Parameterized Complexity through all these years has been to provide the algorithmic and complexity-theoretic toolkit for studying multivariate algorithmics in different disciplines and subfields of Computer Science. These tools are universal: they have not only helped in the development of the core of Parameterized Complexity but have also led to its success in other subfields of Computer Science, such as Approximation Algorithms, Computational Social Choice, Computational Geometry, and problems solvable in P (polynomial time), to name a few. In the last few years, we have witnessed several exciting developments of new parameterized techniques and tools in the following subfields of Computer Science and Optimization: Mathematical Programming, Computational Linear Algebra, Computational Counting, Derandomization, and Approximation Algorithms. The main objective of the seminar was to initiate a discussion on which of the recent domain-specific algorithmic and complexity advances can become useful in other domains.Wed, 19 Jun 2019 14:50:08 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10570Conditional Logics and Conditional Reasoning: New Joint Perspectives (Dagstuhl Seminar 19032)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10569
In the last decades, with the emergence of artificial intelligence, a large number of logics called conditional logics have been introduced to model the conditional reasoning captured by so-called conditionals, which are statements of the form `if A then B'. More recently, conditional reasoning has also come under scrutiny by psychologists, with more pragmatic and empirical considerations. The main objective of this seminar was to provide an opportunity for the different communities working on this topic to meet and reinforce their ties. We focused on three specific issues. First, we investigated how people's intuitions about `counterpossibles' can be understood empirically and classified with respect to the theoretical accounts of conditional logics. Second, we reconsidered the various semantics of system P and asked to what extent pragmatics plays a role in the relevance relation between the antecedent and the consequent of a conditional. Third, we strove to apply the recent advances in proof theory and correspondence theory to conditional logics. These working groups were preceded by short talks and tutorials. Wed, 19 Jun 2019 14:49:53 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10569Logics for Dependence and Independence (Dagstuhl Seminar 19031)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10568
This report documents the programme and outcomes of Dagstuhl Seminar 19031 "Logics for Dependence and Independence". This seminar served as a follow-up to the highly successful seminars "Dependence Logic: Theory and Applications" (13071) and "Logics for Dependence and Independence" (15261). A key objective of the seminar was to bring together researchers working in dependence logic and in its application areas so that they could communicate state-of-the-art advances and embark on a systematic interaction. A particular goal was to reach researchers who have recently started working in this thriving area, as well as researchers working on several aspects of database theory, separation logic, and logics of uncertainty.Wed, 19 Jun 2019 14:49:30 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10568Joint Processing of Language and Visual Data for Better Automated Understanding (Dagstuhl Seminar 19021)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10567
This report documents the program and the outcomes of Dagstuhl Seminar 19021 "Joint Processing of Language and Visual Data for Better Automated Understanding". It includes a discussion of the motivation and overall organization, the abstracts of the talks, and a report of each working group.Wed, 19 Jun 2019 14:49:13 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10567Dagstuhl Reports, Volume 8, Issue 12, December 2018, Complete Issue
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10566
Dagstuhl Reports, Volume 8, Issue 12, December 2018, Complete IssueTue, 18 Jun 2019 09:00:57 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10566Dagstuhl Reports, Volume 8, Issue 11, November 2018, Complete Issue
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10565
Dagstuhl Reports, Volume 8, Issue 11, November 2018, Complete IssueTue, 18 Jun 2019 09:00:49 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10565Dagstuhl Reports, Table of Contents, Volume 8, Issue 12, 2018
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10564
Dagstuhl Reports, Table of Contents, Volume 8, Issue 12, 2018Tue, 18 Jun 2019 09:00:43 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10564Dagstuhl Reports, Table of Contents, Volume 8, Issue 11, 2018
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10563
Dagstuhl Reports, Table of Contents, Volume 8, Issue 11, 2018Tue, 18 Jun 2019 09:00:33 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10563Dagstuhl Reports, Volume 8, Issue 10, October 2018, Complete Issue
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10562
Dagstuhl Reports, Volume 8, Issue 10, October 2018, Complete IssueTue, 18 Jun 2019 09:00:25 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10562Dagstuhl Reports, Table of Contents, Volume 8, Issue 10, 2018
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10561
Dagstuhl Reports, Table of Contents, Volume 8, Issue 10, 2018Tue, 18 Jun 2019 09:00:17 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10561Dagstuhl Reports, Volume 8, Issue 9, September 2018, Complete Issue
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10560
Dagstuhl Reports, Volume 8, Issue 9, September 2018, Complete IssueTue, 18 Jun 2019 09:00:12 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10560Dagstuhl Reports, Table of Contents, Volume 8, Issue 9, 2018
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10559
Dagstuhl Reports, Table of Contents, Volume 8, Issue 9, 2018Tue, 18 Jun 2019 09:00:02 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10559LIPIcs, Volume 128, CPM'19, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10557
LIPIcs, Volume 128, CPM'19, Complete VolumeTue, 18 Jun 2019 08:09:18 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10557LIPIcs, Volume 129, SoCG'19, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10556
LIPIcs, Volume 129, SoCG'19, Complete VolumeTue, 18 Jun 2019 08:09:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10556Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10471
Front Matter, Table of Contents, Preface, Conference OrganizationMon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10471How to Exploit Periodicity (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10472
Periodicity is a fundamental combinatorial property of strings. We say that p is a period of a string s[1..n] when s[i]=s[i+p] for every i such that both s[i] and s[i+p] are defined. While this notion is interesting on its own, it can often be used as a tool for designing efficient algorithms. At a high level, such algorithms often operate differently depending on whether a given string does or does not have a small period, where small usually means smaller than half of its length (or, say, a quarter). In other words, we design an algorithm that is efficient if the given string is repetitive, and another algorithm that is efficient if the given string is non-repetitive, in each case carefully exploiting either the periodicity or the fact that the input looks sufficiently "random", and then choose the appropriate algorithm depending on the input. Of course, in some cases one needs to proceed in a more complex manner, for example by classifying the substrings of the whole string and processing each of them differently depending on its structure. I will survey results, mostly connected to different versions of pattern matching, that are based on this paradigm. This will include the recent generalization of periodicity that can be applied in approximate pattern matching, and some examples of how the notion of periodicity can be applied to design a better data structure.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10472Some Variations on Lyndon Words (Invited Talk)
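As a concrete illustration (not taken from the talk), the period definition above can be checked directly; `is_period` and `smallest_period` are hypothetical helper names:

```python
def is_period(s: str, p: int) -> bool:
    # p is a period of s when s[i] == s[i + p] wherever both positions exist
    return all(s[i] == s[i + p] for i in range(len(s) - p))

def smallest_period(s: str) -> int:
    # len(s) is always a period, so this always terminates
    return next(p for p in range(1, len(s) + 1) if is_period(s, p))
```

In the sense used above, a string is "repetitive" when `smallest_period(s) <= len(s) // 2`.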
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10473
Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10473Stringology Combats Microbiological Threats (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10474
A major concern worldwide is the acquisition of antibiotic resistance by pathogenic bacteria. Genomic elements carrying resistance and virulence functions can be acquired through horizontal gene transfer, yielding a broad spread of evolutionarily successful elements, both within and between species, with devastating effect. Recent advances in pyrosequencing techniques, combined with global efforts to study microbial adaptation to a wide range of ecological niches (and in particular to life in host tissues that we perceive as pathogenesis), yield huge and rapidly-growing databases of microbial genomes. This big new data statistically empowers genomic-context-based approaches to functional analysis: the idea is that groups of genes that are clustered locally together across many genomes usually express protein products that interact in the same biological pathway, and thus the function of a new, uncharacterized gene can be deciphered based on the previously characterized genes that are co-localized with it in the same gene cluster. Identifying and interpreting microbial gene context in huge genomic data requires efficient string-based data mining algorithms. Additionally, new computational challenges are raised by the need to study the grammar and evolutionary spreading patterns of microbial gene context. In this talk, we will review some classical combinatorial pattern matching and data mining problems previously inspired by this application domain. We will re-examine the biological assumptions behind the previously proposed models in light of some new biological observations. We will consider the computational challenges arising in accommodating the new biological observations, and in exploiting them to scale up the algorithmic solutions to the huge new data. 
Our goal is to inspire interesting new problems that harness Stringology to the study of microbial adaptation and to the fight against microbiological threats ...Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10474Optimal Rank and Select Queries on Dictionary-Compressed Text
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10475
We study the problem of supporting queries on a string S of length n within a space bounded by the size gamma of a string attractor for S. In the paper introducing string attractors, it was shown that random access on S can be supported in optimal O(log(n/gamma)/log log n) time within O(gamma polylog n) space. In this paper, we extend this result to rank and select queries and provide lower bounds matching our upper bounds on alphabets of polylogarithmic size. Our solutions are given in the form of a space-time trade-off that is more general than the one previously known for grammars and that improves existing bounds on LZ77-compressed text by a log log n time factor in select queries. We also provide matching lower and upper bounds for partial sum and predecessor queries within attractor-bounded space, and extend our lower bounds to encompass navigation of dictionary-compressed tree representations.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10475A 2-Approximation Algorithm for the Complementary Maximal Strip Recovery Problem
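For readers unfamiliar with the queries involved, here is a naive (uncompressed) sketch of the rank and select semantics that the attractor-bounded structures support; answering these within compressed space is the point of the paper, which this sketch does not attempt:

```python
def rank(s, c, i):
    """rank(s, c, i): number of occurrences of symbol c in the prefix s[:i]."""
    return s[:i].count(c)

def select(s, c, j):
    """select(s, c, j): position of the j-th occurrence (1-based) of c in s,
    or -1 if c occurs fewer than j times."""
    seen = 0
    for pos, ch in enumerate(s):
        if ch == c:
            seen += 1
            if seen == j:
                return pos
    return -1
```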
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10476
The Maximal Strip Recovery problem (MSR) and its complement (CMSR) are well-studied NP-hard problems in computational genomics. The input to these dual problems is a pair of signed permutations. The goal is to delete some gene markers from both permutations such that, in the remaining permutations, each gene marker has at least one common neighbor. Equivalently, the resulting permutations can be partitioned into common strips of length at least two. MSR is then to maximize the number of remaining genes, while the objective of CMSR is to delete the minimum number of gene markers. In this paper, we present a new approximation algorithm for the Complementary Maximal Strip Recovery (CMSR) problem. Our approximation factor is 2, improving on the previously best 7/3-approximation algorithm. Although the improvement in the factor is not huge, the analysis is greatly simplified by a compensating method, commonly referred to as the non-oblivious local search technique. In such a method, a substitution may not always increase the value of the current solution (it sometimes may even decrease it), though it always improves the value of another function seemingly unrelated to the objective function.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10476Sufficient Conditions for Efficient Indexing Under Different Matchings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10477
The most important task arising from the massive accumulation of digital data in the world is efficient access to this data; hence the importance of indexing. In the last decade, many different types of matching relations were defined, each requiring an efficient indexing scheme. In a groundbreaking paper [Cole and Hariharan, SIAM J. Comput., 33(1):26-42, 2003], Cole and Hariharan formulated sufficient conditions for building an efficient index for quasi-suffix collections, that is, collections that behave like suffixes. It was shown that known matchings, including parameterized, 2-D array, and order-preserving matchings, fit their indexing settings. In this paper, we formulate more basic sufficient conditions based on the order relation derived from the matching relation itself; our conditions are more general than the previously known conditions.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10477Space-Efficient Computation of the LCP Array from the Burrows-Wheeler Transform
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10478
We show that the Longest Common Prefix Array of a text collection of total size n on alphabet [1,sigma] can be computed from the Burrows-Wheeler transformed collection in O(n log sigma) time using o(n log sigma) bits of working space on top of the input and output. Our result improves (on small alphabets) and generalizes (to string collections) the previous solution from Beller et al., which required O(n) bits of extra working space. We also show how to merge the BWTs of two collections of total size n within the same time and space bounds. The procedure at the core of our algorithms can be used to enumerate suffix tree intervals in succinct space from the BWT, which is of independent interest. An engineered implementation of our first algorithm on DNA alphabet induces the LCP of a large (16 GiB) collection of short (100 bases) reads at a rate of 2.92 megabases per second using in total 1.5 Bytes per base in RAM. Our second algorithm merges the BWTs of two short-reads collections of 8 GiB each at a rate of 1.7 megabases per second and uses 0.625 Bytes per base in RAM. An extension of this algorithm that computes also the LCP array of the merged collection processes the data at a rate of 1.48 megabases per second and uses 1.625 Bytes per base in RAM.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10478Safe and Complete Algorithms for Dynamic Programming Problems, with an Application to RNA Folding
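As background, the LCP array being computed here can be defined naively from the sorted suffixes; the following quadratic-time sketch (a hypothetical helper, not the paper's space-efficient BWT-based algorithm) just spells out the semantics:

```python
def lcp_array(s):
    """Suffix array plus LCP array, where LCP[i] is the length of the longest
    common prefix of the (i-1)-th and i-th lexicographically smallest
    suffixes (LCP[0] = 0). Naive O(n^2 log n) construction."""
    sa = sorted(range(len(s)), key=lambda i: s[i:])
    lcp = [0] * len(s)
    for i in range(1, len(s)):
        a, b = s[sa[i - 1]:], s[sa[i]:]
        k = 0
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1
        lcp[i] = k
    return sa, lcp
```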
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10479
Many bioinformatics problems admit a large number of solutions, with no way of distinguishing the correct one among them. One approach to coping with this issue is to look at the partial solutions common to all solutions. Such partial solutions have been called safe, and an algorithm outputting all safe solutions has been called safe and complete. In this paper we develop a general technique that automatically provides a safe and complete algorithm for problems solvable by dynamic programming. We illustrate it by applying it to the bioinformatics problem of RNA folding, assuming the simplistic folding model maximizing the number of paired bases. Our safe and complete algorithm has time complexity O(n^3 M(n)) and space complexity O(n^3), where n is the length of the RNA sequence and M(n) in Omega(n) is the time complexity of arithmetic operations on O(n)-bit integers. We also implement this algorithm and show that, despite an exponential number of optimal solutions, our algorithm is efficient in practice.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10479Conversion from RLBWT to LZ77
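The simplistic folding model mentioned in the abstract, maximizing the number of paired bases, is the classic Nussinov recurrence. A minimal cubic-time sketch of that model (computing one optimum, not the paper's safe-and-complete enumeration) could look like:

```python
# Watson-Crick plus wobble pairs; an assumption of this sketch, not the paper
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_pairs(seq: str) -> int:
    """Nussinov-style DP: maximum number of non-crossing base pairs."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                # position i left unpaired
            for k in range(i + 1, j + 1):      # or pair i with some k
                if (seq[i], seq[k]) in PAIRS:
                    left = dp[i + 1][k - 1] if k > i + 1 else 0
                    right = dp[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```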
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10480
Converting a compressed format of a string into another compressed format without explicit decompression is one of the central research topics in string processing. We discuss the problem of converting the run-length Burrows-Wheeler Transform (RLBWT) of a string into the Lempel-Ziv 77 (LZ77) phrases of the reversed string. The first results, obtained with Policriti and Prezza's conversion algorithm [Algorithmica 2018], were O(n log r) time and O(r) working space for a string of length n with r runs in the RLBWT and z LZ77 phrases. Recent results with Kempa's conversion algorithm [SODA 2019] are O(n / log n + r log^{9} n + z log^{9} n) time and O(n / log_{sigma} n + r log^{8} n) working space for the alphabet size sigma of the RLBWT. In this paper, we present a new conversion algorithm by improving Policriti and Prezza's conversion algorithm, which uses general-purpose dynamic data structures. We argue that these dynamic data structures can be replaced, and we present new data structures for faster conversion. The time and working space of our conversion algorithm with the new data structures are O(n min{log log n, sqrt{(log r)/(log log r)}}) and O(r), respectively.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10480Fully-Functional Bidirectional Burrows-Wheeler Indexes and Infinite-Order De Bruijn Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10481
Given a string T on an alphabet of size sigma, we describe a bidirectional Burrows-Wheeler index that takes O(|T| log sigma) bits of space, and that supports the addition and removal of one character, on the left or right side of any substring of T, in constant time. Previously known data structures that used the same space allowed constant-time addition to any substring of T, but they could support removal only from specific substrings of T. We also describe an index that supports bidirectional addition and removal in O(log log |T|) time, and that takes a number of words proportional to the number of left and right extensions of the maximal repeats of T. We use such fully-functional indexes to implement bidirectional, frequency-aware, variable-order de Bruijn graphs with no upper bound on their order, and supporting natural criteria for increasing and decreasing the order during traversal.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10481Entropy Lower Bounds for Dictionary Compression
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10482
We show that a wide class of dictionary compression methods (including LZ77, LZ78, grammar compressors as well as parsing-based structures) require |S|H_k(S) + Omega (|S|k log sigma/log_sigma |S|) bits to encode their output. This matches known upper bounds and improves the information-theoretic lower bound of |S|H_k(S). To this end, we abstract the crucial properties of parsings created by those methods, construct a certain family of strings and analyze the parsings of those strings. We also show that for k = alpha log_sigma |S|, where 0 < alpha < 1 is a constant, the aforementioned methods produce an output of size at least 1/(1-alpha)|S|H_k(S) bits. Thus our results separate dictionary compressors from context-based ones (such as PPM) and BWT-based ones, as those include methods achieving |S|H_k(S) + O(sigma^k log sigma) bits, i.e. redundancy that depends on k and sigma but not on |S|.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10482A New Class of Searchable and Provably Highly Compressible String Transformations
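The empirical entropy H_k appearing in these bounds can be computed directly from its definition. The sketch below assumes the common convention H_k(S) = (1/|S|) sum_w |S_w| H_0(S_w), taken over length-k contexts w, where S_w collects the symbols following each occurrence of w; function names are illustrative:

```python
from collections import Counter
from math import log2

def h0(symbols):
    """Zeroth-order empirical entropy of a sequence of symbols."""
    n = len(symbols)
    if n == 0:
        return 0.0
    return sum(c / n * log2(n / c) for c in Counter(symbols).values())

def hk(s, k):
    """k-th order empirical entropy under the convention
    H_k(S) = (1/|S|) * sum_w |S_w| * H_0(S_w)."""
    if k == 0:
        return h0(s)
    ctx = {}
    for i in range(len(s) - k):
        ctx.setdefault(s[i:i + k], []).append(s[i + k])
    return sum(len(f) * h0(f) for f in ctx.values()) / len(s)
```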
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10483
The Burrows-Wheeler Transform is a string transformation that plays a fundamental role in the design of self-indexing compressed data structures. Over the years, researchers have successfully extended this transformation beyond the domain of strings. However, efforts to find non-trivial alternatives to the original, now 25 years old, Burrows-Wheeler string transformation have met limited success. In this paper we bring new life to this area by introducing a whole new family of transformations that have all the "myriad virtues" of the BWT: they can be computed and inverted in linear time, they produce provably highly compressible strings, and they support linear time pattern search directly on the transformed string. This new family is a special case of a more general class of transformations based on context-adaptive alphabet orderings, a concept introduced here. This more general class also includes the Alternating BWT, another invertible string transform recently introduced in connection with a generalization of Lyndon words.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10483Compressed Multiple Pattern Matching
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10484
Given d strings over the alphabet {0,1,...,sigma{-}1}, the classical Aho-Corasick data structure allows us to find all occ occurrences of the strings in any text T in O(|T| + occ) time using O(m log m) bits of space, where m is the number of edges in the trie containing the strings. Fix any constant epsilon in (0, 2). We describe a compressed solution for the problem that, provided sigma <= m^delta for a constant delta < 1, works in O(|T| 1/epsilon log(1/epsilon) + occ) time, which is O(|T| + occ) since epsilon is constant, and occupies mH_k + 1.443 m + epsilon m + O(d log m/d) bits of space, for all 0 <= k <= max{0, alpha log_sigma m - 2} simultaneously, where alpha in (0,1) is an arbitrary constant and H_k is the kth-order empirical entropy of the trie. Hence, we reduce the 3.443m term in the space bounds of the previously best succinct solutions to (1.443 + epsilon)m, thus solving an open problem posed by Belazzougui. Further, we notice that L = log binom{sigma (m+1)}{m} - O(log(sigma m)) is a worst-case space lower bound for any solution of the problem and, for d = o(m) and constant epsilon, our approach achieves L + epsilon m bits of space, which gives evidence that, for d = o(m), the space of our data structure is theoretically optimal up to the epsilon m additive term and that it is hardly possible to eliminate the 1.443m term. In addition, we refine the space analysis of previous works by proposing a more appropriate definition of H_k. We also simplify the construction for practice by adapting the fixed block compression boosting technique, implement our data structure, and conduct a number of experiments showing that it is comparable to the state of the art in terms of time and superior in space.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10484Hamming Distance Completeness
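For reference, the classical (uncompressed) Aho-Corasick automaton that the succinct structure emulates can be sketched in a few dozen lines; this naive dictionary-based version uses the plain O(m log m)-bit representation, not the compressed bounds above:

```python
from collections import deque

def build_aho_corasick(patterns):
    """Build goto/fail/output tables over the trie of the patterns."""
    goto, out = [{}], [[]]          # trie transitions and outputs per node
    for p in patterns:
        v = 0
        for c in p:
            if c not in goto[v]:
                goto.append({})
                out.append([])
                goto[v][c] = len(goto) - 1
            v = goto[v][c]
        out[v].append(p)
    fail = [0] * len(goto)
    queue = deque(goto[0].values())
    while queue:                    # BFS order guarantees fail[v] is ready
        v = queue.popleft()
        for c, u in goto[v].items():
            queue.append(u)
            f = fail[v]
            while f and c not in goto[f]:
                f = fail[f]
            fail[u] = goto[f].get(c, 0) if goto[f].get(c, 0) != u else 0
            out[u] += out[fail[u]]
    return goto, fail, out

def find_all(text, patterns):
    """Return (position, pattern) for every occurrence in the text."""
    goto, fail, out = build_aho_corasick(patterns)
    v, occ = 0, []
    for i, c in enumerate(text):
        while v and c not in goto[v]:
            v = fail[v]
        v = goto[v].get(c, 0)
        for p in out[v]:
            occ.append((i - len(p) + 1, p))
    return occ
```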
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10485
We show, given a binary integer function diamond that is piecewise polynomial, that (+,diamond) vector products are equivalent under one-to-polylog reductions to the computation of the Hamming distance. Examples include the dominance and l_{2p+1} distances for constant p. Our results imply equivalence (up to polylog factors) between the complexity of computing All Pairs Hamming Distance, All Pairs l_{2p+1} Distance and Dominance Matrix Product, and equivalence between Hamming Distance Pattern Matching, l_{2p+1} Pattern Matching and Less-Than Pattern Matching. The resulting algorithms for l_{2p+1} Pattern Matching and All Pairs l_{2p+1}, for 2p+1 = 3,5,7,... are likely to be optimal, given the lack of progress in improving upper bounds for Hamming distance in the past 30 years. While reductions between selected pairs of products were presented in the past, our work is the first to generalize them to a general class of functions, showing that a wide class of "intermediate" complexity problems are in fact equivalent.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10485Approximating Approximate Pattern Matching
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10486
Given a text T of length n and a pattern P of length m, the approximate pattern matching problem asks for the computation of a particular distance function between P and every m-substring of T. We consider a (1 +/- epsilon) multiplicative approximation variant of this problem for the l_p distance function. In this paper, we describe two (1+epsilon)-approximate algorithms with a runtime of O~(n/epsilon) for all (constant) non-negative values of p. For constant p >= 1 we show a deterministic (1+epsilon)-approximation algorithm. Previously, such a runtime was known only for the case of the l_1 distance, by Gawrychowski and Uznanski [ICALP 2018], and only with a randomized algorithm. For constant 0 <= p <= 1 we show a randomized algorithm for l_p, thereby providing a smooth tradeoff between the algorithms of Kopelowitz and Porat [FOCS 2015, SOSA 2018] for Hamming distance (the case p=0) and of Gawrychowski and Uznanski for the l_1 distance.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10486Cartesian Tree Matching and Indexing
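The exact problem being approximated can be stated as a naive O(nm) computation; the sketch below (illustrative names, numeric alphabets) returns the p-th powers of the l_p distances, reading p = 0 as Hamming distance:

```python
def lp_distances(text, pattern, p):
    """Distance between the pattern and every m-substring of the text.
    For p >= 1 the value is the p-th power of the l_p distance;
    p = 0 counts mismatches (Hamming distance). Naive O(nm)."""
    n, m = len(text), len(pattern)
    dists = []
    for i in range(n - m + 1):
        window = text[i:i + m]
        if p == 0:
            dists.append(sum(a != b for a, b in zip(window, pattern)))
        else:
            dists.append(sum(abs(a - b) ** p for a, b in zip(window, pattern)))
    return dists
```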
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10487
We introduce a new form of matching, called Cartesian tree matching, under which two strings match if they have the same Cartesian tree. Based on Cartesian tree matching, we define single pattern matching for a text of length n and a pattern of length m, and multiple pattern matching for a text of length n and k patterns of total length m. We present an O(n+m) time algorithm for single pattern matching, and an O((n+m) log k) deterministic time or O(n+m) randomized time algorithm for multiple pattern matching. We also define an index data structure called the Cartesian suffix tree, and present an O(n) randomized time algorithm to build it. Our efficient algorithms for Cartesian tree matching use a representation of the Cartesian tree, called the parent-distance representation.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10487Indexing the Bijective BWT
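A minimal sketch of the parent-distance representation, assuming the convention PD[i] = i - j for the largest j < i with s[j] <= s[i] (and 0 if no such j exists); two strings then Cartesian-tree match exactly when their PD arrays coincide:

```python
def parent_distance(s):
    """Parent-distance representation: PD[i] = i - j for the largest j < i
    with s[j] <= s[i], or 0 if no such j exists. O(n) via monotone stack."""
    pd, stack = [], []
    for i, v in enumerate(s):
        while stack and s[stack[-1]] > v:
            stack.pop()
        pd.append(i - stack[-1] if stack else 0)
        stack.append(i)
    return pd

def ct_match(a, b):
    """a and b match under Cartesian tree matching iff their
    parent-distance representations (hence Cartesian trees) coincide."""
    return len(a) == len(b) and parent_distance(a) == parent_distance(b)
```

For example, [3, 1, 2] and [9, 2, 5] have the same Cartesian tree even though their values differ.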
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10488
The Burrows-Wheeler transform (BWT) is a permutation whose applications are prevalent in data compression and text indexing. The bijective BWT is a bijective variant of it that has not yet been studied for text indexing applications. We fill this gap by proposing a self-index built on the bijective BWT. The self-index applies the backward search technique of the FM-index to find a pattern P with O(|P| lg|P|) backward search steps.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10488On Maximal Repeats in Compressed Strings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10489
This paper presents and proves a new non-trivial upper bound on the number of maximal repeats of compressed strings. Using Theorem 1 of Raffinot's article "On Maximal Repeats in Strings", this upper bound can be directly translated into an upper bound on the number of nodes in the Compacted Directed Acyclic Word Graphs of compressed strings. More formally, this paper proves that the number of maximal repeats in a string with z (self-referential) LZ77-factors and without q-th powers is at most 3q(z+1)^3-2. It also proves that for 2000 <= z <= q this upper bound is tight up to a constant factor.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10489Dichotomic Selection on Words: A Probabilistic Analysis
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10490
The paper studies the behaviour of selection algorithms that are based on dichotomy principles. On input an ordered list L and a searched element x not in L, they return the interval of the list L that the element x belongs to. We focus here on the case of words, where dichotomy principles lead to a selection algorithm designed by Crochemore, Hancart and Lecroq, which appears to be "quasi-optimal". We perform a probabilistic analysis of this algorithm that exhibits its quasi-optimality on average.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10490Finding a Small Number of Colourful Components
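On an ordered list, the selection task above reduces to ordinary binary search (Python's `bisect`); the subtlety analyzed in the paper is the cost of the symbol-by-symbol comparisons when the list entries are words, which this sketch ignores:

```python
from bisect import bisect_left

def locate(L, x):
    """For a sorted list L and x not in L, return the index i such that x
    falls in the open interval (L[i-1], L[i]); i = 0 means x < L[0] and
    i = len(L) means x > L[-1]."""
    return bisect_left(L, x)
```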
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10491
A partition (V_1,...,V_k) of the vertex set of a graph G with a (not necessarily proper) colouring c is colourful if no two vertices in any V_i have the same colour and every set V_i induces a connected graph. The Colourful Partition problem, introduced by Adamaszek and Popa, is to decide whether a coloured graph (G,c) has a colourful partition of size at most k. This problem is related to the Colourful Components problem, introduced by He, Liu and Zhao, which is to decide whether a graph can be modified into a graph whose connected components form a colourful partition by deleting at most p edges. Despite the similarities in their definitions, we show that Colourful Partition and Colourful Components may have different complexities for restricted instances. We tighten known NP-hardness results for both problems by closing a number of complexity gaps. In addition, we prove new hardness and tractability results for Colourful Partition. In particular, we prove that deciding whether a coloured graph (G,c) has a colourful partition of size 2 is NP-complete for coloured planar bipartite graphs of maximum degree 3 and path-width 3, but polynomial-time solvable for coloured graphs of treewidth 2. Rather than performing an ad hoc study, we use our classical complexity results to guide us in undertaking a thorough parameterized study of Colourful Partition. We show that this leads to suitable parameters for obtaining FPT results and moreover prove that Colourful Components and Colourful Partition may have different parameterized complexities, depending on the chosen parameter.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10491Streaming Dictionary Matching with Mismatches
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10492
In the k-mismatch problem we are given a pattern of length m and a text and must find all locations where the Hamming distance between the pattern and the text is at most k. A series of recent breakthroughs has resulted in an ultra-efficient streaming algorithm for this problem that requires only O(k log m/k) space [Clifford, Kociumaka, Porat, SODA 2019]. In this work, we consider a strictly harder problem called dictionary matching with k mismatches, where we are given a dictionary of d patterns of lengths at most m and must find all their k-mismatch occurrences in the text, and show the first streaming algorithm for it. The algorithm uses O(k d log^k d polylog m) space and processes each position of the text in O(k log^k d polylog m + occ) time, where occ is the number of k-mismatch occurrences of the patterns that end at this position. The algorithm is randomised and outputs correct answers with high probability.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10492Quasi-Periodicity in Streams
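As a brute-force reference for the problem statement (our code; the streaming algorithm in the paper achieves this in sublinear space, which this sketch makes no attempt at):

```python
def k_mismatch_occurrences(patterns, text, k):
    """All (end_position, pattern_index) pairs where a pattern occurs in
    the text with Hamming distance at most k. Naive O(d*n*m) check."""
    occ = []
    for idx, p in enumerate(patterns):
        m = len(p)
        for end in range(m, len(text) + 1):
            window = text[end - m:end]
            # Hamming distance between the pattern and this text window
            if sum(a != b for a, b in zip(p, window)) <= k:
                occ.append((end, idx))
    return occ
```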
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10493
In this work, we show two streaming algorithms for computing the length of the shortest cover of a string of length n. We start by showing a two-pass algorithm that uses O(log^2 n) space and then show a one-pass streaming algorithm that uses O(sqrt{n log n}) space. Both algorithms run in near-linear time. The algorithms are randomized and compute the answer incorrectly with probability inverse-polynomial in n. We also show that there is no sublinear-space streaming algorithm for computing the length of the shortest seed of a string.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10493Computing Runs on a Trie
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10494
A maximal repetition, or run, in a string is a maximal periodic substring whose smallest period is at most half the length of the substring. In this paper, we consider runs that correspond to a path on a trie, or in other words, on a rooted edge-labeled tree where one endpoint of the path must be an ancestor of the other. For a trie with n edges, we show that the number of runs is less than n. We also show an O(n sqrt{log n} log log n) time and O(n) space algorithm for counting and finding the shallower endpoint of all runs. We further show an O(n log n) time and O(n) space algorithm for finding both endpoints of all runs. We also discuss how to improve the running time even more.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10494Linking BWT and XBW via Aho-Corasick Automaton: Applications to Run-Length Encoding
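The definition of a run in the ordinary (single-string) setting can be illustrated by a naive cubic-time enumerator (our code, for small inputs only; linear-time algorithms exist for strings, and the paper treats the trie generalization):

```python
def runs(s):
    """All runs of s as triples (start, end, period): s[start:end] is
    maximal with smallest period <= (end-start)/2."""
    n = len(s)
    found = set()
    for i in range(n):
        for p in range(1, (n - i) // 2 + 1):
            if s[i:i + p] != s[i + p:i + 2 * p]:
                continue               # no square with period p starts here
            j = i + 2 * p              # extend right while period p persists
            while j < n and s[j] == s[j - p]:
                j += 1
            start = i                  # extend left while period p persists
            while start > 0 and s[start - 1] == s[start - 1 + p]:
                start -= 1
            # record only if p is the smallest period of s[start:j]
            if all(any(s[a] != s[a + q] for a in range(start, j - q))
                   for q in range(1, p)):
                found.add((start, j, p))
    return sorted(found)
```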
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10495
The boom of genomic sequencing makes compression of sets of sequences inescapable. This underlies the need for multi-string indexing data structures that help compress the data. The most prominent example of such data structures is the Burrows-Wheeler Transform (BWT), a reversible permutation of a text that improves its compressibility. A similar data structure, the eXtended Burrows-Wheeler Transform (XBW), is able to index a tree labelled with alphabet symbols. A link between a multi-string BWT and the Aho-Corasick automaton has already been found and led to a way to build an XBW from a multi-string BWT. We exhibit a stronger link between a multi-string BWT and an XBW by using the order of the concatenation in the multi-string. This bijective link has several applications: first, it allows one to build one data structure from the other; second, it enables one to compute an ordering of the input strings that optimises a Run-Length measure (i.e., the compressibility) of the BWT or of the XBW.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10495Quasi-Linear-Time Algorithm for Longest Common Circular Factor
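The Run-Length measure mentioned above is simply the number of maximal equal-letter runs in the transform; a one-liner (ours, for illustration):

```python
def rle_runs(s):
    """Number of maximal runs of equal letters: the Run-Length measure
    that a good string ordering aims to minimise in the BWT or XBW."""
    return sum(1 for i in range(len(s)) if i == 0 or s[i] != s[i - 1])
```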
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10496
We introduce the Longest Common Circular Factor (LCCF) problem in which, given strings S and T of length at most n, we are to compute the longest factor of S whose cyclic shift occurs as a factor of T. It is a new similarity measure, an extension of the classic Longest Common Factor. We show how to solve the LCCF problem in O(n log^4 n) time using O(n log^2 n) space.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10496Simulating the DNA Overlap Graph in Succinct Space
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10497
Converting a set of sequencing reads into a lossless compact data structure that encodes all the relevant biological information is a major challenge. The classical approaches are to build the string graph or the de Bruijn graph (dBG) of some order k. Each has advantages over the other depending on the application. Still, the ideal setting would be to have an index of the reads that is easy to build and can be adapted to any type of biological analysis. In this paper we propose rBOSS, a new data structure based on the Burrows-Wheeler Transform (BWT), which gets close to that ideal. Our rBOSS simultaneously encodes all the dBGs of a set of sequencing reads up to some order k, and for any dBG node v, it can compute in O(k) time all the other nodes whose labels have an overlap of at least m characters with the label of v, with m being a parameter. If we choose the parameter k equal to the size of the reads (assuming that all have equal length), then we can simulate the overlap graph of the read set. Instead of storing the edges of this graph explicitly, rBOSS computes them on the fly as we traverse the graph. Like most BWT-based structures, rBOSS is unidirectional, meaning that we can retrieve only the suffix overlaps of the nodes. However, we exploit the properties of DNA reverse complements to simulate bi-directionality. We implemented a genome assembler on top of rBOSS to demonstrate its usefulness. The experimental results show that, using k=100, our rBOSS-based assembler can process ~500K reads, each 150 characters long (a FASTQ file of 185 MB), in less than 15 minutes and using 110 MB in total. It produces contigs with mean size over 10,000, which is twice the size obtained by using a pure de Bruijn graph of fixed length k.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10497Faster Queries for Longest Substring Palindrome After Block Edit
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10498
Palindromes are important objects in strings which have been extensively studied from combinatorial, algorithmic, and bioinformatics points of view. Manacher [J. ACM 1975] proposed a seminal algorithm that computes the longest substring palindromes (LSPals) of a given string in O(n) time, where n is the length of the string. In this paper, we consider the problem of finding the LSPal after the string is edited. We present an algorithm that uses O(n) time and space for preprocessing, and reports the length of the LSPals in O(l + log log n) time, after a substring in T is replaced by a string of arbitrary length l. This outperforms the query algorithm proposed in our previous work [CPM 2018] that uses O(l + log n) time for each query.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10498A Rearrangement Distance for Fully-Labelled Trees
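Manacher's linear-time algorithm referenced above can be sketched as follows (standard textbook version, our code; handling edits as in the paper requires the additional machinery described there):

```python
def longest_palindrome_length(s):
    """Manacher's O(n) algorithm: length of the longest palindromic
    substring. '#' separators handle even and odd palindromes uniformly."""
    t = "#" + "#".join(s) + "#"
    n = len(t)
    rad = [0] * n                      # rad[i] = palindrome radius at i
    center = right = 0
    for i in range(n):
        if i < right:                  # reuse the mirrored radius
            rad[i] = min(right - i, rad[2 * center - i])
        while (i - rad[i] - 1 >= 0 and i + rad[i] + 1 < n
               and t[i - rad[i] - 1] == t[i + rad[i] + 1]):
            rad[i] += 1
        if i + rad[i] > right:
            center, right = i, i + rad[i]
    # in the '#'-interleaved string, rad[i] equals the original length
    return max(rad)
```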
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10499
The problem of comparing trees representing the evolutionary histories of cancerous tumors has turned out to be crucial, since there is a variety of different methods which typically infer multiple possible trees. In a departure from the widely studied setting of classical phylogenetics, where trees are leaf-labelled, tumoral trees are fully labelled, i.e., every vertex has a label. In this paper we provide a rearrangement distance measure between two fully-labelled trees. This notion originates from two operations: one which modifies the topology of the tree, the other which permutes the labels of the vertices, hence leaving the topology unaffected. While we show that the distance between two trees in terms of each such operation alone can be computed in polynomial time, the more general notion of distance when both operations are allowed is NP-hard to compute. Despite this result, we show that it is fixed-parameter tractable, and we give a 4-approximation algorithm when one of the trees is binary.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10499On the Size of Overlapping Lempel-Ziv and Lyndon Factorizations
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10500
Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10500Online Algorithms for Constructing Linear-Size Suffix Trie
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10501
Suffix trees are fundamental data structures for various kinds of string processing. The suffix tree of a string T of length n has O(n) nodes and edges, and the string label of each edge is encoded by a pair of positions in T. Thus, even after the tree is built, the input text T needs to be kept stored and random access to T is still needed. The linear-size suffix tries (LSTs), proposed by Crochemore et al. [Linear-size suffix tries, TCS 638:171-178, 2016], are a "stand-alone" alternative to the suffix trees. Namely, the LST of a string T of length n occupies O(n) total space, and supports pattern matching and other tasks with the same efficiency as the suffix tree without the need to store the input text T. Crochemore et al. proposed an offline algorithm which transforms the suffix tree of T into the LST of T in O(n log sigma) time and O(n) space, where sigma is the alphabet size. In this paper, we present two types of online algorithms which "directly" construct the LST, from right to left, and from left to right, without constructing the suffix tree as an intermediate structure. Both algorithms construct the LST incrementally when a new symbol is read, and do not access the previously read symbols. The right-to-left construction algorithm works in O(n log sigma) time and O(n) space and the left-to-right construction algorithm works in O(n (log sigma + log n / log log n)) time and O(n) space. The main feature of our algorithms is that the input text does not need to be stored.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10501Searching Long Repeats in Streams
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10502
We consider two well-known related problems: Longest Repeated Substring (LRS) and Longest Repeated Reversed Substring (LRRS). Their streaming versions cannot be solved exactly; we show that only approximate solutions by Monte Carlo algorithms are possible, and prove a lower bound on consumed memory. For both problems, we present purely linear-time Monte Carlo algorithms working in O(E + n/E) space, where E is the additive approximation error. Within the same space bounds, we then present nearly real-time solutions, which require O(log n) time per symbol and O(n + n/E log n) time overall. The working space exactly matches the lower bound whenever E=O(n^{0.5}) and the size of the alphabet is Omega(n^{0.01}).Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10502Computing the Antiperiod(s) of a String
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10503
A string S[1,n] is a power (or repetition or tandem repeat) of order k and period n/k, if it can be decomposed into k consecutive identical blocks of length n/k. Powers and periods are fundamental structures in the study of strings and algorithms to compute them efficiently have been widely studied. Recently, Fici et al. (Proc. ICALP 2016) introduced an antipower of order k to be a string composed of k distinct blocks of the same length, n/k, called the antiperiod. An arbitrary string will have antiperiod t if it is a prefix of an antipower with antiperiod t. In this paper, we describe an efficient algorithm for computing the smallest antiperiod of a string S of length n in O(n) time. We also describe an algorithm to compute all the antiperiods of S that runs in O(n log n) time.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10503Front Matter, Table of Contents, Preface, Conference Organization
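A naive quadratic reference for the definition (our code; it assumes the trailing partial block can always be completed into a block distinct from the others, which matches the "prefix of an antipower" definition over a sufficiently large alphabet — the paper's algorithms are far faster):

```python
def antiperiods(s):
    """All t such that, cutting s into consecutive blocks of length t,
    the complete blocks are pairwise distinct (the trailing partial
    block, if any, is assumed completable into a distinct block)."""
    n = len(s)
    result = []
    for t in range(1, n + 1):
        # complete blocks only: starts i with i + t <= n
        blocks = [s[i:i + t] for i in range(0, n - t + 1, t)]
        if len(blocks) == len(set(blocks)):
            result.append(t)
    return result
```

The smallest antiperiod of s is then `antiperiods(s)[0]`.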
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10404
Front Matter, Table of Contents, Preface, Conference OrganizationMon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10404A Geometric Data Structure from Neuroscience (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10405
An intriguing geometric primitive, "expand-and-sparsify", has been found in the olfactory system of the fly and several other organisms. It maps an input vector to a much higher-dimensional sparse representation, using a random linear transformation followed by winner-take-all thresholding. I will show that this representation has a variety of formal properties, such as locality preservation, that make it an attractive data structure for algorithms and machine learning. In particular, mimicking the fly's circuitry yields algorithms for similarity search and for novelty detection that have provable guarantees as well as having practical performance that is competitive with state-of-the-art methods. This talk is based on work with Saket Navlakha (Salk Institute), Chuck Stevens (Salk Institute), and Chris Tosh (Columbia).Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10405Some Geometric and Computational Challenges Arising in Structural Molecular Biology (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10406
Computational protein design is a transformative field with exciting prospects for advancing both basic science and translational medical research. New algorithms blend discrete and continuous geometry to address the challenges of creating designer proteins. I will discuss recent progress in this area and some interesting open problems. I will motivate this talk by discussing how, by using continuous geometric representations within a discrete optimization framework, broadly-neutralizing anti-HIV-1 antibodies were computationally designed that are now being tested in humans - the designed antibodies are currently in eight clinical trials, one of which is Phase 2a (NCT03721510). These continuous representations model the flexibility and dynamics of biological macromolecules, which are an important structural determinant of function. However, reconstruction of biomolecular dynamics from experimental observables requires the determination of a conformational probability distribution. These distributions are not fully constrained by the limited geometric information from experiments, making the problem ill-posed in the sense of Hadamard. The problem is ill-posed because it has no unique solution: multiple or even infinitely many solutions may exist. To address this, the problem must be regularized by making (hopefully reasonable) assumptions. I will present new ways to both represent and visualize correlated inter-domain protein motions. We use Bingham distributions, based on a quaternion fit to circular moments of a physics-based quadratic form. To find the optimal solution for the distribution, we designed an efficient, provable branch-and-bound algorithm that exploits the structure of analytical solutions to the trigonometric moment problem. Hence, continuous conformational PDFs can be determined directly from NMR measurements.
The representation works especially well for multi-domain systems with broad conformational distributions. For more information please see Y. Qi et al. Jour. Mol. Biol. 2018; 430(18 Pt B):3412-3426. doi: 10.1016/j.jmb.2018.06.022. Ultimately, this method has parallels to other branches of geometric computing that balance discrete and continuous representations, including physical geometric algorithms, robotics, computational geometry, and robust optimization. I will advocate for using continuous distributions for protein modeling, and describe future work and open problems.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10406A New Lower Bound for Semigroup Orthogonal Range Searching
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10407
We report the first improvement in the space-time trade-off of lower bounds for the orthogonal range searching problem in the semigroup model since Chazelle's result from 1990. This is one of the very fundamental problems in range searching with a long history. Previously, Andrew Yao's influential result had shown that the problem is already non-trivial in one dimension [Yao, 1982]: using m units of space, the query time Q(n) must be Omega(alpha(m,n) + n/(m-n+1)) where alpha(*,*) is the inverse Ackermann's function, a very slowly growing function. In d dimensions, Bernard Chazelle [Chazelle, 1990] proved that the query time must be Q(n) = Omega((log_beta n)^{d-1}) where beta = 2m/n. Chazelle's lower bound is known to be tight when space consumption is "high", i.e., m = Omega(n log^{d+epsilon}n). We have two main results. The first is a lower bound that shows Chazelle's lower bound was not tight for "low space": we prove that we must have m Q(n) = Omega(n (log n log log n)^{d-1}). Our lower bound does not close the gap to the existing data structures, however, our second result is that our analysis is tight. Thus, we believe the gap is in fact natural since lower bounds are proven for idempotent semigroups while the data structures are built for general semigroups and thus they cannot assume (and use) the properties of an idempotent semigroup. As a result, we believe that to close the gap one must either study lower bounds for non-idempotent semigroups or build data structures for idempotent semigroups. We develop significant new ideas for both of our results that could be useful in pursuing either of these directions.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10407Independent Range Sampling, Revisited Again
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10408
We revisit the range sampling problem: the input is a set of points where each point is associated with a real-valued weight. The goal is to store them in a structure such that given a query range and an integer k, we can extract k independent random samples from the points inside the query range, where the probability of sampling a point is proportional to its weight. This line of work was initiated in 2014 by Hu, Qiao, and Tao and it was later followed up by Afshani and Wei. The first line of work mostly studied the unweighted but dynamic version of the problem in one dimension whereas the second result considered the static weighted problem in one dimension as well as the unweighted problem in 3D for halfspace queries. We offer three main results and some interesting insights that were missed by the previous work: We show that it is possible to build efficient data structures for range sampling queries if we allow the query time bound to hold in expectation (the first result), or obtain efficient worst-case query bounds by allowing the sampling probability to be approximately proportional to the weight (the second result). The third result is a conditional lower bound that shows essentially one of the previous two concessions is needed. For instance, for the 3D range sampling queries, the first two results give efficient data structures with near-linear space and polylogarithmic query time whereas the lower bound shows that with near-linear space the worst-case query time must be close to n^{2/3}, ignoring polylogarithmic factors. To the best of our knowledge, this is the first such major gap between the expected and worst-case query time of a range searching problem.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10408An Efficient Algorithm for Generalized Polynomial Partitioning and Its Applications
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10409
In 2015, Guth proved that if S is a collection of n g-dimensional semi-algebraic sets in R^d and if D >= 1 is an integer, then there is a d-variate polynomial P of degree at most D so that each connected component of R^d \ Z(P) intersects O(n/D^{d-g}) sets from S. Such a polynomial is called a generalized partitioning polynomial. We present a randomized algorithm that computes such polynomials efficiently - the expected running time of our algorithm is linear in |S|. Our approach exploits the technique of quantifier elimination combined with that of epsilon-samples. We present four applications of our result. The first is a data structure for answering point-enclosure queries among a family of semi-algebraic sets in R^d in O(log n) time, with storage complexity and expected preprocessing time of O(n^{d+epsilon}). The second is a data structure for answering range search queries with semi-algebraic ranges in O(log n) time, with O(n^{t+epsilon}) storage and expected preprocessing time, where t > 0 is an integer that depends on d and the description complexity of the ranges. The third is a data structure for answering vertical ray-shooting queries among semi-algebraic sets in R^{d} in O(log^2 n) time, with O(n^{d+epsilon}) storage and expected preprocessing time. The fourth is an efficient algorithm for cutting algebraic planar curves into pseudo-segments.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10409Efficient Algorithms for Geometric Partial Matching
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10410
Let A and B be two point sets in the plane of sizes r and n respectively (assume r <= n), and let k be a parameter. A matching between A and B is a family of pairs in A x B so that any point of A cup B appears in at most one pair. Given two positive integers p and q, we define the cost of matching M to be c(M) = sum_{(a, b) in M}||a-b||_p^q where ||*||_p is the L_p-norm. The geometric partial matching problem asks to find the minimum-cost size-k matching between A and B. We present efficient algorithms for the geometric partial matching problem that work for any power of the L_p-norm matching objective: An exact algorithm that runs in O((n + k^2)polylog n) time, and a (1 + epsilon)-approximation algorithm that runs in O((n + k sqrt{k})polylog n * log epsilon^{-1}) time. Both algorithms are based on the primal-dual flow augmentation scheme; the main improvements involve using dynamic data structures to achieve efficient flow augmentations. With similar techniques, we give an exact algorithm for the planar transportation problem running in O(min{n^2, rn^{3/2}}polylog n) time.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10410Connecting the Dots (with Minimum Crossings)
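The cost function c(M) from the abstract above can be computed directly; a small sketch (our code, with a matching given as a list of point pairs):

```python
def matching_cost(M, p=2, q=1):
    """c(M) = sum over (a, b) in M of ||a - b||_p ** q,
    where a and b are coordinate tuples."""
    return sum(sum(abs(x - y) ** p for x, y in zip(a, b)) ** (q / p)
               for a, b in M)
```

For example, a single pair at Euclidean distance 5 has cost 5 under (p, q) = (2, 1) and 25 under (2, 2).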
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10411
We study a prototype Crossing Minimization problem, defined as follows. Let F be an infinite family of (possibly vertex-labeled) graphs. Then, given a set P of (possibly labeled) n points in the Euclidean plane, a collection L subseteq Lines(P)={l: l is a line segment with both endpoints in P}, and a non-negative integer k, decide if there is a subcollection L' subseteq L such that the graph G=(P,L') is isomorphic to a graph in F and L' has at most k crossings. By G=(P,L'), we refer to the graph on vertex set P, where two vertices are adjacent if and only if there is a line segment that connects them in L'. Intuitively, in Crossing Minimization, we have a set of locations of interest, and we want to build/draw/exhibit connections between them (where L indicates where it is feasible to have these connections) so that we obtain a structure in F. Natural choices for F are the collections of perfect matchings, Hamiltonian paths, and graphs that contain an (s,t)-path (a path whose endpoints are labeled). While the objective of seeking a solution with few crossings is of interest from a theoretical point of view, it is also well motivated by a wide range of practical considerations. For example, links/roads (such as highways) may be cheaper to build and faster to traverse, and signals/moving objects would collide/interrupt each other less often. Further, graphs with fewer crossings are preferred for graphical user interfaces. As a starting point for a systematic study, we consider a special case of Crossing Minimization. Already for this case, we obtain NP-hardness and W[1]-hardness results, and ETH-based lower bounds. Specifically, suppose that the input also contains a collection D of d non-crossing line segments such that each point in P belongs to exactly one line in D, and L does not contain line segments between points on the same line in D. Clearly, Crossing Minimization is the case where d=n - then, P is in general position.
The case of d=2 is of interest not only because it is the most restricted non-trivial case, but also since it corresponds to a class of graphs that has been well studied - specifically, it is Crossing Minimization where G=(P,L) is a (bipartite) graph with a so-called two-layer drawing. For d=2, we consider three basic choices of F. For perfect matchings, we show (i) NP-hardness with an ETH-based lower bound, (ii) solvability in subexponential parameterized time, and (iii) existence of an O(k^2)-vertex kernel. Second, for Hamiltonian paths, we show (i) solvability in subexponential parameterized time, and (ii) existence of an O(k^2)-vertex kernel. Lastly, for graphs that contain an (s,t)-path, we show (i) NP-hardness and W[1]-hardness, and (ii) membership in XP.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10411General Techniques for Approximate Incidences and Their Application to the Camera Posing Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10412
We consider the classical camera pose estimation problem that arises in many computer vision applications, in which we are given n 2D-3D correspondences between points in the scene and points in the camera image (some of which are incorrect associations), and where we aim to determine the camera pose (the position and orientation of the camera in the scene) from this data. We demonstrate that this posing problem can be reduced to the problem of computing epsilon-approximate incidences between two-dimensional surfaces (derived from the input correspondences) and points (on a grid) in a four-dimensional pose space. Similar reductions can be applied to other camera pose problems, as well as to similar problems in related application areas. We describe and analyze three techniques for solving the resulting epsilon-approximate incidences problem in the context of our camera posing application. The first is a straightforward assignment of surfaces to the cells of a grid (of side-length epsilon) that they intersect. The second is a variant of a primal-dual technique, recently introduced by a subset of the authors [Aiger et al., 2017] for different (and simpler) applications. The third is a non-trivial generalization of a data structure of Da Fonseca and Mount [Da Fonseca and Mount, 2010], originally designed for the case of hyperplanes. We present and analyze this technique in full generality, and then apply it to the camera posing problem at hand. We compare our methods experimentally on real and synthetic data. Our experiments show that for the typical values of n and epsilon, the primal-dual method is the fastest, also in practice.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10412Circumscribing Polygons and Polygonizations for Disjoint Line Segments
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10413
Given a planar straight-line graph G=(V,E) in R^2, a circumscribing polygon of G is a simple polygon P whose vertex set is V, and every edge in E is either an edge or an internal diagonal of P. A circumscribing polygon is a polygonization for G if every edge in E is an edge of P. We prove that every arrangement of n disjoint line segments in the plane has a subset of size Omega(sqrt{n}) that admits a circumscribing polygon, which is the first improvement on this bound in 20 years. We explore relations between circumscribing polygons and other problems in combinatorial geometry, and generalizations to R^3. We show that it is NP-complete to decide whether a given graph G admits a circumscribing polygon, even if G is 2-regular. Settling a 30-year-old conjecture by Rappaport, we also show that it is NP-complete to determine whether a geometric matching admits a polygonization.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10413Morphing Contact Representations of Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10414
We consider the problem of morphing between contact representations of a plane graph. In a contact representation of a plane graph, vertices are realized by internally disjoint elements from a family of connected geometric objects. Two such elements touch if and only if their corresponding vertices are adjacent. These touchings also induce the same embedding as in the graph. In a morph between two contact representations we insist that at each time step (continuously throughout the morph) we have a contact representation of the same type. We focus on the case when the geometric objects are triangles that are the lower-right half of axis-parallel rectangles. Such RT-representations exist for every plane graph and right triangles are one of the simplest families of shapes supporting this property. Thus, they provide a natural case to study regarding morphs of contact representations of plane graphs. We study piecewise linear morphs, where each step is a linear morph moving the endpoints of each triangle at constant speed along straight-line trajectories. We provide a polynomial-time algorithm that decides whether there is a piecewise linear morph between two RT-representations of a plane triangulation, and, if so, computes a morph with a quadratic number of linear morphs. As a direct consequence, we obtain that for 4-connected plane triangulations there is a morph between every pair of RT-representations where the "top-most" triangle in both representations corresponds to the same vertex. This shows that the realization space of such RT-representations of any 4-connected plane triangulation forms a connected set.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10414When Convexity Helps Collapsing Complexes
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10415
This paper illustrates how convexity hypotheses help collapsing simplicial complexes. We first consider a collection of compact convex sets and show that the nerve of the collection is collapsible whenever the union of sets in the collection is convex. We apply this result to prove that the Delaunay complex of a finite point set is collapsible. We then consider a convex domain defined as the convex hull of a finite point set. We show that if the point set samples the domain sufficiently densely, then both the Cech complex and the Rips complex of the point set are collapsible for a well-chosen scale parameter. A key ingredient in our proofs consists in building a filtration by sweeping space with a growing sphere whose center has been fixed and studying events occurring through the filtration. Since the filtration mimics the sublevel sets of a Morse function with a single critical point, we anticipate this work to lay the foundations for a non-smooth, discrete Morse Theory.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10415Optimal Algorithm for Geodesic Farthest-Point Voronoi Diagrams
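The Cech and Rips complexes in the abstract above are standard constructions on a point sample. As a definition-level illustration only (not the paper's collapsibility machinery), here is a minimal sketch of building a Vietoris-Rips complex at a given scale; the helper name `rips_complex` is hypothetical:

```python
from itertools import combinations
from math import dist

def rips_complex(points, r, max_dim=2):
    """Vietoris-Rips complex at scale r: a simplex is included whenever
    all pairwise distances among its vertices are at most r."""
    simplices = [(i,) for i in range(len(points))]
    for k in range(2, max_dim + 2):  # k vertices => a (k-1)-simplex
        for idx in combinations(range(len(points)), k):
            if all(dist(points[a], points[b]) <= r
                   for a, b in combinations(idx, 2)):
                simplices.append(idx)
    return simplices
```

The Cech complex at the same scale instead requires the balls around the vertices to share a common point; the two agree on edges but may differ on higher simplices, which is why the paper treats them separately.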
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10416
Let P be a simple polygon with n vertices. For any two points in P, the geodesic distance between them is the length of the shortest path that connects them among all paths contained in P. Given a set S of m sites that is a subset of the vertices of P, we present the first randomized algorithm to compute the geodesic farthest-point Voronoi diagram of S in P running in expected O(n + m) time. That is, a partition of P into cells, at most one cell per site, such that every point in a cell has the same farthest site with respect to the geodesic distance. This algorithm can be extended to run in expected O(n + m log m) time when S is an arbitrary set of m sites contained in P.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10416Upward Book Embeddings of st-Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10417
We study k-page upward book embeddings (kUBEs) of st-graphs, that is, book embeddings of single-source single-sink directed acyclic graphs on k pages with the additional requirement that the vertices of the graph appear in a topological ordering along the spine of the book. We show that testing whether a graph admits a kUBE is NP-complete for k >= 3. A hardness result for this problem was previously known only for k = 6 [Heath and Pemmaraju, 1999]. Motivated by this negative result, we focus our attention on k=2. On the algorithmic side, we present polynomial-time algorithms for testing the existence of 2UBEs of planar st-graphs with branchwidth b and of plane st-graphs whose faces have a special structure. These algorithms run in O(f(b)* n+n^3) time and O(n) time, respectively, where f is a singly-exponential function of b. Moreover, on the combinatorial side, we present two notable families of plane st-graphs that always admit an embedding-preserving 2UBE.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10417Bounded Degree Conjecture Holds Precisely for c-Crossing-Critical Graphs with c <= 12
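The definition above can be checked exhaustively on tiny DAGs: pick a spine order that is a topological order, assign edges to pages, and forbid two edges on the same page from interleaving. This brute-force sketch (exponential time, hypothetical name `has_kube`) only illustrates the definition, not the paper's polynomial-time algorithms:

```python
from itertools import permutations, product

def has_kube(n, edges, k=2):
    """Exhaustively test whether the DAG on vertices 0..n-1 admits a
    k-page upward book embedding."""
    def crosses(e, f, pos):
        # Two spine chords interleave iff their endpoints alternate.
        a, b = sorted((pos[e[0]], pos[e[1]]))
        c, d = sorted((pos[f[0]], pos[f[1]]))
        return a < c < b < d or c < a < d < b
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        if any(pos[u] > pos[v] for u, v in edges):
            continue  # not a topological order: some edge points backward
        for pages in product(range(k), repeat=len(edges)):
            if all(not crosses(e, f, pos)
                   for i, e in enumerate(edges)
                   for j, f in enumerate(edges)
                   if i < j and pages[i] == pages[j]):
                return True
    return False
```

For example, the planar st-graph on four vertices with all six forward edges has a 2UBE (only the chords (0,2) and (1,3) interleave, so they go on different pages), while the same interleaving pair makes a 1-page embedding impossible.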
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10418
Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10418Preconditioning for the Geometric Transportation Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10419
In the geometric transportation problem, we are given a collection of points P in d-dimensional Euclidean space, and each point is given a supply of mu(p) units of mass, where mu(p) could be a positive or a negative integer, and the total sum of the supplies is 0. The goal is to find a flow (called a transportation map) that transports mu(p) units from any point p with mu(p) > 0, and transports -mu(p) units into any point p with mu(p) < 0. Moreover, the flow should minimize the total distance traveled by the transported mass. The optimal value is known as the transportation cost, or the Earth Mover's Distance (from the points with positive supply to those with negative supply). This problem has been widely studied in many fields of computer science: from theoretical work in computational geometry, to applications in computer vision, graphics, and machine learning. In this work we study approximation algorithms for the geometric transportation problem. We give an algorithm which, for any fixed dimension d, finds a (1+epsilon)-approximate transportation map in time nearly-linear in n, and polynomial in epsilon^{-1} and in the logarithm of the total supply. This is the first approximation scheme for the problem whose running time depends on n as n * polylog(n). Our techniques combine the generalized preconditioning framework of Sherman, which is grounded in continuous optimization, with simple geometric arguments to first reduce the problem to a minimum cost flow problem on a sparse graph, and then to design a good preconditioner for this latter problem.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10419The One-Way Communication Complexity of Dynamic Time Warping Distance
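In one dimension the transportation cost defined above has a simple closed form: the mass that must cross each gap between consecutive points is the prefix sum of the supplies. This sketch (hypothetical name `emd_1d`) illustrates the objective only, not the paper's near-linear algorithm for higher dimensions:

```python
def emd_1d(points, mu):
    """Exact transportation cost (Earth Mover's Distance) on the line.
    points: 1D coordinates; mu: integer supplies summing to zero."""
    pairs = sorted(zip(points, mu))
    cost, carry = 0, 0
    for (x0, m0), (x1, _) in zip(pairs, pairs[1:]):
        carry += m0                      # net mass crossing the gap (x0, x1)
        cost += abs(carry) * (x1 - x0)   # each unit pays the gap's length
    return cost
```

For instance, one unit of supply at 0 and one unit of demand at 3 cost exactly 3.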
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10420
We resolve the randomized one-way communication complexity of Dynamic Time Warping (DTW) distance. We show that there is an efficient one-way communication protocol using O~(n/alpha) bits for the problem of computing an alpha-approximation for DTW between strings x and y of length n, and we prove a lower bound of Omega(n / alpha) bits for the same problem. Our communication protocol works for strings over an arbitrary metric of polynomial size and aspect ratio, and we optimize the logarithmic factors depending on properties of the underlying metric, such as when the points are low-dimensional integer vectors equipped with various metrics or have bounded doubling dimension. We also consider linear sketches of DTW, showing that such sketches must have size Omega(n).Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10420
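The DTW distance whose communication complexity is resolved above is defined by a standard quadratic dynamic program over all monotone alignments of the two strings. A minimal sketch for sequences over the line, with the point metric as a pluggable parameter:

```python
def dtw(x, y, d=lambda a, b: abs(a - b)):
    """Dynamic Time Warping distance between sequences x and y under the
    pairwise cost d, via the classic O(n*m) dynamic program."""
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Extend an alignment by matching x[i-1] with y[j-1], coming
            # from a step in x, in y, or in both.
            D[i][j] = d(x[i - 1], y[j - 1]) + min(D[i - 1][j],
                                                  D[i][j - 1],
                                                  D[i - 1][j - 1])
    return D[n][m]
```

Unlike edit distance, repeated elements can be matched for free, so `dtw([1,2,3], [1,2,2,3])` is 0.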
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10421
Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10421Polyline Simplification has Cubic Complexity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10422
Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10422A Spanner for the Day After
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10423
We show how to construct a (1+epsilon)-spanner over a set P of n points in R^d that is resilient to a catastrophic failure of nodes. Specifically, for prescribed parameters theta, epsilon in (0,1), the computed spanner G has O(epsilon^{-7d} log^7 epsilon^{-1} * theta^{-6} n log n (log log n)^6) edges. Furthermore, for any k, and any deleted set B subseteq P of k points, the residual graph G \ B is a (1+epsilon)-spanner for all the points of P except for (1+theta)k of them. No previous constructions, beyond the trivial clique with O(n^2) edges, were known such that only a tiny additional fraction (i.e., theta) of the points lose their distance-preserving connectivity. Our construction works by first solving the exact problem in one dimension, and then showing a surprisingly simple and elegant construction in higher dimensions, that uses the one-dimensional construction in a black-box fashion.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10423Computing Shapley Values in the Plane
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10424
We consider the problem of computing Shapley values for points in the plane, where each point is interpreted as a player, and the value of a coalition is defined by the area of usual geometric objects, such as the convex hull or the minimum axis-parallel bounding box. For sets of n points in the plane, we show how to compute in roughly O(n^{3/2}) time the Shapley values for the area of the minimum axis-parallel bounding box and the area of the union of the rectangles spanned by the origin and the input points. When the points form an increasing or decreasing chain, the running time can be improved to near-linear. In all these cases, we use linearity of the Shapley values and algebraic methods. We also show that Shapley values for the area of the convex hull or the minimum enclosing disk can be computed in O(n^2) and O(n^3) time, respectively. These problems are closely related to the model of stochastic point sets considered in computational geometry, but here we have to consider random insertion orders of the points instead of a probabilistic existence of points.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10424On the Metric Distortion of Embedding Persistence Diagrams into Separable Hilbert Spaces
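The Shapley value of a player is its marginal contribution averaged over all insertion orders. For the bounding-box value function above, the definition can be evaluated directly by enumerating all n! permutations; this brute-force sketch (factorial time, usable only for tiny n, hypothetical name `shapley`) illustrates the object being computed, not the paper's O(n^{3/2}) algorithm:

```python
from itertools import permutations

def bbox_area(pts):
    """Area of the minimum axis-parallel bounding box (0 if fewer than 2 points)."""
    if not pts:
        return 0.0
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def shapley(points):
    """Shapley values by averaging marginal contributions over all orders."""
    n = len(points)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for perm in perms:
        seen = []
        for i in perm:
            before = bbox_area(seen)
            seen.append(points[i])
            phi[i] += bbox_area(seen) - before  # marginal contribution of i
        # each permutation's contributions sum to the grand coalition's value
    return [v / len(perms) for v in phi]
```

By efficiency of the Shapley value, the entries always sum to the area of the bounding box of the whole point set.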
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10425
Persistence diagrams are important descriptors in Topological Data Analysis. Due to the nonlinearity of the space of persistence diagrams equipped with their diagram distances, most of the recent attempts at using persistence diagrams in machine learning have been done through kernel methods, i.e., embeddings of persistence diagrams into Reproducing Kernel Hilbert Spaces, in which all computations can be performed easily. Since persistence diagrams enjoy theoretical stability guarantees for the diagram distances, the metric properties of the feature map, i.e., the relationship between the Hilbert distance and the diagram distances, are of central interest for understanding whether the persistence diagram guarantees carry over to the embedding. In this article, we study the possibility of embedding persistence diagrams into separable Hilbert spaces with bi-Lipschitz maps. In particular, we show that for several stable embeddings into infinite-dimensional Hilbert spaces defined in the literature, any lower bound must depend on the cardinalities of the persistence diagrams, and that when the Hilbert space is finite dimensional, finding a bi-Lipschitz embedding is impossible, even when restricting the persistence diagrams to have bounded cardinalities.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10425Convex Polygons in Cartesian Products
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10426
We study several problems concerning convex polygons whose vertices lie in a Cartesian product of two sets of n real numbers (for short, grid). First, we prove that every such grid contains a convex polygon with Omega(log n) vertices and that this bound is tight up to a constant factor. We generalize this result to d dimensions (for a fixed d in N), and obtain a tight lower bound of Omega(log^{d-1}n) for the maximum number of points in convex position in a d-dimensional grid. Second, we present polynomial-time algorithms for computing the longest convex polygonal chain in a grid that contains no two points with the same x- or y-coordinate. We show that the maximum size of such a convex polygon can be efficiently approximated up to a factor of 2. Finally, we present exponential bounds on the maximum number of convex polygons in these grids, and for some restricted variants. These bounds are tight up to polynomial factors.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10426Smallest k-Enclosing Rectangle Revisited
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10427
Given a set of n points in the plane, and a parameter k, we consider the problem of computing the minimum (perimeter or area) axis-aligned rectangle enclosing k points. We present the first near quadratic time algorithm for this problem, improving over the previous near-O(n^{5/2})-time algorithm by Kaplan et al. [Haim Kaplan et al., 2017]. We provide an almost matching conditional lower bound, under the assumption that (min,+)-convolution cannot be solved in truly subquadratic time. Furthermore, we present a new reduction (for either perimeter or area) that can make the time bound sensitive to k, giving near O(n k) time. We also present a near linear time (1+epsilon)-approximation algorithm to the minimum area of the optimal rectangle containing k points. In addition, we study related problems including the 3-sided, arbitrarily oriented, weighted, and subset sum versions of the problem.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10427Dynamic Geometric Data Structures via Shallow Cuttings
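A useful baseline for the problem above: an optimal axis-aligned rectangle can be shrunk until every side passes through an input point, so only rectangles with extents at input coordinates need to be examined. This brute-force sketch (polynomial but far slower than the paper's near-quadratic algorithm; hypothetical name `smallest_k_rect_area`) makes the search space concrete:

```python
from itertools import combinations_with_replacement as cwr

def smallest_k_rect_area(points, k):
    """Minimum area of an axis-aligned rectangle enclosing >= k points,
    by exhaustive search over candidate (possibly degenerate) extents."""
    xs = sorted({x for x, _ in points})
    ys = sorted({y for _, y in points})
    best = float("inf")
    for x0, x1 in cwr(xs, 2):          # x0 <= x1 since xs is sorted
        for y0, y1 in cwr(ys, 2):
            inside = sum(1 for x, y in points
                         if x0 <= x <= x1 and y0 <= y <= y1)
            if inside >= k:
                best = min(best, (x1 - x0) * (y1 - y0))
    return best
```

Degenerate (zero-width) candidates matter: k collinear points are enclosed by a rectangle of area 0.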
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10428
We present new results on a number of fundamental problems about dynamic geometric data structures: 1) We describe the first fully dynamic data structures with sublinear amortized update time for maintaining (i) the number of vertices or the volume of the convex hull of a 3D point set, (ii) the largest empty circle for a 2D point set, (iii) the Hausdorff distance between two 2D point sets, (iv) the discrete 1-center of a 2D point set, (v) the number of maximal (i.e., skyline) points in a 3D point set. The update times are near n^{11/12} for (i) and (ii), n^{7/8} for (iii) and (iv), and n^{2/3} for (v). Previously, sublinear bounds were known only for restricted "semi-online" settings [Chan, SODA 2002]. 2) We slightly improve previous fully dynamic data structures for answering extreme point queries for the convex hull of a 3D point set and nearest neighbor search for a 2D point set. The query time is O(log^2n), and the amortized update time is O(log^4n) instead of O(log^5n) [Chan, SODA 2006; Kaplan et al., SODA 2017]. 3) We also improve previous fully dynamic data structures for maintaining the bichromatic closest pair between two 2D point sets and the diameter of a 2D point set. The amortized update time is O(log^4n) instead of O(log^7n) [Eppstein 1995; Chan, SODA 2006; Kaplan et al., SODA 2017].Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10428Lower Bounds for Electrical Reduction on Surfaces
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10429
We strengthen the connections between electrical transformations and homotopy from the planar setting - observed and studied since Steinitz - to arbitrary surfaces with punctures. As a result, we improve our earlier lower bound on the number of electrical transformations required to reduce an n-vertex graph on a surface in the worst case [SOCG 2016] in two different directions. Our previous Omega(n^{3/2}) lower bound applies only to facial electrical transformations on plane graphs with no terminals. First we provide a stronger Omega(n^2) lower bound when the planar graph has two or more terminals, which follows from a quadratic lower bound on the number of homotopy moves in the annulus. Our second result extends our earlier Omega(n^{3/2}) lower bound to the wider class of planar electrical transformations, which preserve the planarity of the graph but may delete cycles that are not faces of the given embedding. This new lower bound follows from the observation that the defect of the medial graph of a planar graph is the same for all its planar embeddings.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10429Maintaining the Union of Unit Discs Under Insertions with Near-Optimal Overhead
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10430
We present efficient data structures for problems on unit discs and arcs of their boundary in the plane. (i) We give an output-sensitive algorithm for the dynamic maintenance of the union of n unit discs under insertions in O(k log^2 n) update time and O(n) space, where k is the combinatorial complexity of the structural change in the union due to the insertion of the new disc. (ii) As part of the solution of (i) we devise a fully dynamic data structure for the maintenance of lower envelopes of pseudo-lines, which we believe is of independent interest. The structure has O(log^2 n) update time and O(log n) vertical ray shooting query time. To achieve this performance, we devise a new algorithm for finding the intersection between two lower envelopes of pseudo-lines in O(log n) time, using tentative binary search; the lower envelopes are special in that at x=-infty any pseudo-line contributing to the first envelope lies below every pseudo-line contributing to the second envelope. (iii) We also present a dynamic range searching structure for a set of circular arcs of unit radius (not necessarily on the boundary of the union of the corresponding discs), where the ranges are unit discs, with O(n log n) preprocessing time, O(n^{1/2+epsilon} + l) query time, and O(log^2 n) amortized update time, where l is the size of the output, for any epsilon>0. The structure requires O(n) storage space.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10430Almost Tight Lower Bounds for Hard Cutting Problems in Embedded Graphs
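To make the lower-envelope notion in (ii) concrete, here is a static sketch for straight lines (a special case of pseudo-lines), using the standard sort-and-stack technique; the paper's structure is dynamic and handles general pseudo-lines, so this (hypothetical `lower_envelope`) is illustration only:

```python
def lower_envelope(lines):
    """Lines are (slope, intercept) pairs for y = m*x + b. Returns the
    lines attaining the pointwise minimum somewhere, by increasing slope
    (along the envelope itself this order runs from right to left)."""
    def useless(l1, l2, l3):
        # With slopes m1 < m2 < m3, l2 never attains the minimum iff l3
        # overtakes l1 at or to the right of where l2 would overtake l1.
        (m1, b1), (m2, b2), (m3, b3) = l1, l2, l3
        return (b1 - b3) * (m2 - m1) >= (b1 - b2) * (m3 - m1)
    env = []
    for m, b in sorted(set(lines)):
        if env and env[-1][0] == m:
            continue  # same slope, larger intercept: never the minimum
        while len(env) >= 2 and useless(env[-2], env[-1], (m, b)):
            env.pop()
        env.append((m, b))
    return env
```

For instance, the line y=0 is hidden by y=-x and y=x (they meet it at a single point), while y=-2 between them does appear on the envelope.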
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10431
We prove essentially tight lower bounds, conditional on the Exponential Time Hypothesis (ETH), for two fundamental but seemingly very different cutting problems on surface-embedded graphs: the Shortest Cut Graph problem and the Multiway Cut problem. A cut graph of a graph G embedded on a surface S is a subgraph of G whose removal from S leaves a disk. We consider the problem of deciding whether an unweighted graph embedded on a surface of genus g has a cut graph of length at most a given value. We prove a time lower bound for this problem of n^{Omega(g/log g)} conditional on ETH. In other words, the first n^{O(g)}-time algorithm by Erickson and Har-Peled [SoCG 2002, Discr. Comput. Geom. 2004] is essentially optimal. We also prove that the problem is W[1]-hard when parameterized by the genus, answering a 17-year-old question of these authors. A multiway cut of an undirected graph G with t distinguished vertices, called terminals, is a set of edges whose removal disconnects all pairs of terminals. We consider the problem of deciding whether an unweighted graph G has a multiway cut of weight at most a given value. We prove a time lower bound for this problem of n^{Omega(sqrt{gt + g^2}/log(gt))}, conditional on ETH, for any choice of the genus g >=0 of the graph and the number of terminals t >=4. In other words, the algorithm by the second author [Algorithmica 2017] (for the more general multicut problem) is essentially optimal; this extends the lower bound by the third author [ICALP 2012] (for the planar case). Reductions to planar problems usually involve a grid-like structure. The main novel idea for our results is to understand what structures instead of grids are needed if we want to optimally exploit a certain value g of the genus.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10431
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10432
Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10432Dual Circumference and Collinear Sets
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10433
We show that, if an n-vertex triangulation T of maximum degree Delta has a dual that contains a cycle of length l, then T has a non-crossing straight-line drawing in which some set, called a collinear set, of Omega(l/Delta^4) vertices lie on a line. Using the current lower bounds on the length of longest cycles in 3-regular 3-connected graphs, this implies that every n-vertex planar graph of maximum degree Delta has a collinear set of size Omega(n^{0.8}/Delta^4). Very recently, Dujmovic et al. (SODA 2019) showed that, if S is a collinear set in a triangulation T then, for any point set X subset R^2 with |X|=|S|, T has a non-crossing straight-line drawing in which the vertices of S are drawn on the points in X. Because of this, collinear sets have numerous applications in graph drawing and related areas.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10433A Product Inequality for Extreme Distances
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10434
Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10434Topological Data Analysis in Information Space
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10435
Various kinds of data are routinely represented as discrete probability distributions. Examples include text documents summarized by histograms of word occurrences and images represented as histograms of oriented gradients. Viewing a discrete probability distribution as a point in the standard simplex of the appropriate dimension, we can understand collections of such objects in geometric and topological terms. Importantly, instead of using the standard Euclidean distance, we look into dissimilarity measures with information-theoretic justification, and we develop the theory needed for applying topological data analysis in this setting. In doing so, we emphasize constructions that enable the usage of existing computational topology software in this context.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10435Cubic Planar Graphs That Cannot Be Drawn On Few Lines
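One of the most common information-theoretic dissimilarities between discrete distributions of the kind mentioned above is the Jensen-Shannon divergence, a symmetrized and always-finite smoothing of Kullback-Leibler. The paper develops the theory needed to use such measures in topological data analysis; this sketch only shows the flavor of the measures involved:

```python
from math import log

def kl(p, q):
    """Kullback-Leibler divergence; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """JS divergence: average KL of p and q to their mixture (p+q)/2.
    Always finite, symmetric, zero iff p == q."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Two distributions with disjoint support attain the maximum value log 2.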
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10436
For every integer l, we construct a cubic 3-vertex-connected planar bipartite graph G with O(l^3) vertices such that there is no planar straight-line drawing of G whose vertices all lie on l lines. This strengthens previous results on graphs that cannot be drawn on few lines, which constructed significantly larger maximal planar graphs. We also find apex-trees and cubic bipartite series-parallel graphs that cannot be drawn on a bounded number of lines.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10436Counting Polygon Triangulations is Hard
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10437
We prove that it is #P-complete to count the triangulations of a (non-simple) polygon.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10437Topologically Trivial Closed Walks in Directed Surface Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10438
Let G be a directed graph with n vertices and m edges, embedded on a surface S, possibly with boundary, with first Betti number beta. We consider the complexity of finding closed directed walks in G that are either contractible (trivial in homotopy) or bounding (trivial in integer homology) in S. Specifically, we describe algorithms to determine whether G contains a simple contractible cycle in O(n+m) time, or a contractible closed walk in O(n+m) time, or a bounding closed walk in O(beta (n+m)) time. Our algorithms rely on subtle relationships between strong connectivity in G and in the dual graph G^*; our contractible-closed-walk algorithm also relies on a seminal topological result of Hass and Scott. We also prove that detecting simple bounding cycles is NP-hard. We also describe three polynomial-time algorithms to compute shortest contractible closed walks, depending on whether the fundamental group of the surface is free, abelian, or hyperbolic. A key step in our algorithm for hyperbolic surfaces is the construction of a context-free grammar with O(g^2L^2) non-terminals that generates all contractible closed walks of length at most L, and only contractible closed walks, in a system of quads of genus g >= 2. Finally, we show that computing shortest simple contractible cycles, shortest simple bounding cycles, and shortest bounding closed walks are all NP-hard.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10438Packing Disks into Disks with Optimal Worst-Case Density
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10439
We provide a tight result for a fundamental problem arising from packing disks into a circular container: The critical density of packing disks in a disk is 0.5. This implies that any set of (not necessarily equal) disks of total area delta <= 1/2 can always be packed into a disk of area 1; on the other hand, for any epsilon>0 there are sets of disks of area 1/2+epsilon that cannot be packed. The proof uses a careful manual analysis, complemented by a minor automatic part that is based on interval arithmetic. Beyond the basic mathematical importance, our result is also useful as a blackbox lemma for the analysis of recursive packing algorithms.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10439Semi-Algebraic Colorings of Complete Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10440
Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10440Chunk Reduction for Multi-Parameter Persistent Homology
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10441
The extension of persistent homology to multi-parameter setups is an algorithmic challenge. Since most computation tasks scale badly with the size of the input complex, an important pre-processing step consists of simplifying the input while maintaining the homological information. We present an algorithm that drastically reduces the size of an input. Our approach is an extension of the chunk algorithm for persistent homology (Bauer et al., Topological Methods in Data Analysis and Visualization III, 2014). We show that our construction produces the smallest multi-filtered chain complex among all the complexes quasi-isomorphic to the input, improving on the guarantees of previous work in the context of discrete Morse theory. Our algorithm also offers an immediate parallelization scheme in shared memory. Already its sequential version compares favorably with existing simplification schemes, as we show by experimental evaluation.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10441The Crossing Tverberg Theorem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10442
The Tverberg theorem is one of the cornerstones of discrete geometry. It states that, given a set X of at least (d+1)(r-1)+1 points in R^d, one can find a partition X=X_1 cup ... cup X_r of X, such that the convex hulls of the X_i, i=1,...,r, all share a common point. In this paper, we prove a strengthening of this theorem that guarantees a partition which, in addition to the above, has the property that the boundaries of full-dimensional convex hulls have pairwise nonempty intersections. Possible generalizations and algorithmic aspects are also discussed. As a concrete application, we show that any n points in the plane in general position span floor[n/3] vertex-disjoint triangles that are pairwise crossing, meaning that their boundaries have pairwise nonempty intersections; this number is clearly best possible. A previous result of Alvarez-Rebollar et al. guarantees floor[n/6] pairwise crossing triangles. Our result generalizes to a result about simplices in R^d,d >=2.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10442Z_2-Genus of Graphs and Minimum Rank of Partial Symmetric Matrices
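The r=2 case of the Tverberg theorem above is Radon's theorem: any d+2 points in R^d split into two parts whose convex hulls intersect. For four planar points in general position the partition can be computed directly by solving a small linear system; this Cramer's-rule sketch (hypothetical name `radon_partition`, general position assumed) illustrates the statement being strengthened:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def radon_partition(p):
    """Radon partition of four planar points in general position.
    Solve a1*p1 + a2*p2 + a3*p3 = -p4 with a1 + a2 + a3 = -1 (a4 := 1);
    the signs of (a1, a2, a3, a4) split the indices into the two parts."""
    A = [[p[0][0], p[1][0], p[2][0]],
         [p[0][1], p[1][1], p[2][1]],
         [1.0, 1.0, 1.0]]
    b = [-p[3][0], -p[3][1], -1.0]
    D = det3(A)
    coeffs = []
    for j in range(3):                  # Cramer's rule, column by column
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]
        coeffs.append(det3(Aj) / D)
    coeffs.append(1.0)
    pos = [i for i, a in enumerate(coeffs) if a > 0]
    neg = [i for i, a in enumerate(coeffs) if a <= 0]
    return pos, neg
```

A point inside the triangle of the other three yields the partition {point} versus {triangle}; four points in "crossing" position yield two crossing segments.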
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10443
The genus g(G) of a graph G is the minimum g such that G has an embedding on the orientable surface M_g of genus g. A drawing of a graph on a surface is independently even if every pair of nonadjacent edges in the drawing crosses an even number of times. The Z_2-genus of a graph G, denoted by g_0(G), is the minimum g such that G has an independently even drawing on M_g. By a result of Battle, Harary, Kodama and Youngs from 1962, the graph genus is additive over 2-connected blocks. In 2013, Schaefer and Stefankovic proved that the Z_2-genus of a graph is additive over 2-connected blocks as well, and asked whether this result can be extended to so-called 2-amalgamations, as an analogue of results by Decker, Glover, Huneke, and Stahl for the genus. We give the following partial answer. If G=G_1 cup G_2, G_1 and G_2 intersect in two vertices u and v, and G-u-v has k connected components (among which we count the edge uv if present), then |g_0(G)-(g_0(G_1)+g_0(G_2))|<=k+1. For complete bipartite graphs K_{m,n}, with n >= m >= 3, we prove that g_0(K_{m,n})/g(K_{m,n})=1-O(1/n). Similar results are proved also for the Euler Z_2-genus. We express the Z_2-genus of a graph using the minimum rank of partial symmetric matrices over Z_2, a problem that might be of independent interest.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10443An Experimental Study of Forbidden Patterns in Geometric Permutations by Combinatorial Lifting
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10444
We study the problem of deciding if a given triple of permutations can be realized as geometric permutations of disjoint convex sets in R^3. We show that this question, which is equivalent to deciding the emptiness of certain semi-algebraic sets bounded by cubic polynomials, can be "lifted" to a purely combinatorial problem. We propose an effective algorithm for that problem, and use it to gain new insights into the structure of geometric permutations.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10444Journey to the Center of the Point Set
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10445
We revisit an algorithm of Clarkson et al. [K. L. Clarkson et al., 1996], which computes (roughly) a 1/(4d^2)-centerpoint in O~(d^9) time, for a point set in R^d, where O~ hides polylogarithmic terms. We present an improved algorithm that computes (roughly) a 1/d^2-centerpoint with running time O~(d^7). While the improvements are (arguably) mild, it is the first progress on this well-known problem in over twenty years. The new algorithm is simpler, and the running time bound follows by a simple random walk argument, which we believe to be of independent interest. We also present several new applications of the improved centerpoint algorithm.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10445Preprocessing Ambiguous Imprecise Points
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10446
Let R = {R_1, R_2, ..., R_n} be a set of regions and let X = {x_1, x_2, ..., x_n} be an (unknown) point set with x_i in R_i. Region R_i represents the uncertainty region of x_i. We consider the following question: how fast can we establish order if we are allowed to preprocess the regions in R? The preprocessing model of uncertainty uses two consecutive phases: a preprocessing phase which has access only to R followed by a reconstruction phase during which a desired structure on X is computed. Recent results in this model parametrize the reconstruction time by the ply of R, which is the maximum overlap between the regions in R. We introduce the ambiguity A(R) as a more fine-grained measure of the degree of overlap in R. We show how to preprocess a set of d-dimensional disks in O(n log n) time such that we can sort X (if d=1) and reconstruct a quadtree on X (if d >= 1 but constant) in O(A(R)) time. If A(R) is sub-linear, then reporting the result dominates the running time of the reconstruction phase. However, we can still return a suitable data structure representing the result in O(A(R)) time. In one dimension, R is a set of intervals and the ambiguity is linked to interval entropy, which in turn relates to the well-studied problem of sorting under partial information. The number of comparisons necessary to find the linear order underlying a poset P is lower-bounded by the graph entropy of P. We show that if P is an interval order, then the ambiguity provides a constant-factor approximation of the graph entropy. This gives a lower bound of Omega(A(R)) in all dimensions for the reconstruction phase (sorting or any proximity structure), independent of any preprocessing; hence our result is tight.
Finally, our results imply that one can approximate the entropy of interval graphs in O(n log n) time, improving the O(n^{2.5}) bound by Cardinal et al.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10446Rods and Rings: Soft Subdivision Planner for R^3 x S^2
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10447
We consider path planning for a rigid spatial robot moving amidst polyhedral obstacles. Our robot is either a rod or a ring. Being axially symmetric, these robots have configuration space R^3 x S^2 with 5 degrees of freedom (DOF). Correct, complete and practical path planning for such robots is a long-standing challenge in robotics. While the rod is one of the most widely studied spatial robots in path planning, the ring seems to be new, and a rare example of a non-simply-connected robot. This work provides rigorous and complete algorithms for these robots with theoretical guarantees. We implemented the algorithms in our open-source Core Library. Experiments show that they are practical, achieving near real-time performance. We compared our planner to state-of-the-art sampling planners in OMPL [Sucan et al., 2012]. Our subdivision path planner is based on the twin foundations of epsilon-exactness and soft predicates. Correct implementation is relatively easy. The technical innovations include subdivision atlases for S^2, introduction of Sigma_2 representations for footprints, and extensions of our feature-based technique for "opening up the blackbox of collision detection".Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=104473-Manifold Triangulations with Small Treewidth
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10448
Motivated by fixed-parameter tractable (FPT) problems in computational topology, we consider the treewidth tw(M) of a compact, connected 3-manifold M, defined to be the minimum treewidth of the face pairing graph of any triangulation T of M. In this setting the relationship between the topology of a 3-manifold and its treewidth is of particular interest. First, as a corollary of work of Jaco and Rubinstein, we prove that for any closed, orientable 3-manifold M the treewidth tw(M) is at most 4g(M)-2, where g(M) denotes the Heegaard genus of M. In combination with our earlier work with Wagner, this yields that for non-Haken manifolds the Heegaard genus and the treewidth are within a constant factor of each other. Second, we characterize all 3-manifolds of treewidth one: these are precisely the lens spaces and a single other Seifert fibered space. Furthermore, we show that all remaining orientable Seifert fibered spaces over the 2-sphere or a non-orientable surface have treewidth two. In particular, for every spherical 3-manifold we exhibit a triangulation of treewidth at most two. Our results further validate treewidth (and related parameters such as cutwidth or congestion) as useful for topological computing, and also shed more light on the scope of existing FPT algorithms in the field.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10448Algorithms for Metric Learning via Contrastive Embeddings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10449
We study the problem of supervised learning of a metric space under discriminative constraints. Given a universe X and sets S, D subset binom{X}{2} of similar and dissimilar pairs, we seek to find a mapping f:X -> Y into some target metric space M=(Y,rho), such that similar objects are mapped to points at distance at most u, and dissimilar objects are mapped to points at distance at least l. More generally, the goal is to find a mapping of maximum accuracy (that is, fraction of correctly classified pairs). We propose approximation algorithms for various versions of this problem, for the cases of Euclidean and tree metric spaces. For both of these target spaces, we obtain fully polynomial-time approximation schemes (FPTAS) for the case of perfect information. In the presence of imperfect information we present approximation algorithms that run in quasi-polynomial time (QPTAS). We also present an exact algorithm for learning line metric spaces with perfect information in polynomial time. Our algorithms use a combination of tools from metric embeddings and graph partitioning that could be of independent interest.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10449Exact Computation of the Matching Distance on 2-Parameter Persistence Modules
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10450
The matching distance is a pseudometric on multi-parameter persistence modules, defined in terms of the weighted bottleneck distance on the restriction of the modules to affine lines. It is known that this distance is stable in a reasonable sense, and can be efficiently approximated, which makes it a promising tool for practical applications. In this work, we show that in the 2-parameter setting, the matching distance can be computed exactly in polynomial time. Our approach subdivides the space of affine lines into regions, via a line arrangement. In each region, the matching distance restricts to a simple analytic function, whose maximum is easily computed. As a byproduct, our analysis establishes that the matching distance is a rational number if the bigrades of the input modules are rational.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10450Probabilistic Smallest Enclosing Ball in High Dimensions via Subgradient Sampling
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10451
We study a variant of the median problem for a collection of point sets in high dimensions. This generalizes the geometric median as well as the (probabilistic) smallest enclosing ball (pSEB) problems. Our main objective and motivation is to improve the previously best algorithm for the pSEB problem by reducing its exponential dependence on the dimension to linear. This is achieved via a novel combination of sampling techniques for clustering problems in metric spaces with the framework of stochastic subgradient descent. As a result, the algorithm becomes applicable to shape fitting problems in Hilbert spaces of unbounded dimension via kernel functions. We present an exemplary application by extending the support vector data description (SVDD) shape fitting method to the probabilistic case. This is done by simulating the pSEB algorithm implicitly in the feature space induced by the kernel function.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10451A Weighted Approach to the Maximum Cardinality Bipartite Matching Problem with Applications in Geometric Settings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10452
We present a weighted approach to compute a maximum cardinality matching in an arbitrary bipartite graph. Our main result is a new algorithm that takes as input a weighted bipartite graph G(A cup B,E) with edge weights of 0 or 1. Let w <= n be an upper bound on the weight of any matching in G. Consider the subgraph induced by all the edges of G with a weight 0. Suppose every connected component in this subgraph has O(r) vertices and O(mr/n) edges. We present an algorithm to compute a maximum cardinality matching in G in O~(m(sqrt{w} + sqrt{r} + wr/n)) time. When all the edge weights are 1 (symmetrically when all weights are 0), our algorithm is identical to the well-known Hopcroft-Karp (HK) algorithm, which runs in O(m sqrt{n}) time. However, if we can carefully assign weights of 0 and 1 to the edges such that both w and r are sub-linear in n and wr=O(n^{gamma}) for gamma < 3/2, then we can compute a maximum cardinality matching in G in o(m sqrt{n}) time. Using our algorithm, we obtain a new O~(n^{4/3}/epsilon^4) time algorithm to compute an epsilon-approximate bottleneck matching of A, B subset R^2, and a 1/(epsilon^{O(d)}) n^{1+(d-1)/(2d-1)} poly log n time algorithm for computing an epsilon-approximate bottleneck matching in d dimensions. All previous algorithms take Omega(n^{3/2}) time. Given any graph G(A cup B,E) that has an easily computable balanced vertex separator of size |V'|^{delta} for every subgraph G'(V',E'), for delta in [1/2,1), we can apply our algorithm to compute a maximum matching in O~(m n^{delta/(1+delta)}) time, improving upon the O(m sqrt{n}) time taken by the HK algorithm.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10452The Unbearable Hardness of Unknotting
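For context, the classical building block that the abstract's weighted algorithm specializes to (and that Hopcroft-Karp accelerates) is augmenting-path bipartite matching. The sketch below is the simple O(VE) variant often attributed to Kuhn, not the paper's weighted algorithm; the function name and graph representation are illustrative choices.

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Maximum-cardinality bipartite matching via augmenting paths.
    adj[u] lists the right-side neighbours of left vertex u.
    Returns the matching size. Runs in O(V*E); Hopcroft-Karp batches
    augmentations along shortest paths to reach O(E*sqrt(V))."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(n_left))
```

For example, `max_bipartite_matching([[0, 1], [0], [1, 2]], 3, 3)` finds a perfect matching of size 3 after re-matching vertex 0's partner along an augmenting path.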
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10453
We prove that deciding if a diagram of the unknot can be untangled using at most k Reidemeister moves (where k is part of the input) is NP-hard. We also prove that several natural questions regarding links in the 3-sphere are NP-hard, including detecting whether a link contains a trivial sublink with n components, computing the unlinking number of a link, and computing a variety of link invariants related to four-dimensional topology (such as the 4-ball Euler characteristic, the slicing number, and the 4-dimensional clasp number).Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10453On Grids in Point-Line Arrangements in the Plane
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10454
Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10454On Weak epsilon-Nets and the Radon Number
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10455
We show that the Radon number characterizes the existence of weak nets in separable convexity spaces (an abstraction of the Euclidean notion of convexity). The construction of weak nets when the Radon number is finite is based on Helly's property and on metric properties of VC classes. The lower bound on the size of weak nets when the Radon number is large relies on the chromatic number of the Kneser graph. As an application, we prove an amplification result for weak epsilon-nets.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10455Dynamic Planar Point Location in External Memory
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10456
In this paper we describe a fully-dynamic data structure for the planar point location problem in the external memory model. Our data structure supports queries in O(log_B n (log log_B n)^3) I/Os and updates in O(log_B n (log log_B n)^2) amortized I/Os, where n is the number of segments in the subdivision and B is the block size. This is the first dynamic data structure with almost-optimal query cost. For comparison, all previously known results for this problem require O(log_B^2 n) I/Os to answer queries. Our result almost matches the best known upper bound in the internal-memory model.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10456Efficient Algorithms for Ortho-Radial Graph Drawing
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10457
Orthogonal drawings, i.e., embeddings of graphs into grids, are a classic topic in Graph Drawing. Often the goal is to find a drawing that minimizes the number of bends on the edges. A key ingredient for bend minimization algorithms is the existence of an orthogonal representation that allows one to describe such drawings purely combinatorially by only listing the angles between the edges around each vertex and the directions of bends on the edges, but neglecting any kind of geometric information such as vertex coordinates or edge lengths. Barth et al. [2017] have established the existence of an analogous ortho-radial representation for ortho-radial drawings, which are embeddings into an ortho-radial grid, whose gridlines are concentric circles around the origin and straight-line spokes emanating from the origin but excluding the origin itself. While any orthogonal representation admits an orthogonal drawing, it is the circularity of the ortho-radial grid that makes the problem of characterizing valid ortho-radial representations all the more complex and interesting. Barth et al. prove such a characterization. However, the proof is existential and does not provide an efficient algorithm for testing whether a given ortho-radial representation is valid, let alone actually obtaining a drawing from an ortho-radial representation. In this paper we give quadratic-time algorithms for both of these tasks. They are based on a suitably constrained left-first DFS in planar graphs and several new insights on ortho-radial representations. Our validity check requires quadratic time, and a naive application of it would yield a quartic algorithm for constructing a drawing from a valid ortho-radial representation. Using further structural insights we speed up the drawing algorithm to quadratic running time.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10457On the Chromatic Number of Disjointness Graphs of Curves
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10458
Let omega(G) and chi(G) denote the clique number and chromatic number of a graph G, respectively. The disjointness graph of a family of curves (continuous arcs in the plane) is the graph whose vertices correspond to the curves and in which two vertices are joined by an edge if and only if the corresponding curves are disjoint. A curve is called x-monotone if every vertical line intersects it in at most one point. An x-monotone curve is grounded if its left endpoint lies on the y-axis. We prove that if G is the disjointness graph of a family of grounded x-monotone curves such that omega(G)=k, then chi(G) <= binom{k+1}{2}. If we only require that every curve is x-monotone and intersects the y-axis, then we have chi(G) <= k + 1/2 binom{k+2}{3}. Both of these bounds are best possible. The construction showing the tightness of the last result settles a 25-year-old problem: it yields that there exist K_k-free disjointness graphs of x-monotone curves such that any proper coloring of them uses at least Omega(k^{4}) colors. This matches the upper bound up to a constant factor.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10458Computing Persistent Homology of Flag Complexes via Strong Collapses
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10459
In this article, we focus on the problem of computing the persistent homology (PH) of a flag tower, i.e. a sequence of flag complexes connected by simplicial maps. We show that if we restrict the class of simplicial complexes to flag complexes, we can achieve a decisive improvement in terms of time and space complexities with respect to previous work. We show that strong collapses of flag complexes can be computed in time O(k^2 v^2), where v is the number of vertices of the complex and k is the maximal degree of its graph. Moreover, we can strong collapse a flag complex knowing only its 1-skeleton, and the resulting complex is also a flag complex. When we strong collapse the complexes in a flag tower, we obtain a reduced sequence that is also a flag tower, which we call the core flag tower. We then convert the core flag tower to an equivalent filtration to compute its PH. Here again, we only use the 1-skeletons of the complexes. The resulting method is simple and extremely efficient.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10459Ham-Sandwich Cuts and Center Transversals in Subspaces
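The core step mentioned in the abstract, strong collapsing a flag complex from its 1-skeleton alone, amounts to repeatedly deleting a dominated vertex: a vertex v whose closed neighbourhood N[v] is contained in the closed neighbourhood N[u] of some other vertex u. A minimal sketch of that single step (not the paper's full tower machinery; the adjacency-dictionary representation is an illustrative choice):

```python
def strong_collapse(graph):
    """Strong-collapse the flag complex of `graph` using only its
    1-skeleton: repeatedly remove a dominated vertex, i.e. a vertex v
    with N[v] subset of N[u] for some other vertex u. `graph` maps each
    vertex to the set of its neighbours; the reduced graph is returned."""
    g = {v: set(nbrs) for v, nbrs in graph.items()}
    changed = True
    while changed:
        changed = False
        for v in list(g):
            closed_v = g[v] | {v}
            # a dominating vertex u must itself be a neighbour of v
            if any(closed_v <= (g[u] | {u}) for u in g[v]):
                for u in g[v]:
                    g[u].discard(v)
                del g[v]
                changed = True
                break
    return g
```

For instance, a triangle (whose flag complex is a contractible 2-simplex) collapses to a single vertex, while a 4-cycle (a circle, topologically) admits no collapse and is returned unchanged.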
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10460
The Ham-Sandwich theorem is a well-known result in geometry. It states that any d mass distributions in R^d can be simultaneously bisected by a hyperplane. The result is tight, that is, there are examples of d+1 mass distributions that cannot be simultaneously bisected by a single hyperplane. In this abstract we will study the following question: given a continuous assignment of mass distributions to certain subsets of R^d, is there a subset on which we can bisect more masses than what is guaranteed by the Ham-Sandwich theorem? We investigate two types of subsets. The first type are linear subspaces of R^d, i.e., k-dimensional flats containing the origin. We show that for any continuous assignment of d mass distributions to the k-dimensional linear subspaces of R^d, there is always a subspace on which we can simultaneously bisect the images of all d assignments. We extend this result to center transversals, a generalization of Ham-Sandwich cuts. As for Ham-Sandwich cuts, we further show that for d-k+2 masses, we can choose k-1 of the vectors defining the k-dimensional subspace in which the solution lies. The second type of subsets we consider are subsets that are determined by families of n hyperplanes in R^d. Also in this case, we find a Ham-Sandwich-type result. In an attempt to solve a conjecture by Langerman about bisections with several cuts, we show that our underlying topological result can be used to prove this conjecture in a relaxed setting.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10460Distribution-Sensitive Bounds on Relative Approximations of Geometric Ranges
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10461
A family R of ranges and a set X of points, all in R^d, together define a range space (X, R|_X), where R|_X = {X cap h | h in R}. We want to find a structure to estimate the quantity |X cap h|/|X| for any range h in R with the (rho, epsilon)-guarantee: (i) if |X cap h|/|X| > rho, the estimate must have a relative error epsilon; (ii) otherwise, the estimate must have an absolute error rho epsilon. The objective is to minimize the size of the structure. Currently, the dominant solution is to compute a relative (rho, epsilon)-approximation, which is a subset of X with O~(lambda/(rho epsilon^2)) points, where lambda is the VC-dimension of (X, R|_X), and O~ hides polylog factors. This paper shows a more general bound sensitive to the content of X. We give a structure that stores O(log (1/rho)) integers plus O~(theta * (lambda/epsilon^2)) points of X, where theta, called the disagreement coefficient, measures how much the ranges differ from each other in their intersections with X. The value of theta is between 1 and 1/rho, such that our space bound is never worse than that of relative (rho, epsilon)-approximations, but we improve the latter's 1/rho term whenever theta = o(1/(rho log (1/rho))). We also prove that, in the worst case, summaries with the (rho, 1/2)-guarantee must consume Omega(theta) words even for d = 2 and lambda <= 3. We then constrain R to be the set of halfspaces in R^d for a constant d, and prove the existence of structures with o(1/(rho epsilon^2)) size offering (rho,epsilon)-guarantees, when X is generated from various stochastic distributions. This is the first formal justification of why the term 1/rho is not compulsory for "realistic" inputs.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10461DTM-Based Filtrations
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10462
Despite strong stability properties, the persistent homology of the filtrations classically used in Topological Data Analysis, such as the Cech or Vietoris-Rips filtrations, is very sensitive to the presence of outliers in the data from which they are computed. In this paper, we introduce and study a new family of filtrations, the DTM-filtrations, built on top of point clouds in Euclidean space, which are more robust to noise and outliers. The approach adopted in this work relies on the notion of distance-to-measure functions and extends some previous work on the approximation of such functions.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10462A Divide-and-Conquer Algorithm for Two-Point L_1 Shortest Path Queries in Polygonal Domains
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10463
Let P be a polygonal domain with h holes and n vertices. We study the problem of constructing a data structure that can compute a shortest path between s and t in P under the L_1 metric for any two query points s and t. To do so, a standard approach is to first find a set of n_s "gateways" for s and a set of n_t "gateways" for t such that there exists a shortest s-t path containing a gateway of s and a gateway of t, and then compute a shortest s-t path using these gateways. Previous algorithms all take quadratic O(n_s * n_t) time to solve this problem. In this paper, we propose a divide-and-conquer technique that solves the problem in O(n_s + n_t log n_s) time. As a consequence, we construct a data structure of O(n+(h^2 log^3 h/log log h)) size in O(n+(h^2 log^4 h/log log h)) time such that each query can be answered in O(log n) time.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10463Near-Optimal Algorithms for Shortest Paths in Weighted Unit-Disk Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10464
We revisit a classical graph-theoretic problem, the single-source shortest-path (SSSP) problem, in weighted unit-disk graphs. We first propose an exact (and deterministic) algorithm which solves the problem in O(n log^2 n) time using linear space, where n is the number of vertices of the graph. This significantly improves the previous deterministic algorithm by Cabello and Jejcic [CGTA'15], which uses O(n^{1+delta}) time and O(n^{1+delta}) space (for any small constant delta>0), and the previous randomized algorithm by Kaplan et al. [SODA'17], which uses O(n log^{12+o(1)} n) expected time and O(n log^3 n) space. More specifically, we show that if the 2D offline insertion-only (additively-)weighted nearest-neighbor problem with k operations (i.e., insertions and queries) can be solved in f(k) time, then the SSSP problem in weighted unit-disk graphs can be solved in O(n log n+f(n)) time. Using the same framework with some new ideas, we also obtain a (1+epsilon)-approximate algorithm for the problem, using O(n log n + n log^2(1/epsilon)) time and linear space. This improves the previous (1+epsilon)-approximate algorithm by Chan and Skrepetos [SoCG'18], which uses O((1/epsilon)^2 n log n) time and O((1/epsilon)^2 n) space. Because of the Omega(n log n)-time lower bound of the problem (even when approximation is allowed), both of our algorithms are almost optimal.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10464Searching for the Closest-Pair in a Query Translate
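To make the object of study concrete: the naive baseline materializes the unit-disk graph (an edge of Euclidean weight between any two points at distance at most 1) and runs Dijkstra, which already costs O(n^2) just to enumerate candidate edges; the abstract's contribution is avoiding the explicit graph. A minimal sketch of that baseline (function name is an illustrative choice, not from the paper):

```python
import heapq
import math

def unit_disk_sssp(points, src):
    """Single-source shortest paths in the weighted unit-disk graph of
    `points`: two points at Euclidean distance <= 1 are adjacent, with
    edge weight equal to that distance. Builds the graph implicitly by
    scanning all pairs (O(n^2)) and runs textbook Dijkstra."""
    n = len(points)
    dist = [math.inf] * n
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        ux, uy = points[u]
        for v in range(n):
            vx, vy = points[v]
            w = math.hypot(vx - ux, vy - uy)
            if w <= 1.0 and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```

On collinear points spaced 0.8 apart, the source reaches the second neighbour only through two hops (the direct distance 1.6 exceeds the unit radius), while an isolated far-away point stays at distance infinity.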
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10465
We consider a range-search variant of the closest-pair problem. Let Gamma be a fixed shape in the plane. We are interested in storing a given set of n points in the plane in some data structure such that for any specified translate of Gamma, the closest pair of points contained in the translate can be reported efficiently. We present results on this problem for two important settings: when Gamma is a polygon (possibly with holes) and when Gamma is a general convex body whose boundary is smooth. When Gamma is a polygon, we present a data structure using O(n) space and O(log n) query time, which is asymptotically optimal. When Gamma is a general convex body with a smooth boundary, we give a near-optimal data structure using O(n log n) space and O(log^2 n) query time. Our results settle some open questions posed by Xue et al. at SoCG 2018.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10465On the Complexity of the k-Level in Arrangements of Pseudoplanes
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10466
Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10466Packing Geometric Objects with Optimal Worst-Case Density (Multimedia Exposition)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10467
We motivate and visualize problems and methods for packing a set of objects into a given container, in particular a set of different-size circles or squares into a square or circular container. Questions of this type have attracted a considerable amount of attention and are known to be notoriously hard. We focus on a particularly simple criterion for deciding whether a set can be packed: comparing the total area A of all objects to the area C of the container. The critical packing density delta^* is the largest value A/C for which any set of area A can be packed into a container of area C. We describe algorithms that establish the critical density of squares in a square (delta^*=0.5), of circles in a square (delta^*=0.5390 ...), of regular octagons in a square (delta^*=0.5685 ...), and of circles in a circle (delta^*=0.5).Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10467Properties of Minimal-Perimeter Polyominoes (Multimedia Exposition)
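The squares-in-a-square density delta^*=0.5 is classically achieved by a greedy shelf strategy in the style of Moon and Moser: sort the squares by decreasing side and fill horizontal shelves left to right. The sketch below illustrates that strategy (it is not necessarily the algorithm shown in the video, and the function name is an illustrative choice):

```python
def shelf_pack(sides):
    """Pack axis-aligned squares (side lengths <= 1, total area <= 1/2)
    into the unit square with a greedy shelf strategy: sort squares by
    decreasing side, fill a shelf left to right, and open a new shelf
    (of height equal to its first square) on top when the current shelf
    is full. The classical area argument shows this never overflows.
    Returns a list of (x, y, side) placements (lower-left corners)."""
    placements = []
    x = y = shelf_h = 0.0
    for s in sorted(sides, reverse=True):
        if x + s > 1.0:          # current shelf is full: open a new one
            y += shelf_h
            x, shelf_h = 0.0, s
        if shelf_h == 0.0:       # the very first square opens shelf 0
            shelf_h = s
        placements.append((x, y, s))
        x += s
    return placements
```

For example, sides 0.5, 0.4, 0.3 (total area exactly 1/2) fill one shelf of height 0.5 with the two largest squares and place the third on a second shelf, all inside the unit square.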
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10468
In this video, we survey some results concerning polyominoes, which are sets of connected cells on the square lattice, and specifically minimal-perimeter polyominoes: polyominoes whose perimeter is minimal among all polyominoes of the same size.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10468A Manual Comparison of Convex Hull Algorithms (Multimedia Exposition)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10469
We have verified experimentally that there is at least one point set on which Andrew's algorithm (based on Graham's scan) to compute the convex hull of a set of points in the plane is significantly faster than a brute-force approach, thus supporting existing theoretical analysis with practical evidence. Specifically, we determined that executing Andrew's algorithm on the point set P = {(1,4), (2,8), (3,10), (4,1), (5,7), (6,3), (7,9), (8,5), (9,2), (10,6)} takes 41 minutes and 18 seconds; the brute-force approach takes 3 hours, 49 minutes, and 5 seconds.Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10469
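For reference alongside this (deliberately tongue-in-cheek, by-hand) comparison, Andrew's monotone-chain variant of Graham's scan fits in a few lines and, run by a machine rather than a human, finishes on the ten-point set P almost instantly:

```python
def convex_hull(points):
    """Andrew's monotone chain: sort points lexicographically, then build
    the lower and upper hulls with one stack scan each. O(n log n).
    Returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):  # z-component of (a-o) x (b-o); > 0 means left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half_hull(seq):
        hull = []
        for p in seq:
            # pop while the last two hull points and p do not turn left
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    lower = half_hull(pts)
    upper = half_hull(reversed(pts))
    return lower[:-1] + upper[:-1]  # endpoints are shared; drop duplicates
```

On the point set P from the abstract this yields a hull with seven vertices; the three remaining points, such as (8,5), lie strictly inside.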
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10470
Mon, 17 Jun 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10470LIPIcs, Volume 135, TQC'19, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10505
LIPIcs, Volume 135, TQC'19, Complete VolumeWed, 05 Jun 2019 12:03:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10505OASIcs, Volume 70, LDK'19, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10504
OASIcs, Volume 70, LDK'19, Complete VolumeWed, 05 Jun 2019 11:25:21 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10504Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10392
Front Matter, Table of Contents, Preface, Conference OrganizationFri, 31 May 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10392On Quantum Chosen-Ciphertext Attacks and Learning with Errors
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10393
Quantum computing is a significant threat to classical public-key cryptography. In strong "quantum access" security models, numerous symmetric-key cryptosystems are also vulnerable. We consider classical encryption in a model which grants the adversary quantum oracle access to encryption and decryption, but where the latter is restricted to non-adaptive (i.e., pre-challenge) queries only. We define this model formally using appropriate notions of ciphertext indistinguishability and semantic security (which are equivalent by standard arguments) and call it QCCA1 in analogy to the classical CCA1 security model. Using a bound on quantum random-access codes, we show that the standard PRF-based encryption schemes are QCCA1-secure when instantiated with quantum-secure primitives. We then revisit standard IND-CPA-secure Learning with Errors (LWE) encryption and show that leaking just one quantum decryption query (and no other queries or leakage of any kind) allows the adversary to recover the full secret key with constant success probability. In the classical setting, by contrast, recovering the key requires a linear number of decryption queries. The algorithm at the core of our attack is a large-modulus version of the well-known Bernstein-Vazirani algorithm. We emphasize that our results should not be interpreted as a weakness of these cryptosystems in their stated security setting (i.e., post-quantum chosen-plaintext secrecy). Rather, our results mean that, if these cryptosystems are exposed to chosen-ciphertext attacks (e.g., as a result of deployment in an inappropriate real-world setting), then quantum attacks are even more devastating than classical ones.Fri, 31 May 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10393Quantum Distinguishing Complexity, Zero-Error Algorithms, and Statistical Zero Knowledge
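The standard (binary-modulus) Bernstein-Vazirani algorithm at the core of the attack can be simulated classically on a statevector: H^n maps |0...0> to the uniform superposition, one phase-oracle query multiplies each amplitude by (-1)^(s.x), and a final H^n collapses the state to |s> exactly. This toy simulation illustrates the one-query recovery only; it does not model the paper's large-modulus variant or LWE decryption.

```python
def bernstein_vazirani(s_bits):
    """Statevector simulation of Bernstein-Vazirani for the phase oracle
    f(x) = s.x mod 2. Recovers the hidden bit string s from a single
    oracle query: H^n |0> -> uniform, oracle flips signs, H^n -> |s>."""
    n = len(s_bits)
    N = 1 << n

    def hadamard_n(state):
        """Apply H^n via the fast Walsh-Hadamard butterfly, normalized."""
        state = list(state)
        h = 1
        while h < N:
            for i in range(0, N, 2 * h):
                for j in range(i, i + h):
                    a, b = state[j], state[j + h]
                    state[j], state[j + h] = a + b, a - b
            h *= 2
        norm = N ** 0.5
        return [a / norm for a in state]

    s = int(''.join(map(str, s_bits)), 2)
    state = [1.0] + [0.0] * (N - 1)             # |0...0>
    state = hadamard_n(state)                   # uniform superposition
    state = [a * (-1) ** bin(x & s).count('1')  # single phase-oracle query
             for x, a in enumerate(state)]
    state = hadamard_n(state)                   # interference -> |s>
    measured = max(range(N), key=lambda x: abs(state[x]))
    return [int(b) for b in format(measured, f'0{n}b')]
```

A classical algorithm querying the same oracle needs n queries (one per basis vector) to learn s, which is the linear-vs-one-query gap the abstract refers to.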
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10394
We define a new query measure we call quantum distinguishing complexity, denoted QD(f) for a Boolean function f. Unlike a quantum query algorithm, which must output a state close to |0> on a 0-input and a state close to |1> on a 1-input, a "quantum distinguishing algorithm" can output any state, as long as the output states for any 0-input and 1-input are distinguishable. Using this measure, we establish a new relationship in query complexity: for all total functions f, Q_0(f)=O~(Q(f)^5), where Q_0(f) and Q(f) denote the zero-error and bounded-error quantum query complexity of f respectively, improving on the previously known sixth-power relationship. We also define a query measure based on quantum statistical zero-knowledge proofs, QSZK(f), which is at most Q(f). We show that QD(f) in fact lower bounds QSZK(f) and not just Q(f). QD(f) also upper bounds the (positive-weights) adversary bound, which yields the following relationships for all f: Q(f) >= QSZK(f) >= QD(f) = Omega(Adv(f)). This sheds some light on why the adversary bound proves suboptimal bounds for problems like Collision and Set Equality, which have low QSZK complexity. Lastly, we show implications for lifting theorems in communication complexity. We show that a general lifting theorem for either zero-error quantum query complexity or for QSZK would imply a general lifting theorem for bounded-error quantum query complexity.Fri, 31 May 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10394Circuit Transformations for Quantum Architectures
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10395
Quantum computer architectures impose restrictions on qubit interactions. We propose efficient circuit transformations that modify a given quantum circuit to fit an architecture, allowing for any initial and final mapping of circuit qubits to architecture qubits. To achieve this, we first consider the qubit movement subproblem and use the ROUTING VIA MATCHINGS framework to prove tighter bounds on parallel routing. In practice, we only need to perform partial permutations, so we generalize ROUTING VIA MATCHINGS to that setting. We give new routing procedures for common architecture graphs and for the generalized hierarchical product of graphs, which produces subgraphs of the Cartesian product. Second, for serial routing, we consider the TOKEN SWAPPING framework and extend a 4-approximation algorithm for general graphs to support partial permutations. We apply these routing procedures to give several circuit transformations, using various heuristic qubit placement subroutines. We implement these transformations in software and compare their performance for large quantum circuits on grid and modular architectures, identifying strategies that work well in practice.Fri, 31 May 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10395The RGB No-Signalling Game
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10396
We introduce the simplest of all No-Signalling Games: the RGB Game, in which two verifiers interrogate two provers, Alice and Bob, far enough from each other that communication between them is too slow to be possible. Each prover is independently queried with one of three possible colours: Red, Green or Blue. Let a be the colour announced to Alice and b be the colour announced to Bob. To win the game, they must reply with colours x (resp. y) such that a != x != y != b. This work focuses on this new game mainly as a pedagogical tool for its simplicity, but also because it prompted us to introduce a new set of definitions for reductions among multi-party probability distributions and related non-locality classes. We show that a particular winning strategy for the RGB Game is equivalent to the PR-Box of Popescu-Rohrlich and thus No-Signalling. Moreover, we use this example to define No-Signalling in a new and useful way, as the intersection of two natural classes of multi-party probability distributions called one-way signalling. We exhibit a quantum strategy that beats the classical local maximum winning probability of 8/9, shifting it up to 11/12. Optimality of this quantum strategy is demonstrated using the standard tool of semidefinite programming.
Fri, 31 May 2019 00:00:00 +0200
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10396
On the Qubit Routing Problem
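The classical value of 8/9 quoted in the RGB Game abstract above is small enough to verify by brute force: since shared randomness cannot beat the best deterministic pair, it suffices to enumerate all deterministic strategies f, g : {R,G,B} -> {R,G,B} for Alice and Bob. A minimal sketch:

```python
# Brute-force check of the classical bound for the RGB Game: on queries
# (a, b) the provers win iff f(a) != a, g(b) != b, and f(a) != g(b).
# Enumerating all 27 x 27 deterministic strategy pairs shows at most
# 8 of the 9 query pairs can be won, i.e. the classical value is 8/9.
from itertools import product

COLOURS = range(3)  # 0 = Red, 1 = Green, 2 = Blue

best = 0
for f in product(COLOURS, repeat=3):       # Alice's strategy as a table
    for g in product(COLOURS, repeat=3):   # Bob's strategy as a table
        wins = sum(
            1
            for a in COLOURS
            for b in COLOURS
            if f[a] != a and g[b] != b and f[a] != g[b]
        )
        best = max(best, wins)

print(best, "/ 9")  # → 8 / 9
```

The quantum strategy achieving 11/12 and its SDP optimality proof are beyond this sketch; the code only certifies the classical baseline the abstract improves on.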
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10397
We introduce a new architecture-agnostic methodology for mapping abstract quantum circuits to realistic quantum computing devices with restricted qubit connectivity, as implemented by Cambridge Quantum Computing's t|ket> compiler. We present empirical results showing the effectiveness of this method in terms of reducing two-qubit gate depth and two-qubit gate count, compared to other implementations.
Fri, 31 May 2019 00:00:00 +0200
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10397
Applications of the Quantum Algorithm for st-Connectivity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10398
We present quantum algorithms for various problems related to graph connectivity. We give simple and query-optimal algorithms for cycle detection and odd-length cycle detection (bipartiteness) using a reduction to st-connectivity. Furthermore, we show that our algorithm for cycle detection has improved performance under the promise of large circuit rank or a small number of edges. We also provide algorithms for detecting even-length cycles and for estimating the circuit rank of a graph. All of our algorithms have logarithmic space complexity.
Fri, 31 May 2019 00:00:00 +0200
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10398
Bayesian ACRONYM Tuning
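For reference alongside the st-connectivity abstract above: the circuit rank being estimated has a simple classical formula, m - n + c for a graph with m edges, n vertices, and c connected components. A small classical sketch (not the quantum algorithm):

```python
# Circuit rank (cyclomatic number) of an undirected graph: the number
# of independent cycles, equal to m - n + c where m = #edges,
# n = #vertices, c = #connected components (via union-find).

def circuit_rank(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return len(edges) - n + components

# A triangle has one independent cycle; a tree has none.
print(circuit_rank(3, [(0, 1), (1, 2), (2, 0)]))  # → 1
print(circuit_rank(4, [(0, 1), (1, 2), (1, 3)]))  # → 0
```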
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10399
We provide an algorithm that uses Bayesian randomized benchmarking in concert with a local optimizer, such as SPSA, to find a set of controls that optimizes the average gate fidelity. We call this method Bayesian ACRONYM tuning as a reference to the analogous ACRONYM tuning algorithm. Bayesian ACRONYM distinguishes itself in its ability to retain prior information from experiments that use nearby control parameters, whereas traditional ACRONYM tuning does not use such information and can require many more measurements as a result. We prove that such information reuse is possible under the relatively weak assumption that the true model parameters are Lipschitz-continuous functions of the control parameters. We also perform numerical experiments that demonstrate that over-rotation errors in single-qubit gates can be automatically tuned from 88% to 99.95% average gate fidelity using less than 1kB of data and fewer than 20 steps of the optimizer.
Fri, 31 May 2019 00:00:00 +0200
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10399
A Compressed Classical Description of Quantum States
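The local optimizer named in the Bayesian ACRONYM abstract above, SPSA (simultaneous perturbation stochastic approximation), estimates a full gradient from just two loss evaluations per step using a random sign perturbation. A minimal sketch on a test function; the gain constants here are illustrative choices, not values from the paper:

```python
# Minimal SPSA: perturb all parameters at once with random signs, and
# estimate every gradient component from one two-sided loss difference.
import random

def spsa(loss, theta, iters=500, a=0.2, c=0.1, seed=0):
    rng = random.Random(seed)
    theta = list(theta)
    d = len(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602          # decaying step-size gain
        ck = c / k ** 0.101          # decaying perturbation gain
        delta = [rng.choice((-1.0, 1.0)) for _ in range(d)]
        plus = [t + ck * dl for t, dl in zip(theta, delta)]
        minus = [t - ck * dl for t, dl in zip(theta, delta)]
        diff = loss(plus) - loss(minus)
        # diff / (2 ck delta_i) approximates the i-th partial derivative.
        theta = [t - ak * diff / (2 * ck * dl) for t, dl in zip(theta, delta)]
    return theta

quadratic = lambda x: sum((xi - 1.0) ** 2 for xi in x)
theta = spsa(quadratic, [0.0, 0.0])
print(quadratic(theta))  # far below the starting loss of 2.0
```

In the paper's setting the loss would be an infidelity estimated from Bayesian randomized benchmarking rather than a closed-form function; the two-evaluations-per-step economy is what makes SPSA attractive when each evaluation costs an experiment.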
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10400
We show how to approximately represent a quantum state using the square root of the usual amount of classical memory. The classical representation of an n-qubit state psi consists of its inner products with O(sqrt{2^n}) stabilizer states. A quantum state initially specified by its 2^n entries in the computational basis can be compressed to this form in time O(2^n poly(n)), and, subsequently, the compressed description can be used to additively approximate the expectation value of an arbitrary observable. Our compression scheme directly gives a new protocol for the vector in subspace problem with randomized one-way communication complexity that matches (up to polylogarithmic factors) the optimal upper bound, due to Raz. We obtain an exponential improvement over Raz's protocol in terms of computational efficiency.
Fri, 31 May 2019 00:00:00 +0200
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10400
Approximate Unitary n^{2/3}-Designs Give Rise to Quantum Channels with Super Additive Classical Holevo Capacity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10401
In a breakthrough, Hastings [2009] showed that there exist quantum channels whose classical Holevo capacity is superadditive, i.e., more classical information can be transmitted by quantum encoding strategies entangled across multiple channel uses than by unentangled quantum encoding strategies. Hastings' proof used Haar random unitaries to exhibit superadditivity. In this paper we show that a unitary chosen uniformly at random from an approximate n^{2/3}-design gives rise to a quantum channel with superadditive classical Holevo capacity, where n is the dimension of the unitary exhibiting the Stinespring dilation of the channel superoperator. We do so by showing that the minimum output von Neumann entropy of a quantum channel arising from an approximate unitary n^{2/3}-design is subadditive, which by Shor's work [2002] implies superadditivity of the classical Holevo capacity of quantum channels. We follow the geometric functional analytic approach of Aubrun, Szarek and Werner [Aubrun et al., 2010] in order to prove our result. More precisely, we prove a sharp Dvoretzky-like theorem stating that, with high probability under the choice of a unitary from an approximate t-design, random subspaces of large dimension make a Lipschitz function take almost constant value. Such theorems were known earlier only for Haar random unitaries. We obtain our result by appealing to Low's technique [2009] for proving concentration of measure for an approximate t-design, combined with a stratified analysis of the variational behaviour of Lipschitz functions on the unit sphere in high dimension. The stratified analysis is the main technical advance of this work. Haar random unitaries require at least Omega(n^2) random bits in order to describe them with good precision. In contrast, there exist exact n^{2/3}-designs using only O(n^{2/3} log n) random bits [Kuperberg, 2006].
Thus, our work can be viewed as a partial derandomisation of Hastings' result, and a step towards finding an explicit quantum channel with superadditive classical Holevo capacity.
Fri, 31 May 2019 00:00:00 +0200
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10401
Parameterization of Tensor Network Contraction
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10402
We present a conceptually clear and algorithmically useful framework for parameterizing the costs of tensor network contraction. Our framework is completely general, applying to tensor networks with arbitrary bond dimensions, open legs, and hyperedges. The fundamental objects of our framework are rooted and unrooted contraction trees, which represent classes of contraction orders. Properties of a contraction tree correspond directly and precisely to the time and space costs of tensor network contraction. The properties of rooted contraction trees give the costs of parallelized contraction algorithms. We show how contraction trees relate to existing tree-like objects in the graph theory literature, bringing to bear a wide range of graph algorithms and tools to tensor network contraction. Independent of tensor networks, we show that the edge congestion of a graph is almost equal to the branchwidth of its line graph.Fri, 31 May 2019 00:00:00 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10402
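The cost accounting described in the tensor-network abstract above can be made concrete: a rooted contraction tree is evaluated bottom-up, and the time cost of each pairwise contraction is the product of the dimensions of all indices involved. A toy sketch far short of the paper's generality (pairwise contractions only, no hyperedges):

```python
# Tensors are modelled as dicts mapping index name -> bond dimension;
# indices shared by both tensors are summed over and disappear.
from math import prod

def contract(t1, t2):
    """Contract two tensors; return (result indices, time cost)."""
    cost = prod({**t1, **t2}.values())          # all distinct indices involved
    shared = t1.keys() & t2.keys()
    result = {i: d for i, d in {**t1, **t2}.items() if i not in shared}
    return result, cost

def tree_cost(tree, tensors):
    """Total cost of a contraction tree given as nested pairs of leaf names."""
    if isinstance(tree, str):
        return tensors[tree], 0
    left, right = tree
    t1, c1 = tree_cost(left, tensors)
    t2, c2 = tree_cost(right, tensors)
    t, c = contract(t1, t2)
    return t, c1 + c2 + c

# Matrix chain A(2x8) B(8x8) C(8x8): the tree shape changes the cost.
tensors = {"A": {"i": 2, "j": 8}, "B": {"j": 8, "k": 8}, "C": {"k": 8, "l": 8}}
_, cost_left = tree_cost((("A", "B"), "C"), tensors)
_, cost_right = tree_cost(("A", ("B", "C")), tensors)
print(cost_left, cost_right)  # → 256 640
```

The space cost and the parallel (rooted-tree) costs the paper parameterizes can be read off the same tree; this sketch tracks only serial time.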