Dagstuhl Publishing
http://www.dagstuhl.de
RSS feed announcing the latest publications published by Schloss Dagstuhl (en-en, 2010-06-08 to 2018-06-06)
http://www.dagstuhl.de/dagpub-rss.php

LIPIcs, Volume 101, SWAT'18, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8932
LIPIcs, Volume 101, SWAT'18, Complete Volume
Thu, 21 Jun 2018 11:54:54 +0200

Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8976
Front Matter, Table of Contents, Preface, Conference Organization
Wed, 20 Jun 2018 16:34:22 +0200

On Strong and Weak Sustainability, with an Application to Self-Suspending Real-Time Tasks
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8977
Motivated by an apparent contradiction regarding whether certain scheduling policies are sustainable, we revisit the topic of sustainability in real-time scheduling and argue that the existing definitions of sustainability should be further clarified and generalized. After proposing a formal, generic sustainability theory, we relax the existing notion of a (strongly) sustainable scheduling policy to provide a new classification called weak sustainability. Proving weak sustainability properties reduces the number of variables that must be considered in the search for a worst-case schedule, and hence enables more efficient schedulability analyses and testing regimes even for policies that are not (strongly) sustainable. As a proof of concept, and to better understand a model for which many mistakes were found in the literature, we study weak sustainability in the context of dynamic self-suspending tasks, where we formalize a generic suspension model using the Coq proof assistant and provide a machine-checked proof that any JLFP scheduling policy is weakly sustainable with respect to job costs and variable suspension times.
Wed, 20 Jun 2018 16:34:22 +0200

Using Lock Servers to Scale Real-Time Locking Protocols: Chasing Ever-Increasing Core Counts
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8978
During the past decade, parallelism-related issues have been at the forefront of real-time systems research due to the advent of multicore technologies. In the coming years, such issues will loom ever larger due to increasing core counts. Having more cores means a greater potential exists for platform capacity loss when the available parallelism cannot be fully exploited. In this paper, such capacity loss is considered in the context of real-time locking protocols. In this context, lock nesting becomes a key concern as it can result in transitive blocking chains that unnecessarily force tasks to execute sequentially. Such chains can be quite long on a larger machine. Contention-sensitive real-time locking protocols have been proposed as a means of "breaking" transitive blocking chains, but such protocols tend to have high overhead due to more complicated lock/unlock logic. To reduce such overhead, the use of lock servers is considered herein. In particular, four specific lock-server paradigms are proposed and many nuances concerning their deployment are explored. Experiments are presented that show that, by executing cache hot, lock servers can enable reductions in lock/unlock overhead of up to 86%. Such reductions make contention-sensitive protocols a viable approach in practice.
Wed, 20 Jun 2018 16:34:22 +0200

Whole-System Worst-Case Energy-Consumption Analysis for Energy-Constrained Real-Time Systems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8979
Although internal devices (e.g., memory, timers) and external devices (e.g., transceivers, sensors) significantly contribute to the energy consumption of an embedded real-time system, their impact on the worst-case response energy consumption (WCRE) of tasks is usually not adequately taken into account. Most WCRE analysis techniques, for example, only focus on the processor and therefore do not consider the energy consumption of other hardware units. Apart from that, the typical approach for dealing with devices is to assume that all of them are always activated, which leads to high WCRE overestimations in the general case where a system switches off the devices that are currently not needed in order to minimize energy consumption.
In this paper, we present SysWCEC, an approach that addresses these problems by enabling static WCRE analysis for entire real-time systems, including internal as well as external devices. For this purpose, SysWCEC introduces a novel abstraction, the power-state-transition graph, which contains information about the worst-case energy consumption of all possible execution paths. To construct the graph, SysWCEC decomposes the analyzed real-time system into blocks during which the set of active devices in the system does not change and is consequently able to precisely handle devices being dynamically activated or deactivated.
Wed, 20 Jun 2018 16:34:22 +0200

Recovery Time Considerations in Real-Time Systems Employing Software Fault Tolerance
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8980
Safety-critical real-time systems like modern automobiles with advanced driving-assist features must employ redundancy for crucial software tasks to tolerate permanent crash faults. This redundancy can be achieved by using techniques like active replication or the primary-backup approach. In such systems, the recovery time, which is the amount of time it takes for a redundant task to take over execution on the failure of a primary task, becomes a very important design parameter. The recovery time for a given task depends on various factors like task allocation, primary and redundant task priorities, system load, and the scheduling policy. Each task can also have a different recovery time requirement (RTR). For example, in automobiles with automated driving features, safety-critical tasks like perception and steering control have strict RTRs, whereas such requirements are more relaxed in the case of tasks like heating control and mission planning. In this paper, we analyze the recovery time for software tasks in a real-time system employing Rate-Monotonic Scheduling (RMS). We derive bounds on the recovery times for different redundant task options and propose techniques to determine the redundant-task type for a task to satisfy its RTR. We also address the fault-tolerant task allocation problem, with the additional constraint of satisfying the RTR of each task in the system. Given that the problem of assigning tasks to processors is a well-known NP-hard bin-packing problem, we propose computationally efficient heuristics to find a feasible allocation of tasks and their redundant copies. We also apply the simulated annealing method to the fault-tolerant task allocation problem with RTR constraints and compare against our heuristics.
Wed, 20 Jun 2018 16:34:22 +0200

Vulnerability Analysis and Mitigation of Directed Timing Inference Based Attacks on Time-Triggered Systems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8981
Much effort has been put into improving the predictability of real-time systems, especially in safety-critical environments, which provides designers with a rich set of methods and tools to attest safety in situations with no or a limited number of accidental faults. However, with increasing connectivity of real-time systems and a wide availability of increasingly sophisticated exploits, security and, in particular, the consequences of predictability on security become concerns of equal importance. Time-triggered scheduling with offline-constructed tables provides determinism and simplifies timing inference; at the same time, however, it creates vulnerabilities by allowing attackers to target their attacks at specific, deterministically scheduled and possibly safety-critical tasks. In this paper, we analyze the severity of these vulnerabilities by assuming successful compromise of a subset of the tasks running in a real-time system and by investigating the attack potential that attackers gain from them. Moreover, we discuss two ways to mitigate direct attacks: slot-level online randomization of schedules, and offline schedule diversification. We evaluate these mitigation strategies with a real-world case study to show their practicability for mitigating not only accidentally malicious behavior, but also malicious behavior triggered by attackers on purpose.
Wed, 20 Jun 2018 16:34:22 +0200

Instruction Caches in Static WCET Analysis of Artificially Diversified Software
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8982
Artificial software diversity is a well-established method to increase the security of computer systems by thwarting code-reuse attacks, which is particularly beneficial in safety-critical real-time systems. However, static worst-case execution time (WCET) analysis on complex hardware involving caches only delivers sound results for single versions of the program, as it relies on absolute addresses for all instructions. To overcome this problem, we present an abstract-interpretation-based instruction cache analysis that provides a safe yet precise upper bound on the execution time of all variants of a program. We achieve this by integrating uncertainties in the absolute and relative positioning of code fragments when updating the abstract cache state during the analysis. We demonstrate the effectiveness of our approach in an in-depth evaluation and provide an overview of the impact of different diversity techniques on the WCET estimations.
Wed, 20 Jun 2018 16:34:22 +0200

Protecting Real-Time GPU Kernels on Integrated CPU-GPU SoC Platforms
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8983
Integrated CPU-GPU architectures provide excellent acceleration capabilities for data-parallel applications on embedded platforms while meeting size, weight, and power (SWaP) requirements. However, sharing of main memory between CPU applications and GPU kernels can severely affect the execution of GPU kernels and diminish the performance gain provided by the GPU. For example, on the NVIDIA Jetson TX2 platform, an integrated CPU-GPU architecture, we observed that, in the worst case, GPU kernels can suffer as much as a 3X slowdown in the presence of co-running memory-intensive CPU applications. In this paper, we propose a software mechanism, which we call BWLOCK++, to protect the performance of GPU kernels from co-scheduled memory-intensive CPU applications.
Wed, 20 Jun 2018 16:34:22 +0200

Avoiding Pitfalls when Using NVIDIA GPUs for Real-Time Tasks in Autonomous Systems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8984
NVIDIA's CUDA API has enabled GPUs to be used as computing accelerators across a wide range of applications. This has resulted in performance gains in many application domains, but the underlying GPU hardware and software are subject to many non-obvious pitfalls. This is particularly problematic for safety-critical systems, where worst-case behaviors must be taken into account. While such behaviors were not a key concern for earlier CUDA users, the usage of GPUs in autonomous vehicles has taken CUDA programs out of the sole domain of computer-vision and machine-learning experts and into safety-critical processing pipelines. Certification is necessary in this new domain, which is problematic because GPU software may have been developed without any regard for worst-case behaviors. Pitfalls when using CUDA in real-time autonomous systems can result from the lack of specifics in official documentation, and from developers of GPU software not being aware of the implications of their design choices with regard to real-time requirements. This paper focuses on the particular challenges facing the real-time community when utilizing CUDA-enabled GPUs for autonomous applications, and on best practices for applying real-time safety-critical principles.
Wed, 20 Jun 2018 16:34:22 +0200

Early Design Phase Cross-Platform Throughput Prediction for Industrial Stream-Processing Applications
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8985
Wed, 20 Jun 2018 16:34:22 +0200

Camera Networks Dimensioning and Scheduling with Quasi Worst-Case Transmission Time
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8986
This paper describes a method to compute frame-size estimates used to derive quasi Worst-Case Transmission Times (qWCTT) for cameras that transmit frames over IP-based communication networks. The precise determination of the qWCTT allows us to model the network-access scheduling problem as a multiframe problem and to reuse theoretical results for network scheduling. The paper presents a set of experiments, conducted in an industrial testbed, that validate the qWCTT estimation. We believe that a more precise estimation will lead to savings in network infrastructure and to better network utilization.
Wed, 20 Jun 2018 16:34:22 +0200

Verifying Weakly-Hard Real-Time Properties of Traffic Streams in Switched Networks
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8987
In this paper, we introduce the first verification method which is able to provide weakly-hard real-time guarantees for tasks and task chains in systems with multiple resources under partitioned scheduling with fixed priorities. Existing weakly-hard real-time verification techniques are restricted today to systems with a single resource. A weakly-hard real-time guarantee specifies an upper bound on the maximum number m of deadline misses of a task in a sequence of k consecutive executions. Such a guarantee is useful if a task can experience a bounded number of deadline misses without impacting the system mission. We present our verification method in the context of switched networks with traffic streams between nodes, and demonstrate its practical applicability in an automotive case study.
Wed, 20 Jun 2018 16:34:22 +0200

Quantifying the Resiliency of Fail-Operational Real-Time Networked Control Systems
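As a small aside to the weakly-hard abstract above ("Verifying Weakly-Hard Real-Time Properties of Traffic Streams in Switched Networks"): checking whether an observed hit/miss trace satisfies an (m,k) guarantee is straightforward, as this minimal sketch shows (the function name is illustrative, not from the paper):

```python
def satisfies_weakly_hard(misses, m, k):
    """Return True iff at most m deadlines are missed in every window
    of k consecutive executions (an (m,k) weakly-hard guarantee).
    `misses` is a sequence of booleans, True meaning a deadline miss."""
    if len(misses) <= k:
        return sum(misses) <= m
    return all(sum(misses[i:i + k]) <= m
               for i in range(len(misses) - k + 1))
```

Note that this only checks a finite trace; the paper's contribution is proving such guarantees for all possible executions.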
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8988
In time-sensitive, safety-critical systems that must be fail-operational, active replication is commonly used to mitigate transient faults that arise due to electromagnetic interference (EMI). However, designing an effective and well-performing active replication scheme is challenging since replication conflicts with the size, weight, power, and cost constraints of embedded applications. To enable a systematic and rigorous exploration of the resulting tradeoffs, we present an analysis to quantify the resiliency of fail-operational networked control systems against EMI-induced memory corruption, host crashes, and retransmission delays. Since control systems are typically robust to a few failed iterations, e.g., one missed actuation does not crash an inverted pendulum, traditional solutions based on hard real-time assumptions are often too pessimistic. Our analysis reduces this pessimism by modeling a control system's inherent robustness as an (m,k)-firm specification. A case study with an active suspension workload indicates that the analytical bounds closely predict the failure rate estimates obtained through simulation, thereby enabling a meaningful design-space exploration, and also demonstrates the utility of the analysis in identifying non-trivial and non-obvious reliability tradeoffs.
Wed, 20 Jun 2018 16:34:22 +0200

AdaptMC: A Control-Theoretic Approach for Achieving Resilience in Mixed-Criticality Systems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8989
A system is said to be resilient if slight deviations from expected behavior during run-time do not lead to catastrophic degradation of performance: minor deviations should result in no more than minor performance degradation. In mixed-criticality systems, such degradation should additionally be criticality-cognizant. The applicability of control theory is explored for the design of resilient run-time scheduling algorithms for mixed-criticality systems. Recent results in control theory have shown how appropriately designed controllers can provide guaranteed service to hard-real-time servers; this prior work is extended to allow for such guarantees to be made concurrently to multiple criticality-cognizant servers. The applicability of this approach is explored via several experimental simulations in a dual-criticality setting. These experiments demonstrate that our control-based run-time schedulers can be synthesized in such a manner that bounded deviations from expected behavior result in the high-criticality server suffering no performance degradation and the lower-criticality one suffering only bounded performance degradation.
Wed, 20 Jun 2018 16:34:22 +0200

Virtual Timing Isolation for Mixed-Criticality Systems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8990
Commercial off-the-shelf multicore processors suffer from timing interference between cores, which complicates applying them in hard real-time systems like avionic applications. This paper proposes a virtual timing isolation of one main application running on one core from all other cores. The proposed technique is based on hardware external to the multicore processor and is completely transparent to the main application, i.e., no modifications of the software, including the operating system, are necessary. The basic idea is to apply a single-core-execution-based Worst-Case Execution Time analysis and to accept a predefined slowdown during multicore execution. If the slowdown exceeds the acceptable bounds, interferences will be reduced by controlling the behavior of the low-criticality cores to keep the main application's progress inside the given bounds. Apart from the main goal of isolating the timing of the critical application, a subgoal is to use the other cores efficiently. For that purpose, three different mechanisms for controlling the non-critical cores are compared regarding efficient usage of the complete processor.
Measuring the progress of the main application is performed by tracking the application's Fingerprint. This technology quantifies online any slowdown of execution compared to a given baseline (single-core execution). Several countermeasures to compensate for unacceptable slowdowns are proposed and evaluated in this paper, together with an accuracy evaluation of the Fingerprinting.
Our evaluations using the TACLeBench benchmark suite show that we can meet a given acceptable timing bound of 4 percent slowdown with a resulting real slowdown of only 3.27 percent in the case of a pulse-width-modulated control and of 4.44 percent in the case of a frequency-scaling control.
Wed, 20 Jun 2018 16:34:22 +0200

Improving the Schedulability and Quality of Service for Federated Scheduling of Parallel Mixed-Criticality Tasks on Multiprocessors
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8991
This paper presents a federated scheduling algorithm, called MCFQ, for a set of parallel mixed-criticality tasks on multiprocessors. The main feature of the MCFQ algorithm is that it computes different alternatives for assigning each high-utilization, high-criticality task to the processors. Given the different alternatives, we carefully select one alternative for each such task so that all the other tasks can be successfully assigned on the remaining processors. Such flexibility in choosing the right alternative has two benefits. First, it increases the likelihood of satisfying the total resource requirement of all tasks while ensuring schedulability. Second, computational slack becomes available by intelligently selecting the alternatives such that the total resource requirement of all tasks is minimized. Such slack can then be used to improve the QoS of the system (i.e., avoid discarding some low-criticality tasks). Our experimental results using randomly generated parallel mixed-criticality tasksets show that MCFQ can schedule a much higher number of tasksets and can improve the QoS of the system significantly in comparison to the state of the art.
Wed, 20 Jun 2018 16:34:22 +0200

Intractability Issues in Mixed-Criticality Scheduling
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8992
In seeking to develop mixed-criticality scheduling algorithms, one encounters challenges arising from two sources. First, mixed-criticality scheduling is inherently an on-line problem in that scheduling decisions must be made without access to all the information that is needed to make such decisions optimally - such information is only revealed over time. Second, many fundamental mixed-criticality schedulability analysis problems are computationally intractable - NP-hard in the strong sense - but we desire to solve these problems using algorithms with polynomial or pseudo-polynomial running time. While these two aspects of intractability are traditionally studied separately in the theoretical computer science literature, they have been considered in an integrated fashion in mixed-criticality scheduling theory. In this work we seek to separate out the effects of being inherently on-line, and being computationally intractable, on the overall intractability of mixed-criticality scheduling problems. The speedup factor is widely used as a quantitative metric of the effectiveness of mixed-criticality scheduling algorithms; there has recently been some debate regarding the appropriateness of doing so. We provide here some additional perspective on this matter: we seek to better understand its appropriateness as well as its limitations by examining separately how the on-line nature of some mixed-criticality problems, and their computational complexity, contribute to the speedup factors of two widely studied mixed-criticality scheduling algorithms.
Wed, 20 Jun 2018 16:34:22 +0200

Beyond the Weakly Hard Model: Measuring the Performance Cost of Deadline Misses
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8993
Most work in schedulability analysis theory is based on the assumption that constraints on the performance of the application can be expressed by a very limited set of timing constraints (often simply hard deadlines) on a task model. This model is insufficient to represent a large number of systems in which deadlines can be missed, or in which late task responses affect the performance, but not the correctness, of the application. For systems with a possible temporary overload, models like the m-K deadline model have been proposed in the past. However, the m-K model has several limitations since it does not consider the state of the system and is largely unaware of the way in which the performance is affected by deadline misses (except for critical failures). In this paper, we present a state-based representation of the evolution of a system with respect to each deadline hit or miss event. Our representation is general enough (while hopefully remaining concise) to represent the evolution in time of the performance of time-sensitive systems with possible time overloads. We provide the theoretical foundations for our model and also show an application to a simple system to give examples of the state representations and their use.
Wed, 20 Jun 2018 16:34:22 +0200

A Response-Time Analysis for Non-Preemptive Job Sets under Global Scheduling
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8994
An effective way to increase the timing predictability of multicore platforms is to use non-preemptive scheduling. It reduces preemption and job migration overheads, avoids intra-core cache interference, and improves the accuracy of worst-case execution time (WCET) estimates. However, existing schedulability tests for global non-preemptive multiprocessor scheduling are pessimistic, especially when applied to periodic workloads. This paper reduces this pessimism by introducing a new type of sufficient schedulability analysis that is based on an exploration of the space of possible schedules using concise abstractions and state-pruning techniques. Specifically, we analyze the schedulability of non-preemptive job sets (with bounded release jitter and execution time variation) scheduled by a global job-level fixed-priority (JLFP) scheduling algorithm upon an identical multicore platform. The analysis yields a lower bound on the best-case response time (BCRT) and an upper bound on the worst-case response time (WCRT) of the jobs. In an empirical evaluation with randomly generated workloads, we show that the method scales to 30 tasks, a hundred thousand jobs (per hyperperiod), and up to 9 cores.
Wed, 20 Jun 2018 16:34:22 +0200

Transferring Real-Time Systems Research into Industrial Practice: Four Impact Case Studies
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8995
This paper describes four impact case studies where real-time systems research has been successfully transferred into industrial practice. In three cases, the technology created was translated into a viable commercial product via a start-up company. This technology transfer led to the creation and sustaining of a large number of high-technology jobs over a 20-year period. The final case study involved the direct transfer of research results into an engineering company. Taken together, all four case studies have led to significant advances in automotive electronics and avionics, providing substantial returns on investment for the companies using the technology.
Wed, 20 Jun 2018 16:34:22 +0200

Push Forward: Global Fixed-Priority Scheduling of Arbitrary-Deadline Sporadic Task Systems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8996
The sporadic task model is often used to analyze recurrent execution of tasks in real-time systems. A sporadic task defines an infinite sequence of task instances, also called jobs, that arrive under the minimum inter-arrival time constraint. To ensure system safety, timeliness has to be guaranteed in addition to functional correctness, i.e., all jobs of all tasks have to be finished before the job deadlines. We focus on analyzing arbitrary-deadline task sets on a homogeneous (identical) multiprocessor system under any given global fixed-priority scheduling approach and provide a series of schedulability tests with different tradeoffs between their time complexity and their accuracy. Under the arbitrary-deadline setting, the relative deadline of a task can be longer than the minimum inter-arrival time of the jobs of the task. We show that global deadline-monotonic (DM) scheduling has a speedup bound of 3-1/M against any optimal scheduling algorithm, where M is the number of identical processors, and prove that this bound is asymptotically tight.
Wed, 20 Jun 2018 16:34:22 +0200

Efficiently Approximating the Probability of Deadline Misses in Real-Time Systems
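As a small numeric illustration of the speedup bound stated in the arbitrary-deadline abstract above, the expression 3-1/M can be evaluated directly (the helper name is hypothetical):

```python
def dm_speedup_bound(m):
    """Speedup bound 3 - 1/M for global deadline-monotonic scheduling
    on M identical processors, per the abstract above."""
    assert m >= 1, "need at least one processor"
    return 3.0 - 1.0 / m
```

For M = 1 the bound reduces to 2, and it approaches (but never reaches) 3 as the core count grows, which is why the abstract calls the bound asymptotically tight.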
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8997
This paper explores the probability of deadline misses for a set of constrained-deadline sporadic soft real-time tasks on uniprocessor platforms. We explore two directions to evaluate the probability that a job of the task under analysis finishes its execution at (or before) a testing time point t. One approach is based on analytical upper bounds that can be computed efficiently in polynomial time at the price of a precision loss at each testing point, derived from the well-known inequalities of Hoeffding and Bernstein. Another approach convolves the probabilities efficiently over multinomial distributions, exploiting a series of state-space reduction techniques, i.e., pruning without any loss of precision, and approximation via unifying equivalence classes with a bounded loss of precision. We demonstrate the effectiveness of our approaches in a series of evaluations. Distinct from the convolution-based methods in the literature, which suffer from high computation demand and are applicable only to task sets with a few tasks, our approaches can scale reasonably without losing much precision in terms of the derived probability of deadline misses.
Wed, 20 Jun 2018 16:34:22 +0200

Compiler-based Extraction of Event Arrival Functions for Real-Time Systems Analysis
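To give a flavor of the Hoeffding-style bound mentioned in the deadline-miss abstract above: for independent job execution times, the probability that the total demand exceeds a deadline can be upper-bounded in closed form. The following sketch is a generic application of Hoeffding's inequality, not the paper's actual analysis (function name and interface are illustrative):

```python
import math

def hoeffding_miss_bound(means, ranges, deadline):
    """Hoeffding upper bound on the probability that the total demand
    of independent jobs exceeds `deadline`.
    means  -- expected execution time of each job
    ranges -- (lo, hi) interval containing each job's execution time
    Returns the trivial bound 1.0 when the deadline does not exceed
    the expected total demand."""
    mu = sum(means)
    if deadline <= mu:
        return 1.0
    span_sq = sum((hi - lo) ** 2 for lo, hi in ranges)
    if span_sq == 0:
        return 0.0  # deterministic demand strictly below the deadline
    return math.exp(-2.0 * (deadline - mu) ** 2 / span_sq)
```

Such a bound is computable in linear time per testing point, which is the efficiency/precision tradeoff the abstract refers to.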
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8998
Event arrival functions are commonly required in real-time systems analysis. Yet, event arrival functions are often either modeled based on specifications or generated by using potentially unsafe captured traces. To overcome this shortcoming, we present a compiler-based approach to safely extract event arrival functions. The extraction takes place at the code level, considering a complete coverage of all possible paths in the program and resulting in a cycle-accurate event arrival curve. In order to reduce the runtime overhead of the proposed algorithm, we extend our approach with an adjustable level of granularity, always providing a safe approximation of the tightest possible event arrival curve. In an evaluation, we demonstrate that the required extraction time can be substantially reduced while maintaining high precision.
Wed, 20 Jun 2018 16:34:22 +0200

A Measurement-Based Model for Parallel Real-Time Tasks
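For intuition about the event arrival functions discussed above: an empirical upper arrival curve reports, for a window length delta, the maximum number of events observed in any window of that length. The sketch below computes this from a captured trace; note that, as the abstract points out, trace-based curves are potentially unsafe, which is precisely what the compiler-based extraction avoids (function name illustrative):

```python
import bisect

def max_events_in_window(timestamps, delta):
    """Empirical upper event-arrival curve eta+(delta): the maximum
    number of events observed in any half-open window [t, t + delta).
    `timestamps` must be sorted in non-decreasing order."""
    best = 0
    for i, t in enumerate(timestamps):
        # index of the first event at or beyond t + delta
        j = bisect.bisect_left(timestamps, t + delta)
        best = max(best, j - i)
    return best
```

Only windows starting at an event need to be considered, since sliding a window's start to the next event never decreases the count.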
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8999
Under the federated paradigm of multiprocessor scheduling, a set of processors is reserved for the exclusive use of each real-time task. If tasks are characterized very conservatively (as is typical in safety-critical systems), it is likely that most invocations of the task will have computational demand far below the worst-case characterization, and could have been scheduled correctly upon far fewer processors than were assigned to it assuming the worst-case characterization of its run-time behavior. Provided we could safely determine during run-time when all the processors are going to be needed, for the rest of the time the unneeded processors could be idled in low-energy "sleep" mode, or used for executing non-real-time work in the background. In this paper we propose a model for representing parallelizable real-time tasks in a manner that permits us to do so. Our model does not require fine-grained knowledge of the internal structure of the code represented by the task; rather, it characterizes each task by a few parameters that are obtained by repeatedly executing the code under different conditions and measuring the run-times.
Wed, 20 Jun 2018 16:34:22 +0200

HWP: Hardware Support to Reconcile Cache Energy, Complexity, Performance and WCET Estimates in Multicore Real-Time Systems
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9000
High-performance processors have deployed multilevel cache (MLC) systems for decades. In the embedded real-time market, the use of MLC is also on the rise, with processors for future systems in space, railway, avionics and automotive already featuring two or more cache levels. One of the most critical elements of an MLC is the write policy, which not only affects several key metrics such as performance, WCET estimates, energy/power, and reliability, but also the design of complexity-prone cache coherence protocols and cache reliability solutions. In this paper we make an extensive analysis of existing write policies, namely write-through (WT) and write-back (WB). In the context of the real-time domain, we show that no write policy is superior on all metrics: WT simplifies the design of the coherence and reliability solutions at the cost of performance, WCET, and energy, while WB improves performance and energy results but complicates cache design. To take the best of each policy, we propose the Hybrid Write Policy (HWP), a low-complexity hardware mechanism that reconciles the benefits of WT in terms of simplified cache design (e.g., the coherence solution) with the benefits of WB in improved average performance and WCET estimates as the pressure on the interconnection network increases. Guaranteed performance results show that HWP scales with core count similarly to WB. Likewise, HWP reduces the cache energy usage of WT to levels similar to those of WB. These benefits are obtained while retaining the reduced coherence complexity of WT, in contrast to high coherence costs under WB.Wed, 20 Jun 2018 16:34:22 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9000Deterministic Memory Abstraction and Supporting Multicore System Architecture
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9001
Poor time predictability of multicore processors has been a long-standing challenge in the real-time systems community. In this paper, we make the case that a fundamental problem preventing efficient and predictable real-time computing on multicore platforms is the lack of a proper memory abstraction to express memory criticality, which cuts across various layers of the system: the application, OS, and hardware. We therefore propose a new holistic resource management approach driven by a new memory abstraction, which we call Deterministic Memory. The key characteristic of deterministic memory is that the platform (the OS and hardware) guarantees small and tightly bounded worst-case memory access timing. In contrast, we call the conventional memory abstraction best-effort memory, for which only highly pessimistic worst-case bounds can be achieved. We propose to utilize both abstractions to achieve high time predictability without significantly sacrificing performance. We present deterministic memory-aware OS and architecture designs, including an OS-level page allocator and hardware-level cache and DRAM controller designs. We implement the proposed OS and architecture extensions on Linux and the gem5 simulator. Our evaluation results, using a set of synthetic and real-world benchmarks, demonstrate the feasibility and effectiveness of our approach.Wed, 20 Jun 2018 16:34:22 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9001Worst-case Stall Analysis for Multicore Architectures with Two Memory Controllers
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9002
In multicore architectures, there is potential for contention between cores when accessing shared resources, such as system memory. Such contention scenarios are challenging to analyse accurately from a worst-case timing perspective. One way of making memory contention in multicores more amenable to timing analysis is the use of memory regulation mechanisms, which restrict the number of accesses performed by any given core over time by using periodically replenished per-core budgets. Typically, this assumes that all cores access memory via a single shared memory controller. However, ever-increasing bandwidth requirements have brought about architectures with multiple memory controllers, which control accesses to different memory regions and are potentially shared among all cores. While this presents an opportunity to satisfy bandwidth requirements, existing analyses designed for a single memory controller are no longer safe. This work formulates a worst-case memory stall analysis for a memory-regulated multicore with two memory controllers. This stall analysis can be integrated into the schedulability analysis of systems under fixed-priority partitioned scheduling. Five heuristics for assigning tasks and memory budgets to cores in a stall-cognisant manner are also proposed. We experimentally quantify the cost, in terms of extra stall, of letting all cores benefit from the memory space offered by both controllers, and also evaluate the five heuristics for different system characteristics.Wed, 20 Jun 2018 16:34:22 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=9002Front Matter - ECRTS 2018 Artifacts, Table of Contents, Preface, Artifact Evaluation Committee
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8968
Front Matter - ECRTS 2018 Artifacts, Table of Contents, Preface, Artifact Evaluation CommitteeWed, 20 Jun 2018 16:15:45 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8968AdaptMC: A Control-Theoretic Approach for Achieving Resilience in Mixed-Criticality Systems (Artifact)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8969
A system is said to be resilient if slight deviations from expected behavior during run-time do not lead to catastrophic degradation of performance: minor deviations should result in no more than minor performance degradation. In mixed-criticality systems, such degradation should additionally be criticality-cognizant. The applicability of control theory is explored for the design of resilient run-time scheduling algorithms for mixed-criticality systems. Recent results in control theory have shown how appropriately designed controllers can provide guaranteed service to hard-real-time servers; this prior work is extended to allow such guarantees to be made concurrently to multiple criticality-cognizant servers. The applicability of this approach is explored via several experimental simulations in a dual-criticality setting. These experiments demonstrate that our control-based run-time schedulers can be synthesized in such a manner that bounded deviations from expected behavior result in the high-criticality server suffering no performance degradation and the lower-criticality server only bounded performance degradation.Wed, 20 Jun 2018 16:15:45 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8969Using Lock Servers to Scale Real-Time Locking Protocols: Chasing Ever-Increasing Core Counts (Artifact)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8970
During the past decade, parallelism-related issues have been at the forefront of real-time systems research due to the advent of multicore technologies. In the coming years, such issues will loom ever larger due to increasing core counts. Having more cores means a greater potential exists for platform capacity loss when the available parallelism cannot be fully exploited. In this work, such capacity loss is considered in the context of real-time locking protocols. In this context, lock nesting becomes a key concern as it can result in transitive blocking chains that needlessly force tasks to execute sequentially. Such chains can be quite long on a larger machine. Contention-sensitive real-time locking protocols have been proposed as a means of ``breaking'' transitive blocking chains, but such protocols tend to have high overhead due to more complicated lock/unlock logic. To ease such overhead, the usage of lock servers is considered herein. In particular, four specific lock-server paradigms are proposed and many nuances concerning their deployment are explored. Experiments are presented that show that, by executing cache hot, lock servers can enable reductions in lock/unlock overhead of up to 86%. Such reductions make contention-sensitive protocols a viable approach in practice. This artifact contains the implementation of two contention-sensitive locking protocol variants implemented with four proposed lock-server paradigms, as well as the experiments with which they were evaluated.Wed, 20 Jun 2018 16:15:45 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8970Protecting Real-Time GPU Kernels on Integrated CPU-GPU SoC Platforms (Artifact)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8971
This artifact is based on BWLOCK++, a software framework to protect the performance of GPU kernels from co-scheduled memory intensive CPU applications in platforms containing integrated GPUs. The artifact is designed to support the claims of the companion paper and contains instructions on how to build and execute BWLOCK++ on a target hardware platform.Wed, 20 Jun 2018 16:15:45 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8971Beyond the Weakly Hard Model: Measuring the Performance Cost of Deadline Misses (Artifact)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8972
This document provides a brief description of the artifact material related to the paper "Beyond the Weakly Hard Model: Measuring the Performance Cost of Deadline Misses". The code provided in the artifact implements the algorithms presented in the paper and all the experimental tests.Wed, 20 Jun 2018 16:15:45 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8972Worst-case Stall Analysis for Multicore Architectures with Two Memory Controllers (Artifact)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8973
This artifact demonstrates the performance of the proposed worst-case memory stall analysis for a memory-regulated multicore with two memory controllers. The memory stall analysis is implemented in Java along with five different stall-cognisant bandwidth-to-core and task-to-core assignment heuristics. It evaluates the performance of these heuristics in terms of schedulability via experiments with synthetic task sets capturing different system characteristics. It also quantifies the cost in terms of extra stall for letting all cores benefit from the memory space offered by both controllers on the given multicore platform.Wed, 20 Jun 2018 16:15:45 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8973Evaluations of Push Forward: Global Fixed-Priority Scheduling of Arbitrary-Deadline Sporadic Task Systems (Artifact)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8974
This artifact provides the experimental details and implementations of all the schedulability tests used in the reported acceptance-ratio-based evaluations, as documented in the related paper "Push Forward: Global Fixed-Priority Scheduling of Arbitrary-Deadline Sporadic Task Systems". Wed, 20 Jun 2018 16:15:45 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8974Whole-System WCEC Analysis for Energy-Constrained Real-Time Systems (Artifact)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8975
Although internal devices (e.g., memory, timers) and external devices (e.g., sensors, transceivers) significantly contribute to the energy consumption of an embedded real-time system, their impact on the worst-case response energy consumption (WCRE) of tasks is usually not adequately taken into account. Most WCRE analysis techniques only focus on the processor and neglect the energy consumption of other hardware units that are temporarily activated and deactivated in the system. To solve the problem of system-wide energy-consumption analysis, we present SysWCEC, an approach that addresses these problems by enabling static WCRE analysis for entire real-time systems, including internal as well as external devices. For this purpose, SysWCEC introduces a novel abstraction, the power-state-transition graph, which contains information about the worst-case energy consumption of all possible execution paths. To construct the graph, SysWCEC decomposes the analyzed real-time system into blocks during which the set of active devices in the system does not change and is consequently able to precisely handle devices being dynamically activated or deactivated. In this artifact evaluation, which accompanies our related conference paper, we present easy-to-reproduce WCRE analyses with the SysWCEC framework using several benchmarks. The artifact comprises the generation of the power-state-transition graph from a given benchmark system and the formulation of an integer linear program whose solution eventually yields safe WCRE bounds.Wed, 20 Jun 2018 16:15:45 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8975Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8935
Front Matter, Table of Contents, Preface, Conference OrganizationMon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8935Network Flow-Based Refinement for Multilevel Hypergraph Partitioning
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8936
We present a refinement framework for multilevel hypergraph partitioning that uses max-flow computations on pairs of blocks to improve the solution quality of a k-way partition. The framework generalizes the flow-based improvement algorithm of KaFFPa from graphs to hypergraphs and is integrated into the hypergraph partitioner KaHyPar. By reducing the size of hypergraph flow networks, improving the flow model used in KaFFPa, and developing techniques to improve the running time of our algorithm, we obtain a partitioner that computes the best solutions for a wide range of benchmark hypergraphs from different application areas while still having a running time comparable to that of hMetis.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8936Aggregative Coarsening for Multilevel Hypergraph Partitioning
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8937
Algorithms for many hypergraph problems, including partitioning, utilize multilevel frameworks to achieve a good trade-off between the performance and the quality of results. In this paper we introduce two novel aggregative coarsening schemes and incorporate them within state-of-the-art hypergraph partitioner Zoltan. Our coarsening schemes are inspired by the algebraic multigrid and stable matching approaches. We demonstrate the effectiveness of the developed schemes as a part of multilevel hypergraph partitioning framework on a wide range of problems.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8937Memetic Graph Clustering
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8938
It is common knowledge that there is no single best strategy for graph clustering, which justifies a plethora of existing approaches. In this paper, we present a general memetic algorithm, VieClus, to tackle the graph clustering problem. The algorithm can be adapted to optimize different objective functions. Key components of our contribution are natural recombination operators that employ ensemble clusterings as well as multi-level techniques. We combine these techniques with a scalable communication protocol, producing a system that is able to compute high-quality solutions in a short amount of time. We instantiate our scheme with local search for modularity and show that our algorithm successfully improves or reproduces all entries of the 10th DIMACS implementation challenge under consideration within a small amount of time.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8938ILP-based Local Search for Graph Partitioning
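For readers unfamiliar with the objective, the modularity that the abstract's local search optimizes has a standard closed form. The following is a hedged sketch of a direct evaluation of Newman's modularity; the adjacency-set encoding, function name, and quadratic loop are illustrative simplifications and not part of VieClus itself:

```python
def modularity(adj, communities):
    """Newman modularity of a partition of an undirected graph.

    adj: dict mapping each vertex to the set of its neighbors.
    communities: list of lists of vertices, one list per cluster.
    """
    m = sum(len(ns) for ns in adj.values()) / 2  # number of edges
    deg = {u: len(ns) for u, ns in adj.items()}
    comm = {u: c for c, nodes in enumerate(communities) for u in nodes}
    q = 0.0
    for u in adj:
        for v in adj:
            if comm[u] == comm[v]:
                a_uv = 1 if v in adj[u] else 0
                q += a_uv - deg[u] * deg[v] / (2 * m)
    return q / (2 * m)
```

For two disjoint edges placed in separate clusters this yields the textbook value 0.5, while the trivial single-cluster partition scores 0.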
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8939
Computing high-quality graph partitions is a challenging problem with numerous applications. In this paper, we present a novel meta-heuristic for the balanced graph partitioning problem. Our approach is based on integer linear programs that solve the partitioning problem to optimality. However, since those programs typically do not scale to large inputs, we adapt them to heuristically improve a given partition. We do so by defining a much smaller model that allows us to use symmetry breaking and other techniques that make the approach scalable. For example, in Walshaw's well-known benchmark tables we are able to improve roughly half of all entries when the number of blocks is high.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8939Decision Diagrams for Solving a Job Scheduling Problem Under Precedence Constraints
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8940
We consider a job scheduling problem under precedence constraints, a classical problem for a single processor and multiple jobs to be done. The goal is, given the processing times of n fixed jobs and precedence constraints over the jobs, to find a permutation of the n jobs that minimizes the total flow time, i.e., the sum of the total wait time and the processing times of all jobs, while satisfying the precedence constraints. The problem is an integer program and is NP-hard in general. We propose a decision diagram, pi-MDD, for solving the scheduling problem exactly. Our diagram is suitable for solving linear optimization over permutations with precedence constraints. We show the effectiveness of our approach in experiments on large-scale artificial scheduling problems.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8940Speeding up Dualization in the Fredman-Khachiyan Algorithm B
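To make the objective concrete, the problem above can be checked on tiny instances with an exponential brute-force baseline. This is purely illustrative and is not the pi-MDD method; the instance encoding and helper names are made up for this sketch:

```python
from itertools import permutations

def total_flow_time(order, proc):
    # Total flow time = sum of completion times over all jobs.
    t, total = 0, 0
    for job in order:
        t += proc[job]
        total += t
    return total

def respects(order, prec):
    # True iff every precedence pair (a, b) has a scheduled before b.
    pos = {job: i for i, job in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in prec)

def best_schedule(proc, prec):
    # Enumerate all feasible permutations; fine for a handful of jobs only.
    feasible = (p for p in permutations(proc) if respects(p, prec))
    best = min(feasible, key=lambda p: total_flow_time(p, proc))
    return best, total_flow_time(best, proc)
```

For example, with processing times a=3, b=1, c=2 and the single constraint "a before c", the optimal order is (b, a, c) with total flow time 1 + 4 + 6 = 11.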
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8941
The problem of computing the dual of a monotone Boolean function f is a fundamental problem in theoretical computer science with numerous applications. The related problem of duality testing (given two monotone Boolean functions f and g, declare that they are dual or provide a certificate that shows they are not) has a complexity that is not yet known. However, two quasi-polynomial time algorithms for it, often referred to as FK-A and FK-B, were proposed by Fredman and Khachiyan in 1996, with the latter having a better complexity guarantee. These can be naturally used as a subroutine in computing the dual of f. In this paper, we investigate this use of the FK-B algorithm for the computation of the dual of a monotone Boolean function, and present practical improvements to its performance. First, we show how FK-B can be modified to produce multiple certificates (Boolean vectors on which the functions defined by the original f and the current dual g do not provide outputs consistent with duality). Second, we show how the number of redundancy tests - one of the more costly and time-consuming steps of FK-B - can be substantially reduced in this context. Lastly, we describe a simple memoization technique that avoids the solution of multiple identical subproblems. We test our approach on a number of inputs coming from computational biology as well as combinatorics. These modifications provide a substantial speed-up, as much as an order of magnitude, for FK-B dualization relative to a naive implementation. Although other methods may end up being faster in practice, our work paves the way for a principled optimization process for the generation of monotone Boolean functions and their duals from an oracle.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8941An Ambiguous Coding Scheme for Selective Encryption of High Entropy Volumes
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8942
This study concentrates on the security of high-entropy volumes, where entropy-encoded multimedia files or compressed text sequences are the most typical sources. We consider a system in which the cost of encryption is hefty in terms of some metric (e.g., time, memory, energy, or bandwidth), and thus creates a bottleneck. With the aim of reducing the encryption cost on such a system, we propose a data coding scheme that achieves data security by encrypting significantly less data than the original size, without sacrificing secrecy. The main idea of the proposed technique is to represent the input sequence by codewords that are not uniquely decodable. The proposed coding scheme splits a given input into two partitions: the payload, which consists of the ambiguous codeword sequence, and the disambiguation information, which is the knowledge necessary to properly decode the payload. Under the assumption that the input data is the output of an entropy encoder, and thus in the ideal case independently and identically distributed, the payload occupies approximately (d-2)/d, and the disambiguation information approximately 2/d, of the encoded stream, where d>2 denotes a chosen parameter, typically between 6 and 20. We propose to encrypt the payload and keep the disambiguation information in the clear to reduce the amount of data to be encrypted; recursive representation of the payload with the proposed coding can decrease the to-be-encrypted volume further. When 2 * 2^d <= n <= tau * d * 2^d, for tau = (d-1.44)/2, we show that the contraction of the possible message space 2^n due to the public disambiguation information is accommodated by keeping the codeword set secret. We discuss possible applications of the proposed scheme in practice.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8942A 3/2-Approximation Algorithm for the Student-Project Allocation Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8943
The Student-Project Allocation problem with lecturer preferences over Students (SPA-S) comprises three sets of agents, namely students, projects, and lecturers, where students have preferences over projects and lecturers have preferences over students. In this scenario we seek a stable matching, that is, an assignment of students to projects such that there is no student and lecturer who have an incentive to deviate from their assignee(s). We study SPA-ST, the extension of SPA-S in which the preference lists of students and lecturers need not be strictly ordered and may contain ties. In this scenario, stable matchings may be of different sizes, and it is known that MAX SPA-ST, the problem of finding a maximum stable matching in SPA-ST, is NP-hard. We present a linear-time 3/2-approximation algorithm for MAX SPA-ST and an Integer Programming (IP) model to solve MAX SPA-ST optimally. We compare the approximation algorithm with the IP model experimentally using randomly generated data. We find that the approximation algorithm comfortably outperformed its 3/2 bound in practice, constructing a stable matching within 92% of optimal in all cases, with the percentage being far higher for many instances.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8943How Good Are Popular Matchings?
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8944
In this paper, we consider the Hospital Residents problem (HR) and the Hospital Residents problem with Lower Quotas (HRLQ). In this model with two-sided preferences, stability is a well accepted notion of optimality. However, in the presence of lower quotas, a stable and feasible matching need not exist. For the HRLQ problem, our goal therefore is to output a good feasible matching, assuming that a feasible matching exists. Computing matchings with the minimum number of blocking pairs (Min-BP) or the minimum number of blocking residents (Min-BR) is known to be NP-complete. The only approximation algorithms for these problems work under severe restrictions on the preference lists. We present an algorithm which circumvents this restriction and computes a popular matching in the HRLQ instance. We show that on data-sets generated using various generators, our algorithm performs very well in terms of blocking pairs and blocking residents. Yokoi [Yokoi, 2017] recently studied envy-free matchings for the HRLQ problem. We propose a simple modification to Yokoi's algorithm to output a maximal envy-free matching. We observe that popular matchings outperform envy-free matchings on several parameters of practical importance, such as size, number of blocking pairs, and number of blocking residents. In the absence of lower quotas, that is, in the Hospital Residents (HR) problem, stable matchings are guaranteed to exist. Even in this case, we show that popularity is a practical alternative to stability. For instance, on synthetic data-sets generated using a particular model, as well as on real-world data-sets, a popular matching is on average 8-10% larger in size, matches more residents to their top choice, and more residents prefer the popular matching to a stable matching. Our comprehensive study reveals the practical appeal of popular matchings for the HR and HRLQ problems. 
To the best of our knowledge, this is the first study on the empirical evaluation of popular matchings in this setting.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8944Evaluating and Tuning n-fold Integer Programming
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8945
In recent years, algorithmic breakthroughs in stringology, computational social choice, scheduling, etc., were achieved by applying the theory of so-called n-fold integer programming. An n-fold integer program (IP) has a highly uniform block-structured constraint matrix. Hemmecke, Onn, and Romanchuk [Math. Programming, 2013] showed an algorithm with runtime a^{O(rst + r^2s)} n^3, where a is the largest coefficient, r, s, and t are dimensions of blocks of the constraint matrix, and n is the total dimension of the IP; thus, the algorithm is efficient if the blocks are small and have small coefficients. The algorithm works by iteratively improving a feasible solution with augmenting steps, and n-fold IPs have the special property that augmenting steps are guaranteed to exist in a not-too-large neighborhood. However, this algorithm has never been implemented and evaluated. We have implemented the algorithm and learned the following along the way. The original algorithm is practically unusable, but we discover a series of improvements which make its evaluation possible. Crucially, we observe that a certain constant in the algorithm can be treated as a tuning parameter, which yields an efficient heuristic (essentially searching in a smaller-than-guaranteed neighborhood). Furthermore, the algorithm uses an overly expensive strategy to find a "best" step, while finding only an "approximately best" step is much cheaper, yet sufficient for quick convergence. Using this insight, we improve the asymptotic dependence on n from n^3 to n^2 log n, which yields the currently asymptotically fastest algorithm for n-fold IP. Finally, we tested the behavior of the algorithm with various values of the tuning parameter and different strategies of finding improving steps. First, we show that decreasing the tuning parameter initially leads to an increased number of iterations needed for convergence and eventually to getting stuck in local optima, as expected. 
However, surprisingly small values of the parameter already exhibit good behavior. Second, our new strategy for finding "approximately best" steps vastly outperforms the original construction.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8945A Computational Investigation on the Strength of Dantzig-Wolfe Reformulations
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8946
In Dantzig-Wolfe reformulation of an integer program one convexifies a subset of the constraints, leading to potentially stronger dual bounds from the respective linear programming relaxation. As the subset can be chosen arbitrarily, this includes the trivial cases of convexifying no and all constraints, resulting in a weakest and strongest reformulation, respectively. Our computational study aims at a better understanding of what happens in between these extremes. For a collection of integer programs with few constraints we compute, optimally solve, and evaluate the relaxations of all possible (exponentially many) Dantzig-Wolfe reformulations (with mild extensions to larger models from the MIPLIBs). We observe that only a tiny number of different dual bounds actually occur and that only a few inclusion-wise minimal representatives exist for each. This aligns with the considerably different impacts of individual constraints on strengthening the relaxation, some of which have almost no influence. In contrast, types of constraints that are convexified in textbook reformulations have a larger effect. We relate our experiments to what could be called a hierarchy of Dantzig-Wolfe reformulations.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8946Experimental Evaluation of Parameterized Algorithms for Feedback Vertex Set
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8947
Feedback Vertex Set is a classic combinatorial optimization problem that asks for a minimum set of vertices in a given graph whose deletion makes the graph acyclic. From the point of view of parameterized algorithms and fixed-parameter tractability, Feedback Vertex Set is one of the landmark problems: a long line of study resulted in multiple algorithmic approaches and deep understanding of the combinatorics of the problem. Because of its central role in parameterized complexity, the first edition of the Parameterized Algorithms and Computational Experiments Challenge (PACE) in 2016 featured Feedback Vertex Set as the problem of choice in one of its tracks. The results of PACE 2016 on one hand showed large discrepancy between performance of different classic approaches to the problem, and on the other hand indicated a new approach based on half-integral relaxations of the problem as probably the most efficient approach to the problem. In this paper we provide an exhaustive experimental evaluation of fixed-parameter and branching algorithms for Feedback Vertex Set.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8947An Efficient Local Search for the Minimum Independent Dominating Set Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8948
In the present paper, we propose an efficient local search for the minimum independent dominating set problem. We consider a local search that uses k-swap as the neighborhood operation. Given a feasible solution S, it is the operation of obtaining another feasible solution by dropping exactly k vertices from S and then adding any number of vertices to it. We show that, when k=2 (resp., when k=3 and the given solution is minimal with respect to 2-swaps), we can find an improved solution in the neighborhood, or conclude that no such solution exists, in O(n Delta) (resp., O(n Delta^3)) time, where n denotes the number of vertices and Delta denotes the maximum degree. We develop a metaheuristic algorithm that repeats the proposed local search and the plateau search iteratively, where the plateau search examines solutions of the same size as the current solution that are obtainable by exchanging a solution vertex and a non-solution vertex. The algorithm is so effective that, among 80 DIMACS graphs, it updates the best-known solution size for five graphs and performs as well as existing methods for the remaining graphs.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8948Empirical Evaluation of Approximation Algorithms for Generalized Graph Coloring and Uniform Quasi-Wideness
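To illustrate the 2-swap neighborhood described in the abstract, here is a hedged sketch: drop two vertices from an independent dominating set, then greedily re-complete to a maximal independent set (which is always dominating), keeping the result only if it is strictly smaller. The graph encoding and the naive greedy completion are illustrative choices, not the paper's O(n Delta) implementation:

```python
from itertools import combinations

def is_ids(adj, S):
    # Independent dominating set: no two members are adjacent, and every
    # vertex is either in S or adjacent to a member of S.
    S = set(S)
    independent = all(v not in adj[u] for u in S for v in S if u != v)
    dominating = all(u in S or S & adj[u] for u in adj)
    return independent and dominating

def complete(adj, S):
    # Greedily extend S to a maximal independent set; maximality implies
    # the result is also dominating.
    S = set(S)
    for u in adj:
        if u not in S and not (S & adj[u]):
            S.add(u)
    return S

def two_swap_improve(adj, S):
    # One 2-swap step: try every pair to drop, re-complete greedily,
    # and return the first strictly smaller solution found.
    S = set(S)
    for a, b in combinations(S, 2):
        T = complete(adj, S - {a, b})
        if len(T) < len(S):
            return T
    return S
```

On the star with center c and leaves a, b, starting from the two-leaf solution {a, b}, one 2-swap step recovers the optimum {c}.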
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8949
The notions of bounded expansion and nowhere denseness not only offer robust and general definitions of uniform sparseness of graphs, they also describe the tractability boundary for several important algorithmic questions. In this paper we study two structural properties of these graph classes that are of particular importance in this context, namely the property of having bounded generalized coloring numbers and the property of being uniformly quasi-wide. We provide experimental evaluations of several algorithms that approximate these parameters on real-world graphs. On the theoretical side, we provide a new algorithm for uniform quasi-wideness with polynomial size guarantees in graph classes of bounded expansion and show a lower bound indicating that the guarantees of this algorithm are close to optimal in graph classes with fixed excluded minor.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8949Multi-Level Steiner Trees
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8950
In the classical Steiner tree problem, one is given an undirected, connected graph G=(V,E) with non-negative edge costs and a set of terminals T subseteq V. The objective is to find a minimum-cost edge set E' subseteq E that spans the terminals. The problem is APX-hard; the best known approximation algorithm has a ratio of rho = ln(4)+epsilon < 1.39. In this paper, we study a natural generalization, the multi-level Steiner tree (MLST) problem: given a nested sequence of terminals T_1 subset ... subset T_k subseteq V, compute nested edge sets E_1 subseteq ... subseteq E_k subseteq E that span the corresponding terminal sets with minimum total cost. The MLST problem and variants thereof have been studied under names such as Quality-of-Service Multicast tree, Grade-of-Service Steiner tree, and Multi-Tier tree. Several approximation results are known. We first present two natural heuristics with approximation factor O(k). Based on these, we introduce a composite algorithm that requires 2^k Steiner tree computations. We determine its approximation ratio by solving a linear program. We then present a method that guarantees the same approximation ratio and needs at most 2k Steiner tree computations. We compare five algorithms experimentally on several classes of graphs using four types of graph generators. We also implemented an integer linear program for MLST to provide ground truth. Our combined algorithm outperforms the others both in theory and in practice when the number of levels is small (k <= 22), which works well for applications such as designing multi-level infrastructure or network visualization.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8950Dictionary Matching in Elastic-Degenerate Texts with Applications in Searching VCF Files On-line
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8951
An elastic-degenerate string is a sequence of n sets of strings of total length N. It has been introduced to represent multiple sequence alignments of closely-related sequences in a compact form. For a standard pattern of length m, pattern matching in an elastic-degenerate text can be solved on-line in time O(nm^2+N) with pre-processing time and space O(m) (Grossi et al., CPM 2017). A fast bit-vector algorithm requiring time O(N * ceil[m/w]) with pre-processing time and space O(m * ceil[m/w]), where w is the size of the computer word, was also presented. In this paper we consider the same problem for a set of patterns of total length M. A straightforward generalization of the existing bit-vector algorithm would require time O(N * ceil[M/w]) with pre-processing time and space O(M * ceil[M/w]), which is prohibitive in practice. We present a new on-line O(N * ceil[M/w])-time algorithm with pre-processing time and space O(M). We present experimental results using both synthetic and real data demonstrating the performance of the algorithm. We further demonstrate a real application of our algorithm in a pipeline for discovery and verification of minimal absent words (MAWs) in the human genome showing that a significant number of previously discovered MAWs are in fact false-positives when a population's variants are considered.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8951Fast matching statistics in small space
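The bit-vector machinery these bounds rest on can be illustrated with the classic Shift-And algorithm for a single standard pattern (Baeza-Yates and Gonnet). This is not the elastic-degenerate algorithm itself, only a sketch of how the pattern state fits in O(ceil(m/w)) machine words; here a single Python integer stands in for the word array.

```python
def shift_and(text, pattern):
    """Classic Shift-And bit-parallel matching.  Bit i of `state` is set iff
    pattern[0..i] is a suffix of the text read so far.  Returns the end
    positions of all occurrences of `pattern` in `text`."""
    m = len(pattern)
    mask = {}                      # mask[c]: bit i set iff pattern[i] == c
    for i, c in enumerate(pattern):
        mask[c] = mask.get(c, 0) | (1 << i)
    state, hits = 0, []
    for j, c in enumerate(text):
        state = ((state << 1) | 1) & mask.get(c, 0)
        if state & (1 << (m - 1)):
            hits.append(j)         # a match ends at position j
    return hits

print(shift_and("abracadabra", "abra"))  # -> [3, 10]
```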
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8952
Computing the matching statistics of a string S with respect to a string T on an alphabet of size sigma is a fundamental primitive for a number of large-scale string analysis applications, including the comparison of entire genomes, for which space is a pressing issue. This paper takes from theory to practice an existing algorithm that uses just O(|T|log{sigma}) bits of space, and that computes a compact encoding of the matching statistics array in O(|S|log{sigma}) time. The techniques used to speed up the algorithm are of general interest, since they optimize queries on the existence of a Weiner link from a node of the suffix tree, and parent operations after unsuccessful Weiner links. Thus, they can be applied to other matching statistics algorithms, as well as to any suffix tree traversal that relies on such calls. Some of our optimizations yield a matching statistics implementation that is up to three times faster than a plain version of the algorithm, depending on the similarity between S and T. In genomic datasets of practical significance we achieve speedups of up to 1.8, but our fastest implementations take on average twice the time of an existing code based on the LCP array. The key advantage is that our implementations need between one half and one fifth of the competitor's memory, and they approach comparable running times when S and T are very similar.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8952Practical lower and upper bounds for the Shortest Linear Superstring
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8953
Given a set P of words, the Shortest Linear Superstring (SLS) problem is an optimisation problem that asks for a superstring of P of minimal length. SLS has applications in data compression, where a superstring is a compact representation of P, and in bioinformatics, where it models the first step of genome assembly. Unfortunately, SLS is hard to solve (NP-hard) and hard to closely approximate (MAX-SNP-hard). Although numerous polynomial-time approximation algorithms have been devised, few articles report on their practical performance. We lack knowledge about how close an approximate superstring can be to an optimal one in practice. Here, we exhibit a linear-time algorithm that reports an upper and a lower bound on the length of an optimal superstring. The upper bound is the length of an approximate superstring. This algorithm can be used to evaluate beforehand whether one can get an approximate superstring whose length is close to the optimum for a given instance. Experimental results suggest that its approximation performance is orders of magnitude better than previously reported practical values. Moreover, the proposed algorithm remains efficient even on large instances and can serve to explore in practice the approximability of SLS.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8953Experimental Study of Compressed Stack Algorithms in Limited Memory Environments
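The idea of an approximate superstring serving as an upper bound can be illustrated with the classic greedy merge heuristic. This is not the paper's linear-time algorithm (which additionally reports a lower bound); it is a quadratic sketch under the usual assumption that no input word is a substring of another.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(words):
    """Classic greedy heuristic: repeatedly merge the pair with maximum
    overlap.  The result's length is an upper bound on the length of an
    optimal superstring.  Assumes no word is a substring of another."""
    words = list(words)
    while len(words) > 1:
        _, i, j = max((overlap(a, b), i, j)
                      for i, a in enumerate(words)
                      for j, b in enumerate(words) if i != j)
        merged = words[i] + words[j][overlap(words[i], words[j]):]
        words = [w for t, w in enumerate(words) if t not in (i, j)] + [merged]
    return words[0]

print(greedy_superstring(["abc", "bcd", "cde"]))  # -> abcde
```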
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8954
The compressed stack is a data structure designed by Barba et al. (Algorithmica 2015) that reduces the amount of memory needed by a certain class of algorithms at the cost of increased runtime. In this paper we introduce the first implementation of this data structure and make its source code publicly available. Together with the implementation, we analyse the performance of the compressed stack. In our synthetic experiments, considering different test scenarios and using data sizes ranging up to 2^{30} elements, we compare it with the classic (uncompressed) stack, both in terms of runtime and memory used. Our experiments show that the compressed stack needs significantly less memory than the usual stack (this difference is significant for inputs containing 2000 or more elements). Overall, with a proper choice of parameters, we can save a significant amount of space (from two to four orders of magnitude) with a small increase in the runtime (2.32 times slower on average than the classic stack). These results hold even in test scenarios specifically designed to be challenging for the compressed stack.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8954Restructuring Expression Dags for Efficient Parallelization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8955
In the field of robust geometric computation it is often necessary to make exact decisions based on inexact floating-point arithmetic. One common approach is to store the computation history in an arithmetic expression dag and to re-evaluate the expression with increasing precision until an exact decision can be made. We show that exact-decisions number types based on expression dags can be evaluated faster in practice through parallelization on multiple cores. We compare the impact of several restructuring methods for the expression dag on its running time in a parallel environment.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8955Enumerating Graph Partitions Without Too Small Connected Components Using Zero-suppressed Binary and Ternary Decision Diagrams
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8956
Partitioning a graph into balanced components is important for several applications. For multi-objective problems, it is useful not only to find one solution but also to enumerate all the solutions with good objective values. However, a graph has a vast number of partitions, and thus it is difficult to enumerate desired graph partitions efficiently. In this paper, we propose an algorithm that enumerates all the graph partitions in which the weight of every connected component is at least a specified value. To deal with a large search space, we use zero-suppressed binary decision diagrams (ZDDs) to represent sets of graph partitions, and we design a new algorithm based on frontier-based search, a framework for directly constructing a ZDD. Our algorithm utilizes not only ZDDs but also ternary decision diagrams (TDDs), and realizes an operation that appears difficult to design with ZDDs alone. Experimental results show that the proposed algorithm runs up to tens of times faster than an existing state-of-the-art algorithm.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8956Exact Algorithms for the Maximum Planar Subgraph Problem: New Models and Experiments
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8957
Given a graph G, the NP-hard Maximum Planar Subgraph problem asks for a planar subgraph of G with the maximum number of edges. The only known non-trivial exact algorithm utilizes Kuratowski's famous planarity criterion and can be formulated as an integer linear program (ILP) or a pseudo-boolean satisfiability problem (PBS). We examine three alternative characterizations of planarity regarding their applicability to model maximum planar subgraphs. For each, we consider both ILP and PBS variants, investigate diverse formulation aspects, and evaluate their practical performance.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8957A Linear-Time Algorithm for Finding Induced Planar Subgraphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8958
In this paper we study the problem of efficiently and effectively extracting induced planar subgraphs. Edwards and Farr proposed an algorithm with O(mn) time complexity to find an induced planar subgraph of at least 3n/(d+1) vertices in a graph of maximum degree d. They also proposed an alternative algorithm with O(mn) time complexity to find an induced planar subgraph of at least 3n/(bar{d}+1) vertices, where bar{d} is the average degree of the graph. These two methods appear to be the best known when d and bar{d} are small. Unfortunately, they sacrifice accuracy for lower time complexity by using indirect indicators of planarity. A limitation of those approaches is that the algorithms do not test for planarity themselves, and the additional cost of such a test can be significant in large graphs. In contrast, we propose a linear-time algorithm that finds an induced planar subgraph of n-nu vertices in a graph of n vertices, where nu denotes the total number of vertices shared by the detected Kuratowski subdivisions. An added benefit of our approach is that we are able to detect when a graph is planar and terminate the reduction. The resulting planar subgraphs also do not have any rigid constraints on the maximum degree of the induced subgraph. The experimental results show that our method achieves better performance than current methods on graphs with small skewness.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8958Fast Spherical Drawing of Triangulations: An Experimental Study of Graph Drawing Tools
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8959
We consider the problem of computing a spherical crossing-free geodesic drawing of a planar graph: this problem, as well as the closely related spherical parameterization problem, has attracted a lot of attention in the last two decades both in theory and in practice, motivated by a number of applications ranging from texture mapping to mesh remeshing and morphing. Our main concern is to design and implement a linear time algorithm for the computation of spherical drawings provided with theoretical guarantees. While not being aesthetically pleasing, our method is extremely fast and can be used as initial placer for spherical iterative methods and spring embedders. We provide experimental comparison with initial placers based on planar Tutte parameterization. Finally we explore the use of spherical drawings as initial layouts for (Euclidean) spring embedders: experimental evidence shows that this greatly helps to untangle the layout and to reach better local minima.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8959Fleet Management for Autonomous Vehicles Using Multicommodity Coupled Flows in Time-Expanded Networks
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8960
VIPAFLEET is a framework to develop models and algorithms for managing a fleet of Individual Public Autonomous Vehicles (VIPA). We consider a homogeneous fleet of such vehicles distributed at specified stations in a closed site to supply internal transportation, where the vehicles can be used in different modes of circulation (tram mode, elevator mode, taxi mode). In this paper we treat a variant of the Online Pickup-and-Delivery Problem related to the taxi mode by means of multicommodity coupled flows in a time-expanded network and propose a corresponding integer linear programming formulation. This enables us to compute optimal offline solutions. However, to apply the well-known meta-strategy Replan to the online situation by solving a sequence of offline subproblems, the computation times turned out to be too long, so we devise a heuristic approach h-Replan based on the flow formulation. Finally, we evaluate the performance of h-Replan in comparison with the optimal offline solution, both in terms of competitive analysis and computational experiments, showing that h-Replan computes reasonable solutions, making it well suited to the online situation.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8960The Steiner Multi Cycle Problem with Applications to a Collaborative Truckload Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8961
We introduce a new problem called the Steiner Multi Cycle Problem, which extends the Steiner Cycle problem in the same way the Steiner Forest extends the Steiner Tree problem. In this problem we are given a complete weighted graph G=(V,E), which respects the triangle inequality, and a collection of pairwise disjoint terminal sets {T_1,..., T_k}, where for each a in [k] we have a subset T_a of V. The problem is to find a set of disjoint cycles of minimum cost such that, for each a in [k], all vertices of T_a belong to the same cycle. Our main interest is in a restricted case where |T_a| = 2 for each a in [k], which models a collaborative less-than-truckload problem with pickup and delivery. In this problem, we have a set of agents, where each agent is associated with a set T_a containing a pair of pickup and delivery vertices. This problem arises in the scenario where a company has to periodically exchange goods between two different locations, and different companies can collaborate to create a route that visits all their pairs of locations, sharing the total cost of the route. We show that even the restricted problem is NP-hard, and present some heuristics to solve it. In particular, we present a constructive heuristic called Refinement Search, which uses geometric properties to determine whether agents are close to each other. We performed computational experiments to compare this heuristic to a GRASP-based heuristic. Refinement Search obtained the best solutions in little computational time.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8961Real-Time Traffic Assignment Using Fast Queries in Customizable Contraction Hierarchies
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8962
Given an urban road network and a set of origin-destination (OD) pairs, the traffic assignment problem asks for the traffic flow on each road segment. A common solution employs a feasible-direction method, where the direction-finding step requires many shortest-path computations. In this paper, we significantly accelerate the computation of flow patterns, enabling interactive transportation and urban planning applications. We achieve this by revisiting and carefully engineering known speedup techniques for shortest paths, and combining them with customizable contraction hierarchies. In particular, our accelerated elimination tree search is more than an order of magnitude faster for local queries than the original algorithm, and our centralized search speeds up batched point-to-point shortest paths by a factor of up to 6. These optimizations are independent of traffic assignment and can be generally used for (batched) point-to-point queries. In contrast to prior work, our evaluation uses real-world data for all parts of the problem. On a metropolitan area encompassing more than 2.7 million inhabitants, we reduce the flow-pattern computation for a typical two-hour morning peak from 76.5 to 10.5 seconds on one core, and 4.3 seconds on four cores. This represents a speedup of 18 over the state of the art, and three orders of magnitude over the Dijkstra-based baseline.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8962Engineering Motif Search for Large Motifs
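For context, the Dijkstra baseline against which the three-orders-of-magnitude speedup is reported is just a textbook point-to-point search with early termination at the target. A minimal sketch follows; the adjacency format is an assumption for illustration.

```python
import heapq

def dijkstra(adj, s, t):
    """Textbook point-to-point Dijkstra with early termination at t.
    adj maps each node to a list of (neighbor, edge_weight) pairs;
    returns the shortest-path distance from s to t."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d                       # target settled: stop early
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")                    # t unreachable from s

adj = {0: [(1, 2), (2, 5)], 1: [(2, 1), (3, 4)], 2: [(3, 1)], 3: []}
print(dijkstra(adj, 0, 3))  # -> 4 (via 0 -> 1 -> 2 -> 3)
```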
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8963
Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8963Finding Hamiltonian Cycle in Graphs of Bounded Treewidth: Experimental Evaluation
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8964
The notion of treewidth, introduced by Robertson and Seymour in their seminal Graph Minors series, turned out to have tremendous impact on graph algorithmics. Many hard computational problems on graphs turn out to be efficiently solvable in graphs of bounded treewidth: graphs that can be swept with separators of bounded size. These efficient algorithms usually follow the dynamic programming paradigm. In recent years, we have seen a rapid and quite unexpected development of involved techniques for solving various computational problems in graphs of bounded treewidth. One of the most surprising directions is the development of algorithms for connectivity problems that have only single-exponential dependency (i.e., 2^{{O}(t)}) on the treewidth in the running time bound, as opposed to the slightly superexponential (i.e., 2^{{O}(t log t)}) dependency stemming from more naive approaches. In this work, we perform a thorough experimental evaluation of these approaches in the context of one of the most classic connectivity problems, namely Hamiltonian Cycle.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8964Isomorphism Test for Digraphs with Weighted Edges
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8965
Colour refinement is at the heart of all the most efficient graph isomorphism software packages. In this paper we present a method for extending the applicability of refinement algorithms to directed graphs with weighted edges. We use {Traces} as a reference software, but the proposed solution is easily transferable to any other refinement-based graph isomorphism tool in the literature. We substantiate the claim that the performance of the original algorithm remains substantially unchanged by showing experiments for some classes of benchmark graphs.Mon, 18 Jun 2018 15:22:52 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8965LIPIcs, Volume 100, FUN'18, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8931
LIPIcs, Volume 100, FUN'18, Complete VolumeMon, 18 Jun 2018 08:12:09 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8931LIPIcs, Volume 99, SoCG'18, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8930
LIPIcs, Volume 99, SoCG'18, Complete VolumeMon, 18 Jun 2018 08:11:42 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8930Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8893
Front Matter, Table of Contents, Preface, Conference OrganizationFri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8893OMG: GW, CLT, CRT and CFTP (Flajolet Award Lecture)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8894
After a brief review of the main results on Galton-Watson trees from the past two decades, we will discuss a few recent results in the field.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8894Assumptionless Bounds for Random Trees (Keynote Speakers)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8895
Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8895Making Squares - Sieves, Smooth Numbers, Cores and Random Xorsat (Keynote Speakers)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8896
Since the advent of fast computers, much attention has been paid to practical factoring algorithms. Several of these algorithms set out to find two squares x^2, y^2 that are congruent modulo the number n we wish to factor, and are non-trivial in the sense that x is not equivalent to +/- y mod n. In 1994, this prompted Pomerance to ask the following question. Let a_1, a_2, ... be random integers, chosen independently and uniformly from a set {1, ..., x}. Let N be the smallest index such that {a_1, ..., a_N} contains a subsequence, the product of whose elements is a perfect square. What can you say about this random number N? In particular, give bounds N_0 and N_1 such that P(N_0 <= N <= N_1) -> 1 as x -> infty. Pomerance also gave bounds N_0 and N_1 with log N_0 ~ log N_1. In 2012, Croot, Granville, Pemantle and Tetali significantly improved these bounds of Pomerance, bringing them within a constant of each other, and conjectured that their upper bound is sharp. In a recent paper, Paul Balister, Rob Morris and I have proved this conjecture. In the talk I shall review some related results and sketch some of the ideas used in our proof.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8896Bootstrap Percolation and Galton-Watson Trees (Keynote Speakers)
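For a concrete sequence, the quantity N above can be computed by Gaussian elimination over GF(2) on the exponent-parity vectors of the a_i: a subsequence has a square product exactly when its parity vectors are linearly dependent. A small sketch, using trial-division factoring (illustrative only, not suited to cryptographic sizes):

```python
def parity_vector(n):
    """Set of primes dividing n to an odd power (exponent parities over GF(2))."""
    v, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            v ^= {d}          # flip the parity bit for prime d
            n //= d
        d += 1
    if n > 1:
        v ^= {n}
    return v

def first_square_subsequence(seq):
    """Smallest N such that seq[:N] contains a nonempty subsequence whose
    product is a perfect square, i.e. the first index at which the parity
    vectors become linearly dependent over GF(2); None if no such N."""
    basis = {}                # leading prime -> basis vector (a set of primes)
    for i, a in enumerate(seq, start=1):
        v = parity_vector(a)
        while v:
            lead = max(v)
            if lead not in basis:
                basis[lead] = v
                break
            v = v ^ basis[lead]   # symmetric difference = GF(2) addition
        else:
            return i          # v reduced to the zero vector: dependence found
    return None

print(first_square_subsequence([6, 10, 15]))  # 6*10*15 = 900 = 30^2 -> 3
```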
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8897
Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8897Thinking in Advance About the Last Algorithm We Ever Need to Invent (Keynote Speakers)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8898
We survey current discussions about possibilities and risks associated with an artificial intelligence breakthrough on the level that puts humanity in the situation where we are no longer foremost on the planet in terms of general intelligence. The importance of thinking in advance about such an event is emphasized. Key issues include when and how suddenly superintelligence is likely to emerge, the goals and motivations of a superintelligent machine, and what we can do to improve the chances of a favorable outcome.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8898Patterns in Random Permutations Avoiding Some Other Patterns (Keynote Speakers)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8899
Consider a random permutation drawn from the set of permutations of length n that avoid a given set of one or several patterns of length 3. We show that the number of occurrences of another pattern has a limit distribution, after suitable scaling. In several cases, the limit is normal, as it is in the case of unrestricted random permutations; in other cases the limit is a non-normal distribution, depending on the studied pattern. In the case when a single pattern of length 3 is forbidden, the limit distributions can be expressed in terms of a Brownian excursion. The analysis is made case by case; unfortunately, no general method is known, and no general pattern emerges from the results.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8899Vanishing of Cohomology Groups of Random Simplicial Complexes (Keynote Speakers)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8900
We consider k-dimensional random simplicial complexes that are generated from the binomial random (k+1)-uniform hypergraph by taking the downward-closure, where k >= 2. For each 1 <= j <= k-1, we determine when all cohomology groups with coefficients in F_2 from dimension one up to j vanish and the zero-th cohomology group is isomorphic to F_2. This property is not monotone, but nevertheless we show that it has a single sharp threshold. Moreover, we prove a hitting time result, relating the vanishing of these cohomology groups to the disappearance of the last minimal obstruction. Furthermore, we study the asymptotic distribution of the dimension of the j-th cohomology group inside the critical window. As a corollary, we deduce a hitting time result for a different model of random simplicial complexes introduced in [Linial and Meshulam, Combinatorica, 2006], a result which has only been known for dimension two [Kahle and Pittel, Random Structures Algorithms, 2016].Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8900Periods in Subtraction Games (Keynote Speakers)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8901
We discuss the structure of periods in subtraction games. In particular, we discuss ways that a computational approach yields insights to the periods that emerge in the asymptotic structure of these combinatorial games.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8901Permutations in Binary Trees and Split Trees
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8902
We investigate the number of permutations that occur in random node labellings of trees. This is a generalisation of the number of subpermutations occurring in a random permutation. It also generalises some recent results on the number of inversions in randomly labelled trees [Cai et al., 2017]. We consider complete binary trees as well as random split trees, a large class of random trees of logarithmic height introduced by Devroye [Devroye, 1998]. Split trees consist of nodes (bags) which can contain balls and are generated by a random trickle-down process of balls through the nodes. For complete binary trees we show that asymptotically the cumulants of the number of occurrences of a fixed permutation in the random node labelling have explicit formulas. Our second main theorem shows that for a random split tree, with high probability, the cumulants of the number of occurrences are asymptotically an explicit parameter of the split tree. For the proof of the second theorem we show some results on the number of embeddings of digraphs into split trees which may be of independent interest.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8902Analytic Combinatorics of Lattice Paths with Forbidden Patterns: Asymptotic Aspects and Borges's Theorem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8903
In a companion article dedicated to the enumeration aspects, we showed how to obtain closed-form formulas for the generating functions of walks, bridges, meanders, and excursions avoiding any fixed word (a pattern p). The autocorrelation polynomial of this forbidden pattern p (as introduced by Guibas and Odlyzko in 1981, in the context of regular expressions) plays a crucial role. In this article, we obtain the asymptotics of these walks. We also introduce a trivariate generating function (length, final altitude, number of occurrences of p), for which we derive a closed form. We prove that the number of occurrences of p is normally distributed: this is what Flajolet and Sedgewick call an instance of Borges's theorem. We thus extend and refine the study by Banderier and Flajolet in 2002 on lattice paths, and we unify several dozen articles which investigated patterns like peaks, valleys, humps, etc., in Dyck and Motzkin paths. Our approach relies on methods of analytic combinatorics and on a matricial generalization of the kernel method. The situation is much more involved than in the Banderier-Flajolet work: forbidden patterns lead to a wider zoology of asymptotic behaviours, which we classify according to the geometry of a Newton polygon associated with these constrained walks, and we analyse the universal phenomena common to all these models of lattice paths avoiding a pattern.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8903
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8904
Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8904Diagonal Asymptotics for Symmetric Rational Functions via ACSV
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8905
We consider asymptotics of power series coefficients of rational functions of the form 1/Q where Q is a symmetric multilinear polynomial. We review a number of such cases from the literature, chiefly concerned either with positivity of coefficients or diagonal asymptotics. We then analyze coefficient asymptotics using ACSV (Analytic Combinatorics in Several Variables) methods. While ACSV sometimes requires considerable overhead and geometric computation, in the case of symmetric multilinear rational functions there are some reductions that streamline the analysis. Our results include diagonal asymptotics across entire classes of functions, for example the general 3-variable case and the Gillis-Reznick-Zeilberger (GRZ) case, where the denominator in terms of elementary symmetric functions is 1 - e_1 + c e_d in any number d of variables. The ACSV analysis also explains a discontinuous drop in exponential growth rate for the GRZ class at the parameter value c = (d-1)^{d-1}, previously observed for d=4 only by separately computing diagonal recurrences for critical and noncritical values of c.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8905Asymptotic Distribution of Parameters in Random Maps
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8906
We consider random rooted maps without regard to their genus, with fixed large number of edges, and address the problem of limiting distributions for six different parameters: vertices, leaves, loops, root edges, root isthmus, and root vertex degree. Each of these leads to a different limiting distribution, varying from (discrete) geometric and Poisson distributions to different continuous ones: Beta, normal, uniform, and an unusual distribution whose moments are characterised by a recursive triangular array.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8906Beyond Series-Parallel Concurrent Systems: The Case of Arch Processes
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8907
In this paper we focus on concurrent processes built on synchronization by means of futures. This concept is an abstraction for processes based on a main execution thread but allowing some computations to be delayed. The structure of a general concurrent process is a directed acyclic graph (DAG). Since the quantitative study of increasingly labeled DAGs (directly related to processes) seems out of reach (it is a #P-complete problem), we restrict ourselves to the study of arch processes, a simplistic model of processes with futures. They are based on two parameters related to their sizes and their numbers of arches. The increasingly labeled structures seem not to be specifiable in the classical sense of Analytic Combinatorics, but we manage to derive a recurrence equation for the enumeration. For this model we first exhibit an exact and an asymptotic formula for the number of runs of a given process. The second main contribution is composed of a uniform random sampler algorithm and an unranking one that allow efficient generation and exhaustive enumeration of the runs of a given arch process.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8907Inversions in Split Trees and Conditional Galton-Watson Trees
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8908
We study I(T), the number of inversions in a tree T with its vertices labeled uniformly at random. We first show that the cumulants of I(T) have explicit formulas. Then we consider X_n, the normalized version of I(T_n), for a sequence of trees T_n. For fixed T_n's, we prove a sufficient condition for X_n to converge in distribution. For T_n being split trees [Devroye, 1999], we show that X_n converges to the unique solution of a distributional equation. Finally, when T_n's are conditional Galton-Watson trees, we show that X_n converges to a random variable defined in terms of Brownian excursions. Our results generalize and extend previous work by Panholzer and Seitz [Panholzer and Seitz, 2012].Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8908The Cover Time of a Biased Random Walk on a Random Cubic Graph
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8909
We study a random walk that prefers to use unvisited edges in the context of random cubic graphs, i.e., graphs chosen uniformly at random from the set of 3-regular graphs. We establish asymptotically correct estimates for the vertex and edge cover times, these being n log n and 3/2 n log n respectively.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8909
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8910
Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8910Maximal Independent Sets and Maximal Matchings in Series-Parallel and Related Graph Classes
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8911
We provide combinatorial decompositions as well as tight asymptotic estimates for two maximal parameters: the number and average size of maximal independent sets and maximal matchings in series-parallel graphs (and related graph classes) with n vertices. In particular, our results extend previous results of Meir and Moon for trees [Meir, Moon: On maximal independent sets of nodes in trees, Journal of Graph Theory 1988]. We also show that these two parameters satisfy a central limit law.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8911The Number of Double Triangles in Random Planar Maps
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8912
The purpose of this paper is to provide a central limit theorem for the number of occurrences of double triangles in random planar maps. This is the first result of this kind that goes beyond face counts of given valency. The method is based on generating functions, an involved combinatorial decomposition scheme that leads to a system of catalytic functional equations and an analytic extension of the Quadratic Method to systems of equations.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8912Fixed Partial Match Queries in Quadtrees
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8913
Several recent papers in the literature have addressed the analysis of the cost P_{n,q} of partial match search for a given fixed query q - that has s out of K specified coordinates - in different multidimensional data structures. Indeed, detailed asymptotic estimates for the main term in the expected cost P_{n,q} = E {P_{n,q}} in standard and relaxed K-d trees are known (for any dimension K and any number s of specified coordinates), as well as stronger distributional results on P_{n,q} for standard 2-d trees and 2-dimensional quadtrees. In this work we derive a precise asymptotic estimate for the main order term of P_{n,q} in quadtrees, for any values of K and s, 0 < s < K, under the assumption that the limit of P_{n,q}/n^alpha when n -> infty exists, where alpha is the exponent of n in the expected cost of a random partial match query with s specified coordinates in a random K-dimensional quadtree.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8913On the Tails of the Limiting QuickSort Density
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8914
We give upper and lower asymptotic bounds for the left tail and for the right tail of the continuous limiting QuickSort density f that are nearly matching in each tail. The bounds strengthen results from a paper of Svante Janson (2015) concerning the corresponding distribution function F. Furthermore, we obtain similar upper bounds on absolute values of derivatives of f of each order.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8914Stationary Distribution Analysis of a Queueing Model with Local Choice
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8915
The paper deals with load balancing between one-server queues on a circle under a local choice policy. Each one-server queue has Poissonian arrivals of customers. When a customer arrives at a queue, he joins the least loaded of this queue and the next one, with ties broken at random. Service times have an exponential distribution. The system is stable if the arrival-to-service rate ratio, called the load, is less than one. When the load tends to zero, we derive the first terms of the expansion in this parameter for the stationary probabilities that a queue has 0 to 3 customers. We investigate the error by comparing these expansion results to numerical values obtained by simulations. Then we provide the asymptotics, as the load tends to zero, for the stationary probabilities of the queue length, for a fixed number of queues. This quantifies the difference between this local choice policy, no choice, and the choice between two queues chosen at random.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8915Refined Asymptotics for the Number of Leaves of Random Point Quadtrees
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8916
In the early 2000s, several phase change results from distributional convergence to distributional non-convergence have been obtained for shape parameters of random discrete structures. Recently, for those random structures which admit a natural martingale process, these results have been considerably improved by obtaining refined asymptotics for the limit behavior. In this work, we propose a new approach which is also applicable to random discrete structures which do not admit a natural martingale process. As an example, we obtain refined asymptotics for the number of leaves in random point quadtrees. More applications, for example to shape parameters in generalized m-ary search trees and random gridtrees, will be discussed in the journal version of this extended abstract.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8916Slow Convergence of Ising and Spin Glass Models with Well-Separated Frustrated Vertices
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8917
Many physical models undergo phase transitions as some parameter of the system is varied. This phenomenon has bearing on the convergence times for local Markov chains walking among the configurations of the physical system. One of the most basic examples of this phenomenon is the ferromagnetic Ising model on an n x n square lattice region Lambda with mixed boundary conditions. For this spin system, if we fix the spins on the top and bottom sides of the square to be + and the left and right sides to be -, a standard Peierls argument based on energy shows that below some critical temperature t_c, any local Markov chain M requires time exponential in n to mix. Spin glasses are magnetic alloys that generalize the Ising model by specifying the strength of nearest neighbor interactions on the lattice, including whether they are ferromagnetic or antiferromagnetic. Whenever a face of the lattice is bounded by an odd number of edges with ferromagnetic interactions, the face is considered frustrated because the local competing objectives cannot be simultaneously satisfied. We consider spin glasses with exactly four well-separated frustrated faces that are symmetric around the center of the lattice region under 90 degree rotations. We show that local Markov chains require exponential time for all spin glasses in this class. This class includes the ferromagnetic Ising model with mixed boundary conditions described above, where the frustrated faces are on the boundary. The standard Peierls argument breaks down when the frustrated faces are on the interior of Lambda and yields weaker results when they are on the boundary of Lambda but not near the corners. We show that there is a universal temperature T below which M will be slow for all spin glasses with four well-separated frustrated faces.
Our argument shows that there is an exponentially small cut indicated by the free energy, carefully exploiting both entropy and energy to exhibit a small bottleneck in the state space that establishes slow mixing.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8917On the Number of Variables in Special Classes of Random Lambda-Terms
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8918
We investigate the number of variables in two special subclasses of lambda-terms that are restricted by a bound on the number of abstractions between a variable and its binding lambda, and by a bound on the number of nesting levels of abstractions, respectively. These restrictions are on the one hand very natural from a practical point of view, and on the other hand they simplify the counting problem compared to that of unrestricted lambda-terms in such a way that the common methods of analytic combinatorics are applicable. We will show that the total number of variables is asymptotically normally distributed for both subclasses of lambda-terms, with mean and variance asymptotically equal to C_1 n and C_2 n, respectively, where the constants C_1 and C_2 depend on the bound that has been imposed. So far we have derived closed formulas for the constants only for the class of lambda-terms with a bounded number of abstractions between each variable and its binding lambda. However, for the other class of lambda-terms that we consider, namely lambda-terms with a bounded number of nesting levels of abstractions, we investigate the number of variables in the different abstraction levels and thereby exhibit very interesting results concerning the distribution of the variables within those lambda-terms.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8918Counting Ascents in Generalized Dyck Paths
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8919
Non-negative Lukasiewicz paths are special two-dimensional lattice paths that never pass below their starting altitude and have only a single special type of down step. They are well-known and well-studied combinatorial objects, in particular due to their bijective relation to trees with given node degrees. We study the asymptotic behavior of the number of ascents (i.e., the number of maximal sequences of consecutive up steps) of given length for classical subfamilies of general non-negative Lukasiewicz paths: those with arbitrary ending altitude, those ending on their starting altitude, and a variation thereof. Our results include precise asymptotic expansions for the expected number of such ascents as well as for the corresponding variance.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8919Analysis of Summatory Functions of Regular Sequences: Transducer and Pascal's Rhombus
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8920
The summatory function of a q-regular sequence in the sense of Allouche and Shallit is analysed asymptotically. The result is a sum of periodic fluctuations multiplied by a scaling factor. Each summand corresponds to an eigenvalue of absolute value larger than the joint spectral radius of the matrices of a linear representation of the sequence. The Fourier coefficients of the fluctuations are expressed in terms of residues of the corresponding Dirichlet generating function. A known pseudo-Tauberian argument is extended in order to overcome convergence problems in Mellin-Perron summation. Two examples are discussed in more detail: sequences defined as the sum of the outputs written by a transducer when reading the q-ary expansion of the input, and the number of odd entries in the rows of Pascal's rhombus.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8920Distribution of the Number of Corners in Tree-like and Permutation Tableaux
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8921
In this abstract, we study tree-like tableaux and some of their probabilistic properties. Tree-like tableaux are in bijection with other combinatorial structures, including permutation tableaux, and have a connection to the partially asymmetric simple exclusion process (PASEP), an important model of interacting particle systems. In particular, in the context of tree-like tableaux, a corner corresponds to a node occupied by a particle that could jump to the right, while inner corners indicate a particle with an empty node to its left. Thus, the total number of corners represents the number of nodes at which the PASEP can move, i.e., the total current activity of the system. As the numbers of inner corners and regular corners are related, we limit our discussion to regular corners and show that, asymptotically, the number of corners in a tableau of length n is normally distributed. Furthermore, since the number of corners in tree-like tableaux is closely related to the number of corners in permutation tableaux, we also discuss corners in the context of the latter tableaux.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8921Asymptotic Expansions for Sub-Critical Lagrangean Forms
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8922
Asymptotic expansions for the Taylor coefficients of the Lagrangean form phi(z)=zf(phi(z)) are examined with a focus on the calculations of the asymptotic coefficients. The expansions are simple and useful, and we discuss their use in some enumerating sequences in trees, lattice paths and planar maps.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8922Periods of Iterations of Mappings over Finite Fields with Restricted Preimage Sizes
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8923
Let f be a uniformly random element of the set of all mappings from [n] = {1, ..., n} to itself. Let T(f) and B(f) denote, respectively, the least common multiple and the product of the lengths of the cycles of f. Harris proved in 1973 that log T converges in distribution to a standard normal distribution and, in 2011, Schmutz obtained an asymptotic estimate on the logarithm of the expectation of T and B over all mappings on n nodes. We obtain analogous results for uniform random mappings on n = kr nodes with preimage sizes restricted to a set of the form {0,k}, where k = k(r) >= 2. This is motivated by the use of these classes of mappings as heuristic models for the statistics of polynomials of the form x^k + a over the integers modulo p, where k divides p - 1. We exhibit and discuss our numerical results on this heuristic.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8923
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8924
Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8924Counting Planar Tanglegrams
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8925
Tanglegrams are structures consisting of two binary rooted trees with the same number of leaves and a perfect matching between the leaves of the two trees. We say that a tanglegram is planar if it can be drawn in the plane without crossings. Using a blend of combinatorial and analytic techniques, we determine an asymptotic formula for the number of planar tanglegrams with n leaves on each side.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8925Asymptotic Normality of Almost Local Functionals in Conditioned Galton-Watson Trees
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8926
An additive functional of a rooted tree is a functional that can be calculated recursively as the sum of the values of the functional over the branches, plus a certain toll function. Janson recently proved a central limit theorem for additive functionals of conditioned Galton-Watson trees under the assumption that the toll function is local, i.e. only depends on a fixed neighbourhood of the root. We extend his result to functionals that are almost local, thus covering a wider range of functionals. Our main result is illustrated by two explicit examples: the logarithm of the number of matchings, and a functional stemming from a tree reduction process that was studied by Hackl, Heuberger, Kropf, and Prodinger.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8926Local Limits of Large Galton-Watson Trees Rerooted at a Random Vertex
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8927
We prove limit theorems describing the asymptotic behaviour of a typical vertex in random simply generated trees as their sizes tend to infinity. In the standard case of a critical Galton-Watson tree conditioned to be large, the limit is the invariant random sin-tree constructed by Aldous (1991). Our main contribution lies in the condensation regime where vertices of macroscopic degree appear. Here we describe in complete generality the asymptotic local behaviour from a random vertex up to its first ancestor with "large" degree. Beyond this distinguished ancestor, different behaviours may occur, depending on the branching weights. In a subregime of complete condensation, we obtain convergence toward a novel limit tree that describes the asymptotic shape of the vicinity of the full path from a random vertex to the root vertex. This includes the important case where the offspring distribution follows a power law up to a factor that varies slowly at infinity.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8927The Depoissonisation Quintet: Rice-Poisson-Mellin-Newton-Laplace
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8928
This paper is devoted to the Depoissonisation process, which is central in various analyses of the AofA domain. We first recall in Section 1 the two possible paths that may be used in this process, namely the Depoissonisation path and the Rice path. The two paths are rarely described for themselves in the literature, and general methodological results are often difficult to isolate amongst particular results that are more directed towards various applications. The main results for the Depoissonisation path are scattered in at least five papers, with a chronological order which does not correspond to the logical order of the method. The Rice path is also almost always presented with a strong focus towards possible applications. It is often very easy to apply, but it needs a tameness condition, which appears a priori to be quite restrictive, and is not deeply studied in the literature. This explains why the Rice path is very often undervalued. Second, the two paths are not precisely compared, and the situation creates various "feelings": some people see the tools that are used in the two paths as quite different, and strongly prefer one of the two paths; some others think the two paths are almost the same, with just a change of vocabulary. It is thus useful to compare the two paths and the tools they use. This will be done in Sections 2 and 3. We also "follow" this comparison on a precise problem, related to the analysis of tries, introduced in Section 1.7. The paper also exhibits in Section 4 a new framework, of practical use, where the tameness condition of the Rice path is proven to hold. This approach, perhaps of independent interest, deals with the shifting of sequences and then the inverse Laplace transform, which does not seem of classical use in this context. It performs very simple computations. This adds a new method to the Depoissonisation context and explains the title of the paper.
We then conclude that the Rice path is both of easy and practical use: even though (much?) less general than the Depoissonisation path, it is easier to apply.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8928Average Cost of QuickXsort with Pivot Sampling
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8929
QuickXsort is a strategy to combine Quicksort with another sorting method X so that the result has essentially the same comparison cost as X in isolation, but sorts in place even when X requires a linear-size buffer. We solve the recurrence for QuickXsort precisely up to the linear term, including the optimization of choosing pivots from a sample of k elements. This allows us to immediately obtain overall average costs using only the average costs of sorting method X (as if run in isolation). We thereby extend and greatly simplify the analysis of QuickHeapsort and QuickMergesort with practically efficient pivot selection, and give the first tight upper bounds including the linear term for such methods.Fri, 08 Jun 2018 10:40:20 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8929Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8860
Front Matter, Table of Contents, Preface, Conference OrganizationFri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8860Dimension Reduction for Polynomials over Gaussian Space and Applications
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8861
We introduce a new technique for reducing the dimension of the ambient space of low-degree polynomials in the Gaussian space while preserving their relative correlation structure. As an application, we obtain an explicit upper bound on the dimension of an epsilon-optimal noise-stable Gaussian partition. In fact, we address the more general problem of upper bounding the number of samples needed to epsilon-approximate any joint distribution that can be non-interactively simulated from a correlated Gaussian source. Our results significantly improve (from Ackermann-like to "merely" exponential) the upper bounds recently proved on the above problems by De, Mossel & Neeman [CCC 2017, SODA 2018 resp.] and imply decidability of the larger alphabet case of the gap non-interactive simulation problem posed by Ghazi, Kamath & Sudan [FOCS 2016]. Our technique of dimension reduction for low-degree polynomials is simple, can be seen as a generalization of the Johnson-Lindenstrauss lemma, and could be of independent interest.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8861Lower Bounds on Non-Adaptive Data Structures Maintaining Sets of Numbers, from Sunflowers
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8862
Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8862Small Normalized Boolean Circuits for Semi-disjoint Bilinear Forms Require Logarithmic Conjunction-depth
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8863
We consider normalized Boolean circuits that use binary operations of disjunction and conjunction, and unary negation, with the restriction that negation can be only applied to input variables. We derive a lower bound trade-off between the size of normalized Boolean circuits computing Boolean semi-disjoint bilinear forms and their conjunction-depth (i.e., the maximum number of and-gates on a directed path to an output gate). In particular, we show that any normalized Boolean circuit of at most epsilon log n conjunction-depth computing the n-dimensional Boolean vector convolution has Omega(n^{2-4 epsilon}) and-gates. Analogously, any normalized Boolean circuit of at most epsilon log n conjunction-depth computing the n x n Boolean matrix product has Omega(n^{3-4 epsilon}) and-gates. We complete our lower-bound trade-offs with upper-bound trade-offs of similar form yielded by the known fast algebraic algorithms.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8863On the Complexity of the Cayley Semigroup Membership Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8864
We investigate the complexity of deciding, given a multiplication table representing a semigroup S, a subset X of S and an element t of S, whether t can be expressed as a product of elements of X. It is well-known that this problem is {NL}-complete and that the more general Cayley groupoid membership problem, where the multiplication table is not required to be associative, is {P}-complete. For groups, the problem can be solved in deterministic log-space, which raised the question of determining the exact complexity of this variant. Barrington, Kadau, Lange and McKenzie showed that for Abelian groups and for certain solvable groups, the problem is contained in the complexity class {FOLL} and they concluded that these variants are not hard for any complexity class containing {Parity}. The more general case of arbitrary groups remained open. In this work, we show that for both groups and for commutative semigroups, the problem is solvable in {qAC}^0 (quasi-polynomial size circuits of constant depth with unbounded fan-in) and conclude that these variants are also not hard for any class containing {Parity}. Moreover, we prove that {NL}-completeness already holds for the classes of 0-simple semigroups and nilpotent semigroups. Together with our results on groups and commutative semigroups, we prove the existence of a natural class of finite semigroups which generates a variety of finite semigroups with {NL}-complete Cayley semigroup membership, while the Cayley semigroup membership problem for the class itself is not {NL}-hard. We also discuss applications of our technique to {FOLL}.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8864Worst-Case to Average Case Reductions for the Distance to a Code
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8865
Algebraic proof systems reduce computational problems to problems about estimating the distance of a sequence of functions vec{u}=(u_1,..., u_k), given as oracles, from a linear error correcting code V. The soundness of such systems relies on methods that act "locally" on vec{u} and map it to a single function u^* that is, roughly, as far from V as are u_1,..., u_k. Motivated by these applications to efficient proof systems, we study a natural worst-case to average-case reduction of distance for linear spaces, and show several general cases in which the following statement holds: If some member of a linear space U=span(u_1,...,u_k) is delta-far from (all elements) of V in relative Hamming distance, then nearly all elements of U are (1-epsilon)delta-far from V; the value of epsilon depends only on the distance of the code V and approaches 0 as that distance approaches 1. Our results improve on the previous state-of-the-art which showed that nearly all elements of U are delta/2-far from V [Rothblum, Vadhan and Wigderson, STOC 2013]. When V is a Reed-Solomon (RS) code, as is often the case for algebraic proof systems, we show how to boost distance via a new "local" transformation that may be useful elsewhere. Relying on the affine-invariance of V, we map a vector u to a random linear combination of affine transformations of u, and show this process amplifies distance from V. Assuming V is an RS code with sufficiently large distance, this amplification process converts a function u that is somewhat far from V to one that is (1-epsilon)-far from V; as above, epsilon depends only on the distance of V and approaches 0 as the distance of V approaches 1. We give two concrete applications of these techniques. First, we revisit the axis-parallel low-degree test for bivariate polynomials of [Polischuk-Spielman, STOC 1994] and prove a "list-decoding" type result for it, when the degree of one axis is extremely small.
This result is similar to the recent list-decoding-regime result of [Chiesa, Manohar and Shinkar, RANDOM 2017] but is proved using different techniques, and allows the degree in one axis to be arbitrarily large. Second, we improve the soundness analysis of the recent RS proximity testing protocol of [Ben-Sasson et al., ICALP 2018] and extend it to the "list-decoding" regime, bringing it closer to the Johnson bound.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8865A Tight Lower Bound for Entropy Flattening
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8866
Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8866Complexity Classification of Conjugated Clifford Circuits
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8867
Clifford circuits - i.e. circuits composed of only CNOT, Hadamard, and pi/4 phase gates - play a central role in the study of quantum computation. However, their computational power is limited: a well-known result of Gottesman and Knill states that Clifford circuits are efficiently classically simulable. We show that in contrast, "conjugated Clifford circuits" (CCCs) - where one additionally conjugates every qubit by the same one-qubit gate U - can perform hard sampling tasks. In particular, we fully classify the computational power of CCCs by showing that essentially any non-Clifford conjugating unitary U can give rise to sampling tasks which cannot be efficiently classically simulated to constant multiplicative error, unless the polynomial hierarchy collapses. Furthermore, by standard techniques, this hardness result can be extended to allow for the more realistic model of constant additive error, under a plausible complexity-theoretic conjecture. This work can be seen as progress towards classifying the computational power of all restricted quantum gate sets.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8867Efficient Batch Verification for UP
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8868
Consider a setting in which a prover wants to convince a verifier of the correctness of k NP statements. For example, the prover wants to convince the verifier that k given integers N_1,...,N_k are all RSA moduli (i.e., products of equal-length primes). Clearly this problem can be solved by simply having the prover send the k NP witnesses, but this involves a lot of communication. Can interaction help? In particular, is it possible to construct interactive proofs for this task whose communication grows sub-linearly with k? Our main result is such an interactive proof for verifying the correctness of any k UP statements (i.e., NP statements that have a unique witness). The proof-system uses only a constant number of rounds and the communication complexity is k^delta * poly(m), where delta>0 is an arbitrarily small constant, m is the length of a single witness, and the poly term refers to a fixed polynomial that only depends on the language and not on delta. The (honest) prover strategy can be implemented in polynomial-time given access to the k (unique) witnesses. Our proof leverages "interactive witness verification" (IWV), a new type of proof-system that may be of independent interest. An IWV is a proof-system in which the verifier needs to verify the correctness of an NP statement using: (i) a sublinear number of queries to an alleged NP witness, and (ii) a short interaction with a powerful but untrusted prover. In contrast to the setting of PCPs and Interactive PCPs, here the verifier only has access to the raw NP witness, rather than some encoding thereof.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8868Two-Player Entangled Games are NP-Hard
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8869
We show that it is NP-hard to approximate, to within an additive constant, the maximum success probability of players sharing quantum entanglement in a two-player game with classical questions of logarithmic length and classical answers of constant length. As a corollary, the inclusion NEXP subseteq MIP^*, first shown by Ito and Vidick (FOCS'12) with three provers, holds with two provers only. The proof is based on a simpler, improved analysis of the low-degree test of Raz and Safra (STOC'97) against two entangled provers.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8869New Hardness Results for the Permanent Using Linear Optics
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8870
In 2011, Aaronson gave a striking proof, based on quantum linear optics, that the problem of computing the permanent of a matrix is #P-hard. Aaronson's proof led naturally to hardness of approximation results for the permanent, and it was arguably simpler than Valiant's seminal proof of the same fact in 1979. Nevertheless, it did not show #P-hardness of the permanent for any class of matrices which was not previously known. In this paper, we present a collection of new results about matrix permanents that are derived primarily via these linear optical techniques. First, we show that the problem of computing the permanent of a real orthogonal matrix is #P-hard. Much like Aaronson's original proof, this implies that even a multiplicative approximation remains #P-hard to compute. The hardness result even translates to permanents of orthogonal matrices over the finite field F_{p^4} for p != 2, 3. Interestingly, this characterization is tight: in fields of characteristic 2, the permanent coincides with the determinant; in fields of characteristic 3, one can efficiently compute the permanent of an orthogonal matrix by a nontrivial result of Kogan. Finally, we use more elementary arguments to prove #P-hardness for the permanent of a positive semidefinite matrix. This result shows that certain probabilities of boson sampling experiments with thermal states are hard to compute exactly, despite the fact that they can be efficiently sampled by a classical computer.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8870Earthmover Resilience and Testing in Ordered Structures
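The permanent abstract above contrasts #P-hardness of exact and approximate computation. As standard background (not from the paper), the fastest well-known exact algorithm is Ryser's inclusion-exclusion formula, which runs in exponential rather than factorial time; a minimal sketch, with `permanent_ryser` an illustrative name:

```python
from itertools import combinations

def permanent_ryser(A):
    """Permanent of an n x n matrix via Ryser's formula:
    perm(A) = (-1)^n * sum_{S subseteq [n], S nonempty}
              (-1)^{|S|} * prod_i (sum_{j in S} A[i][j]).
    Runs in O(2^n * n^2) time -- exponential, as expected for a #P-hard problem."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in S)
            total += (-1) ** k * prod
    return (-1) ** n * total
```

For example, the permanent of the 2x2 all-ones matrix is 2, and of the identity matrix is 1, matching the sum-over-permutations definition.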
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8871
One of the main challenges in property testing is to characterize those properties that are testable with a constant number of queries. For unordered structures such as graphs and hypergraphs this task has been mostly settled. However, for ordered structures such as strings, images, and ordered graphs, the characterization problem seems very difficult in general. In this paper, we identify a wide class of properties of ordered structures - the earthmover resilient (ER) properties - and show that the "good behavior" of such properties allows us to obtain general testability results that are similar to (and more general than) those of unordered graphs. A property P is ER if, roughly speaking, slight changes in the order of the elements in an object satisfying P cannot make this object far from P. The class of ER properties includes, e.g., all unordered graph properties, many natural visual properties of images, such as convexity, and all hereditary properties of ordered graphs and images. A special case of our results implies, building on a recent result of Alon and the authors, that the distance of a given image or ordered graph from any hereditary property can be estimated (with good probability) up to a constant additive error, using a constant number of queries.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8871Reordering Rule Makes OBDD Proof Systems Stronger
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8872
Atserias, Kolaitis, and Vardi showed that the proof system of Ordered Binary Decision Diagrams with conjunction and weakening, OBDD(^, weakening), simulates CP^* (Cutting Planes with unary coefficients). We show that OBDD(^, weakening) can give exponentially shorter proofs than dag-like cutting planes. This is proved by showing that the Clique-Coloring tautologies have polynomial size proofs in the OBDD(^, weakening) system. The reordering rule allows changing the variable order for OBDDs. We show that OBDD(^, weakening, reordering) is strictly stronger than OBDD(^, weakening). This is proved using the Clique-Coloring tautologies, and by transforming tautologies using coded permutations and orification. We also give CNF formulas which have polynomial size OBDD(^) proofs but require superpolynomial (actually, quasipolynomial size) resolution proofs, and thus we partially resolve an open question proposed by Groote and Zantema. Applying dag-like and tree-like lifting techniques to the mentioned results, we completely analyze which of the systems among CP^*, OBDD(^), OBDD(^, reordering), OBDD(^, weakening) and OBDD(^, weakening, reordering) polynomially simulate each other. For dag-like proof systems, some of our separations are quasipolynomial and some are exponential; for tree-like systems, all of our separations are exponential.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8872Testing Linearity against Non-Signaling Strategies
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8873
Non-signaling strategies are collections of distributions with certain non-local correlations. They have been studied in Physics as a strict generalization of quantum strategies to understand the power and limitations of Nature's apparent non-locality. Recently, they have received attention in Theoretical Computer Science due to connections to Complexity and Cryptography. We initiate the study of Property Testing against non-signaling strategies, focusing first on the classical problem of linearity testing (Blum, Luby, and Rubinfeld; JCSS 1993). We prove that any non-signaling strategy that passes the linearity test with high probability must be close to a quasi-distribution over linear functions. Quasi-distributions generalize the notion of probability distributions over global objects (such as functions) by allowing negative probabilities, while at the same time requiring that "local views" follow standard distributions (with non-negative probabilities). Quasi-distributions arise naturally in the study of Quantum Mechanics as a tool to describe various non-local phenomena. Our analysis of the linearity test relies on Fourier analytic techniques applied to quasi-distributions. Along the way, we also establish general equivalences between non-signaling strategies and quasi-distributions, which we believe will provide a useful perspective on the study of Property Testing against non-signaling strategies beyond linearity testing.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8873Hardness of Function Composition for Semantic Read once Branching Programs
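The linearity-testing abstract above builds on the classical Blum-Luby-Rubinfeld test. In its ordinary (single-function) form, the test queries f at three correlated points and checks additivity over F_2^n; the paper's contribution is the analysis against non-signaling strategies, not this procedure. A minimal sketch of the classical test, with points encoded as n-bit integers and `blr_linearity_test` an illustrative name:

```python
import random

def blr_linearity_test(f, n, trials=200):
    """BLR test: pick uniform x, y in F_2^n, check f(x) + f(y) = f(x + y)
    over F_2 (addition in F_2^n is bitwise XOR). A linear function
    f(x) = <a, x> mod 2 passes every trial; a function far from linear
    fails a constant fraction of trials. Returns the empirical pass rate."""
    passed = 0
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) == f(x ^ y):
            passed += 1
    return passed / trials
```

For instance, f(x) = parity(x & a) passes with rate 1.0, while the AND of two input bits fails a constant fraction of trials.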
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8874
In this work, we study time/space trade-offs for function composition. We prove asymptotically optimal lower bounds for function composition in the setting of nondeterministic read once branching programs, for the syntactic model as well as the stronger semantic model of read-once nondeterministic computation. We prove that such branching programs for solving the tree evaluation problem over an alphabet of size k require size roughly k^{Omega(h)}, i.e. space Omega(h log k). Our lower bound nearly matches the natural upper bound, which follows from the best strategy for black-white pebbling the underlying tree. While previous super-polynomial lower bounds have been proven for read-once nondeterministic branching programs (for both the syntactic as well as the semantic models), we give the first lower bounds for iterated function composition, and in these models our lower bounds are near optimal.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8874On The Hardness of Approximate and Exact (Bichromatic) Maximum Inner Product
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8875
In this paper we study the (Bichromatic) Maximum Inner Product Problem (Max-IP), in which we are given sets A and B of vectors, and the goal is to find a in A and b in B maximizing the inner product a * b. Max-IP is very basic and serves as the base problem in the recent breakthrough of [Abboud et al., FOCS 2017] on hardness of approximation for polynomial-time problems. It is also used (implicitly) in the argument for hardness of exact l_2-Furthest Pair (and other important problems in computational geometry) in poly-log-log dimensions in [Williams, SODA 2018]. We have three main results regarding this problem. - Characterization of Multiplicative Approximation. First, we study the best multiplicative approximation ratio for Boolean Max-IP in sub-quadratic time. We show that, for Max-IP with two sets of n vectors from {0,1}^{d}, there is an n^{2 - Omega(1)} time (d/log n)^{Omega(1)}-multiplicative-approximating algorithm, and we show this is conditionally optimal, as such a (d/log n)^{o(1)}-approximating algorithm would refute SETH. A similar characterization is also achieved for additive approximation for Max-IP. - 2^{O(log^* n)}-dimensional Hardness for Exact Max-IP Over The Integers. Second, we revisit the hardness of solving Max-IP exactly for vectors with integer entries. We show that, under SETH, for Max-IP with sets of n vectors from Z^{d} for some d = 2^{O(log^* n)}, every exact algorithm requires n^{2 - o(1)} time. With the reduction from [Williams, SODA 2018], it follows that l_2-Furthest Pair and Bichromatic l_2-Closest Pair in 2^{O(log^* n)} dimensions require n^{2 - o(1)} time. - Connection with NP * UPP Communication Protocols. 
Last, we establish a connection between conditional lower bounds for exact Max-IP with integer entries and NP * UPP communication protocols for Set-Disjointness, parallel to the connection between conditional lower bounds for approximating Max-IP and MA communication protocols for Set-Disjointness. The lower bound in our first result is a direct corollary of the new MA protocol for Set-Disjointness introduced in [Rubinstein, STOC 2018], and our algorithms utilize the polynomial method and simple random sampling. Our second result follows from a new dimensionality self-reduction from the Orthogonal Vectors problem for n vectors from {0,1}^{d} to n vectors from Z^{l} where l = 2^{O(log^* d)}, dramatically improving the previous reduction in [Williams, SODA 2018]. The key technical ingredient is a recursive application of the Chinese Remainder Theorem. As a byproduct, we obtain an MA communication protocol for Set-Disjointness with complexity O(sqrt{n log n log log n}), slightly improving the O(sqrt{n} log n) bound of [Aaronson and Wigderson, TOCT 2009], and approaching the Omega(sqrt{n}) lower bound of [Klauck, CCC 2003]. Moreover, we show that (under SETH) one can apply the O(sqrt{n}) BQP communication protocol for Set-Disjointness to prove near-optimal hardness for approximation to Max-IP with vectors in {-1,1}^d. This answers a question from [Abboud et al., FOCS 2017] in the affirmative.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8875Hardness vs Randomness for Bounded Depth Arithmetic Circuits
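The Max-IP abstract above names a recursive application of the Chinese Remainder Theorem as its key technical ingredient. As background only, here is the basic (non-recursive) CRT reconstruction step that such arguments build on; `crt` is an illustrative helper name, not from the paper:

```python
def crt(residues, moduli):
    """Given residues r_i and pairwise-coprime moduli m_i, return the
    unique x mod prod(m_i) with x == r_i (mod m_i), built up iteratively:
    at each step, lift the current solution mod m to one mod m * m_i."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        # Find x' = x + m*k with x' == r (mod mi), i.e. k == (r - x)/m (mod mi).
        inv = pow(m, -1, mi)  # modular inverse of m mod mi (Python 3.8+)
        x = x + m * ((r - x) * inv % mi)
        m *= mi
    return x % m
```

For example, crt([2, 3, 2], [3, 5, 7]) returns 23, the classic textbook instance.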
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8876
In this paper, we study the question of hardness-randomness tradeoffs for bounded depth arithmetic circuits. We show that if there is a family of explicit polynomials {f_n}, where f_n is of degree O(log^2 n / log^2 log n) in n variables, such that f_n cannot be computed by depth Delta arithmetic circuits of size poly(n), then there is a deterministic sub-exponential time algorithm for polynomial identity testing of arithmetic circuits of depth Delta-5. This is incomparable to a beautiful result of Dvir et al. [SICOMP, 2009], where they showed that super-polynomial lower bounds for depth Delta circuits for any explicit family of polynomials (of potentially high degree) imply sub-exponential time deterministic PIT for depth Delta-5 circuits of bounded individual degree. Thus, we remove the "bounded individual degree" condition in the work of Dvir et al. at the cost of strengthening the hardness assumption to hold for polynomials of low degree. The key technical ingredient of our proof is the following property of roots of polynomials computable by a bounded depth arithmetic circuit: if f(x_1, x_2, ..., x_n) and P(x_1, x_2, ..., x_n, y) are polynomials of degree d and r respectively, such that P can be computed by a circuit of size s and depth Delta and P(x_1, x_2, ..., x_n, f) equiv 0, then f can be computed by a circuit of size poly(n, s, r, d^{O(sqrt{d})}) and depth Delta + 3. In comparison, Dvir et al. showed that f can be computed by a circuit of depth Delta + 3 and size poly(n, s, r, d^{t}), where t is the degree of P in y. Thus, the size upper bound in the work of Dvir et al. is non-trivial when t is small but d could be large, whereas our size upper bound is non-trivial when d is small, but t could be large.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8876Hardness Amplification for Non-Commutative Arithmetic Circuits
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8877
We show that proving mildly super-linear lower bounds on non-commutative arithmetic circuits implies exponential lower bounds on non-commutative circuits. That is, non-commutative circuit complexity is a threshold phenomenon: an apparently weak lower bound actually suffices to show the strongest lower bounds we could desire. This is part of a recent line of inquiry into why arithmetic circuit complexity, despite being a heavily restricted version of Boolean complexity, still cannot prove super-linear lower bounds on general devices. One can view our work as positive news (it suffices to prove weak lower bounds to get strong ones) or negative news (it is as hard to prove weak lower bounds as it is to prove strong ones). We leave it to the reader to determine their own level of optimism.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8877Algebraic Dependencies and PSPACE Algorithms in Approximative Complexity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8878
Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8878Unbalancing Sets and an Almost Quadratic Lower Bound for Syntactically Multilinear Arithmetic Circuits
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8879
We prove a lower bound of Omega(n^2/log^2 n) on the size of any syntactically multilinear arithmetic circuit computing some explicit multilinear polynomial f(x_1, ..., x_n). Our approach expands and improves upon a result of Raz, Shpilka, and Yehudayoff (2008), who proved a lower bound of Omega(n^{4/3}/log^2 n) for the same polynomial. Our improvement follows from an asymptotically optimal lower bound for a generalized version of Galvin's problem in extremal set theory.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8879Communication Complexity with Small Advantage
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8880
We study problems in randomized communication complexity when the protocol is only required to attain some small advantage over purely random guessing, i.e., it produces the correct output with probability at least epsilon greater than one over the codomain size of the function. Previously, Braverman and Moitra (STOC 2013) showed that the set-intersection function requires Theta(epsilon n) communication to achieve advantage epsilon. Building on this, we prove the same bound for several variants of set-intersection: (1) the classic "tribes" function obtained by composing with And (provided 1/epsilon is at most the width of the And), and (2) the variant where the sets are uniquely intersecting and the goal is to determine partial information about (say, certain bits of the index of) the intersecting coordinate.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8880Linear Sketching over F_2
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8881
We initiate a systematic study of linear sketching over F_2. For a given Boolean function f : F_2^n -> F_2, a randomized F_2-sketch is a distribution M over d x n matrices with entries in F_2 such that Mx suffices for computing f(x) with high probability. Such sketches for d << n can be used to design small-space distributed and streaming algorithms. Motivated by these applications, we study a connection between F_2-sketching and a two-player one-way communication game for the corresponding XOR-function. We conjecture that F_2-sketching is optimal for this communication game. Our results confirm this conjecture for multiple important classes of functions: 1) low-degree F_2-polynomials, 2) functions with sparse Fourier spectrum, 3) most symmetric functions, 4) recursive majority function. These results rely on a new structural theorem that shows that F_2-sketching is optimal (up to constant factors) for uniformly distributed inputs. Furthermore, we show that (non-uniform) streaming algorithms that have to process random updates over F_2 can be constructed as F_2-sketches for the uniform distribution. In contrast with the previous work of Li, Nguyen and Woodruff (STOC'14), who show an analogous result for linear sketches over the integers in the adversarial setting, our result does not require the stream length to be triply exponential in n and holds for streams of length O(n) constructed through uniformly random updates.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8881The Power of Natural Properties as Oracles
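To make the F_2-sketching model above concrete: a sketch compresses x in F_2^n to Mx in F_2^d, and for some functions a tiny sketch suffices. A toy sketch (function and variable names are illustrative, not from the paper):

```python
import random

def random_f2_sketch(d, n, seed=0):
    """A sample from a distribution over d x n matrices with entries in F_2."""
    rng = random.Random(seed)
    return [[rng.randrange(2) for _ in range(n)] for _ in range(d)]

def apply_sketch(M, x):
    """Compute Mx over F_2: each output coordinate is <row, x> mod 2."""
    return [sum(mij & xj for mij, xj in zip(row, x)) % 2 for row in M]

# For f(x) = x_1 xor ... xor x_n (a degree-1 F_2-polynomial, one of the
# classes covered by the paper), the single all-ones row is a deterministic
# 1 x n sketch: the sketch value already equals f(x).
n = 8
M = [[1] * n]
x = [1, 0, 1, 1, 0, 0, 1, 0]
assert apply_sketch(M, x)[0] == sum(x) % 2
```

Here d = 1 << n, illustrating why d << n sketches translate into small-space streaming algorithms: only the d sketch bits must be maintained under updates.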
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8882
We study the power of randomized complexity classes that are given oracle access to a natural property of Razborov and Rudich (JCSS, 1997) or its special case, the Minimum Circuit Size Problem (MCSP). We show that in a number of complexity-theoretic results that use the SAT oracle, one can use the MCSP oracle instead. For example, we show that ZPEXP^{MCSP} !subseteq P/poly, which should be contrasted with the previously known circuit lower bound ZPEXP^{NP} !subseteq P/poly. We also show that, assuming the existence of Indistinguishability Obfuscators (IO), SAT and MCSP are equivalent in the sense that one has a ZPP algorithm if and only if the other one does. We interpret our results as providing some evidence that MCSP may be NP-hard under randomized polynomial-time reductions.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8882NP-hardness of Minimum Circuit Size Problem for OR-AND-MOD Circuits
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8883
The Minimum Circuit Size Problem (MCSP) asks for the size of the smallest Boolean circuit that computes a given truth table. It is a prominent problem in NP that is believed to be hard, but for which no proof of NP-hardness has been found. A significant number of works have demonstrated the central role of this problem and its variations in diverse areas such as cryptography, derandomization, proof complexity, learning theory, and circuit lower bounds. The NP-hardness of computing the minimum number of terms in a DNF formula consistent with a given truth table was proved by W. Masek in 1979. In this work, we make the first progress in showing NP-hardness for more expressive classes of circuits, and establish an analogous result for the MCSP problem for depth-3 circuits of the form OR-AND-MOD_2. Our techniques extend to an NP-hardness result for MOD_m gates at the bottom layer under inputs from (Z / m Z)^n.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8883Limits on Representing Boolean Functions by Linear Combinations of Simple Functions: Thresholds, ReLUs, and Low-Degree Polynomials
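The Masek result above concerns minimizing the number of DNF terms consistent with a truth table. A brute-force reference solver makes the object of that hardness result concrete; its exponential running time is exactly what the NP-hardness predicts cannot be avoided in general. All names here are illustrative:

```python
from itertools import combinations, product

def term_covers(term, point):
    """A term is a tuple over {0, 1, None} (None = don't care);
    it covers a point if every fixed literal matches."""
    return all(t is None or t == p for t, p in zip(term, point))

def min_dnf_terms(f, n):
    """Minimum number of terms in a DNF computing f : {0,1}^n -> {0,1},
    by exhaustive search over term subsets of increasing size."""
    points = list(product((0, 1), repeat=n))
    ones = [p for p in points if f(p)]
    zeros = [p for p in points if not f(p)]
    if not ones:
        return 0  # the empty DNF computes the constant-0 function
    # Candidate terms must be consistent with f: never cover a 0-point.
    terms = [t for t in product((0, 1, None), repeat=n)
             if not any(term_covers(t, z) for z in zeros)]
    for k in range(1, len(ones) + 1):
        for subset in combinations(terms, k):
            if all(any(term_covers(t, p) for t in subset) for p in ones):
                return k
    return len(ones)  # unreachable: one minterm per 1-point always works
```

For example, OR of two variables needs 2 terms (x_1 and x_2), while AND needs only the single term x_1 ^ x_2.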
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8884
Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8884Lossless Dimension Expanders via Linearized Polynomials and Subspace Designs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8885
For a vector space F^n over a field F, an (eta,beta)-dimension expander of degree d is a collection of d linear maps Gamma_j : F^n -> F^n such that for every subspace U of F^n of dimension at most eta n, the image of U under all the maps, sum_{j=1}^d Gamma_j(U), has dimension at least beta dim(U). Over a finite field, a random collection of d = O(1) maps Gamma_j offers excellent "lossless" expansion whp: beta ~~ d for eta >= Omega(1/d). When it comes to a family of explicit constructions (for growing n), however, achieving even modest expansion factor beta = 1+epsilon with constant degree is a non-trivial goal. We present an explicit construction of dimension expanders over finite fields based on linearized polynomials and subspace designs, drawing inspiration from recent progress on list-decoding in the rank-metric. Our approach yields the following: - Lossless expansion over large fields; more precisely beta >= (1-epsilon)d and eta >= (1-epsilon)/d with d = O_epsilon(1), when |F| >= Omega(n). - Optimal up to constant factors expansion over fields of arbitrarily small polynomial size; more precisely beta >= Omega(delta d) and eta >= Omega(1/(delta d)) with d=O_delta(1), when |F| >= n^{delta}. Previously, an approach reducing to monotone expanders (a form of vertex expansion that is highly non-trivial to establish) gave (Omega(1),1+Omega(1))-dimension expanders of constant degree over all fields. An approach based on "rank condensing via subspace designs" led to dimension expanders with beta >~ sqrt{d} over large fields. Ours is the first construction to achieve lossless dimension expansion, or even expansion proportional to the degree.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8885A PRG for Boolean PTF of Degree 2 with Seed Length Subpolynomial in epsilon and Logarithmic in n
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8886
We construct and analyze a pseudorandom generator for degree-2 Boolean polynomial threshold functions. Random constructions achieve the optimal seed length of O(log n + log 1/epsilon); however, the best known explicit construction of [Diakonikolas et al., 2010] uses a seed length of O(log n * epsilon^{-8}). In this work we give an explicit construction that uses a seed length of O(log n + (1/epsilon)^{o(1)}). Note that this improves the seed length substantially and that the dependence on the error epsilon is additive and only grows subpolynomially, as opposed to the previously known multiplicative polynomial dependence. Our generator uses dimensionality reduction on a Nisan-Wigderson based pseudorandom generator given by Kabanets and Lu [2018].Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8886A New Approach for Constructing Low-Error, Two-Source Extractors
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8887
Our main contribution in this paper is a new reduction from explicit two-source extractors for polynomially-small entropy rate and negligible error to explicit t-non-malleable extractors with seed-length that has a good dependence on t. Our reduction is based on the Chattopadhyay and Zuckerman framework (STOC 2016), and surprisingly we dispense with the use of resilient functions, which appeared to be a major ingredient there and in follow-up works. The use of resilient functions posed a fundamental barrier towards achieving negligible error, and our new reduction circumvents this bottleneck. The parameters we require from t-non-malleable extractors for our reduction to work hold in a non-explicit construction, but currently it is not known how to explicitly construct such extractors. As a result we do not give an unconditional construction of an explicit low-error two-source extractor. Nonetheless, we believe our work gives a viable approach for solving the important problem of low-error two-source extractors. Furthermore, our work highlights an existing barrier in constructing low-error two-source extractors, and draws attention to the dependence of the parameter t in the seed-length of the non-malleable extractor. We hope this work will lead to further developments in explicit constructions of both non-malleable and two-source extractors.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8887Pseudorandom Generators from Polarizing Random Walks
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8888
We propose a new framework for constructing pseudorandom generators for n-variate Boolean functions. It is based on two new notions. First, we introduce fractional pseudorandom generators, which are pseudorandom distributions taking values in [-1,1]^n. Next, we use a fractional pseudorandom generator as steps of a random walk in [-1,1]^n that converges to {-1,1}^n. We prove that this random walk converges fast (in time logarithmic in n) due to polarization. As an application, we construct pseudorandom generators for Boolean functions with bounded Fourier tails. We use this to obtain a pseudorandom generator for functions with sensitivity s, whose seed length is polynomial in s. Other examples include functions computed by branching programs of various sorts or by bounded depth circuits.Fri, 01 Jun 2018 17:05:06 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8888Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8826
Front Matter, Table of Contents, Preface, Conference OrganizationWed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8826Sampling-Based Motion Planning: From Intelligent CAD to Crowd Simulation to Protein Folding (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8827
Motion planning has applications in robotics, animation, virtual prototyping and training, and even for seemingly unrelated tasks such as evaluating architectural plans or simulating protein folding. Surprisingly, sampling-based planning methods have proven effective on problems from all these domains. In this talk, we provide an overview of sampling-based planning and describe some variants developed in our group, including strategies suited for manipulation planning and for user interaction. For virtual prototyping, we show that in some cases a hybrid system incorporating both an automatic planner and haptic user input leads to superior results. For crowd simulation, we describe techniques for evacuation planning and for evaluating architectural designs. Finally, we describe our application of sampling-based motion planners to simulate molecular motions, such as protein and RNA folding.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8827Optimizing Society? Ensuring Fairness in Automated Decision-Making (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8828
Algorithms are increasingly used to make high-stakes decisions about people: who goes to jail, what neighborhoods police deploy to, and who should be hired for a job. But if we want these decisions to be fair, we must take societal notions of fairness and express them using the language of math. What is a fair decision, and how can it be guaranteed? In this talk, we'll discuss recent work from the new and growing field of Fairness, Accountability, and Transparency. We will examine technical definitions of fairness and non-discrimination that have been proposed and their societal counterparts. We'll also discuss methods for ensuring that algorithms are making decisions as desired, from methods to audit black-box algorithms to white-box interpretability techniques. This important field necessitates societally informed and mathematically rigorous work; we'll discuss open problems in this light.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8828Robustness Meets Algorithms (Invited Talk)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8829
In every corner of machine learning and statistics, there is a need for estimators that work not just in an idealized model but even when their assumptions are violated. Unfortunately, in high dimensions, being provably robust and efficiently computable are often at odds with each other. In this talk, we give the first efficient algorithm for estimating the parameters of a high-dimensional Gaussian which is able to tolerate a constant fraction of corruptions that is independent of the dimension. Prior to our work, all known estimators either needed time exponential in the dimension to compute, or could tolerate only an inverse polynomial fraction of corruptions. Not only does our algorithm bridge the gap between robustness and algorithms, but it also turns out to be highly practical in a variety of settings.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8829Economical Delone Sets for Approximating Convex Bodies
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8830
Convex bodies are ubiquitous in computational geometry and optimization theory. The high combinatorial complexity of multidimensional convex polytopes has motivated the development of algorithms and data structures for approximate representations. This paper demonstrates an intriguing connection between convex approximation and the classical concept of Delone sets from the theory of metric spaces. It shows that with the help of a classical structure from convexity theory, called a Macbeath region, it is possible to construct an epsilon-approximation of any convex body as the union of O(1/epsilon^{(d-1)/2}) ellipsoids, where the center points of these ellipsoids form a Delone set in the Hilbert metric associated with the convex body. Furthermore, a hierarchy of such approximations yields a data structure that answers epsilon-approximate polytope membership queries in O(log (1/epsilon)) time. This matches the best asymptotic results for this problem, with a data structure that is both simpler and arguably more elegant.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8830Computing Shortest Paths in the Plane with Removable Obstacles
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8831
We consider the problem of computing a Euclidean shortest path in the presence of removable obstacles in the plane. In particular, we have a collection of pairwise-disjoint polygonal obstacles, each of which may be removed at some cost c_i > 0. Given a cost budget C > 0 and a pair of points s, t, which obstacles should be removed to minimize the path length from s to t in the remaining workspace? We show that this problem is NP-hard even if the obstacles are vertical line segments. Our main result is a fully polynomial-time approximation scheme (FPTAS) for the case of convex polygons. Specifically, we compute a (1 + epsilon)-approximate shortest path in time O((nh/epsilon^2) log n log (n/epsilon)) with removal cost at most (1+epsilon)C, where h is the number of obstacles, n is the total number of obstacle vertices, and epsilon in (0, 1) is a user-specified parameter. Our approximation scheme also solves a shortest path problem for a stochastic model of obstacles, where each obstacle's presence is an independent event with a known probability. Finally, we also present a data structure that can answer s-t path queries in polylogarithmic time, for any pair of points s, t in the plane.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8831On Romeo and Juliet Problems: Minimizing Distance-to-Sight
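The removable-obstacles abstract above is geometric; as a loose illustration only of the underlying budgeted trade-off (shorter paths versus removal budget), here is a toy Dijkstra over (vertex, budget-spent) states on a graph abstraction with integer removal costs. This is not the paper's FPTAS, and all names are invented for the sketch:

```python
import heapq

def shortest_path_with_removals(n, edges, s, t, budget):
    """Dijkstra over states (vertex, budget spent). edges[u] is a list of
    (v, length, removal_cost) triples; removal_cost 0 means the edge crosses
    no obstacle. Assumes small integer removal costs (an FPTAS would
    discretize real costs to reach a similar state space)."""
    INF = float("inf")
    dist = [[INF] * (budget + 1) for _ in range(n)]
    dist[s][0] = 0.0
    pq = [(0.0, s, 0)]
    while pq:
        d, u, b = heapq.heappop(pq)
        if d > dist[u][b]:
            continue  # stale queue entry
        for v, length, cost in edges.get(u, []):
            nb = b + cost
            if nb <= budget and d + length < dist[v][nb]:
                dist[v][nb] = d + length
                heapq.heappush(pq, (d + length, v, nb))
    return min(dist[t])  # best length over all feasible budget splits
```

With a direct edge of length 10 and an obstacle-crossing shortcut of length 1 and removal cost 2, a budget of 0 forces the long route while a budget of 2 buys the shortcut.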
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8832
We introduce a variant of the watchman route problem, which we call the quickest pair-visibility problem. Given two persons standing at points s and t in a simple polygon P with no holes, we want to minimize the distance these persons travel in order to see each other in P. We solve two variants of this problem, one minimizing the longer distance the two persons travel (min-max) and one minimizing the total travel distance (min-sum), optimally in linear time. We also consider a query version of this problem for the min-max variant. We can preprocess a simple n-gon in linear time so that the minimum of the longer distance the two persons travel can be computed in O(log^2 n) time for any two query positions where the two persons lie.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8832Multistage Matchings
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8833
We consider a multistage version of the Perfect Matching problem, which models the scenario where the costs of edges change over time and we seek to obtain a solution that achieves low total cost while minimizing the number of changes from one instance to the next. Formally, we are given a sequence of edge-weighted graphs on the same set of vertices V, and are asked to produce a perfect matching in each instance so that the total edge cost plus the transition cost (the cost of exchanging edges) is minimized. This model was introduced by Gupta et al. (ICALP 2014), who posed its approximability on bipartite instances as an open problem. We completely resolve this question by showing that Minimum Multistage Perfect Matching (Min-MPM) does not admit an n^{1-epsilon}-approximation, even on bipartite instances with only two time steps. Motivated by this negative result, we go on to consider two variations of the problem. In the Metric Minimum Multistage Perfect Matching problem (Metric-Min-MPM) we are promised that edge weights in each time step satisfy the triangle inequality. We show that this problem admits a 3-approximation when the number of time steps is 2 or 3. On the other hand, we show that even the metric case is already APX-hard for 2 time steps. We then consider the complementary maximization version of the problem, the Maximum Multistage Perfect Matching problem (Max-MPM), where we seek to maximize the total profit of all selected edges plus the total number of non-exchanged edges. We show that Max-MPM is also APX-hard, but admits a constant factor approximation algorithm for any number of time steps.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8833Convex Hulls in Polygonal Domains
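The Min-MPM objective described in the abstract above is easy to state exactly on toy instances. The sketch below brute-forces it on a tiny complete graph; the per-edge transition cost model and the instance are illustrative assumptions, not the paper's construction.

```python
from itertools import product

def perfect_matchings(verts):
    # Enumerate all perfect matchings of an even-sized vertex list,
    # as frozensets of sorted vertex pairs.
    if not verts:
        yield frozenset()
        return
    a, rest = verts[0], verts[1:]
    for i, b in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for m in perfect_matchings(remaining):
            yield m | {(a, b)}

def min_multistage_matching(n, stage_weights, transition_cost):
    # Brute-force Min-MPM: choose a perfect matching per stage, paying that
    # stage's edge weights plus transition_cost per exchanged edge between
    # consecutive stages. Exponential time; for intuition only.
    matchings = list(perfect_matchings(list(range(n))))
    best = float("inf")
    for choice in product(matchings, repeat=len(stage_weights)):
        cost = sum(w[e] for m, w in zip(choice, stage_weights) for e in m)
        for m1, m2 in zip(choice, choice[1:]):
            cost += transition_cost * len(m1 ^ m2)  # edges exchanged
        best = min(best, cost)
    return best
```

With cheap transitions the optimum follows each stage's best matching; with expensive transitions it pays extra edge cost to keep one matching fixed.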
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8834
We study generalizations of convex hulls to polygonal domains with holes. Convexity in Euclidean space is based on the notion of shortest paths, which are straight-line segments. In a polygonal domain, shortest paths are polygonal paths called geodesics. One possible generalization of convex hulls is based on the "rubber band" conception of the convex hull boundary as a shortest curve that encloses a given set of sites. However, it is NP-hard to compute such a curve in a general polygonal domain. Hence, we focus on a different, more direct generalization of convexity, where a set X is geodesically convex if it contains all geodesics between every pair of points x,y in X. The corresponding geodesic convex hull presents a few surprises, and turns out to behave quite differently compared to the classic Euclidean setting or to the geodesic hull inside a simple polygon. We describe a class of geometric objects that suffice to represent geodesic convex hulls of sets of sites, and characterize which such domains are geodesically convex. Using such a representation we present an algorithm to construct the geodesic convex hull of a set of O(n) sites in a polygonal domain with a total of n vertices and h holes in O(n^3h^{3+epsilon}) time, for any constant epsilon > 0.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8834Tree Containment With Soft Polytomies
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8835
The Tree Containment problem has many important applications in the study of evolutionary history. Given a phylogenetic network N and a phylogenetic tree T whose leaves are labeled by a set of taxa, it asks if N and T are consistent. While the case of binary N and T has received considerable attention, the more practically relevant variant dealing with biological uncertainty has not. Such uncertainty manifests itself as high-degree vertices ("polytomies") that are "jokers" in the sense that they are compatible with any binary resolution of their children. Contrasting the binary case, we show that this problem, called Soft Tree Containment, is NP-hard, even if N is a binary, multi-labeled tree in which each taxon occurs at most thrice. On the other hand, we reduce the case that each label occurs at most twice to solving a 2-SAT instance of size O(|T|^3). This implies NP-hardness and polynomial-time solvability on reticulation-visible networks in which the maximum in-degree is bounded by three and two, respectively.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8835On the Size of Outer-String Representations
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8836
Outer-string graphs, i.e., graphs that can be represented as intersection graphs of curves in the plane, each of which has an endpoint in the outer face, have recently received much interest, especially since it was shown that the independent set problem can be solved efficiently in such graphs. However, the run-time for the independent set problem depends on N, the number of segments in an outer-string representation, rather than on the number n of vertices of the graph. In this paper, we argue that for some outer-string graphs, N must be exponential in n. We also study a special class of string graphs, viz. monotone string graphs, and argue that for them N can be assumed to be polynomial in n. Finally, we give an algorithm for independent set in so-called strip-grounded monotone outer-string graphs that is polynomial in n.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8836Flip Distance to some Plane Configurations
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8837
We study an old geometric optimization problem in the plane. Given a perfect matching M on a set of n points in the plane, we can transform it to a non-crossing perfect matching by a finite sequence of flip operations. The flip operation removes two crossing edges from M and adds two non-crossing edges. Let f(M) and F(M) denote the minimum and maximum lengths of a flip sequence on M, respectively. It has been proved by Bonnet and Miltzow (2016) that f(M)=O(n^2) and by van Leeuwen and Schoone (1980) that F(M)=O(n^3). We prove that f(M)=O(n Delta), where Delta is the spread of the point set, defined as the ratio between the longest and the shortest pairwise distances. This improves the previous bound for point sets with sublinear spread. For a matching M on n points in convex position we prove that f(M) = n/2 - 1 and F(M) = (n/2 choose 2); these bounds are tight. Any bound on F(*) carries over to the bichromatic setting, while this is not necessarily true for f(*). Let M' be a bichromatic matching. The best known upper bound for f(M') is the same as for F(M'), which is essentially O(n^3). We prove that f(M') <= n-2 for points in convex position, and f(M') = O(n^2) for semi-collinear points. The flip operation can also be defined on spanning trees. For a spanning tree T on a convex point set we show that f(T)=O(n log n).Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8837Boundary Labeling for Rectangular Diagrams
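The flip operation from the abstract above can be implemented directly. This sketch repeatedly flips a crossing pair, always choosing the reconnection of smaller total length: both reconnections strictly shorten the matching (triangle inequality through the crossing point), so the loop terminates with a non-crossing perfect matching. General position (no three collinear points) is assumed.

```python
import math

def cross(o, a, b):
    # Twice the signed area of triangle (o, a, b).
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def segments_cross(p1, p2, p3, p4):
    # Proper crossing test for segments p1p2 and p3p4 (general position).
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def uncross(matching, pts):
    # Repeatedly flip a crossing pair (a,b),(c,d) into the cheaper of the
    # two reconnections. Since total length strictly decreases with every
    # flip and there are finitely many matchings, this terminates.
    dist = lambda i, j: math.dist(pts[i], pts[j])
    m = list(matching)
    changed = True
    while changed:
        changed = False
        for i in range(len(m)):
            for j in range(i + 1, len(m)):
                (a, b), (c, d) = m[i], m[j]
                if segments_cross(pts[a], pts[b], pts[c], pts[d]):
                    if dist(a, c) + dist(b, d) < dist(a, d) + dist(b, c):
                        m[i], m[j] = (a, c), (b, d)
                    else:
                        m[i], m[j] = (a, d), (b, c)
                    changed = True
    return m
```

This shows existence of a finite flip sequence, not the f(M)/F(M) bounds themselves.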
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8838
Given a set of n points (sites) inside a rectangle R and n points (label locations or ports) on its boundary, a boundary labeling problem seeks ways of connecting every site to a distinct port while achieving different labeling aesthetics. We examine the scenario when the connecting lines (leaders) are drawn as axis-aligned polylines with few bends, every leader lies strictly inside R, no two leaders cross, and the sum of the lengths of all the leaders is minimized. In a k-sided boundary labeling problem, where 1 <= k <= 4, the ports lie on k consecutive sides of R. In this paper we develop an O(n^3 log n)-time algorithm for 2-sided boundary labeling, where the leaders are restricted to have one bend. This improves the previously best known O(n^8 log n)-time algorithm of Kindermann et al. (Algorithmica, 76(1):225-258, 2016). We show the problem is polynomial-time solvable in more general settings, such as when the ports are located on more than two sides of R, in the presence of obstacles, and even when the objective is to minimize the total number of bends. Our results improve the previous algorithms on boundary labeling with obstacles, as well as provide the first polynomial-time algorithms for minimizing the total leader length and number of bends for 3- and 4-sided boundary labeling. These results settle a number of open questions on the boundary labeling problems (Wolff, Handbook of Graph Drawing, Chapter 23, Table 23.1, 2014).Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8838Gathering by Repulsion
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8839
We consider a repulsion actuator located in an n-sided convex environment full of point particles. When the actuator is activated, all the particles move away from the actuator. We study the problem of gathering all the particles to a point. We give an O(n^2) time algorithm to compute all the actuator locations that gather the particles to one point with one activation, and an O(n) time algorithm to find a single such actuator location if one exists. We then provide an O(n) time algorithm to place the optimal number of actuators whose sequential activation results in the gathering of the particles when such a placement exists.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8839Improved Bounds for Guarding Plane Graphs with Edges
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8840
An edge guard set of a plane graph G is a subset Gamma of edges of G such that each face of G is incident to an endpoint of an edge in Gamma. Such a set is said to guard G. We improve the known upper bounds on the number of edges required to guard any n-vertex embedded planar graph G: (1) We present a simple inductive proof for a theorem of Everett and Rivera-Campo (1997) that G can be guarded with at most 2n/5 edges, then extend this approach with a deeper analysis to yield an improved bound of 3n/8 edges for any plane graph. (2) We prove that there exists an edge guard set of G with at most n/3 + alpha/9 edges, where alpha is the number of quadrilateral faces in G. This improves the previous bound of n/3 + alpha by Bose, Kirkpatrick, and Li (2003). Moreover, if there is no short path between any two quadrilateral faces in G, we show that n/3 edges suffice, removing the dependence on alpha.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8840Sparse Weight Tolerant Subgraph for Single Source Shortest Path
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8841
In this paper we address the problem of computing a sparse subgraph of any weighted directed graph such that the exact distances from a designated source vertex to all other vertices are preserved under bounded weight increment. Finding a small subgraph that preserves distances between any pair of vertices is a well-studied problem. Since in the real world any network is prone to failures, it is natural to study the fault-tolerant version of the above problem. Unfortunately, it turns out that such a sparse subgraph may not always exist even under single edge failure [Demetrescu et al. '08]. However, in real applications it is not always the case that a link (edge) in a network becomes completely faulty. Instead, some links may become more congested, which can be captured by increasing the weight of the corresponding edges. Thus it makes sense to construct a sparse distance-preserving subgraph under this weight increment model, where the total increase in weight over the whole network (graph) is bounded by some parameter k. To the best of our knowledge this problem has not been studied so far. In this paper we show that given any weighted directed graph with n vertices and a source vertex, one can construct a subgraph of size at most e * (k-1)! * 2^k * n that preserves distances between the source and all other vertices as long as the total weight increment is bounded by k, where edge weights are integers (possibly negative) and the weight of an edge can only be increased by a positive integer. Next we show a lower bound of c * 2^k * n, for some constant c >= 5/4, on the size of such a subgraph. We further argue that the restrictions of integral weights and integral weight increments are essential, by showing that if either is removed we may need to store Omega(n^2) edges to preserve the distances.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8841An Improved Algorithm for Incremental DFS Tree in Undirected Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8842
Depth first search (DFS) tree is one of the most well-known data structures for designing efficient graph algorithms. Given an undirected graph G=(V,E) with n vertices and m edges, the textbook algorithm takes O(n+m) time to construct a DFS tree. In this paper, we study the problem of maintaining a DFS tree when the graph is undergoing incremental updates. Formally, we show: given an arbitrary online sequence of edge or vertex insertions, there is an algorithm that reports a DFS tree in O(n) worst-case time per operation and requires O(min{m log n, n^2}) preprocessing time. Our result improves the previous O(n log^3 n) worst-case update time algorithm by Baswana et al. [Baswana et al., 2016] and the O(n log n) time algorithm by Nakamura and Sadakane [Nakamura and Sadakane, 2017], and matches the trivial Omega(n) lower bound when it is required to explicitly output a DFS tree. Our result builds on the framework introduced in the breakthrough work by Baswana et al. [Baswana et al., 2016], together with a novel use of a tree-partition lemma by Duan and Zhang [Duan and Zhang, 2016], and the celebrated fractional cascading technique by Chazelle and Guibas [Chazelle and Guibas, 1986a; Chazelle and Guibas, 1986b].Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8842Succinct Dynamic One-Dimensional Point Reporting
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8843
In this paper we present a succinct data structure for the dynamic one-dimensional range reporting problem. Given an interval [a,b] for some a,b in [m], the range reporting query on an integer set S subseteq [m] asks for all points in S cap [a,b]. We describe a data structure that answers reporting queries in optimal O(k+1) time, where k is the number of points in the answer, and supports updates in O(lg^epsilon m) expected time. Our data structure uses B(n,m) + o(B(n,m)) bits where B(n,m) is the minimum number of bits required to represent a set of size n from a universe of m elements. This is the first dynamic data structure for this problem that uses succinct space and achieves optimal query time.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8843Enumerating Vertices of 0/1-Polyhedra associated with 0/1-Totally Unimodular Matrices
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8844
We give an incremental polynomial time algorithm for enumerating the vertices of any polyhedron P = P(A,1) = {x in R^n | Ax >= 1, x >= 0}, where 1 and 0 denote the all-ones and all-zeros vectors, when A is a totally unimodular matrix. Our algorithm is based on decomposing the hypergraph transversal problem for unimodular hypergraphs using Seymour's decomposition of totally unimodular matrices, and may be of independent interest.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8844The Parameterized Hardness of the k-Center Problem in Transportation Networks
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8845
In this paper we study the hardness of the k-Center problem on inputs that model transportation networks. For the problem, an edge-weighted graph G=(V,E) and an integer k are given and a center set C subseteq V needs to be chosen such that |C| <= k. The aim is to minimize the maximum distance of any vertex in the graph to the closest center. This problem arises in many applications of logistics, and thus it is natural to consider inputs that model transportation networks. Such inputs are often assumed to be planar graphs, low doubling metrics, or bounded highway dimension graphs. For each of these models, parameterized approximation algorithms have been shown to exist. We complement these results by proving that the k-Center problem is W[1]-hard on planar graphs of constant doubling dimension, where the parameter is the combination of the number of centers k, the highway dimension h, and even the treewidth t. Moreover, under the Exponential Time Hypothesis there is no f(k,t,h) * n^{o(t+sqrt{k+h})} time algorithm for any computable function f. Thus it is unlikely that the optimum solution to k-Center can be found efficiently, even when assuming that the input graph abides by all of the above models for transportation networks at once! Additionally, we give a simple parameterized (1+epsilon)-approximation algorithm for inputs of doubling dimension d with runtime (k^k/epsilon^{O(kd)}) * n^{O(1)}. This generalizes a previous result, which considered inputs in D-dimensional L_q metrics.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8845
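As context for the approximation landscape the abstract discusses, here is a minimal sketch of the classic farthest-first traversal, Gonzalez's 2-approximation for metric k-Center. This is the standard baseline heuristic, not the paper's parameterized algorithm; Euclidean points stand in for a general metric.

```python
import math

def gonzalez_k_center(points, k):
    # Farthest-first traversal (Gonzalez, 1985): start anywhere, then
    # repeatedly add the point farthest from its nearest chosen center.
    # Yields a 2-approximation for metric k-Center.
    centers = [points[0]]
    while len(centers) < k:
        farthest = max(points,
                       key=lambda p: min(math.dist(p, c) for c in centers))
        centers.append(farthest)
    return centers

def covering_radius(points, centers):
    # Maximum distance from any point to its nearest chosen center.
    return max(min(math.dist(p, c) for c in centers) for p in points)
```

On two well-separated clusters the traversal picks one center per cluster, so the covering radius equals the within-cluster radius.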
Partial Complementation of Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8847
A partial complement of the graph G is a graph obtained from G by complementing all the edges in one of its induced subgraphs. We study the following algorithmic question: for a given graph G and a graph class 𝒢, is there a partial complement of G which is in 𝒢? We show that this problem can be solved in polynomial time for various choices of the class 𝒢, such as bipartite graphs, degenerate graphs, or cographs. We complement these results by proving that the problem is NP-complete when 𝒢 is the class of r-regular graphs.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8847New Algorithms for Distributed Sliding Windows
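The partial complement operation, and the question "does some partial complement of G land in the target class?", can be sketched directly. The brute-force search below is exponential and for illustration only (the abstract's point is that several class choices admit polynomial algorithms); bipartiteness serves as the example class.

```python
from itertools import combinations

def partial_complement(edges, n, s):
    # Complement all edges inside the induced subgraph on vertex set s.
    # Edges are stored as sorted tuples (u, v) with u < v.
    e = set(edges)
    for pair in combinations(sorted(set(s)), 2):
        if pair in e:
            e.remove(pair)
        else:
            e.add(pair)
    return e

def is_bipartite(edges, n):
    # 2-coloring by graph search.
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for start in range(n):
        if start in color:
            continue
        color[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    stack.append(w)
                elif color[w] == color[u]:
                    return False
    return True

def has_bipartite_partial_complement(edges, n):
    # Brute force over all induced subgraphs (2^n candidates).
    for r in range(n + 1):
        for s in combinations(range(n), r):
            if is_bipartite(partial_complement(edges, n, s), n):
                return True
    return False
```

For example, C5 is not bipartite, but complementing the single edge inside a 2-vertex induced subgraph leaves a path, which is.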
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8848
Computing functions over a distributed stream of data is a significant problem with practical applications. The distributed streaming model is a natural computational model to deal with such scenarios. The goal in this model is to maintain an approximate value of a function of interest over a data stream distributed across several computational nodes. These computational nodes have a two-way communication channel with a coordinator node that maintains an approximation of the function over the entire data stream seen so far. The resources of interest, which need to be minimized, are communication (primary), space, and update time. A practical variant of this model is that of distributed sliding window (dsw), where the computation is limited to the last W items, where W is the window size. Important problems such as sampling and counting have been investigated in this model. However, certain problems including computing frequency moments and metric clustering, that are well studied in other streaming models, have not been considered in the distributed sliding window model. We give the first algorithms for computing the frequency moments and metric clustering problems in the distributed sliding window model. Our algorithms for these problems are a result of a general transfer theorem we establish that transforms any algorithm in the distributed infinite window model into an algorithm in the distributed sliding window model, for a large class of functions. In particular, we show an efficient adaptation of the smooth histogram technique of Braverman and Ostrovsky to the distributed streaming model. Our construction allows trade-offs between communication and space. If we optimize for communication, we get algorithms that are as communication efficient as their infinite-window counterparts (up to polylogarithmic factors).Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8848Parameterized Aspects of Strong Subgraph Closure
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8849
Motivated by the role of triadic closures in social networks, and the importance of finding a maximum subgraph avoiding a fixed pattern, we introduce and initiate the parameterized study of the Strong F-closure problem, where F is a fixed graph. This is a generalization of Strong Triadic Closure and a relaxation of F-free Edge Deletion. In Strong F-closure, we want to select a maximum number of edges of the input graph G and mark them as strong edges, in the following way: whenever a subset of the strong edges forms a subgraph isomorphic to F, the corresponding induced subgraph of G is not isomorphic to F. Hence the subgraph of G defined by the strong edges is not necessarily F-free, but whenever it contains a copy of F, there are additional edges in G that destroy that strong copy of F in G. We study Strong F-closure from a parameterized perspective with various natural parameterizations. Our main focus is on the number k of strong edges as the parameter. We show that the problem is FPT with this parameterization for every fixed graph F, whereas it does not admit a polynomial kernel even when F = P_3. In fact, this latter case is equivalent to the Strong Triadic Closure problem, which motivates us to study this problem on input graphs belonging to well known graph classes. We show that Strong Triadic Closure does not admit a polynomial kernel even when the input graph is a split graph, whereas it admits a polynomial kernel when the input graph is planar, and even d-degenerate. Furthermore, on graphs of maximum degree at most 4, we show that Strong Triadic Closure is FPT with the above-guarantee parameterization k - mu(G), where mu(G) is the maximum matching size of G. We conclude with some results on the parameterization of Strong F-closure by the number of edges of G that are not selected as strong.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8849Parameterized Orientable Deletion
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8850
A graph is d-orientable if its edges can be oriented so that the maximum in-degree of the resulting digraph is at most d. d-orientability is a well-studied concept with close connections to fundamental graph-theoretic notions and applications as a load balancing problem. In this paper we consider the d-Orientable Deletion problem: given a graph G=(V,E), delete the minimum number of vertices to make G d-orientable. We contribute a number of results that improve the state of the art on this problem. Specifically: (1) We show that the problem is W[2]-hard and log n-inapproximable with respect to k, the number of deleted vertices, closing the gap in the problem's approximability. (2) We completely characterize the parameterized complexity of the problem on chordal graphs: it is FPT parameterized by d+k, but W-hard for each of the parameters d and k separately. (3) We show that, under the SETH, for all d and epsilon, the problem does not admit a (d+2-epsilon)^{tw} algorithm, where tw is the graph's treewidth, resolving as a special case an open problem on the complexity of Pseudoforest Deletion. (4) We show that the problem is W-hard parameterized by the input graph's clique-width. Complementing this, we provide an algorithm running in time d^{O(d * cw)}, showing that the problem is FPT parameterized by d+cw, and improving the previously best known algorithm for this case.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8850SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms
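To make the d-Orientable Deletion objective concrete, the sketch below tests d-orientability and the deletion number by exhaustive search on tiny graphs. This exponential-time illustration is an assumption of this note, not one of the paper's algorithms.

```python
from itertools import combinations, product

def is_d_orientable(edges, n, d):
    # Try all 2^m orientations; feasible only for tiny graphs.
    for dirs in product([0, 1], repeat=len(edges)):
        indeg = [0] * n
        for (u, v), flip in zip(edges, dirs):
            indeg[v if flip == 0 else u] += 1
        if max(indeg) <= d:
            return True
    return False

def orientable_deletion(edges, n, d):
    # Minimum number of vertices whose deletion makes the graph d-orientable,
    # by brute force over deletion sets of increasing size.
    for size in range(n + 1):
        for delset in combinations(range(n), size):
            ds = set(delset)
            rem = [e for e in edges if e[0] not in ds and e[1] not in ds]
            if is_d_orientable(rem, n, d):
                return size
    return n
```

For instance, K4 is 2-orientable (orient it as a tournament with in-degrees 1,1,2,2) but not 1-orientable, since 6 edges cannot all land on 4 vertices with in-degree at most 1; deleting one vertex leaves a triangle, which a directed cycle 1-orients.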
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8851
Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8851Lower Bounds on Sparse Spanners, Emulators, and Diameter-reducing shortcuts
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8852
We prove better lower bounds on additive spanners and emulators, which are lossy compression schemes for undirected graphs, as well as lower bounds on shortcut sets, which reduce the diameter of directed graphs. We show that any O(n)-size shortcut set cannot bring the diameter below Omega(n^{1/6}), and that any O(m)-size shortcut set cannot bring it below Omega(n^{1/11}). These improve Hesse's [Hesse, 2003] lower bound of Omega(n^{1/17}). By combining these constructions with Abboud and Bodwin's [Abboud and Bodwin, 2017] edge-splitting technique, we get additive stretch lower bounds of +Omega(n^{1/13}) for O(n)-size spanners and +Omega(n^{1/18}) for O(n)-size emulators. These improve Abboud and Bodwin's +Omega(n^{1/22}) lower bounds.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8852Reconfiguration of Colorable Sets in Classes of Perfect Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8853
A set of vertices in a graph is c-colorable if the subgraph induced by the set has a proper c-coloring. In this paper, we study the problem of finding a step-by-step transformation (reconfiguration) between two c-colorable sets in the same graph. This problem generalizes the well-studied Independent Set Reconfiguration problem. As the first step toward a systematic understanding of the complexity of this general problem, we study the problem on classes of perfect graphs. We first focus on interval graphs and give a combinatorial characterization of the distance between two c-colorable sets. This gives a linear-time algorithm for finding an actual shortest reconfiguration sequence for interval graphs. Since interval graphs are exactly the graphs that are simultaneously chordal and co-comparability, we then complement the positive result by showing that even deciding reachability is PSPACE-complete for chordal graphs and for co-comparability graphs. The hardness for chordal graphs holds even for split graphs. We also consider the case where c is a fixed constant and show that in such a case the reachability problem is polynomial-time solvable for split graphs but still PSPACE-complete for co-comparability graphs. The complexity of this case for chordal graphs remains unsettled. As by-products, our positive results give the first polynomial-time solvable cases (split graphs and interval graphs) for Feedback Vertex Set Reconfiguration.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8853Tight Lower Bounds for List Edge Coloring
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8854
The fastest algorithms for edge coloring run in time 2^m n^{O(1)}, where m and n are the number of edges and vertices of the input graph, respectively. For dense graphs, this bound becomes 2^{Theta(n^2)}. This is a somewhat unique situation, since most of the studied graph problems admit algorithms running in time 2^{O(n log n)}. It is a notorious open problem to either show an algorithm for edge coloring running in time 2^{o(n^2)} or to refute the existence of such an algorithm under the Exponential Time Hypothesis (ETH) or other well-established assumptions. We notice that the same question can be asked for list edge coloring, a well-studied generalization of edge coloring where every edge comes with a set (often called a list) of allowed colors. Our main result states that list edge coloring for simple graphs does not admit an algorithm running in time 2^{o(n^2)}, unless ETH fails. Interestingly, the algorithm for edge coloring running in time 2^m n^{O(1)} generalizes to the list version without any asymptotic slow-down. Thus, our lower bound is essentially tight. This also means that in order to design an algorithm running in time 2^{o(n^2)} for edge coloring, one has to exploit its special features compared to the list version.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8854Load Thresholds for Cuckoo Hashing with Double Hashing
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8855
In k-ary cuckoo hashing, each of cn objects is associated with k random buckets in a hash table of size n. An l-orientation is an assignment of objects to associated buckets such that each bucket receives at most l objects. Several works have determined load thresholds c^* = c^*(k,l) for k-ary cuckoo hashing; that is, for c < c^* an l-orientation exists with high probability, and for c > c^* no l-orientation exists with high probability. A natural variant of k-ary cuckoo hashing utilizes double hashing, where, when the buckets are numbered 0,1,...,n-1, the k choices of random buckets form an arithmetic progression modulo n. Double hashing simplifies implementation and requires less randomness, and it has been shown that double hashing has the same behavior as fully random hashing in several other data structures that similarly use multiple hashes for each object. Interestingly, previous work has come close to showing, but has not fully shown, that the load threshold for k-ary cuckoo hashing is the same when using double hashing as when using fully random hashing. Specifically, previous work has shown that the thresholds for both settings coincide, except that for double hashing it was possible that o(n) objects would be left unplaced. Here we close this open question, providing a combinatorial argument that reconciles this stubborn difference and shows that the thresholds are indeed the same.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8855A Greedy Algorithm for Subspace Approximation Problem
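The double-hashing choice rule described above is one line of code: the k buckets form an arithmetic progression modulo n. The sketch below pairs it with an augmenting-path check for whether every object can be placed (a 1-orientation, i.e. l = 1); the hash values are passed in directly as hypothetical inputs, and this is an illustration of the model rather than the paper's threshold analysis.

```python
def double_hash_choices(h1, h2, k, n):
    # The k bucket choices form an arithmetic progression modulo n.
    # If n is prime and h2 != 0 (mod n), choices are distinct for k <= n.
    return [(h1 + i * h2) % n for i in range(k)]

def cuckoo_place(choice_lists, n):
    # Decide whether every object fits in one of its buckets with at most
    # one object per bucket, via augmenting paths (Kuhn's bipartite
    # matching algorithm, which mirrors cuckoo-style evictions).
    slot = [None] * n  # bucket -> object currently stored there

    def try_place(obj, visited):
        for b in choice_lists[obj]:
            if b in visited:
                continue
            visited.add(b)
            if slot[b] is None or try_place(slot[b], visited):
                slot[b] = obj
                return True
        return False

    return all(try_place(obj, set()) for obj in range(len(choice_lists)))
```

Using augmenting paths instead of random-walk insertion keeps the check deterministic: it succeeds exactly when a 1-orientation exists.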
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8856
In the subspace approximation problem, given m points in R^{n} and an integer k <= n, the goal is to find a k-dimensional subspace of R^{n} that minimizes the l_{p}-norm of the Euclidean distances to the given points. This problem generalizes several subspace approximation problems and has applications in statistics, machine learning, signal processing, and biology. Deshpande et al. [Deshpande et al., 2011] gave a randomized O(sqrt{p})-approximation, and this bound is proved to be tight assuming NP != P by Guruswami et al. [Guruswami et al., 2016]. It is an intriguing question to determine the performance guarantee of deterministic algorithms for the problem. In this paper, we present a simple deterministic O(sqrt{p})-approximation algorithm with an equally simple analysis, which settles the approximability status of the problem up to a constant factor. Besides, the simplicity of the algorithm makes it practically appealing.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8856Planar 3-SAT with a Clause/Variable Cycle
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8857
Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8857Tree-Residue Vertex-Breaking: a new tool for proving hardness
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8858
In this paper, we introduce a new problem called Tree-Residue Vertex-Breaking (TRVB): given a multigraph G some of whose vertices are marked "breakable," is it possible to convert G into a tree via a sequence of "vertex-breaking" operations (replacing a degree-k breakable vertex by k degree-1 vertices, disconnecting the k incident edges)? We characterize the computational complexity of TRVB with any combination of the following additional constraints: G must be planar, G must be a simple graph, the degree of every breakable vertex must belong to an allowed list B, and the degree of every unbreakable vertex must belong to an allowed list U. The two results that we expect to be most generally applicable are that (1) TRVB is polynomially solvable when breakable vertices are restricted to have degree at most 3; and (2) for any k >= 4, TRVB is NP-complete when the given multigraph is restricted to be planar and to consist entirely of degree-k breakable vertices. To demonstrate the use of TRVB, we give a simple proof of the known result that Hamiltonicity in max-degree-3 square grid graphs is NP-hard. We also demonstrate a connection between TRVB and the Hypergraph Spanning Tree problem. This connection allows us to show that the Hypergraph Spanning Tree problem in k-uniform 2-regular hypergraphs is NP-complete for any k >= 4, even when the incidence graph of the hypergraph is planar.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8858Nearly Optimal Separation Between Partially and Fully Retroactive Data Structures
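The vertex-breaking operation and the tree condition it targets can be sketched as a small toy (a hypothetical edge-list representation chosen for illustration; not code from the paper):

```python
def break_vertex(edges, v, fresh):
    """Apply one vertex-breaking operation to a multigraph.

    edges: list of (u, w) endpoint pairs; v: the breakable vertex;
    fresh: callable returning a new, unused vertex label per call.
    Each edge incident to v gets its own fresh degree-1 endpoint.
    """
    out = []
    for (u, w) in edges:
        if u == v and w == v:          # self-loop: both endpoints replaced
            out.append((fresh(), fresh()))
        elif u == v:
            out.append((fresh(), w))
        elif w == v:
            out.append((u, fresh()))
        else:
            out.append((u, w))
    return out

def is_tree(edges, vertices):
    # A multigraph is a tree iff it is connected and has exactly |V| - 1 edges.
    if len(edges) != len(vertices) - 1:
        return False
    adj = {x: [] for x in vertices}
    for (u, w) in edges:
        adj[u].append(w)
        adj[w].append(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(adj[x])
    return len(seen) == len(vertices)
```

For example, a triangle is not a tree, but breaking one of its (degree-2) vertices turns it into a 4-vertex path, which is.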
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8859
Since the introduction of retroactive data structures at SODA 2004, a major unsolved problem has been to bound the gap between the best partially retroactive data structure (where changes can be made to the past, but only the present can be queried) and the best fully retroactive data structure (where the past can also be queried) for any problem. It was proved in 2004 that any partially retroactive data structure with operation time T_{op}(n,m) can be transformed into a fully retroactive data structure with operation time O(sqrt{m} * T_{op}(n,m)), where n is the size of the data structure and m is the number of operations in the timeline [Demaine et al., 2004]. But it has been open for 14 years whether such a gap is necessary. In this paper, we prove nearly matching upper and lower bounds on this gap for all n and m. We improve the upper bound for n << sqrt m by showing a new transformation with multiplicative overhead n log m. We then prove a lower bound of min {n log m, sqrt m}^{1-o(1)} assuming any of the following conjectures:
- Conjecture I: Circuit SAT requires 2^{n - o(n)} time on n-input circuits of size 2^{o(n)}. This conjecture is far weaker than the well-believed SETH conjecture from complexity theory, which asserts that CNF SAT with n variables and O(n) clauses already requires 2^{n-o(n)} time.
- Conjecture II: Online (min,+) product between an integer n x n matrix and n vectors requires n^{3 - o(1)} time. This conjecture is weaker than the APSP conjectures widely used in fine-grained complexity.
- Conjecture III (3-SUM Conjecture): Given three sets A, B, C of integers, each of size n, deciding whether there exist a in A, b in B, c in C such that a + b + c = 0 requires n^{2 - o(1)} time. This 1995 conjecture [Gajentaan and Overmars, 1995] was the first conjecture in fine-grained complexity.
Our lower bound construction illustrates an interesting power of fully retroactive queries: they can be used to quickly solve batched pair evaluation. 
We believe this technique can prove useful for other data structure lower bounds, especially dynamic ones.Wed, 30 May 2018 07:53:34 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8859Schloss Dagstuhl - Jahresbericht / Annual Report 2017
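The idea behind transformations like the one above, answering a query about the past by temporarily undoing later operations in a partially retroactive structure, can be illustrated with a toy sketch (a naive rollback without the checkpointing of [Demaine et al., 2004], so each past query costs time proportional to the number of later operations; the running-sum structure and all names here are made up for illustration):

```python
import bisect

class PartiallyRetroactiveSum:
    """Toy partially retroactive structure: a timeline of additions to a
    running sum. Operations may be inserted or deleted anywhere in the
    past, but only the present total may be queried directly."""
    def __init__(self):
        self.ops = []    # timeline: sorted list of (time, value)
        self.total = 0
    def insert(self, t, v):
        bisect.insort(self.ops, (t, v))
        self.total += v
    def delete(self, t, v):
        self.ops.remove((t, v))
        self.total -= v
    def query_present(self):
        return self.total

def query_at(struct, t):
    """Fully retroactive query via rollback: remove every operation with
    timestamp > t, query the present, then reinsert the removed operations,
    leaving the structure unchanged."""
    later = [op for op in struct.ops if op[0] > t]
    for (tt, v) in later:
        struct.delete(tt, v)
    ans = struct.query_present()
    for (tt, v) in later:
        struct.insert(tt, v)
    return ans
```

The sqrt{m} and n log m transformations discussed in the abstract improve on this naive rollback; the sketch only shows why fully retroactive queries reduce to (possibly many) partially retroactive updates.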
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8825
Tue, 29 May 2018 12:40:05 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8825Tensor Computing for Internet of Things (Dagstuhl Perspectives Workshop 16152)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8824
"The fundamental laws necessary for the mathematical treatment of a large part of physics and the whole of chemistry are thus completely known, and the difficulty lies only in the fact that application of these laws leads to equations that are too complex to be solved." - Dirac, 1929. The digital world of the Internet of Things (IoT) will provide a high-resolution depiction of our physical world through measurements and other data - even high-definition "video," if you consider the streaming data frames coming from a myriad of sensors embedded in everything we use. This depiction will capture our interactions with the physical world and the interactions of digitally enhanced machines and devices. Tensors, as generalizations of vectors and matrices, provide a natural and scalable framework for handling data with such inherent structure and complex dependencies. Scalable tensor methods have attracted a considerable amount of attention, with successes in a series of learning tasks such as learning latent variable models, relational learning, and spatio-temporal forecasting, as well as the training [19] and compression [20] of deep neural networks. In a Dagstuhl Perspectives Workshop on Tensor Computing for IoT, we validated the fundamental suitability of tensor methods for handling the massive amounts of data coming from connected cyber-physical systems (CPS). The multidisciplinary discourse among academics, industrial researchers, and practitioners in the IoT/CPS domain and in the field of machine learning and tensor methods exposed open issues that need to be addressed to reap value from the technological opportunity. This Manifesto summarizes the immediate action fields for advancement: IoT Tensor Data Benchmarks, Tensor Tools for IoT, and the evolution of a Knowledge Hub. The activities will also be channeled to create best practices and a common tensor language across the disciplines. 
In a not-so-distant future, basic infrastructures for living will be mainly data-driven, automated by digitally enhanced devices and machines. The tools and frameworks used to engineer such systems will ensure production-ready machine learning code that utilizes tensor-based, and hence more interpretable, models and runs on distributed, decentralized, and embedded computing resources in a robust and reliable way. We conclude the manifesto with a strategy for moving towards this vision, with concrete steps in the identified action fields.Tue, 29 May 2018 12:30:33 +0200http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8824