Dagstuhl Publishing
http://www.dagstuhl.de
RSS feed announcing the latest publications published by Schloss Dagstuhl
http://www.dagstuhl.de/dagpub-rss.php

LIPIcs, Volume 115, IPEC'18, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10232
Wed, 06 Feb 2019 12:26:46 +0100

OASIcs, Volume 67, PLATEAU'18, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10230
Wed, 30 Jan 2019 09:51:55 +0100

OASIcs, Volume 66, ICCSW'18, Complete Volume
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10229
Wed, 30 Jan 2019 08:13:25 +0100

Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10201
Fri, 25 Jan 2019 14:01:32 +0100

Counting Problems in Parameterized Complexity
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10202
This survey is an invitation to parameterized counting problems for readers with a background in parameterized algorithms and complexity. After an introduction to the peculiarities of counting complexity, we survey the parameterized approach to counting problems, with a focus on two topics of recent interest: counting small patterns in large graphs, and counting perfect matchings and Hamiltonian cycles in well-structured graphs. While this survey presupposes familiarity with parameterized algorithms and complexity, we aim at explaining all relevant notions from counting complexity in a self-contained way.
Fri, 25 Jan 2019 14:01:32 +0100

A Complexity Dichotomy for Hitting Small Planar Minors Parameterized by Treewidth
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10203
For a fixed graph H, we are interested in the parameterized complexity of the following problem, called {H}-M-Deletion, parameterized by the treewidth tw of the input graph: given an n-vertex graph G and an integer k, decide whether there exists S subseteq V(G) with |S| <= k such that G setminus S does not contain H as a minor. In previous work [IPEC, 2017] we proved that if H is planar and connected, then the problem cannot be solved in time 2^{o(tw)} * n^{O(1)} under the ETH, and can be solved in time 2^{O(tw * log tw)} * n^{O(1)}. In this article we classify the optimal asymptotic complexity of {H}-M-Deletion when H is a connected planar graph on at most 5 vertices. Out of the 29 possibilities (discarding the trivial case H = K_1), we prove that 9 of them are solvable in time 2^{Theta(tw)} * n^{O(1)}, and that the remaining 20 are solvable in time 2^{Theta(tw * log tw)} * n^{O(1)}. Namely, we prove that K_4 and the diamond are the only graphs on at most 4 vertices for which the problem is solvable in time 2^{Theta(tw * log tw)} * n^{O(1)}, and that the chair and the banner are the only graphs on 5 vertices for which the problem is solvable in time 2^{Theta(tw)} * n^{O(1)}. For the version of the problem where H is forbidden as a topological minor, the case H = K_{1,4} can be solved in time 2^{Theta(tw)} * n^{O(1)}. This exhibits, to the best of our knowledge, the first difference between the computational complexities of the two problems.
Fri, 25 Jan 2019 14:01:32 +0100

Lower Bounds for Dynamic Programming on Planar Graphs of Bounded Cutwidth
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10204
Many combinatorial problems can be solved in time O^*(c^{tw}) on graphs of treewidth tw, for a problem-specific constant c. In several cases, matching upper and lower bounds on c are known based on the Strong Exponential Time Hypothesis (SETH). In this paper we investigate the complexity of solving problems on graphs of bounded cutwidth, a graph parameter that takes larger values than treewidth. We strengthen earlier treewidth-based lower bounds to show that, assuming SETH, Independent Set cannot be solved in O^*((2-epsilon)^{ctw}) time, and Dominating Set cannot be solved in O^*((3-epsilon)^{ctw}) time. By designing a new crossover gadget, we extend these lower bounds even to planar graphs of bounded cutwidth or treewidth. Hence planarity does not help when solving Independent Set or Dominating Set on graphs of bounded width. This sharply contrasts with the fact that in many settings, planarity allows problems to be solved much more efficiently.
Fri, 25 Jan 2019 14:01:32 +0100

Multivariate Analysis of Orthogonal Range Searching and Graph Distances
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10205
We show that the eccentricities, diameter, radius, and Wiener index of an undirected n-vertex graph with nonnegative edge lengths can be computed in time O(n * binom{k+ceil[log n]}{k} * 2^k k^2 log n), where k is the treewidth of the graph. For every epsilon > 0, this bound is n^{1+epsilon} exp O(k), which matches a hardness result of Abboud, Vassilevska Williams, and Wang (SODA 2015) and closes an open problem in the multivariate analysis of polynomial-time computation. To this end, we show that the analysis of an algorithm of Cabello and Knauer (Comp. Geom., 2009) in the regime of non-constant treewidth can be improved by revisiting the analysis of orthogonal range searching, improving bounds of the form log^d n to binom{d+ceil[log n]}{d}, as originally observed by Monier (J. Alg. 1980). We also investigate the parameterization by vertex cover number.
Fri, 25 Jan 2019 14:01:32 +0100
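As a quick numeric illustration of the improvement described above (a sketch, not taken from the paper; the abstract does not fix the base of the logarithm, so base 2 is assumed here), one can compare the two range-searching bounds directly:

```python
from math import comb, ceil, log2

# Compare the classical log^d n orthogonal range searching bound with
# the binomial bound binom(d + ceil(log n), d) mentioned in the
# abstract. The concrete values of n and d are arbitrary.
n, d = 10**6, 10
L = ceil(log2(n))          # ceil(log2(10^6)) = 20
print(comb(d + L, d))      # binom(30, 10) = 30045015
print(L ** d)              # 20^10 = 10240000000000
```

For these parameters the binomial bound is smaller by roughly six orders of magnitude, which is the source of the speed-up in the regime of non-constant treewidth.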
A Faster Tree-Decomposition Based Algorithm for Counting Linear Extensions
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10206
We consider the problem of counting the linear extensions of an n-element poset whose cover graph has treewidth at most t. We show that the problem can be solved in time O~(n^{t+3}), where O~ suppresses logarithmic factors. Our algorithm is based on fast multiplication of multivariate polynomials, and so differs radically from a previous O~(n^{t+4})-time inclusion-exclusion algorithm. We also investigate the algorithm from a practical point of view. We observe that the running time is not well characterized by the parameters n and t alone: fixing these still leaves large variance in running times due to uncontrolled features of the selected optimal-width tree decomposition. For selecting an efficient tree decomposition we adopt the method of empirical hardness models, and show that it typically enables picking a tree decomposition that is significantly more efficient than a random optimal-width tree decomposition.
Fri, 25 Jan 2019 14:01:32 +0100
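For readers new to the problem, the object being counted can be pinned down with a tiny brute-force checker (an illustrative sketch only; the paper's algorithm is based on fast polynomial multiplication over a tree decomposition, not on enumeration):

```python
from itertools import permutations

def count_linear_extensions(n, relations):
    """Count linear orders of {0, ..., n-1} consistent with a poset.

    `relations` is a list of pairs (a, b) meaning a < b in the poset.
    Brute force over all n! permutations, so only viable for tiny n.
    """
    count = 0
    for perm in permutations(range(n)):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count

# A 4-element "diamond" poset: 0 < 1, 0 < 2, 1 < 3, 2 < 3.
# Its only linear extensions are 0,1,2,3 and 0,2,1,3.
print(count_linear_extensions(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # 2
```

The algorithms in the paper achieve O~(n^{t+3}) precisely by avoiding this factorial enumeration.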
Generalized Distance Domination Problems and Their Complexity on Graphs of Bounded mim-width
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10207
We generalize the family of (sigma, rho)-problems and locally checkable vertex partition problems to their distance versions, which naturally capture well-known problems such as distance-r dominating set and distance-r independent set. We show that these distance problems are in XP parameterized by the structural parameter mim-width, and hence polynomial-time solvable on graph classes where mim-width is bounded and quickly computable, such as k-trapezoid graphs, Dilworth k-graphs, (circular) permutation graphs, interval graphs and their complements, convex graphs and their complements, k-polygon graphs, circular arc graphs, complements of d-degenerate graphs, and H-graphs if given an H-representation. To supplement these findings, we show that many classes of (distance) (sigma, rho)-problems are W[1]-hard parameterized by mim-width + solution size.
Fri, 25 Jan 2019 14:01:32 +0100

A Parameterized Complexity View on Collapsing k-Cores
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10208
We study the NP-hard graph problem Collapsed k-Core where, given an undirected graph G and integers b, x, and k, we are asked to remove b vertices such that the k-core of the remaining graph, that is, the (uniquely determined) largest induced subgraph with minimum degree k, has size at most x. Collapsed k-Core was introduced by Zhang et al. [AAAI 2017] and is motivated by the study of engagement behavior of users in a social network and by measuring the resilience of a network against user drop-outs. Collapsed k-Core is a generalization of r-Degenerate Vertex Deletion (which is known to be NP-hard for all r >= 0) where, given an undirected graph G and integers b and r, we are asked to remove b vertices such that the remaining graph is r-degenerate, that is, each of its subgraphs has minimum degree at most r. We investigate the parameterized complexity of Collapsed k-Core with respect to the parameters b, x, and k, and several structural parameters of the input graph. We reveal a dichotomy in the computational complexity of Collapsed k-Core for k <= 2 and k >= 3. For the latter case it is known that for all x >= 0 Collapsed k-Core is W[P]-hard when parameterized by b. We show that Collapsed k-Core is W[1]-hard when parameterized by b and in FPT when parameterized by (b+x) if k <= 2. Furthermore, we show that Collapsed k-Core is in FPT when parameterized by the treewidth of the input graph and presumably does not admit a polynomial kernel when parameterized by the vertex cover number of the input graph.
Fri, 25 Jan 2019 14:01:32 +0100
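The k-core mentioned in the abstract is easy to compute by the standard peeling procedure, repeatedly deleting vertices of degree below k; a minimal sketch (illustrative only, not part of the paper):

```python
from collections import deque

def k_core(adj, k):
    """Return the vertex set of the k-core of an undirected graph.

    `adj` maps each vertex to the set of its neighbours. Repeatedly
    deleting vertices of degree < k yields the unique maximal induced
    subgraph with minimum degree >= k (possibly empty).
    """
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    alive = set(adj)
    queue = deque(v for v in adj if deg[v] < k)
    while queue:
        v = queue.popleft()
        if v not in alive:
            continue  # already peeled via an earlier queue entry
        alive.remove(v)
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
                if deg[u] < k:
                    queue.append(u)
    return alive

# Triangle {0, 1, 2} plus a pendant vertex 3: the 2-core is the triangle.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(sorted(k_core(adj, 2)))  # [0, 1, 2]
```

The hardness in the paper comes from the inverse question: which b vertex deletions shrink this fixed point the most.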
Parameterized Complexity of Multi-Node Hubs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10209
Hubs are high-degree nodes within a network. The examination of the emergence and centrality of hubs lies at the heart of many studies of complex networks such as telecommunication networks, biological networks, social networks and semantic networks. Furthermore, identifying and allocating hubs are routine tasks in applications. In this paper, we do not seek a hub that is a single node, but a hub that consists of k nodes. Formally, given a graph G=(V,E), we seek a set A subseteq V of size k that induces a connected subgraph from which at least p edges emanate. Thus, we identify k nodes which can act as a unit (due to the connectivity constraint) that is a hub (due to the cut constraint). This problem, which we call Multi-Node Hub (MNH), can also be viewed as a variant of the classic Max Cut problem. While it is easy to see that MNH is W[1]-hard with respect to the parameter k, our main contribution is the first parameterized algorithm that shows that MNH is FPT with respect to the parameter p. Despite recent breakthrough advances for cut problems like Multicut and Minimum Bisection, MNH is still very challenging. Not only does a connectivity constraint have to be handled on top of the involved machinery developed for these problems, but the fact that MNH is a maximization problem also seems to prevent the applicability of this machinery in the first place. To deal with the latter issue, we give non-trivial reduction rules that show how MNH can be preprocessed into a problem where it is necessary to delete a bounded-in-parameter number of vertices. Then, to handle the connectivity constraint, we use a novel application of the form of tree decomposition introduced by Cygan et al. [STOC 2014] to solve Minimum Bisection, where we demonstrate how connectivity constraints can be replaced by simpler size constraints.
Our approach may be relevant to the design of algorithms for other cut problems of this nature.
Fri, 25 Jan 2019 14:01:32 +0100

The Parameterised Complexity of Computing the Maximum Modularity of a Graph
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10210
The maximum modularity of a graph is a parameter widely used to describe the level of clustering or community structure in a network. Determining the maximum modularity of a graph is known to be NP-complete in general, and in practice a range of heuristics are used to construct partitions of the vertex-set which give lower bounds on the maximum modularity but without any guarantee on how close these bounds are to the true maximum. In this paper we investigate the parameterised complexity of determining the maximum modularity with respect to various standard structural parameterisations of the input graph G. We show that the problem belongs to FPT when parameterised by the size of a minimum vertex cover for G, and is solvable in polynomial time whenever the treewidth or max leaf number of G is bounded by some fixed constant; we also obtain an FPT algorithm, parameterised by treewidth, to compute any constant-factor approximation to the maximum modularity. On the other hand we show that the problem is W[1]-hard (and hence unlikely to admit an FPT algorithm) when parameterised simultaneously by pathwidth and the size of a minimum feedback vertex set.
Fri, 25 Jan 2019 14:01:32 +0100

On the Distance Identifying Set Meta-Problem and Applications to the Complexity of Identifying Problems on Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10211
Fri, 25 Jan 2019 14:01:32 +0100

The Parameterized Complexity of Finding Point Sets with Hereditary Properties
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10212
We consider problems where the input is a set of points in the plane and an integer k, and the task is to find a subset S of the input points of size k such that S satisfies some property. We focus on properties that depend only on the order type of the points and are monotone under point removals. We exhibit a property defined by three forbidden patterns for which finding a k-point subset with the property is W[1]-complete and (assuming the exponential time hypothesis) cannot be solved in time n^{o(k/log k)}. However, we show that problems of this type are fixed-parameter tractable for all properties that include all collinear point sets, properties that exclude at least one convex polygon, and properties defined by a single forbidden pattern.
Fri, 25 Jan 2019 14:01:32 +0100

Dual Parameterization of Weighted Coloring
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10213
Given a graph G, a proper k-coloring of G is a partition c = (S_i)_{i in [1,k]} of V(G) into k stable sets S_1, ..., S_k. Given a weight function w: V(G) -> R^+, the weight of a color S_i is defined as w(i) = max_{v in S_i} w(v) and the weight of a coloring c as w(c) = sum_{i=1}^{k} w(i). Guan and Zhu [Inf. Process. Lett., 1997] defined the weighted chromatic number of a pair (G,w), denoted by sigma(G,w), as the minimum weight of a proper coloring of G. The problem of determining sigma(G,w) has received considerable attention in recent years, and has been proved to be notoriously hard: for instance, it is NP-hard on split graphs, unsolvable on n-vertex trees in time n^{o(log n)} unless the ETH fails, and W[1]-hard on forests parameterized by the size of a largest tree. We focus on the so-called dual parameterization of the problem: given a vertex-weighted graph (G,w) and an integer k, is sigma(G,w) <= sum_{v in V(G)} w(v) - k? This parameterization has recently been considered by Escoffier [WG, 2016], who provided an FPT algorithm running in time 2^{O(k log k)} * n^{O(1)}, and asked which kernel size can be achieved for the problem. We provide an FPT algorithm running in time 9^k * n^{O(1)}, and prove that no algorithm in time 2^{o(k)} * n^{O(1)} exists under the ETH. On the other hand, we present a kernel with at most (2^{k-1}+1)(k-1) vertices, and rule out the existence of polynomial kernels unless NP subseteq coNP/poly, even on split graphs with only two different weights. Finally, we identify some classes of graphs on which the problem admits a polynomial kernel, in particular interval graphs and subclasses of split graphs, and in the latter case we present lower bounds on the degrees of the polynomials.
Fri, 25 Jan 2019 14:01:32 +0100
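The weight of a coloring as defined by Guan and Zhu can be evaluated directly; a small sketch for illustration (the variable names are ours, not the paper's):

```python
def coloring_weight(coloring, w):
    """Weight of a proper coloring in the sense of Guan and Zhu:
    the sum, over the colour classes, of the maximum vertex weight
    in each class. `coloring` is a list of stable sets, `w` maps
    vertices to nonnegative weights.
    """
    return sum(max(w[v] for v in stable_set) for stable_set in coloring)

# Path a-b-c with weights 3, 5, 2: the proper coloring ({a, c}, {b})
# has weight max(3, 2) + max(5) = 3 + 5 = 8.
w = {"a": 3, "b": 5, "c": 2}
print(coloring_weight([{"a", "c"}, {"b"}], w))  # 8
```

Note that, unlike ordinary coloring, using more colors can decrease the weight, which is one source of the problem's hardness.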
Computing Kernels in Parallel: Lower and Upper Bounds
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10214
Parallel fixed-parameter tractability studies how parameterized problems can be solved in parallel. A surprisingly large number of parameterized problems admit a high level of parallelization, but this does not mean that we can also efficiently compute small problem kernels in parallel: known kernelization algorithms are typically highly sequential. In the present paper, we establish a number of upper and lower bounds concerning the sizes of kernels that can be computed in parallel. An intriguing finding is that there are complex trade-offs between kernel size and the depth of the circuits needed to compute them: for the vertex cover problem, an exponential kernel can be computed by AC^0-circuits, a quadratic kernel by TC^0-circuits, and a linear kernel by randomized NC-circuits with derandomization being possible only if it is also possible for the matching problem. Other natural problems for which similar (but quantitatively different) effects can be observed include tree decomposition problems parameterized by the vertex cover number, the undirected feedback vertex set problem, the matching problem, and the point line cover problem. We also present natural problems for which computing kernels is inherently sequential.
Fri, 25 Jan 2019 14:01:32 +0100

Exploring the Kernelization Borders for Hitting Cycles
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10215
A generalization of classical cycle hitting problems, called the conflict version of the problem, is defined as follows. The input consists of two undirected graphs G and H on the same vertex set and a positive integer k, and the objective is to decide whether there exists a vertex subset X subseteq V(G) that intersects all desired "cycles" (all cycles, all odd cycles, or all even cycles) and is an independent set in H. In this paper we study the conflict versions of the classical Feedback Vertex Set and Odd Cycle Transversal problems from the viewpoint of kernelization complexity. In particular, we obtain the following results when the conflict graph H belongs to the family of d-degenerate graphs. 1) CF-FVS admits a kernel of size k^{O(d)}. 2) CF-OCT does not admit a polynomial kernel (even when H is 1-degenerate), unless NP subseteq coNP/poly. For our kernelization algorithm we exploit ideas developed for designing polynomial kernels for the classical Feedback Vertex Set problem, as well as devise new reduction rules that exploit degeneracy crucially. Our main conceptual contribution is the notion of a "k-independence preserver". Informally, it is a set of "important" vertices for a given subset X subseteq V(H) that suffices to capture the independent set property in H. We show that for d-degenerate graphs an independence preserver of size k^{O(d)} exists and can be used to design a polynomial kernel.
Fri, 25 Jan 2019 14:01:32 +0100

Best-Case and Worst-Case Sparsifiability of Boolean CSPs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10216
We continue the investigation of polynomial-time sparsification for NP-complete Boolean Constraint Satisfaction Problems (CSPs). The goal in sparsification is to reduce the number of constraints in a problem instance without changing the answer, such that a bound on the number of resulting constraints can be given in terms of the number of variables n. We investigate how the worst-case sparsification size depends on the types of constraints allowed in the problem formulation (the constraint language). Two algorithmic results are presented. The first result essentially shows that for any arity k, the only constraint type for which no nontrivial sparsification is possible has exactly one falsifying assignment, and corresponds to logical OR (up to negations). Our second result concerns linear sparsification, that is, a reduction to an equivalent instance with O(n) constraints. Using linear algebra over rings of integers modulo prime powers, we give an elegant necessary and sufficient condition for a constraint type to be captured by a degree-1 polynomial over such a ring, which yields linear sparsifications. The combination of these algorithmic results allows us to prove two characterizations that capture the optimal sparsification sizes for a range of Boolean CSPs. For NP-complete Boolean CSPs whose constraints are symmetric (the satisfaction depends only on the number of 1 values in the assignment, not on their positions), we give a complete characterization of which constraint languages allow for a linear sparsification. For Boolean CSPs in which every constraint has arity at most three, we characterize the optimal size of sparsifications in terms of the largest OR that can be expressed by the constraint language.
Fri, 25 Jan 2019 14:01:32 +0100

Parameterized Leaf Power Recognition via Embedding into Graph Products
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10217
The k-leaf power graph G of a tree T is a graph whose vertices are the leaves of T and whose edges connect pairs of leaves at unweighted distance at most k in T. Recognition of the k-leaf power graphs for k >= 6 is still an open problem. In this paper, we provide an algorithm for this problem for sparse leaf power graphs. Our result shows that the problem of recognizing these graphs is fixed-parameter tractable when parameterized both by k and by the degeneracy of the given graph. To prove this, we describe how to embed the leaf root of a leaf power graph into a product of the graph with a cycle graph. We bound the treewidth of the resulting product in terms of k and the degeneracy of G. As a result, we can use methods based on monadic second-order logic (MSO_2) to recognize the existence of a leaf power as a subgraph of the product graph.
Fri, 25 Jan 2019 14:01:32 +0100
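The definition of a k-leaf power can be made concrete with a small brute-force construction (an illustrative sketch; the paper's contribution is recognition, i.e. the much harder inverse direction of deciding whether a suitable tree exists):

```python
from collections import deque

def leaf_power(tree_adj, k):
    """Edge set of the k-leaf power of an unweighted tree.

    Vertices of the power graph are the leaves of the tree; two leaves
    are adjacent iff their distance in the tree is at most k.
    `tree_adj` maps each tree vertex to the set of its neighbours.
    """
    leaves = [v for v, nbrs in tree_adj.items() if len(nbrs) == 1]

    def dist_from(s):  # BFS distances in the tree from s
        d = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for u in tree_adj[v]:
                if u not in d:
                    d[u] = d[v] + 1
                    q.append(u)
        return d

    edges = set()
    for s in leaves:
        d = dist_from(s)
        edges.update((s, t) for t in leaves if s < t and d[t] <= k)
    return edges

# Star with centre 0 and leaves 1, 2, 3: any two leaves are at
# distance 2, so the 2-leaf power is a triangle on {1, 2, 3}.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(sorted(leaf_power(star, 2)))  # [(1, 2), (1, 3), (2, 3)]
```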
Parameterized Complexity of Independent Set in H-Free Graphs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10218
In this paper, we investigate the complexity of Maximum Independent Set (MIS) in the class of H-free graphs, that is, graphs excluding a fixed graph as an induced subgraph. Given that the problem remains NP-hard for most graphs H, we study its fixed-parameter tractability and make progress towards a dichotomy between FPT and W[1]-hard cases. We first show that MIS remains W[1]-hard in graphs forbidding simultaneously K_{1,4}, any finite set of cycles of length at least 4, and any finite set of trees with at least two branching vertices. In particular, this answers an open question of Dabrowski et al. concerning C_4-free graphs. Then we extend the polynomial-time algorithm of Alekseev when H is a disjoint union of edges to an FPT algorithm when H is a disjoint union of cliques. We also provide a framework for solving several other cases, which is a generalization of the concept of iterative expansion accompanied by the extraction of a particular structure using Ramsey's theorem. Iterative expansion is a maximization version of the so-called iterative compression. We believe that our framework can be of independent interest for solving other similar graph problems. Finally, we present positive and negative results on the existence of polynomial (Turing) kernels for several graphs H.
Fri, 25 Jan 2019 14:01:32 +0100

Multi-Budgeted Directed Cuts
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10219
In this paper, we study multi-budgeted variants of the classic minimum cut problem and graph separation problems that turned out to be important in parameterized complexity: Skew Multicut and Directed Feedback Arc Set. In our generalization, we assign colors 1,2,...,l to some edges and give separate budgets k_1,k_2,...,k_l for colors 1,2,...,l. For every color i in {1,...,l}, let E_i be the set of edges of color i. The solution C for the multi-budgeted variant of a graph separation problem not only needs to satisfy the usual separation requirements (i.e., be a cut, a skew multicut, or a directed feedback arc set, respectively), but also needs to satisfy that |C cap E_i| <= k_i for every i in {1,...,l}. Contrary to the classic minimum cut problem, the multi-budgeted variant turns out to be NP-hard even for l = 2. We propose FPT algorithms parameterized by k = k_1 + ... + k_l for all three problems. To this end, we develop a branching procedure for the multi-budgeted minimum cut problem that measures the progress of the algorithm not by reducing k as usual, but by elevating the capacity of some edges and thus increasing the size of the maximum source-to-sink flow. Using the fact that a similar strategy is used to enumerate all important separators of a given size, we merge this process with the flow-guided branching and show an FPT bound on the number of (appropriately defined) important multi-budgeted separators. This allows us to extend our algorithm to the Skew Multicut and Directed Feedback Arc Set problems. Furthermore, we show connections of the multi-budgeted variants with weighted variants of the directed cut problems and the Chain l-SAT problem, whose parameterized complexity remains an open problem.
We show that these problems admit a bounded-in-parameter number of "maximally pushed" solutions (in a spirit similar to how important separators are maximally pushed), giving somewhat weak evidence towards their tractability.
Fri, 25 Jan 2019 14:01:32 +0100

Matching Cut: Kernelization, Single-Exponential Time FPT, and Exact Exponential Algorithms
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10220
In a graph, a matching cut is an edge cut that is a matching. Matching Cut, which is known to be NP-complete, is the problem of deciding whether or not a given graph G has a matching cut. In this paper we show that Matching Cut admits a quadratic-vertex kernel for the parameter distance to cluster and a linear-vertex kernel for the parameter distance to clique. We further provide an O^*(2^{dc(G)})-time and an O^*(2^{dc^-(G)})-time FPT algorithm for Matching Cut, where dc(G) and dc^-(G) are the distance to cluster and distance to co-cluster, respectively. We also improve the running time of the best known branching algorithm to solve Matching Cut from O^*(1.4143^n) to O^*(1.3803^n). Moreover, we point out that, unless NP subseteq coNP/poly, Matching Cut does not admit a polynomial kernel when parameterized by treewidth.
Fri, 25 Jan 2019 14:01:32 +0100
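Verifying that a given bipartition is a matching cut is straightforward, which makes the contrast with the NP-completeness of finding one stand out; a minimal sketch (illustrative, not from the paper):

```python
def is_matching_cut(edges, side):
    """Check whether the bipartition given by `side` (vertex -> 0 or 1)
    is a matching cut: the crossing edges must form a matching, i.e.
    no vertex is incident to two crossing edges. Both sides must be
    non-empty for the cut to be proper.
    """
    if len(set(side.values())) < 2:
        return False
    used = set()
    for u, v in edges:
        if side[u] != side[v]:          # edge crosses the cut
            if u in used or v in used:  # shared endpoint: not a matching
                return False
            used.update({u, v})
    return True

# The 4-cycle 0-1-2-3-0: splitting it into {0, 1} and {2, 3} cuts the
# two "parallel" edges 1-2 and 3-0, which form a matching.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_matching_cut(c4, {0: 0, 1: 0, 2: 1, 3: 1}))  # True
print(is_matching_cut(c4, {0: 0, 1: 1, 2: 0, 3: 1}))  # False
```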
Subset Feedback Vertex Set on Graphs of Bounded Independent Set Size
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10221
The (Weighted) Subset Feedback Vertex Set problem is a generalization of the classical Feedback Vertex Set problem and asks for a vertex set of minimum size (weight) that intersects all cycles containing a vertex of a prescribed set of vertices. Although the two problems exhibit different computational complexity on split graphs, no similar characterization is known on other classes of graphs. Towards understanding the complexity difference between the two problems, it is natural to study the importance of a structural graph parameter. Here we consider graphs of bounded independent set number, for which it is known that Weighted Feedback Vertex Set can be solved in polynomial time. We provide a dichotomy result with respect to the size of a maximum independent set. In particular we show that Weighted Subset Feedback Vertex Set can be solved in polynomial time for graphs of independent set number at most three, whereas we prove that the problem remains NP-hard for graphs of independent set number four. Moreover, we show that the (unweighted) Subset Feedback Vertex Set problem can be solved in polynomial time on graphs of bounded independent set number by giving an algorithm with running time n^{O(d)}, where d is the size of a maximum independent set of the input graph. To complement our results, we demonstrate how our ideas can be extended to other terminal set problems on graphs of bounded independent set size. Based on our findings for Subset Feedback Vertex Set, we settle the complexity of Node Multiway Cut, a terminal set problem that asks for a vertex set of minimum size that intersects all paths connecting any two terminals, as well as its variants where nodes are weighted and/or the terminals are deletable, for every value of the given independent set number.
Fri, 25 Jan 2019 14:01:32 +0100

Integer Programming in Parameterized Complexity: Three Miniatures
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10222
Powerful results from the theory of integer programming have recently led to substantial advances in parameterized complexity. However, our perception is that, except for Lenstra's algorithm for solving integer linear programming in fixed dimension, there is still little understanding in the parameterized complexity community of the strengths and limitations of the available tools. This is understandable: it is often difficult to infer exact runtimes or even the distinction between FPT and XP algorithms, and some knowledge is simply unwritten folklore in a different community. We wish to make a step towards remedying this situation. To that end, we first provide an easy-to-navigate quick reference guide of integer programming algorithms from the perspective of parameterized complexity. Then, we show their applications in three case studies, obtaining FPT algorithms with runtime f(k) poly(n). We focus on:
- Modeling: since the algorithmic results follow by applying existing algorithms to new models, we shift the focus from the complexity result to the modeling result, highlighting common patterns and tricks which are used.
- Optimality program: after giving an FPT algorithm, we are interested in reducing the dependence on the parameter; we show which algorithms and tricks are often useful for speed-ups.
- Minding the poly(n): reducing f(k) often has the unintended consequence of increasing poly(n); so we highlight the common trade-offs and show how to get the best of both worlds.
Specifically, we consider graphs of bounded neighborhood diversity, which are in a sense the simplest of dense graphs, and we show several FPT algorithms for Capacitated Dominating Set, Sum Coloring, and Max-q-Cut by modeling them as convex programs in fixed dimension, n-fold integer programs, bounded dual treewidth programs, and indefinite quadratic programs in fixed dimension.
Fri, 25 Jan 2019 14:01:32 +0100

Solving Target Set Selection with Bounded Thresholds Faster than 2^n
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10223
In this paper we consider the Target Set Selection problem, which naturally arises in many fields such as economics, sociology, and medicine. In the Target Set Selection problem one is given a graph G with a function thr: V(G) -> N cup {0} and integers k, l. The goal is to activate at most k vertices initially so that at the end of the activation process there are at least l activated vertices. The activation process occurs in the following way: (i) once activated, a vertex stays activated forever; (ii) a vertex v becomes activated if at least thr(v) of its neighbours are activated. The problem and its various special cases have been extensively studied from the approximation and parameterized points of view. For example, parameterizations by the following parameters have been studied: treewidth, feedback vertex set, diameter, size of the target set, vertex cover, cluster editing number, and others. Despite the extensive study of the problem it is still unknown whether it can be solved in O^*((2-epsilon)^n) time for some epsilon > 0. We partially answer this question by presenting several faster-than-trivial algorithms that work in the cases of constant thresholds, constant dual thresholds, or when the threshold value of each vertex is bounded by one-third of its degree. Also, we show that the problem parameterized by l is W[1]-hard even when all thresholds are constant.
Fri, 25 Jan 2019 14:01:32 +0100
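The activation process described in (i) and (ii) can be simulated directly; a small sketch for illustration (names are ours, not the paper's):

```python
def activate(adj, thr, seeds):
    """Run the Target Set Selection activation process to a fixed point.

    `adj` maps each vertex to the set of its neighbours, `thr` gives
    each vertex's threshold, and `seeds` is the initially activated
    set. A vertex becomes (and stays) active once at least thr[v] of
    its neighbours are active; returns the final set of active vertices.
    """
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active:
                if sum(1 for u in adj[v] if u in active) >= thr[v]:
                    active.add(v)
                    changed = True
    return active

# Path 0-1-2-3 with all thresholds 1: activating vertex 0 cascades
# along the whole path.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(sorted(activate(path, {v: 1 for v in path}, {0})))  # [0, 1, 2, 3]
```

The hard part, of course, is choosing the at most k seeds so that the fixed point reaches l vertices; the simulation itself is polynomial.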
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10224
This paper considers the effect of non-metric distances for lower-bounded clustering, i.e., the problem of computing a partition for a given set of objects with pairwise distances, such that each set has a certain minimum cardinality (as required for anonymisation or balanced facility location problems). We discuss lower-bounded clustering with the objective to minimise the maximum radius or diameter of the clusters. For these problems there exists a 2-approximation, but only if the pairwise distance on the objects satisfies the triangle inequality; without this property no polynomial-time constant factor approximation is possible, unless P=NP. We try to resolve or at least soften this effect of non-metric distances by devising particular strategies to deal with violations of the triangle inequality (conflicts). With parameterised algorithmics, we find that if the number of such conflicts is not too large, constant factor approximations can still be computed efficiently. In particular, we introduce parameterised approximations with respect to not just the number of conflicts but also the vertex cover number of the conflict graph (the graph induced by the conflicts). Interestingly, we salvage the approximation ratio of 2 for diameter, while for radius it is only possible to show a ratio of 3. For the parameter vertex cover number of the conflict graph this worsening in ratio is shown to be unavoidable, unless FPT=W[2]. We further discuss improvements for diameter by choosing the (induced) P_3-cover number of the conflict graph as parameter and complement these by showing that, unless FPT=W[1], there exists no constant factor parameterised approximation with respect to the parameter split vertex deletion set.Fri, 25 Jan 2019 14:01:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10224Counting Induced Subgraphs: A Topological Approach to #W[1]-hardness
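A conflict, as used above, is a violation of the triangle inequality. One plausible way to extract the conflict pairs (from which the conflict graph is induced) is sketched below; the paper's precise definition of a conflict may differ:

```python
from itertools import combinations

def conflict_edges(d):
    """Return the pairs {u, v} whose distance violates the triangle
    inequality via some witness w, i.e. d(u,v) > d(u,w) + d(w,v).
    d is a symmetric dict-of-dicts distance matrix. (An illustrative
    reading of 'conflict', not necessarily the paper's exact one.)"""
    V = list(d)
    conflicts = set()
    for u, v in combinations(V, 2):
        if any(d[u][v] > d[u][w] + d[w][v] for w in V if w not in (u, v)):
            conflicts.add(frozenset((u, v)))
    return conflicts
```

For example, three objects with d(a,b) = d(b,c) = 1 but d(a,c) = 5 yield the single conflict pair {a, c}.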
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10225
We investigate the problem #{IndSub}(Phi) of counting all induced subgraphs of size k in a graph G that satisfy a given property Phi. This continues the work of Jerrum and Meeks who proved the problem to be #{W[1]}-hard for some families of properties which include, among others, (dis)connectedness [JCSS 15] and even- or oddness of the number of edges [Combinatorica 17]. Using the recent framework of graph motif parameters due to Curticapean, Dell and Marx [STOC 17], we discover that for monotone properties Phi, the problem #{IndSub}(Phi) is hard for #{W[1]} if the reduced Euler characteristic of the associated simplicial (graph) complex of Phi is non-zero. This observation links #{IndSub}(Phi) to Karp's famous Evasiveness Conjecture, as every graph complex with non-vanishing reduced Euler characteristic is known to be evasive. Applying tools from the "topological approach to evasiveness" which was introduced in the seminal paper of Kahn, Saks and Sturtevant [FOCS 83], we prove that #{IndSub}(Phi) is #{W[1]}-hard for every monotone property Phi that does not hold on the Hamilton cycle as well as for some monotone properties that hold on the Hamilton cycle such as being triangle-free or not k-edge-connected for k > 2. Moreover, we show that for those properties #{IndSub}(Phi) cannot be solved in time f(k)* n^{o(k)} for any computable function f unless the Exponential Time Hypothesis (ETH) fails. In the final part of the paper, we investigate non-monotone properties and prove that #{IndSub}(Phi) is #{W[1]}-hard if Phi is any non-trivial modularity constraint on the number of edges with respect to some prime q or if Phi enforces the presence of a fixed isolated subgraph.Fri, 25 Jan 2019 14:01:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10225A Strongly-Uniform Slicewise Polynomial-Time Algorithm for the Embedded Planar Diameter Improvement Problem
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10226
In the embedded planar diameter improvement problem (EPDI) we are given a graph G embedded in the plane and a positive integer d. The goal is to determine whether one can add edges to the planar embedding of G in such a way that planarity is preserved and in such a way that the resulting graph has diameter at most d. Using non-constructive techniques derived from Robertson and Seymour's graph minor theory, together with the effectivization by self-reduction technique introduced by Fellows and Langston, one can show that EPDI can be solved in time f(d)* |V(G)|^{O(1)} for some function f(d). The caveat is that this algorithm is not strongly uniform in the sense that the function f(d) is not known to be computable. On the other hand, even the problem of determining whether EPDI can be solved in time f_1(d)* |V(G)|^{f_2(d)} for computable functions f_1 and f_2 has been open for more than two decades [Cohen et al. Journal of Computer and System Sciences, 2017]. In this work we settle this latter problem by showing that EPDI can be solved in time f(d)* |V(G)|^{O(d)} for some computable function f. Our techniques can also be used to show that the embedded k-outerplanar diameter improvement problem (k-EOPDI), a variant of EPDI where the resulting graph is required to be k-outerplanar instead of planar, can be solved in time f(d)* |V(G)|^{O(k)} for some computable function f. This shows that for each fixed k, the problem k-EOPDI is strongly uniformly fixed-parameter tractable with respect to the diameter parameter d.Fri, 25 Jan 2019 14:01:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10226The PACE 2018 Parameterized Algorithms and Computational Experiments Challenge: The Third Iteration
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10227
The Program Committee of the Third Parameterized Algorithms and Computational Experiments challenge (PACE 2018) reports on the third iteration of the PACE challenge. This year, all three tracks were dedicated to solving the Steiner Tree problem, in which, given an edge-weighted graph and a subset of its vertices called terminals, one has to find a minimum-weight subgraph which spans all the terminals. In Track A, the number of terminals was limited. In Track B, a tree-decomposition of the graph was provided in the input, and the treewidth was limited. Finally, Track C welcomed heuristics. Over 80 participants on 40 teams from 16 countries submitted their implementations to the competition.Fri, 25 Jan 2019 14:01:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10227Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10193
Front Matter, Table of Contents, Preface, Conference OrganizationThu, 24 Jan 2019 17:19:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10193Observing the Uptake of a Language Change Making Strings Immutable
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10194
To address security concerns, a major change was introduced to the OCaml language and compiler which made strings immutable and introduced arrays of bytes as a replacement for mutable strings. The change is progressively being pushed so that ultimately strings will be immutable. We have investigated the way OCaml package developers undertook the change. In this paper we report on a preliminary observation of software code from the main OCaml package management system. For this purpose we instrumented versions of the OCaml compiler to get precise information on the uptake of safe strings.Thu, 24 Jan 2019 17:19:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10194Identifying Barriers to Adoption for Rust through Online Discourse
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10195
Rust is a low-level programming language known for its unique approach to memory-safe systems programming and for its steep learning curve. To understand what makes Rust difficult to adopt, we surveyed the top Reddit and Hacker News posts and comments about Rust; from these online discussions, we identified three hypotheses about Rust's barriers to adoption. We found that certain key features, idioms, and integration patterns were not easily accessible to new users.Thu, 24 Jan 2019 17:19:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10195QDB: From Quantum Algorithms Towards Correct Quantum Programs
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10196
With the advent of small-scale prototype quantum computers, researchers can now code and run quantum algorithms that were previously proposed but not fully implemented. In support of this growing interest in quantum computing experimentation, programmers need new tools and techniques to write and debug QC code. In this work, we implement a range of QC algorithms and programs in order to discover what types of bugs occur and what defenses against those bugs are possible in QC programs. We conduct our study by running small-sized QC programs in QC simulators in order to replicate published results in QC implementations. Where possible, we cross-validate results from programs written in different QC languages for the same problems and inputs. Drawing on this experience, we provide a taxonomy for QC bugs, and we propose QC language features that would aid in writing correct code.Thu, 24 Jan 2019 17:19:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10196Understanding Java Usability by Mining GitHub Repositories
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10197
There is a need for better empirical methods in programming language design. This paper addresses that need by demonstrating how, by observing publicly available Java source code, we can infer usage and usability issues with the Java language. In this study, 1,746 GitHub projects were analyzed and some basic usage facts are reported.Thu, 24 Jan 2019 17:19:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10197Programming by Example: Efficient, but Not "Helpful"
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10198
Programming by example (PBE) is a powerful programming paradigm based on example driven synthesis. Users can provide examples, and a tool automatically constructs a program that satisfies the examples. To investigate the impact of PBE on real-world users, we built a study around StriSynth, a tool for shell scripting by example, and recruited 27 working IT professionals to participate. In our study we asked the users to complete three tasks with StriSynth, and the same three tasks with PowerShell, a traditional scripting language. We found that, although our participants completed the tasks more quickly with StriSynth, they reported that they believed PowerShell to be a more helpful tool.Thu, 24 Jan 2019 17:19:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10198A Randomized Controlled Trial on the Impact of Polyglot Programming in a Database Context
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10199
Using more than one programming language in the same project is common practice. Often, additional languages might be introduced to projects to solve specific issues. While the practice is common, it is unclear whether it has an impact on developer productivity. In this paper, we present a pilot study investigating what happens when programmers switch between programming languages. The experiment is a repeated measures double-blind randomized controlled trial with 3 groups with various kinds of code switching in a database context. Results provide a rigorous testing methodology that can be replicated by us or others, together with theoretical backing from the linguistics literature for why these effects might exist.Thu, 24 Jan 2019 17:19:36 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10199Front Matter, Table of Contents, Preface, Conference Organization
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10181
Front Matter, Table of Contents, Preface, Conference OrganizationThu, 24 Jan 2019 14:24:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10181Speeding Up BigClam Implementation on SNAP
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10182
We perform a detailed analysis of the C++ implementation of the Cluster Affiliation Model for Big Networks (BigClam) on the Stanford Network Analysis Project (SNAP). BigClam is a popular graph mining algorithm that is capable of finding overlapping communities in networks containing millions of nodes. Our analysis shows that a key stage of the algorithm - determining if a node belongs to a community - dominates the runtime of the implementation, yet the computation is not parallelized. We show that by parallelizing computations across multiple threads using OpenMP we can speed up the algorithm by 5.3 times when solving large networks for communities, while preserving the integrity of the program and the result.Thu, 24 Jan 2019 14:24:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10182THRIFTY: Towards High Reduction In Flow Table memorY
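The membership computation that dominates the runtime is naturally data-parallel: each node-community (or edge) probability can be evaluated independently. The sketch below mirrors that structure in Python with a thread pool; it is illustrative only (the paper's change uses OpenMP in SNAP's C++ code, and the function names here are hypothetical, not SNAP's API):

```python
from concurrent.futures import ThreadPoolExecutor
import math

def membership_score(F_u, F_v):
    """BigClam edge probability: P(u ~ v) = 1 - exp(-F_u . F_v),
    where F_u is node u's nonnegative community-affiliation vector."""
    dot = sum(a * b for a, b in zip(F_u, F_v))
    return 1.0 - math.exp(-dot)

def scores_parallel(F, edges, workers=4):
    """Evaluate edge probabilities across a worker pool, mirroring how
    the per-node membership stage can be farmed out to OpenMP threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda e: membership_score(F[e[0]], F[e[1]]), edges))
```

Note that in CPython a thread pool illustrates the structure of the parallelisation rather than the 5.3x speedup, which comes from native OpenMP threads in the C++ implementation.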
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10183
The rapid evolution of information technology has compelled ubiquitous systems and computing to keep pace with this expeditious development. Because of their rigidity, computer networks failed to meet that evolution for decades; however, the recently emerged paradigm of software-defined networks offers a glimpse of hope for a new networking architecture that provides more flexibility and adaptability. Fault tolerance is considered one of the key concerns with respect to the dependability of software-defined networks. In this paper, we propose a new architecture, named THRIFTY, to ease the recovery process when failures occur and to save storage space in the forwarding elements, which therefore aims to enhance the fault tolerance of software-defined networks. Unlike the prevailing concept of fault management, THRIFTY uses the Edge-Core technique to forward incoming packets. THRIFTY is tailored to fit centrally controlled systems, such as the new architecture of software-defined networks, which maintain a global view of the entire network. The architecture of THRIFTY is illustrated and an experimental study is reported showing the performance of the proposed method. Further directions are suggested in the context of scalability towards achieving further advances in this research area.Thu, 24 Jan 2019 14:24:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10183Data-Driven Chinese Walls
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10184
Security policy and access control models are often based on qualitative attributes, e.g. security labels or cryptographic credentials. In this paper, we enrich one such model, namely the Chinese Walls model, with quantitative attributes derived from data. To this end, we advocate a data-driven approach that considers a quantitative definition of access that we term working relations.Thu, 24 Jan 2019 14:24:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10184Comparison of Platforms for Recommender Algorithm on Large Datasets
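A minimal sketch of how a Chinese Wall check might be enriched with such a quantitative working-relation strength; the threshold `tau` and the `relation_strength` map are assumptions for illustration, not the paper's exact model:

```python
def may_access(history, conflict_classes, relation_strength, obj, tau=0.5):
    """Chinese Wall check enriched with a quantitative 'working relation'.

    history: objects the subject has already accessed
    conflict_classes: maps each object to its conflict-of-interest class
    relation_strength: maps accessed objects to a data-derived strength in [0, 1]
    Deny access to obj only if the subject has a sufficiently strong
    working relation (strength >= tau) with a competitor in obj's class.
    """
    cls = conflict_classes[obj]
    for prev in history:
        if prev != obj and conflict_classes[prev] == cls:
            if relation_strength.get(prev, 0.0) >= tau:
                return False  # strong relation with a competitor: wall applies
    return True  # weak relations do not raise the wall in this data-driven reading
```

Under the classic qualitative model, any prior access to a competitor would deny; here only quantitatively strong working relations do.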
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10185
One of the challenges our society faces is the ever-increasing amount of data. Among existing platforms that address the system requirements, Hadoop is a framework widely used to store and analyze "big data". On the human side, one of the aids to finding the things people really want is recommendation systems. This paper evaluates highly scalable parallel algorithms for recommendation systems with application to very large data sets. A particular goal is to evaluate an open source Java message passing library for parallel computing called MPJ Express, which has been integrated with Hadoop. As a demonstration we use MPJ Express to implement collaborative filtering on various data sets using the algorithm ALSWR (Alternating-Least-Squares with Weighted-lambda-Regularization). We benchmark the performance and demonstrate parallel speedup on Movielens and Yahoo Music data sets, comparing our results with two other frameworks: Mahout and Spark. Our results indicate that the MPJ Express implementation of ALSWR has very competitive performance and scalability in comparison with the two other frameworks.Thu, 24 Jan 2019 14:24:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10185Towards Context-Aware Syntax Parsing and Tagging
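ALSWR alternates closed-form least-squares solves for user and item factors, with the regularisation term weighted by each row's number of observed ratings. A rank-1 toy version in pure Python, shown to illustrate the algorithm being benchmarked (not the MPJ Express implementation, which parallelises these solves across processes):

```python
def alswr_rank1(R, lam=0.1, iters=20):
    """One-factor ALS with weighted-lambda regularisation on a dense
    user x item rating matrix R, with None marking missing ratings.
    Rank 1 keeps each alternating solve a closed-form scalar update."""
    n_users, n_items = len(R), len(R[0])
    u = [1.0] * n_users
    v = [1.0] * n_items
    for _ in range(iters):
        for i in range(n_users):  # solve for user factors, items fixed
            obs = [(j, R[i][j]) for j in range(n_items) if R[i][j] is not None]
            num = sum(r * v[j] for j, r in obs)
            # regularisation weighted by the user's rating count (the "WR" in ALSWR)
            den = sum(v[j] ** 2 for j, _ in obs) + lam * len(obs)
            u[i] = num / den if den else 0.0
        for j in range(n_items):  # solve for item factors, users fixed
            obs = [(i, R[i][j]) for i in range(n_users) if R[i][j] is not None]
            num = sum(r * u[i] for i, r in obs)
            den = sum(u[i] ** 2 for i, _ in obs) + lam * len(obs)
            v[j] = num / den if den else 0.0
    return u, v
```

On a rating matrix that is exactly rank 1, such as [[4, 2], [2, 1]], the predictions u[i]*v[j] converge close to the observed ratings, with slight shrinkage from the regulariser.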
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10186
Information retrieval (IR) has become one of the most popular Natural Language Processing (NLP) applications. Part of speech (PoS) parsing and tagging plays an important role in IR systems. A broad range of PoS parser and tagger tools have been proposed with the aim of helping to find a solution for information retrieval problems, but most of these tools are based on generic NLP tags, which do not capture domain-related information. In this research, we present a domain-specific parsing and tagging approach that uses not only generic PoS tags but also domain-specific PoS tags, grammatical rules, and domain knowledge. Experimental results show that our approach has a good level of accuracy when applied to different domains.Thu, 24 Jan 2019 14:24:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10186Evaluation of Rule-Based Learning and Feature Selection Approaches For Classification
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10187
Thu, 24 Jan 2019 14:24:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10187The iBUG Eye Segmentation Dataset
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10188
This paper presents the first dataset for eye segmentation in low-resolution images. Although eye segmentation has long been a vital preprocessing step in biometric applications, this work is the first to focus on the low-resolution images that can be expected from a consumer-grade camera under conventional human-computer interaction and/or video-chat scenarios. Existing eye datasets have multiple limitations, including: (a) they contain only high-resolution images; (b) they do not include enough pose variations; (c) no utility landmark ground truth is provided; (d) highly accurate pixel-level ground truths are not given. Our dataset meets all the above conditions and requirements for different segmentation methods. In addition, a baseline experiment has been performed on our dataset to evaluate the performance of landmark models (Active Appearance Model, Ensemble Regression Tree and Supervised Descent Method) and deep semantic segmentation models (atrous convolutional neural network with conditional random field). Since the novelty of our dataset is to segment the iris and the sclera areas, we evaluate the above models on the sclera and the iris separately in order to indicate their feasibility on eye-partial segmentation tasks. In conclusion, based on our dataset, deep segmentation methods performed better in terms of IOU-based ROC curves and showed potential on the low-resolution eye segmentation task.Thu, 24 Jan 2019 14:24:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10188Anomaly Detection for Big Data Technologies
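The IOU measure underlying the reported ROC curves is the standard intersection-over-union between a predicted and a ground-truth binary mask; a minimal sketch:

```python
def iou(pred, truth):
    """Intersection over union of two binary masks, given as equally
    shaped lists of 0/1 rows. By convention, two empty masks score 1.0."""
    inter = sum(p and t for prow, trow in zip(pred, truth) for p, t in zip(prow, trow))
    union = sum(p or t for prow, trow in zip(pred, truth) for p, t in zip(prow, trow))
    return inter / union if union else 1.0

# One overlapping pixel out of two marked pixels in total: IOU = 0.5.
print(iou([[1, 1], [0, 0]], [[1, 0], [0, 0]]))  # prints 0.5
```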
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10189
The main goal of this research is to contribute to automated performance anomaly detection for large-scale and complex distributed systems, especially for Big Data applications within cloud computing. The main points that we will investigate are: - Automated detection of anomalous performance behaviors by finding the relevant performance metrics with which to characterize the behavior of systems. - Performance anomaly localization: to pinpoint the cause of a performance anomaly due to internal or external faults. - Investigation of the possibility of anomaly prediction. Failure prediction aims to determine the possible occurrences of catastrophic events in the near future and will enable system developers to utilize effective monitoring solutions to guarantee system availability. - Assessment of the potential of hybrid methods that combine machine learning with traditional methods used in performance anomaly detection. The topic of this research proposal will offer me the opportunity to apply my interest in the field of performance anomaly detection and prediction more deeply by investigating and using novel optimization strategies. In addition, this research provides a very interesting case of utilizing anomaly detection techniques in a large-scale Big Data and cloud computing environment. Among the various Big Data technologies, in-memory processing technology like Apache Spark has become widely adopted by industry as a result of its speed, generality, ease of use, and compatibility with other Big Data systems. Although Spark is developing gradually, there is currently still a shortage of comprehensive performance analyses built specifically for Spark and used to detect performance anomalies. 
Therefore, this raises my interest in addressing this challenge by investigating new hybrid learning techniques for anomaly detection in large-scale and complex systems, especially for in-memory processing Big Data platforms within cloud computing.Thu, 24 Jan 2019 14:24:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10189A Novel Method for Event Detection using Wireless Sensor Networks
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10190
Reliable event detection is one of the hottest research areas in the wireless sensor networks field these days. Battlefield monitoring, fire detection, nuclear and chemical attack detection, and gas leak detection are examples of event detection applications. One of the main goals of WSNs is transmitting the sensed data to the sink (base station) in an efficient way with minimum energy usage, so as to achieve a high degree of event detection reliability. Thus, it is very important to determine the reliability degree, i.e. the amount of data that must be received at the sink to achieve the desired reliability. Most of the previous research works proposed different solutions for reliable event detection. The idea behind all these solutions is to increase the amount of data transmitted to the sink by controlling the sources' reporting rate. However, raising the reporting rate may lead to loss of the transmitted data due to network congestion and packet collisions, a consequence of the restricted resource capacity of the network's sensor nodes. Therefore, in this paper, a new indoor method to achieve quality-based event reliability for critical event detection has been implemented using hardware sensor nodes (Waspmote). The idea of this method is to send the sensed data to the sink through a node called the Cluster Head (CH) in a sequence ordered by priority from high to low. The network nodes have been deployed in the experiment area in clusters, and each cluster has a CH node which collects the cluster members' readings and reorders them in descending order before sending them on to the sink. With this new method, the probability of delivering the data needed to detect the event to the sink increases. The proposed mechanism intends to improve event detection reliability, minimize the end-to-end delay, and increase the network lifetime. 
Experimental results show that the proposed method achieved good performance in terms of packet delivery, event detection, and end-to-end delay.Thu, 24 Jan 2019 14:24:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10190Context-Aware Adaptive Biometrics System using Multiagents
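The cluster-head step above, forwarding buffered member readings to the sink in descending priority order, can be sketched as follows (the priority function is an assumption for illustration; the abstract does not specify it):

```python
def ch_transmit_order(readings, priority):
    """Order a cluster head's buffered member readings so that the most
    event-relevant data is forwarded to the sink first.

    readings: the cluster members' buffered readings
    priority: callable mapping a reading to its importance score
    """
    return sorted(readings, key=priority, reverse=True)

# Example: readings as (node, sensed value); higher values are more urgent.
buffered = [("n1", 20), ("n2", 75), ("n3", 40)]
print(ch_transmit_order(buffered, lambda r: r[1]))
# prints [('n2', 75), ('n3', 40), ('n1', 20)]
```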
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10191
Traditional biometric systems are designed and configured to operate in predefined circumstances to address the needs of a particular application. The performance of such biometric systems tends to decrease when they encounter varying conditions, as they are unable to adapt to such variations. Many real-life scenarios require identification systems to recognise uncooperative people in uncontrolled environments. Therefore, there is a real need to design biometric systems that are aware of their context and able to adapt to changing conditions. The context-awareness and adaptation of a biometric system are based on a set of factors that include: the application (e.g. healthcare system, border control, unlocking smart devices), environment (e.g. quiet/noisy, indoor/outdoor), desired and pre-defined requirements (e.g. speed, usability, reliability, accuracy, robustness to high/low quality samples), the user of the system (e.g. cooperative or non-cooperative), the chosen modality (e.g. face, speech, gesture signature), and the techniques used (e.g. pre-processing to normalise and clean biometric data, feature extraction and classification). These factors are linked and might affect each other; hence the system has to work adaptively to meet its overall aim based on its operational context. The aim of this research is to develop a multiagent-based framework to represent a context-aware adaptive biometric system. This is to improve the decision-making process at each processing step of traditional biometric identification systems. Agents will be used to provide the system with intelligence, adaptation, flexibility, automation, and reliability during the identification process. The framework will accommodate at least five agents, one for each of the five main processing steps of a typical biometric system (i.e. data capture, pre-processing, feature extraction, classification and decision). 
Each agent can contribute differently towards its designated goal to achieve the best possible solution by selecting and applying the best technique. For example, an agent can be used to assess the quality of the input biometric sample to ensure the important features can be extracted and processed in further steps. Another agent can be used to pre-process the biometric sample if necessary. A third agent is used to select the appropriate set of features, followed by another to select a suitable classifier that works well in a given condition.Thu, 24 Jan 2019 14:24:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10191Harnessing AI For Research
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10192
Artificial Intelligence is increasingly being used to both augment existing fields of research and open up new avenues of discovery. From quality control for imaging flow cytometry to computational musicology, modern AI is an exciting new tool for research and thus knowing how to engineer AI systems in a research context is a vital new skill for RSEs to acquire. In this talk, I will outline four different areas of AI: supervised learning, unsupervised learning, interactive learning, and Bayesian learning. For each of these approaches, I will discuss how they typically map to different research problems and explore best practices for RSEs via specific use cases. At the end of the talk, you will have received a high-level overview of AI technologies and their use in research, have seen some cool examples of how AI has been used in a wide range of research areas, and have a good sense of where to go to learn more.Thu, 24 Jan 2019 14:24:11 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10192
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10180
Tue, 22 Jan 2019 14:50:06 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10180Dagstuhl Manifestos, Table of Contents, Volume 7, Issue 1, 2018
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10179
Dagstuhl Manifestos, Table of Contents, Volume 7, Issue 1, 2018Tue, 22 Jan 2019 14:49:52 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10179Dagstuhl Reports, Volume 8, Issue 7, July 2018, Complete Issue
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10178
Dagstuhl Reports, Volume 8, Issue 7, July 2018, Complete IssueTue, 22 Jan 2019 14:49:39 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10178Dagstuhl Reports, Table of Contents, Volume 8, Issue 7, 2018
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10177
Dagstuhl Reports, Table of Contents, Volume 8, Issue 7, 2018Tue, 22 Jan 2019 14:49:30 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10177Dagstuhl Reports, Volume 8, Issue 6, June 2018, Complete Issue
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10176
Dagstuhl Reports, Volume 8, Issue 6, June 2018, Complete IssueTue, 22 Jan 2019 14:49:23 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10176Dagstuhl Reports, Table of Contents, Volume 8, Issue 6, 2018
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10175
Table of Contents, FrontmatterTue, 22 Jan 2019 14:49:15 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10175Extreme Classification (Dagstuhl Seminar 18291)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10173
Extreme classification is a rapidly growing research area within machine learning focusing on multi-class and multi-label problems involving an extremely large number of labels (even more than a million). Many applications of extreme classification have been found in diverse areas ranging from language modeling to document tagging in NLP, face recognition to learning universal feature representations in computer vision, gene function prediction in bioinformatics, etc. Extreme classification has also opened up a new paradigm for key industrial applications such as ranking and recommendation by reformulating them as multi-label learning tasks where each item to be ranked or recommended is treated as a separate label. Such reformulations have led to significant gains over traditional collaborative filtering and content-based recommendation techniques. Consequently, extreme classifiers have been deployed in many real-world applications in industry.Extreme classification has raised many new research challenges beyond the pale of traditional machine learning including developing log-time and log-space algorithms, deriving theoretical bounds that scale logarithmically with the number of labels, learning from biased training data, developing performance metrics, etc. The seminar aimed at bringing together experts in machine learning, NLP, computer vision, web search and recommendation from academia and industry to make progress on these problems. 
We believe that this seminar has encouraged inter-disciplinary collaborations in the area of extreme classification, started discussions on the identification of thrust areas and important research problems, and motivated work both on improving state-of-the-art algorithms and on the theoretical foundations of extreme classification.Tue, 22 Jan 2019 07:48:19 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10173Synergies between Adaptive Analysis of Algorithms, Parameterized Complexity, Compressed Data Structures and Compressed Indices (Dagstuhl Seminar 18281)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10172
From the 8th of July 2018 to the 13th of July 2018, a Dagstuhl Seminar took place with the topic "Synergies between Adaptive Analysis of Algorithms, Parameterized Complexity, Compressed Data Structures and Compressed Indices". There, 40 participants from as many as 14 distinct countries and four distinct research areas, dealing with running time analysis and space usage analysis of algorithms and data structures, gathered to discuss results and techniques to "go beyond the worst-case" for classes of structurally restricted inputs, both for (fast) algorithms and (compressed) data structures. The seminar consisted of (1) a first session of personal introductions, each participant presenting his or her expertise and themes of interest in two slides; (2) a series of four technical talks; and (3) a larger series of presentations of open problems, with ample time left for the participants to gather and work on these open problems.Tue, 22 Jan 2019 07:47:32 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10172In Situ Visualization for Computational Science (Dagstuhl Seminar 18271)
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10171
In situ visualization, i.e., visualizing simulation data as it is generated, is an emerging processing paradigm in response to trends in the area of high-performance computing. This paradigm holds great promise in its ability to access increased spatio-temporal resolution and leverage extensive computational power. However, the paradigm is also widely viewed as limiting when it comes to exploration-oriented use cases, and it will further require visualization systems to become more and more complicated and constrained. Additionally, there are many open research topics in in situ visualization. The Dagstuhl seminar 18271 "In Situ Visualization for Computational Science" brought together researchers and practitioners from three communities (computational science, high-performance computing, and scientific visualization) to share interesting findings, to identify lines of open research, and to determine a medium-term research agenda that addresses the most pressing problems. This report summarizes the outcomes and findings of the seminar.Tue, 22 Jan 2019 07:46:40 +0100http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=10171