LIPIcs, Volume 213

41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)




Event

FSTTCS 2021, December 15-17, 2021, Virtual Conference

Editors

Mikołaj Bojańczyk
  • University of Warsaw, Poland
Chandra Chekuri
  • University of Illinois, Urbana-Champaign, IL, US

Publication Details

  • Published: 2021-11-29
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-215-0
  • DBLP: db/conf/fsttcs/fsttcs2021

Documents

Document
Complete Volume
LIPIcs, Volume 213, FSTTCS 2021, Complete Volume

Authors: Mikołaj Bojańczyk and Chandra Chekuri


Abstract
LIPIcs, Volume 213, FSTTCS 2021, Complete Volume

Cite as

41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 1-866, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Proceedings{bojanczyk_et_al:LIPIcs.FSTTCS.2021,
  title =	{{LIPIcs, Volume 213, FSTTCS 2021, Complete Volume}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{1--866},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021},
  URN =		{urn:nbn:de:0030-drops-155102},
  doi =		{10.4230/LIPIcs.FSTTCS.2021},
  annote =	{Keywords: LIPIcs, Volume 213, FSTTCS 2021, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Mikołaj Bojańczyk and Chandra Chekuri


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bojanczyk_et_al:LIPIcs.FSTTCS.2021.0,
  author =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{0:i--0:xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.0},
  URN =		{urn:nbn:de:0030-drops-155113},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
BQP After 28 Years (Invited Talk)

Authors: Scott Aaronson


Abstract
I will discuss the now-ancient question of where BQP, Bounded-Error Quantum Polynomial-Time, fits in among classical complexity classes. After reviewing some basics from the 90s, I will discuss the Forrelation problem that I introduced in 2009 to yield an oracle separation between BQP and PH, and the dramatic completion of that program by Ran Raz and Avishay Tal in 2018. I will then discuss very recent work, with William Kretschmer and DeVon Ingram, which leverages the Raz-Tal theorem, along with a new "quantum-aware" random restriction method, to obtain results that illustrate just how differently BQP can behave from BPP. These include oracles relative to which NP^{BQP} ⊄ BQP^{PH} - solving a 2005 open problem of Lance Fortnow - and conversely, relative to which BQP^{NP} ⊄ PH^{BQP}; an oracle relative to which 𝖯 = NP and yet BQP ≠ QCMA; an oracle relative to which NP ⊆ BQP yet PH is infinite; an oracle relative to which 𝖯 = NP ≠ BQP = PP; and an oracle relative to which PP = PostBQP ⊄ QMA^{QMA^{…}}. By popular demand, I will also speculate about the status of BQP in the unrelativized world.

Cite as

Scott Aaronson. BQP After 28 Years (Invited Talk). In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, p. 1:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{aaronson:LIPIcs.FSTTCS.2021.1,
  author =	{Aaronson, Scott},
  title =	{{BQP After 28 Years}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{1:1--1:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.1},
  URN =		{urn:nbn:de:0030-drops-155124},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.1},
  annote =	{Keywords: quantum computing, complexity theory, oracle separations, circuit lower bounds}
}
Document
Invited Talk
State Complexity of Population Protocols (Invited Talk)

Authors: Javier Esparza


Abstract
Population protocols were introduced by Angluin et al. in 2004 to study the theoretical properties of networks of mobile sensors with very limited computational resources. They have also been proposed as a natural computing model, with molecules, cells, or microorganisms playing the role of sensors. In a population protocol, an arbitrary number of indistinguishable, finite-state agents interact randomly in pairs to collectively decide if their initial global configuration satisfies a given property. The property is formalized as a predicate that maps each initial configuration to an output, 0 or 1. Starting from an initial configuration, the agents eventually agree on the correct output almost surely, and continue producing it forever. The protocol is said to stabilize to the correct output. It is well known that population protocols can decide exactly the semilinear predicates, or, equivalently, the predicates expressible in Presburger arithmetic. Current research concentrates on investigating the amount of resources needed to decide a given predicate. The standard resources, time and memory, translate for population protocols into expected time to stabilization, usually called parallel runtime, and number of states of each agent. In this talk we concentrate on the latter. A variant of population protocols allows for a leader, a distinguished finite-state agent that is added to the initial configuration and, intuitively, helps the other agents to organize the computation. In recent years my collaborators and I have obtained upper and lower bounds for the state complexity of population protocols with and without a leader. Define the state complexity of a predicate as the minimal number of states of a protocol that decides the predicate, and STATE(η) as the maximum state complexity of the predicates of size at most η, where predicates are encoded as quantifier-free formulas of Presburger arithmetic with coefficients written in binary. Using techniques from the theory of Petri nets and Vector Addition Systems, we have shown that STATE(η) is polynomially bounded, even for leaderless protocols; this improves on the exponential bound given in 2004 by Angluin and collaborators. We have also proved that STATE(η) ∈ Ω(log log η) for leaderless protocols, even for those deciding very simple predicates of the form x ≥ c for some constant c. In the talk I report on these results, and on two very recent, still unpublished results: the first shows the existence of leaderless protocols with a polynomial number of states and linear parallel runtime, and the second, due to Leroux, gives an Ω((log log η)^{1/3}) lower bound for protocols with a leader.
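
To make the model concrete, here is a small simulation sketch (ours, in Python) of the classic "threshold" protocol deciding the predicate x ≥ c: agents pool their counters when they meet, and an agent that reaches c spreads the answer epidemically. The scheduler, the constant c = 4, and all names are illustrative assumptions; this illustrates the population-protocol model only, not the constructions discussed in the talk.

import random

def simulate_threshold(initial_ones, population, c=4, steps=200000, seed=0):
    """Simulate the folklore threshold protocol deciding x >= c.

    Each agent stores a counter in {0, ..., c}.  When two agents meet they
    pool their counters (capped at c), and an agent that has reached c
    converts its partner.  An agent outputs 1 iff its counter equals c."""
    rng = random.Random(seed)
    states = [1] * initial_ones + [0] * (population - initial_ones)
    for _ in range(steps):
        i, j = rng.sample(range(population), 2)      # uniformly random pair
        a, b = states[i], states[j]
        if a == c or b == c:
            states[i] = states[j] = c                # spread the positive answer
        else:
            s = a + b
            states[i], states[j] = min(s, c), max(s - c, 0)
    return all(v == c for v in states)               # stabilized to output 1?

if __name__ == "__main__":
    print(simulate_threshold(initial_ones=5, population=50))  # True: 5 >= 4
    print(simulate_threshold(initial_ones=3, population=50))  # False: 3 < 4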

Cite as

Javier Esparza. State Complexity of Population Protocols (Invited Talk). In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{esparza:LIPIcs.FSTTCS.2021.2,
  author =	{Esparza, Javier},
  title =	{{State Complexity of Population Protocols}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.2},
  URN =		{urn:nbn:de:0030-drops-155139},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.2},
  annote =	{Keywords: Population protocols, state complexity, Petri nets}
}
Document
Invited Talk
Approximately Counting Graph Homomorphisms and Retractions (Invited Talk)

Authors: Leslie Ann Goldberg


Abstract
A homomorphism from a graph G to a graph H is a function from the vertices of G to the vertices of H that preserves the edges of G in the sense that every edge of G is mapped to an edge of H. By changing the target graph H, we can capture interesting structures in G. For example, homomorphisms from G to a k-clique H correspond to the proper k-colourings of G. There has been a lot of algorithmic work on the problem of (approximately) counting homomorphisms. The goal is to figure out for which graphs H the problem of approximately counting homomorphisms to H is algorithmically feasible. This talk will survey what is known. Despite much work, there are still plenty of open problems. We will discuss the problem of approximately counting list homomorphisms (where the input specifies, for each vertex of G, the list of vertices of H to which it can be mapped). Because the lists add extra expressibility, it is easier to prove that counting homomorphisms to a particular graph H is intractable. In fact, we have a full trichotomy (joint work with Galanis and Jerrum, 2017). Here, the complexity of homomorphism-counting is related to certain hereditary graph classes. The trichotomy will be explained in the talk - no prior knowledge of the area will be assumed. In more recent work, with Focke and Živný, we have investigated the complexity of counting retractions to H - this problem falls between homomorphism-counting and list-homomorphism counting. Here we have only a partial classification, which applies to all square-free graphs H. So again, there are plenty of open problems.
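
For readers new to the counting problem itself, the brute-force sketch below (ours, in Python; exponential time, for intuition only) counts homomorphisms from G to H and checks the example from the abstract: homomorphisms from a path to the triangle K_3 are exactly its proper 3-colourings. It does not reflect the approximate-counting techniques surveyed in the talk.

from itertools import product

def count_homomorphisms(G_edges, n_G, H_edges, n_H):
    """Count maps f : V(G) -> V(H) such that every edge {u,v} of G is mapped
    to an edge {f(u), f(v)} of H.  Brute force over all n_H ** n_G maps."""
    H_adj = set()
    for u, v in H_edges:
        H_adj.add((u, v))
        H_adj.add((v, u))
    return sum(
        1
        for f in product(range(n_H), repeat=n_G)
        if all((f[u], f[v]) in H_adj for u, v in G_edges)
    )

if __name__ == "__main__":
    path = [(0, 1), (1, 2)]                    # G: path on 3 vertices
    triangle = [(0, 1), (1, 2), (0, 2)]        # H: the 3-clique K_3
    # Homomorphisms to K_3 = proper 3-colourings of the path: 3 * 2 * 2 = 12.
    print(count_homomorphisms(path, 3, triangle, 3))  # 12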

Cite as

Leslie Ann Goldberg. Approximately Counting Graph Homomorphisms and Retractions (Invited Talk). In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{goldberg:LIPIcs.FSTTCS.2021.3,
  author =	{Goldberg, Leslie Ann},
  title =	{{Approximately Counting Graph Homomorphisms and Retractions}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.3},
  URN =		{urn:nbn:de:0030-drops-155146},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.3},
  annote =	{Keywords: Graph homomorphisms, counting}
}
Document
Invited Talk
Indistinguishability Obfuscation from Well-Founded Assumptions (Invited Talk)

Authors: Huijia (Rachel) Lin


Abstract
Indistinguishability obfuscation, introduced by Barak et al. [Crypto 2001], aims to compile programs into unintelligible ones while preserving functionality. It is a fascinating and powerful object that has been shown to enable a host of new cryptographic goals and beyond. However, constructions of indistinguishability obfuscation have remained elusive, with all previous proposals relying on heuristics or newly conjectured hardness assumptions. In this work, we show how to construct indistinguishability obfuscation from the subexponential hardness of three well-founded assumptions. We prove the following. Theorem (Informal) Assume sub-exponential hardness for the following: - the Learning Parity with Noise (LPN) assumption over general prime fields 𝔽_p with polynomially many LPN samples and error rate 1/k^δ, where k is the dimension of the LPN secret, and δ > 0 is any constant; - the existence of a Boolean Pseudo-Random Generator (PRG) in NC⁰ with stretch n^{1+τ}, where n is the length of the PRG seed, and τ > 0 is any constant; - the Decision Linear (DLIN) assumption on symmetric bilinear groups of prime order. Then, (subexponentially secure) indistinguishability obfuscation for all polynomial-size circuits exists. As a corollary, all cryptographic goals that can be achieved using indistinguishability obfuscation can now be achieved assuming the above three assumptions. This includes fully homomorphic encryption, functional encryption, multiparty non-interactive key-exchange, succinct garbled random access machine, and many others. This is joint work with Aayush Jain (UCLA and NTT Research) and Amit Sahai (UCLA).

Cite as

Huijia (Rachel) Lin. Indistinguishability Obfuscation from Well-Founded Assumptions (Invited Talk). In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, p. 4:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{lin:LIPIcs.FSTTCS.2021.4,
  author =	{Lin, Huijia (Rachel)},
  title =	{{Indistinguishability Obfuscation from Well-Founded Assumptions}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{4:1--4:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.4},
  URN =		{urn:nbn:de:0030-drops-155154},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.4},
  annote =	{Keywords: Cryptography, indistinguishability obfuscation}
}
Document
Invited Talk
The Complexity of Gradient Descent (Invited Talk)

Authors: Rahul Savani


Abstract
PPAD and PLS are well-studied complexity classes that capture the complexity of important game-theoretic problems. For example, finding a mixed Nash equilibrium in a bimatrix game is PPAD-complete, and finding a pure Nash equilibrium in a congestion game is PLS-complete. Many important problems, such as solving a Simple Stochastic Game or finding a mixed Nash equilibrium of a congestion game, lie in both classes. It was strongly believed that their intersection, PPAD ∩ PLS, does not have natural complete problems. We show that it does: any problem that lies in both classes can be reduced in polynomial time to the problem of finding a stationary point of a continuously differentiable function on the domain [0,1]². Thus, as PPAD captures problems that can be solved by Lemke-Howson type complementary pivoting algorithms, and PLS captures problems that can be solved by local search, we show that PPAD ∩ PLS exactly captures problems that can be solved by Gradient Descent. This is joint work with John Fearnley, Paul Goldberg, and Alexandros Hollender. It appeared at STOC'21, where it was given a Best Paper Award [Fearnley et al., 2021].
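
As a toy illustration of the computational problem named in the abstract (finding a stationary point of a smooth function on [0,1]²), here is a minimal projected gradient descent sketch in Python. The objective, step size, and stopping rule are our own assumptions and do not reflect the reductions or hardness results of the paper.

import numpy as np

def projected_gradient_descent(grad, x0, eta=0.01, steps=10000, tol=1e-8):
    """Gradient descent on the box [0,1]^2, projecting back after every step.
    Stops when the step length (a proxy for approximate stationarity on the
    box) drops below tol."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x_new = np.clip(x - eta * grad(x), 0.0, 1.0)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    # Illustrative smooth objective on [0,1]^2: f(x, y) = (x - 0.3)^2 + (y - 0.7)^2.
    grad = lambda v: np.array([2 * (v[0] - 0.3), 2 * (v[1] - 0.7)])
    print(projected_gradient_descent(grad, x0=(1.0, 0.0)))  # approx. [0.3, 0.7]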

Cite as

Rahul Savani. The Complexity of Gradient Descent (Invited Talk). In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 5:1-5:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{savani:LIPIcs.FSTTCS.2021.5,
  author =	{Savani, Rahul},
  title =	{{The Complexity of Gradient Descent}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{5:1--5:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.5},
  URN =		{urn:nbn:de:0030-drops-155167},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.5},
  annote =	{Keywords: Computational Complexity, Continuous Optimization, TFNP, PPAD, PLS, CLS, UEOPL}
}
Document
Scheduling in the Secretary Model

Authors: Susanne Albers and Maximilian Janke


Abstract
This paper studies online makespan minimization in the secretary model. Jobs, specified by their processing times, are presented in a uniformly random order. The input size n is known in advance. An online algorithm has to non-preemptively assign each job permanently and irrevocably to one of m parallel and identical machines such that the expected time it takes to process them all, the makespan, is minimized. We give two deterministic algorithms. First, a straightforward adaptation of the semi-online strategy Light Load [Albers and Hellwig, 2012] provides a very simple approach retaining its competitive ratio of 1.75. A new and sophisticated algorithm is 1.535-competitive. These competitive ratios are not only obtained in expectation but, in fact, for all but a very tiny fraction of job orders. Classically, online makespan minimization only considers the worst-case order. Here, no competitive ratio below 1.885 for deterministic algorithms, or below 1.581 using randomization, is possible. The best randomized algorithm so far is 1.916-competitive. Our results show that classical worst-case orders are quite rare and pessimistic for many applications. We complement our results by providing the first lower bounds. A competitive ratio obtained on nearly all possible job orders must be at least 1.257. This implies a lower bound of 1.043 for both deterministic and randomized algorithms in the general model.
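
For orientation only, the sketch below (ours, in Python) implements the natural greedy baseline in the secretary model: jobs arrive in a uniformly random order and each is placed on a currently least-loaded machine. It is not the paper's Light Load adaptation or its 1.535-competitive algorithm; names and the toy instance are assumptions.

import random

def greedy_random_order_makespan(processing_times, m, seed=0):
    """Greedy list scheduling under a uniformly random arrival order: each
    job is assigned permanently to a least-loaded of the m identical
    machines.  Returns the resulting makespan."""
    jobs = list(processing_times)
    random.Random(seed).shuffle(jobs)                # uniformly random order
    loads = [0.0] * m
    for p in jobs:
        i = min(range(m), key=loads.__getitem__)     # least-loaded machine
        loads[i] += p
    return max(loads)

if __name__ == "__main__":
    jobs = [7, 3, 5, 5, 2, 8, 4, 1]                  # total work 35
    print(greedy_random_order_makespan(jobs, m=3))   # at least ceil(35/3) = 12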

Cite as

Susanne Albers and Maximilian Janke. Scheduling in the Secretary Model. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 6:1-6:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{albers_et_al:LIPIcs.FSTTCS.2021.6,
  author =	{Albers, Susanne and Janke, Maximilian},
  title =	{{Scheduling in the Secretary Model}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{6:1--6:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.6},
  URN =		{urn:nbn:de:0030-drops-155172},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.6},
  annote =	{Keywords: Scheduling, makespan minimization, online algorithm, competitive analysis, lower bound, random-order, secretary problem}
}
Document
One-Way Functions and a Conditional Variant of MKTP

Authors: Eric Allender, Mahdi Cheraghchi, Dimitrios Myrisiotis, Harsha Tirumala, and Ilya Volkovich


Abstract
One-way functions (OWFs) are central objects of study in cryptography and computational complexity theory. In a seminal work, Liu and Pass (FOCS 2020) proved that the average-case hardness of computing time-bounded Kolmogorov complexity is equivalent to the existence of OWFs. It remained an open problem to establish such an equivalence for the average-case hardness of some natural NP-complete problem. In this paper, we make progress on this question by studying a conditional variant of the Minimum KT-complexity Problem (MKTP), which we call McKTP, as follows. 1) First, we prove that if McKTP is average-case hard on a polynomial fraction of its instances, then there exist OWFs. 2) Then, we observe that McKTP is NP-complete under polynomial-time randomized reductions. 3) Finally, we prove that the existence of OWFs implies the nontrivial average-case hardness of McKTP. Thus the existence of OWFs is inextricably linked to the average-case hardness of this NP-complete problem. In fact, building on recently-announced results of Ren and Santhanam [Rahul Ilango et al., 2021], we show that McKTP is hard-on-average if and only if there are logspace-computable OWFs.

Cite as

Eric Allender, Mahdi Cheraghchi, Dimitrios Myrisiotis, Harsha Tirumala, and Ilya Volkovich. One-Way Functions and a Conditional Variant of MKTP. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 7:1-7:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{allender_et_al:LIPIcs.FSTTCS.2021.7,
  author =	{Allender, Eric and Cheraghchi, Mahdi and Myrisiotis, Dimitrios and Tirumala, Harsha and Volkovich, Ilya},
  title =	{{One-Way Functions and a Conditional Variant of MKTP}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{7:1--7:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.7},
  URN =		{urn:nbn:de:0030-drops-155181},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.7},
  annote =	{Keywords: Kolmogorov complexity, KT Complexity, Minimum KT-complexity Problem, MKTP, Conditional KT Complexity, Minimum Conditional KT-complexity Problem, McKTP, one-way functions, OWFs, average-case hardness, pseudorandom generators, PRGs, pseudorandom functions, PRFs, distinguishers, learning algorithms, NP-completeness, reductions}
}
Document
Generalizations of Length Limited Huffman Coding for Hierarchical Memory Settings

Authors: Shashwat Banchhor, Rishikesh Gajjala, Yogish Sabharwal, and Sandeep Sen


Abstract
In this paper, we study the problem of designing prefix-free encoding schemes having minimum average code length that can be decoded efficiently under a decode cost model that captures memory hierarchy induced cost functions. We also study a special case of this problem that is closely related to the length limited Huffman coding (LLHC) problem; we call this the soft-length limited Huffman coding problem. In this version, there is a penalty associated with each of the n characters of the alphabet whose encodings exceed a specified bound D(≤ n) where the penalty increases linearly with the length of the encoding beyond D. The goal of the problem is to find a prefix-free encoding having minimum average code length and total penalty within a pre-specified bound P. This generalizes the LLHC problem. We present an algorithm to solve this problem that runs in time O(nD). We study a further generalization in which the penalty function and the objective function can both be arbitrary monotonically non-decreasing functions of the codeword length. We provide dynamic programming based exact and PTAS algorithms for this setting.
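
For background, the sketch below (ours, in Python) builds a classic, unconstrained Huffman code with the standard heap-based construction; the soft-length-limited and decode-cost-aware variants studied in the paper add constraints on top of this. Function names and the example string are illustrative assumptions.

import heapq
from collections import Counter

def huffman_code(frequencies):
    """Classic Huffman coding: a prefix-free code minimizing the average
    code length for the given symbol frequencies (no length limit)."""
    # Heap entries: (subtree frequency, tie-breaker, symbols in the subtree).
    heap = [(f, i, (sym,)) for i, (sym, f) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    codes = {sym: "" for sym in frequencies}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1:                       # left subtree gets a leading 0
            codes[s] = "0" + codes[s]
        for s in syms2:                       # right subtree gets a leading 1
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (f1 + f2, counter, syms1 + syms2))
        counter += 1
    return codes

if __name__ == "__main__":
    freqs = Counter("abracadabra")            # a:5, b:2, r:2, c:1, d:1
    code = huffman_code(freqs)
    print(code)
    print(sum(freqs[s] * len(code[s]) for s in freqs))  # 23 encoded bits in total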

Cite as

Shashwat Banchhor, Rishikesh Gajjala, Yogish Sabharwal, and Sandeep Sen. Generalizations of Length Limited Huffman Coding for Hierarchical Memory Settings. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 8:1-8:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{banchhor_et_al:LIPIcs.FSTTCS.2021.8,
  author =	{Banchhor, Shashwat and Gajjala, Rishikesh and Sabharwal, Yogish and Sen, Sandeep},
  title =	{{Generalizations of Length Limited Huffman Coding for Hierarchical Memory Settings}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{8:1--8:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.8},
  URN =		{urn:nbn:de:0030-drops-155193},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.8},
  annote =	{Keywords: Approximation algorithms, Hierarchical memory, Prefix free codes}
}
Document
Approximation Algorithms for Flexible Graph Connectivity

Authors: Sylvia Boyd, Joseph Cheriyan, Arash Haddadan, and Sharat Ibrahimpur


Abstract
We present approximation algorithms for several network design problems in the model of Flexible Graph Connectivity (Adjiashvili, Hommelsheim and Mühlenthaler, "Flexible Graph Connectivity", Math. Program. pp. 1-33 (2021), IPCO 2020: pp. 13-26). In an instance of the Flexible Graph Connectivity (FGC) problem, we have an undirected connected graph G = (V,E), a partition of E into a set of safe edges S and a set of unsafe edges U, and nonnegative costs {c_e}_{e ∈ E} on the edges. A subset F ⊆ E of edges is feasible for FGC if for any unsafe edge e ∈ F ∩ U, the subgraph (V,F⧵{e}) is connected. The algorithmic goal is to find a (feasible) solution F that minimizes c(F) = ∑_{e ∈ F} c_e. We present a simple 2-approximation algorithm for FGC via a reduction to the minimum-cost r-out 2-arborescence problem. This improves upon the 2.527-approximation algorithm of Adjiashvili et al. For integers p ≥ 1 and q ≥ 0, the (p,q)-FGC problem is a generalization of FGC where we seek a minimum-cost subgraph H = (V,F) that remains p-edge connected against the failure of any set of at most q unsafe edges; that is, for any set F' ⊆ U with |F'| ≤ q, H-F' = (V, F ⧵ F') should be p-edge connected. Note that FGC corresponds to the (1,1)-FGC problem. We give approximation algorithms for two important special cases of (p,q)-FGC: (a) Our 2-approximation algorithm for FGC extends to a (k+1)-approximation algorithm for the (1,k)-FGC problem. (b) We present a 4-approximation algorithm for the (k,1)-FGC problem. For the unweighted FGC problem, where each edge has unit cost, we give a 16/11-approximation algorithm. This improves on the result of Adjiashvili et al. for this problem. The (p,q)-FGC model with p = 1 or q ≤ 1 can be cast as the Capacitated k-Connected Subgraph problem which is a special case of the well-known Capacitated Network Design problem. We denote the former problem by Cap-k-ECSS. An instance of this problem consists of an undirected graph G = (V,E), nonnegative integer edge-capacities {u_e}_{e ∈ E}, nonnegative edge-costs {c_e}_{e ∈ E}, and a positive integer k. The goal is to find a minimum-cost edge-set F ⊆ E such that every (non-trivial) cut of the capacitated subgraph H(V,F,u) has capacity at least k. We give a min(k, 2max_{e ∈ E} u_e)-approximation algorithm for this problem.
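
To make the feasibility condition above concrete, here is a small checker (ours, in Python): an edge set F is accepted if (V, F) is connected and stays connected after deleting any single unsafe edge of F. The helper names and the 4-vertex toy instance are assumptions; the checker illustrates the constraint only, not the approximation algorithms of the paper.

def is_connected(n, edges):
    """Connectivity check on vertices 0 .. n-1 via depth-first search."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def fgc_feasible(n, F, unsafe):
    """FGC feasibility: (V, F) is connected and remains connected after the
    failure of any single unsafe edge of F (safe edges never fail)."""
    F = list(F)
    if not is_connected(n, F):
        return False
    return all(is_connected(n, [g for g in F if g != e])
               for e in F if e in unsafe)

if __name__ == "__main__":
    unsafe = {(1, 2)}
    print(fgc_feasible(4, [(0, 1), (1, 2), (2, 3), (0, 2)], unsafe))  # True
    print(fgc_feasible(4, [(0, 1), (1, 2), (2, 3)], unsafe))          # False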

Cite as

Sylvia Boyd, Joseph Cheriyan, Arash Haddadan, and Sharat Ibrahimpur. Approximation Algorithms for Flexible Graph Connectivity. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{boyd_et_al:LIPIcs.FSTTCS.2021.9,
  author =	{Boyd, Sylvia and Cheriyan, Joseph and Haddadan, Arash and Ibrahimpur, Sharat},
  title =	{{Approximation Algorithms for Flexible Graph Connectivity}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{9:1--9:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.9},
  URN =		{urn:nbn:de:0030-drops-155206},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.9},
  annote =	{Keywords: Approximation Algorithms, Combinatorial Optimization, Network Design, Edge-Connectivity of Graphs, Reliability of Networks}
}
Document
Tight Chang’s-Lemma-Type Bounds for Boolean Functions

Authors: Sourav Chakraborty, Nikhil S. Mande, Rajat Mittal, Tulasimohan Molli, Manaswi Paraashar, and Swagato Sanyal


Abstract
Chang’s lemma (Duke Mathematical Journal, 2002) is a classical result in mathematics, with applications spanning across additive combinatorics, combinatorial number theory, analysis of Boolean functions, communication complexity and algorithm design. For a Boolean function f that takes values in {-1, 1}, let r(f) denote its Fourier rank (i.e., the dimension of the span of its Fourier support). For each positive threshold t, Chang’s lemma provides a lower bound on δ(f) := Pr[f(x) = -1] in terms of the dimension of the span of its characters with Fourier coefficients of magnitude at least 1/t. In this work we examine the tightness of Chang’s lemma with respect to the following three natural settings of the threshold: - the Fourier sparsity of f, denoted k(f), - the Fourier max-supp-entropy of f, denoted k'(f), defined to be the maximum value of the reciprocal of the absolute value of a non-zero Fourier coefficient, - the Fourier max-rank-entropy of f, denoted k''(f), defined to be the minimum t such that characters whose coefficients are at least 1/t in magnitude span an r(f)-dimensional space. In this work we prove new lower bounds on δ(f) in terms of the above measures. One of our lower bounds, δ(f) = Ω(r(f)²/(k(f) log² k(f))), subsumes and refines the previously best known upper bound r(f) = O(√{k(f)} log k(f)) on r(f) in terms of k(f) by Sanyal (Theory of Computing, 2019). We improve upon this bound and show r(f) = O(√{k(f)δ(f)} log k(f)). Another lower bound, δ(f) = Ω(r(f)/(k''(f) log k(f))), is based on our improvement of a bound by Chattopadhyay, Hatami, Lovett and Tal (ITCS, 2019) on the sum of absolute values of level-1 Fourier coefficients in terms of 𝔽₂-degree. We further show that Chang’s lemma for the above-mentioned choices of the threshold is asymptotically outperformed by our bounds for most settings of the parameters involved. Next, we show that our bounds are tight for a wide range of the parameters involved, by constructing functions witnessing their tightness. All the functions we construct are modifications of the Addressing function, where we replace certain input variables by suitable functions. Our final contribution is to construct Boolean functions f for which our lower bounds asymptotically match δ(f), and for any choice of the threshold t, the lower bound obtained from Chang’s lemma is asymptotically smaller than δ(f). Our results imply more refined deterministic one-way communication complexity upper bounds for XOR functions. Given the wide-ranging application of Chang’s lemma to areas like additive combinatorics, learning theory and communication complexity, we strongly feel that our refinements of Chang’s lemma will find many more applications.
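
To ground the notation, the brute-force sketch below (ours, in Python; exponential in n, for intuition only) computes δ(f), the Fourier sparsity k(f), and the Fourier rank r(f) for a small example, Majority on 3 bits. The helper names and the example are assumptions; the sketch does not reflect the proof techniques of the paper.

from itertools import product

def fourier_spectrum(f, n):
    """Brute-force Fourier expansion of f : {-1,1}^n -> {-1,1}.
    Returns {S (as a bitmask): hat f(S)} over all 2^n characters."""
    points = list(product((-1, 1), repeat=n))
    coeffs = {}
    for S in range(1 << n):
        total = 0
        for x in points:
            chi = 1
            for i in range(n):
                if S >> i & 1:
                    chi *= x[i]                # character chi_S(x) = prod_{i in S} x_i
            total += f(x) * chi
        coeffs[S] = total / (1 << n)
    return coeffs

def f2_rank(masks):
    """Rank over GF(2) of the characteristic vectors (given as bitmasks)."""
    basis = {}                                 # highest set bit -> reduced vector
    for m in masks:
        while m:
            h = m.bit_length() - 1
            if h not in basis:
                basis[h] = m
                break
            m ^= basis[h]
    return len(basis)

if __name__ == "__main__":
    n = 3
    maj = lambda x: 1 if sum(x) > 0 else -1    # Majority on 3 {-1,1}-valued bits
    coeffs = fourier_spectrum(maj, n)
    support = [S for S, c in coeffs.items() if abs(c) > 1e-9]
    delta = sum(1 for x in product((-1, 1), repeat=n) if maj(x) == -1) / 2 ** n
    print("k(f) =", len(support))              # 4 non-zero Fourier coefficients
    print("r(f) =", f2_rank(support))          # 3: the support spans F_2^3
    print("delta(f) =", delta)                 # 0.5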

Cite as

Sourav Chakraborty, Nikhil S. Mande, Rajat Mittal, Tulasimohan Molli, Manaswi Paraashar, and Swagato Sanyal. Tight Chang’s-Lemma-Type Bounds for Boolean Functions. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 10:1-10:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chakraborty_et_al:LIPIcs.FSTTCS.2021.10,
  author =	{Chakraborty, Sourav and Mande, Nikhil S. and Mittal, Rajat and Molli, Tulasimohan and Paraashar, Manaswi and Sanyal, Swagato},
  title =	{{Tight Chang’s-Lemma-Type Bounds for Boolean Functions}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{10:1--10:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.10},
  URN =		{urn:nbn:de:0030-drops-155215},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.10},
  annote =	{Keywords: Analysis of Boolean functions, Chang’s lemma, Parity decision trees, Fourier dimension}
}
Document
Approximate Trace Reconstruction via Median String (In Average-Case)

Authors: Diptarka Chakraborty, Debarati Das, and Robert Krauthgamer


Abstract
We consider an approximate version of the trace reconstruction problem, where the goal is to recover an unknown string s ∈ {0,1}ⁿ from m traces (each trace is generated independently by passing s through a probabilistic insertion-deletion channel with rate p). We present a deterministic algorithm for the average-case model, where s is random, that uses only three traces. It runs in near-linear time Õ(n) and with high probability reports a string within edit distance Õ(p² n) from s, which significantly improves over the straightforward bound of O(pn). Technically, our algorithm computes a (1+ε)-approximate median of the three input traces. To prove its correctness, our probabilistic analysis shows that an approximate median is indeed close to the unknown s. To achieve a near-linear time bound, we have to bypass the well-known dynamic programming algorithm that computes an optimal median in time O(n³).
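
The three-trace median algorithm itself is beyond a few lines, but the input model is easy to illustrate. The sketch below (ours, in Python) generates three traces of a random binary string through a simple insertion-deletion channel with rate p; the exact channel convention (independent insertion before each position, independent deletion of each bit) is our assumption, chosen only to make the setting concrete.

import random

def trace(s, p, rng):
    """Pass the binary string s through a toy insertion-deletion channel:
    before each position a uniform bit is inserted with probability p, and
    each original bit is deleted with probability p."""
    out = []
    for bit in s:
        if rng.random() < p:
            out.append(rng.choice("01"))       # insertion of a random bit
        if rng.random() >= p:
            out.append(bit)                    # bit survives (not deleted)
    return "".join(out)

if __name__ == "__main__":
    rng = random.Random(1)
    s = "".join(rng.choice("01") for _ in range(40))         # random source string
    traces = [trace(s, p=0.1, rng=rng) for _ in range(3)]    # three traces, as in the paper
    print(s)
    for t in traces:
        print(t)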

Cite as

Diptarka Chakraborty, Debarati Das, and Robert Krauthgamer. Approximate Trace Reconstruction via Median String (In Average-Case). In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 11:1-11:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chakraborty_et_al:LIPIcs.FSTTCS.2021.11,
  author =	{Chakraborty, Diptarka and Das, Debarati and Krauthgamer, Robert},
  title =	{{Approximate Trace Reconstruction via Median String (In Average-Case)}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{11:1--11:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.11},
  URN =		{urn:nbn:de:0030-drops-155228},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.11},
  annote =	{Keywords: Trace Reconstruction, Approximation Algorithms, Edit Distance, String Median}
}
Document
Approximating the Center Ranking Under Ulam

Authors: Diptarka Chakraborty, Kshitij Gajjar, and Agastya Vibhuti Jha


Abstract
We study the problem of approximating a center under the Ulam metric. The Ulam metric, defined over the set of permutations over [n], measures the minimum number of move operations (deletion plus insertion) needed to transform one permutation into another. The Ulam metric is a simpler variant of the general edit distance metric. It provides a measure of dissimilarity over a set of rankings/permutations. In the center problem, given a set of permutations, we are asked to find a permutation (not necessarily from the input set) that minimizes the maximum distance to the input permutations. This problem is also referred to as maximum rank aggregation under Ulam. So far, we only know of a folklore 2-approximation algorithm for this NP-hard problem. Even for constantly many permutations, we do not know anything better than an exhaustive search over all n! permutations. In this paper, we achieve a (3/2 - 1/(3m))-approximation of the Ulam center in time n^O(m² ln m), for m input permutations over [n]. We therefore get a polynomial time bound while achieving better than a 3/2-approximation for constantly many permutations. This problem is of special interest even for constantly many permutations because under certain dissimilarity measures over rankings, even for four permutations, the problem is NP-hard. In proving our result, we establish a surprising connection between the approximate Ulam center problem and the closest string with wildcards problem (the center problem over the Hamming metric, allowing wildcards). We further study the closest string with wildcards problem and show that there cannot exist any (2-ε)-approximation algorithm (for any ε > 0) for it unless 𝖯 = NP. This inapproximability result is in sharp contrast with the same problem without wildcards, where we know of a PTAS.
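
The center algorithm is involved, but the underlying distance is easy to compute. The sketch below (ours, in Python) uses the standard fact that the Ulam distance between two permutations of [n] equals n minus the length of their longest common subsequence, which reduces to a longest increasing subsequence computation after relabelling; function names and examples are assumptions.

from bisect import bisect_left

def ulam_distance(p, q):
    """Ulam distance between two permutations of the same ground set: the
    minimum number of move operations (delete an element and reinsert it
    elsewhere) turning p into q.  Equals n - LCS(p, q), computed here via
    patience sorting in O(n log n)."""
    pos = {v: i for i, v in enumerate(q)}
    seq = [pos[v] for v in p]            # p rewritten in q's coordinates
    tails = []                           # tails[k] = smallest tail of an increasing run of length k+1
    for x in seq:
        k = bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(p) - len(tails)

if __name__ == "__main__":
    print(ulam_distance([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # 0
    print(ulam_distance([5, 1, 2, 3, 4], [1, 2, 3, 4, 5]))  # 1: move the 5 to the end
    print(ulam_distance([3, 1, 4, 2, 5], [1, 2, 3, 4, 5]))  # 2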

Cite as

Diptarka Chakraborty, Kshitij Gajjar, and Agastya Vibhuti Jha. Approximating the Center Ranking Under Ulam. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 12:1-12:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chakraborty_et_al:LIPIcs.FSTTCS.2021.12,
  author =	{Chakraborty, Diptarka and Gajjar, Kshitij and Jha, Agastya Vibhuti},
  title =	{{Approximating the Center Ranking Under Ulam}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{12:1--12:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.12},
  URN =		{urn:nbn:de:0030-drops-155230},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.12},
  annote =	{Keywords: Center Problem, Ulam Metric, Edit Distance, Closest String, Approximation Algorithms}
}
Document
Towards Stronger Counterexamples to the Log-Approximate-Rank Conjecture

Authors: Arkadev Chattopadhyay, Ankit Garg, and Suhail Sherif


Abstract
We give improved separations for the query complexity analogue of the log-approximate-rank conjecture, i.e., we show that there are a plethora of total Boolean functions on n input bits, each of which has approximate Fourier sparsity at most O(n³) and randomized parity decision tree complexity Θ(n). This improves upon the recent work of Chattopadhyay, Mande and Sherif [Chattopadhyay et al., 2020] both qualitatively (in terms of designing a large number of examples) and quantitatively (shrinking the gap from quartic to cubic). We leave open the problem of proving a randomized communication complexity lower bound for XOR compositions of our examples. A linear lower bound would lead to new and improved refutations of the log-approximate-rank conjecture. Moreover, if any of these compositions had even a sub-linear cost randomized communication protocol, it would demonstrate that randomized parity decision tree complexity does not lift to randomized communication complexity in general (with the XOR gadget).

Cite as

Arkadev Chattopadhyay, Ankit Garg, and Suhail Sherif. Towards Stronger Counterexamples to the Log-Approximate-Rank Conjecture. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 13:1-13:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chattopadhyay_et_al:LIPIcs.FSTTCS.2021.13,
  author =	{Chattopadhyay, Arkadev and Garg, Ankit and Sherif, Suhail},
  title =	{{Towards Stronger Counterexamples to the Log-Approximate-Rank Conjecture}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{13:1--13:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.13},
  URN =		{urn:nbn:de:0030-drops-155245},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.13},
  annote =	{Keywords: Approximate Rank, Randomized Parity Decision Trees, Randomized Communication Complexity, XOR functions, Subspace Designs}
}
Document
Functional Lower Bounds for Restricted Arithmetic Circuits of Depth Four

Authors: Suryajith Chillara


Abstract
Recently, Forbes, Kumar and Saptharishi [CCC, 2016] proved that there exists an explicit d^{O(1)}-variate and degree d polynomial P_{d} ∈ VNP such that if any depth four circuit C of bounded formal degree d computes a polynomial of bounded individual degree O(1) that is functionally equivalent to P_d, then C must have size 2^{Ω(√d log d)}. The motivation for their work comes from Boolean Circuit Complexity. Based on a characterization of ACC⁰ circuits by Yao [FOCS, 1985] and Beigel and Tarui [CC, 1994], Forbes, Kumar and Saptharishi [CCC, 2016] observed that functions in ACC⁰ can also be computed by algebraic Σ∧ΣΠ circuits (i.e., circuits computing sums of powers of polynomials) of 2^{log^{O(1)} n} size. Thus they argued that a 2^{ω(polylog n)} "functional" lower bound for an explicit polynomial Q against Σ∧ΣΠ circuits would imply a lower bound for the "corresponding Boolean function" of Q against non-uniform ACC⁰. In their work, they ask if their lower bound can be extended to Σ∧ΣΠ circuits. In this paper, for large integers n and d such that ω(log²n) ≤ d ≤ n^{0.01}, we show that any Σ∧ΣΠ circuit of bounded individual degree at most O(d/k²) that functionally computes the Iterated Matrix Multiplication polynomial IMM_{n,d} (∈ VP) over {0,1}^{n²d} must have size n^Ω(k). Since Iterated Matrix Multiplication IMM_{n,d} over {0,1}^{n²d} is functionally in GapL, improvement of the aforementioned lower bound to hold for quasipolynomially large values of individual degree would imply a fine-grained separation of ACC⁰ from GapL. For the sake of completeness, we also show a syntactic size lower bound against any Σ∧ΣΠ circuit computing IMM_{n,d} (for the same regime of d) which is tight over large fields. Like Forbes, Kumar and Saptharishi [CCC, 2016], we too prove lower bounds against circuits of bounded formal degree which functionally compute IMM_{n,d}, for a slightly larger range of individual degree.
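
For orientation, the hard polynomial in question is easy to evaluate. The sketch below (ours, in Python) evaluates the Iterated Matrix Multiplication polynomial IMM_{n,d}, taken here as the (1,1) entry of the product of d given n x n matrices with 0/1 entries; the toy input and names are assumptions, and the sketch has no bearing on the circuit lower bounds themselves.

def imm_value(matrices):
    """Evaluate IMM_{n,d}: the (1,1) entry of the product of the d given
    n x n matrices (here over the integers, on 0/1 inputs)."""
    n = len(matrices[0])
    prod = [[1 if i == j else 0 for j in range(n)] for i in range(n)]   # identity
    for M in matrices:
        prod = [[sum(prod[i][k] * M[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    return prod[0][0]

if __name__ == "__main__":
    # IMM_{2,3} evaluated on three 2x2 matrices with 0/1 entries.
    A = [[1, 1], [0, 1]]
    B = [[1, 0], [1, 1]]
    C = [[1, 1], [1, 0]]
    print(imm_value([A, B, C]))   # 3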

Cite as

Suryajith Chillara. Functional Lower Bounds for Restricted Arithmetic Circuits of Depth Four. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 14:1-14:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chillara:LIPIcs.FSTTCS.2021.14,
  author =	{Chillara, Suryajith},
  title =	{{Functional Lower Bounds for Restricted Arithmetic Circuits of Depth Four}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{14:1--14:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.14},
  URN =		{urn:nbn:de:0030-drops-155251},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.14},
  annote =	{Keywords: Functional Lower Bounds, Boolean Circuit Lower Bounds, Depth Four, Connections to Boolean Complexity, Iterated Matrix Multiplication}
}
Document
On (Simple) Decision Tree Rank

Authors: Yogesh Dahiya and Meena Mahajan


Abstract
In the decision tree computation model for Boolean functions, the depth corresponds to query complexity, and size corresponds to storage space. The depth measure is the most well-studied one, and is known to be polynomially related to several non-computational complexity measures of functions such as certificate complexity. The size measure is also studied, but to a lesser extent. Another decision tree measure that has received very little attention is the minimal rank of the decision tree, first introduced by Ehrenfeucht and Haussler in 1989. This measure is not polynomially related to depth, and hence it can reveal additional information about the complexity of a function. It is characterised by the value of a Prover-Delayer game first proposed by Pudlák and Impagliazzo in the context of tree-like resolution proofs. In this paper we study this measure further. We obtain upper and lower bounds on rank in terms of (variants of) certificate complexity. We also obtain upper and lower bounds on the rank for composed functions in terms of the depth of the outer function and the rank of the inner function. We compute the rank exactly for several natural functions and use them to show that all the bounds we have obtained are tight. We also observe that the size-rank relationship for decision trees, obtained by Ehrenfeucht and Haussler, is tight up to constant factors.
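
As a minimal illustration of the rank measure itself (not of the bounds in the paper), the sketch below (ours, in Python) computes the Ehrenfeucht-Haussler rank of a decision tree encoded as nested pairs: a leaf has rank 0, and an inner node has rank equal to the larger of the two subtree ranks if they differ, and that rank plus one if they are equal. The tree encoding is our assumption.

def rank(tree):
    """Rank of a (simple) decision tree given as a leaf value or a pair
    (left_subtree, right_subtree)."""
    if not isinstance(tree, tuple):
        return 0                                  # leaves have rank 0
    r_left, r_right = rank(tree[0]), rank(tree[1])
    return max(r_left, r_right) if r_left != r_right else r_left + 1

if __name__ == "__main__":
    path_like = (0, (1, (0, 1)))                  # decision-list shape: rank 1
    complete = ((0, 1), (1, 0))                   # complete tree of depth 2: rank 2
    print(rank(path_like), rank(complete))        # 1 2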

Cite as

Yogesh Dahiya and Meena Mahajan. On (Simple) Decision Tree Rank. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 15:1-15:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{dahiya_et_al:LIPIcs.FSTTCS.2021.15,
  author =	{Dahiya, Yogesh and Mahajan, Meena},
  title =	{{On (Simple) Decision Tree Rank}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{15:1--15:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.15},
  URN =		{urn:nbn:de:0030-drops-155263},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.15},
  annote =	{Keywords: Boolean functions, Decision trees, certificate complexity, rank}
}
Document
Reachability and Matching in Single Crossing Minor Free Graphs

Authors: Samir Datta, Chetan Gupta, Rahul Jain, Anish Mukherjee, Vimal Raj Sharma, and Raghunath Tewari


Abstract
We show that for each single crossing graph H, a polynomially bounded weight function for all H-minor free graphs G can be constructed in logspace such that it gives nonzero weights to all the cycles in G. This class of graphs subsumes almost all classes of graphs for which such a weight function is known to be constructed in logspace. As a consequence, we obtain that for the class of H-minor free graphs where H is a single crossing graph, reachability can be solved in UL, and bipartite maximum matching can be solved in SPL, which are small subclasses of the parallel complexity class NC. In the restrictive case of bipartite graphs, our maximum matching result improves upon the recent result of Eppstein and Vazirani [David Eppstein and Vijay V. Vazirani, 2021], where they show an NC bound for constructing perfect matching in general single crossing minor free graphs.

Cite as

Samir Datta, Chetan Gupta, Rahul Jain, Anish Mukherjee, Vimal Raj Sharma, and Raghunath Tewari. Reachability and Matching in Single Crossing Minor Free Graphs. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 16:1-16:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{datta_et_al:LIPIcs.FSTTCS.2021.16,
  author =	{Datta, Samir and Gupta, Chetan and Jain, Rahul and Mukherjee, Anish and Sharma, Vimal Raj and Tewari, Raghunath},
  title =	{{Reachability and Matching in Single Crossing Minor Free Graphs}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{16:1--16:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.16},
  URN =		{urn:nbn:de:0030-drops-155277},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.16},
  annote =	{Keywords: Reachability, Matching, Logspace, Single-crossing minor free graphs}
}
Document
Approximating the Number of Prime Factors Given an Oracle to Euler’s Totient Function

Authors: Yang Du and Ilya Volkovich


Abstract
In this work we devise the first efficient deterministic algorithm for approximating ω(N) - the number of prime factors of an integer N ∈ ℕ, given in addition oracle access to Euler’s Totient function Φ(⋅). We also show that the algorithm can be extended to handle a more general class of additive functions that "depend solely on the exponents in the prime factorization of an integer". In particular, our result gives the first algorithm that approximates ω(N) without necessarily factoring N. Indeed, all the previously known algorithms for computing or even approximating ω(N) entail factorization of N, and therefore are either randomized [M. O. Rabin, 1980; D. L. Long, 1981] or require the Generalized Riemann Hypothesis (GRH) [G. L. Miller, 1976]. Our approach combines an application of Coppersmith’s method for finding non-trivial factors of integers whose prime factors satisfy certain "relative size" conditions of [F. Morain et al., 2018], together with a new upper bound on Φ(N) in terms of ω(N) which could be of independent interest.
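
To ground the notation: ω(N) counts the (distinct) prime factors of N, and Φ is Euler's totient function. The naive sketch below (ours, in Python) computes both by trial-division factorization, which is exactly the step the paper's oracle-based algorithm avoids; names are illustrative.

def factorization(n):
    """Trial-division prime factorization of n (slow; for illustration only)."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def omega(n):
    """omega(n): the number of distinct prime factors of n."""
    return len(factorization(n))

def phi(n):
    """Euler's totient via the product formula over the prime factors of n."""
    result = n
    for p in factorization(n):
        result = result // p * (p - 1)
    return result

if __name__ == "__main__":
    n = 600                                   # 600 = 2^3 * 3 * 5^2
    print(omega(n), phi(n))                   # 3 160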

Cite as

Yang Du and Ilya Volkovich. Approximating the Number of Prime Factors Given an Oracle to Euler’s Totient Function. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 17:1-17:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{du_et_al:LIPIcs.FSTTCS.2021.17,
  author =	{Du, Yang and Volkovich, Ilya},
  title =	{{Approximating the Number of Prime Factors Given an Oracle to Euler’s Totient Function}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{17:1--17:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.17},
  URN =		{urn:nbn:de:0030-drops-155286},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.17},
  annote =	{Keywords: Euler’s Totient Function, Integer Factorization, Number of Prime Factors, Derandomization}
}
Document
Fully Dynamic Algorithms for Knapsack Problems with Polylogarithmic Update Time

Authors: Franziska Eberle, Nicole Megow, Lukas Nölke, Bertrand Simon, and Andreas Wiese


Abstract
Knapsack problems are among the most fundamental problems in optimization. In the Multiple Knapsack problem, we are given multiple knapsacks with different capacities and items with values and sizes. The task is to find a subset of items of maximum total value that can be packed into the knapsacks without exceeding the capacities. We investigate this problem and special cases thereof in the context of dynamic algorithms and design data structures that efficiently maintain near-optimal knapsack solutions for dynamically changing input. More precisely, we handle the arrival and departure of individual items or knapsacks during the execution of the algorithm with worst-case update time polylogarithmic in the number of items. As the optimal and any approximate solution may change drastically, we maintain implicit solutions and support polylogarithmic time query operations that can return the computed solution value and the packing of any given item. While dynamic algorithms are well-studied in the context of graph problems, there is hardly any work on packing problems (and generally much less on non-graph problems). Motivated by the theoretical interest in knapsack problems and their practical relevance, our work bridges this gap.
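
For contrast with the dynamic setting, here is the static textbook dynamic program for a single 0/1 knapsack (ours, in Python); recomputing something like this from scratch after every item or knapsack arrival and departure is precisely what the paper's polylogarithmic-update-time data structures avoid. The instance is illustrative.

def knapsack(values, sizes, capacity):
    """Textbook 0/1 knapsack DP: best[c] is the maximum total value packable
    into capacity c.  Runs in O(n * capacity) time."""
    best = [0] * (capacity + 1)
    for v, s in zip(values, sizes):
        for c in range(capacity, s - 1, -1):   # reverse order: each item used at most once
            best[c] = max(best[c], best[c - s] + v)
    return best[capacity]

if __name__ == "__main__":
    values = [60, 100, 120]
    sizes = [10, 20, 30]
    print(knapsack(values, sizes, capacity=50))   # 220: pack the items of size 20 and 30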

Cite as

Franziska Eberle, Nicole Megow, Lukas Nölke, Bertrand Simon, and Andreas Wiese. Fully Dynamic Algorithms for Knapsack Problems with Polylogarithmic Update Time. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 18:1-18:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{eberle_et_al:LIPIcs.FSTTCS.2021.18,
  author =	{Eberle, Franziska and Megow, Nicole and N\"{o}lke, Lukas and Simon, Bertrand and Wiese, Andreas},
  title =	{{Fully Dynamic Algorithms for Knapsack Problems with Polylogarithmic Update Time}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{18:1--18:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.18},
  URN =		{urn:nbn:de:0030-drops-155297},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.18},
  annote =	{Keywords: Fully dynamic algorithms, knapsack problem, approximation schemes}
}
Document
Largest Similar Copies of Convex Polygons in Polygonal Domains

Authors: Taekang Eom, Seungjun Lee, and Hee-Kap Ahn


Abstract
Given a convex polygon with k vertices and a polygonal domain consisting of polygonal obstacles with n vertices in total in the plane, we study the optimization problem of finding a largest similar copy of the polygon that can be placed in the polygonal domain without intersecting the obstacles. We present an upper bound O(k²n²λ₄(k)) on the number of combinatorial changes that occur to the underlying structure during the rotation of the polygon, together with an O(k²n²λ₄(k)log n)-time deterministic algorithm for the problem. This is the first improvement in more than 27 years upon the previously best known results on the problem, by Chew and Kedem [SoCG89, CGTA93] and Sharir and Toledo [SoCG91, CGTA94]. Our result also improves the time complexity of the high-clearance motion planning algorithm by Chew and Kedem.

Cite as

Taekang Eom, Seungjun Lee, and Hee-Kap Ahn. Largest Similar Copies of Convex Polygons in Polygonal Domains. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 19:1-19:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{eom_et_al:LIPIcs.FSTTCS.2021.19,
  author =	{Eom, Taekang and Lee, Seungjun and Ahn, Hee-Kap},
  title =	{{Largest Similar Copies of Convex Polygons in Polygonal Domains}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{19:1--19:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.19},
  URN =		{urn:nbn:de:0030-drops-155300},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.19},
  annote =	{Keywords: Polygon placement, Largest similar copy, Polygonal domain}
}
Document
A Faster Algorithm for Finding Closest Pairs in Hamming Metric

Authors: Andre Esser, Robert Kübler, and Floyd Zweydinger


Abstract
We study the Closest Pair Problem in Hamming metric, which asks to find the pair with the smallest Hamming distance in a collection of binary vectors. We give a new randomized algorithm for the problem on uniformly random input outperforming previous approaches whenever the dimension of input points is small compared to the dataset size. For moderate to large dimensions, our algorithm matches the time complexity of the previously best-known locality sensitive hashing based algorithms. Technically our algorithm follows similar design principles as Dubiner (IEEE Trans. Inf. Theory 2010) and May-Ozerov (Eurocrypt 2015). Besides improving the time complexity in the aforementioned areas, we significantly simplify the analysis of these previous works. We give a modular analysis, which allows us to investigate the performance of the algorithm also on non-uniform input distributions. Furthermore, we give a proof of concept implementation of our algorithm which performs well in comparison to a quadratic search baseline. This is the first step towards answering an open question raised by May and Ozerov regarding the practicability of algorithms following these design principles.
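
The quadratic-search baseline mentioned in the abstract is straightforward; a minimal Python version (illustration only) is:

# Exhaustive closest-pair search in Hamming metric: O(n^2) vector comparisons.
from itertools import combinations

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def closest_pair(vectors):
    """Pair of binary vectors with the smallest Hamming distance."""
    best, best_d = None, float("inf")
    for u, v in combinations(vectors, 2):
        d = hamming(u, v)
        if d < best_d:
            best, best_d = (u, v), d
    return best, best_d

data = [(0, 1, 1, 0), (1, 1, 1, 0), (0, 0, 0, 1)]
print(closest_pair(data))   # ((0, 1, 1, 0), (1, 1, 1, 0)) at distance 1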

Cite as

Andre Esser, Robert Kübler, and Floyd Zweydinger. A Faster Algorithm for Finding Closest Pairs in Hamming Metric. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 20:1-20:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{esser_et_al:LIPIcs.FSTTCS.2021.20,
  author =	{Esser, Andre and K\"{u}bler, Robert and Zweydinger, Floyd},
  title =	{{A Faster Algorithm for Finding Closest Pairs in Hamming Metric}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{20:1--20:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.20},
  URN =		{urn:nbn:de:0030-drops-155317},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.20},
  annote =	{Keywords: closest pair problem, LSH, nearest neighbor}
}
Document
ETH Tight Algorithms for Geometric Intersection Graphs: Now in Polynomial Space

Authors: Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, and Saket Saurabh


Abstract
De Berg et al. in [SICOMP 2020] gave an algorithmic framework for subexponential algorithms on geometric graphs with tight (up to ETH) running times. This framework is based on dynamic programming on graphs of weighted treewidth, resulting in algorithms that use super-polynomial space. We introduce the notion of weighted treedepth and use it to refine the framework of de Berg et al. for obtaining polynomial space (with tight running times) on geometric graphs. As a result, we prove that, for any fixed dimension d ≥ 2, on intersection graphs of similarly-sized fat objects, many well-known graph problems, including Independent Set, r-Dominating Set for constant r, Cycle Cover, Hamiltonian Cycle, Hamiltonian Path, Steiner Tree, Connected Vertex Cover, Feedback Vertex Set, and (Connected) Odd Cycle Transversal, are solvable in time 2^𝒪(n^{1-1/d}) and within polynomial space.

Cite as

Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, and Saket Saurabh. ETH Tight Algorithms for Geometric Intersection Graphs: Now in Polynomial Space. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 21:1-21:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.FSTTCS.2021.21,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Inamdar, Tanmay and Saurabh, Saket},
  title =	{{ETH Tight Algorithms for Geometric Intersection Graphs: Now in Polynomial Space}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{21:1--21:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.21},
  URN =		{urn:nbn:de:0030-drops-155323},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.21},
  annote =	{Keywords: Subexponential Algorithms, Geometric Intersection Graphs, Treedepth, Treewidth}
}
Document
On Fair and Efficient Allocations of Indivisible Public Goods

Authors: Jugal Garg, Pooja Kulkarni, and Aniket Murhekar


Abstract
We study fair allocation of indivisible public goods subject to cardinality (budget) constraints. In this model, we have n agents and m available public goods, and we want to select k ≤ m goods in a fair and efficient manner. We first establish fundamental connections between the models of private goods, public goods, and public decision making by presenting polynomial-time reductions for the popular solution concepts of maximum Nash welfare (MNW) and leximin. These mechanisms are known to provide remarkable fairness and efficiency guarantees in private goods and public decision making settings. We show that they retain these desirable properties even in the public goods case. We prove that MNW allocations provide fairness guarantees of Proportionality up to one good (Prop1), 1/n approximation to Round Robin Share (RRS), and the efficiency guarantee of Pareto Optimality (PO). Further, we show that the problems of finding MNW or leximin-optimal allocations are NP-hard, even in the case of constantly many agents, or binary valuations. This is in sharp contrast to the private goods setting that admits polynomial-time algorithms under binary valuations. We also design pseudo-polynomial time algorithms for computing an exact MNW or leximin-optimal allocation for the cases of (i) constantly many agents, and (ii) constantly many goods with additive valuations. We also present an O(n)-factor approximation algorithm for MNW which also satisfies RRS, Prop1, and 1/2-Prop.
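
For intuition about the objective, here is a hypothetical brute-force sketch (exponential in the number of goods, not one of the paper's algorithms): maximum Nash welfare in the public-goods model with additive valuations picks the k goods maximizing the product of the agents' utilities.

# Brute-force MNW for public goods: valuations[i][g] = value of agent i for good g.
from itertools import combinations
from math import prod

def mnw_selection(valuations, k):
    """Choose k goods maximizing the product of agents' (additive) utilities."""
    m = len(valuations[0])
    best, best_nw = None, -1.0
    for chosen in combinations(range(m), k):
        nw = prod(sum(v[g] for g in chosen) for v in valuations)
        if nw > best_nw:
            best, best_nw = chosen, nw
    return best, best_nw

vals = [[3, 0, 1], [0, 2, 1]]            # two agents, three goods
print(mnw_selection(vals, k=2))          # ((0, 1), 6): utilities 3 and 2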

Cite as

Jugal Garg, Pooja Kulkarni, and Aniket Murhekar. On Fair and Efficient Allocations of Indivisible Public Goods. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 22:1-22:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{garg_et_al:LIPIcs.FSTTCS.2021.22,
  author =	{Garg, Jugal and Kulkarni, Pooja and Murhekar, Aniket},
  title =	{{On Fair and Efficient Allocations of Indivisible Public Goods}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{22:1--22:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.22},
  URN =		{urn:nbn:de:0030-drops-155331},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.22},
  annote =	{Keywords: Public goods, Nash welfare, Leximin, Proportionality}
}
Document
Time Space Optimal Algorithm for Computing Separators in Bounded Genus Graphs

Authors: Chetan Gupta, Rahul Jain, and Raghunath Tewari


Abstract
A graph separator is a subset of vertices of a graph whose removal divides the graph into small components. Computing small graph separators for various classes of graphs is an important computational task. In this paper, we present a polynomial-time algorithm that uses O(g^{1/2} n^{1/2} log n) space to find an O(g^{1/2} n^{1/2})-sized separator of a graph having n vertices and embedded on an orientable surface of genus g.
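
For readers new to separators, a small Python sketch (illustration only; "small components" is taken here, hypothetically, as the usual 2n/3 balance condition) that checks whether a vertex set splits a graph into small components:

# Check a candidate separator: remove it and measure the remaining components.
def components_after_removal(adj, separator):
    remaining = set(adj) - set(separator)
    comps, seen = [], set()
    for start in remaining:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(w for w in adj[v] if w in remaining and w not in comp)
        seen |= comp
        comps.append(comp)
    return comps

def is_balanced_separator(adj, separator, fraction=2 / 3):
    n = len(adj)
    return all(len(c) <= fraction * n for c in components_after_removal(adj, separator))

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}   # path on 5 vertices
print(is_balanced_separator(path, {2}))                     # True: two components of size 2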

Cite as

Chetan Gupta, Rahul Jain, and Raghunath Tewari. Time Space Optimal Algorithm for Computing Separators in Bounded Genus Graphs. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 23:1-23:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{gupta_et_al:LIPIcs.FSTTCS.2021.23,
  author =	{Gupta, Chetan and Jain, Rahul and Tewari, Raghunath},
  title =	{{Time Space Optimal Algorithm for Computing Separators in Bounded Genus Graphs}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{23:1--23:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.23},
  URN =		{urn:nbn:de:0030-drops-155344},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.23},
  annote =	{Keywords: Graph algorithms, space-bounded algorithms, surface embedded graphs, reachability, Euler genus, algorithmic graph theory, computational complexity theory}
}
Document
Near-Optimal Cayley Expanders for Abelian Groups

Authors: Akhil Jalan and Dana Moshkovitz


Abstract
We give an efficient deterministic algorithm that outputs an expanding generating set for any finite abelian group. The size of the generating set is close to the randomized construction of Alon and Roichman [Alon and Roichman, 1994], improving upon various deterministic constructions in both the dependence on the dimension and the spectral gap. By obtaining optimal dependence on the dimension we resolve a conjecture of Azar, Motwani, and Naor [Azar et al., 1998] in the affirmative. Our technique is an extension of the bias amplification technique of Ta-Shma [Ta-Shma, 2017], who used random walks on expanders to obtain expanding generating sets over the additive group of 𝔽₂ⁿ. As a consequence, we obtain (i) randomness-efficient constructions of almost k-wise independent variables, (ii) a faster deterministic algorithm for the Remote Point Problem, (iii) randomness-efficient low-degree tests, and (iv) randomness-efficient verification of matrix multiplication.
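
As standard background (not this paper's construction): over the additive group of 𝔽₂ⁿ, the normalized eigenvalues of the Cayley graph generated by a multiset S are the character sums over S, so S is an expanding generating set exactly when it is an ε-biased set for small ε. A tiny brute-force check of the bias, in Python (hypothetical helper names):

# Maximum absolute character sum (bias) of a multiset S of vectors in F_2^n.
from itertools import product

def bias(S, n):
    worst = 0.0
    for z in product((0, 1), repeat=n):
        if not any(z):
            continue                      # skip the trivial character z = 0
        avg = sum((-1) ** sum(zi & si for zi, si in zip(z, s)) for s in S) / len(S)
        worst = max(worst, abs(avg))
    return worst

full = [s for s in product((0, 1), repeat=3) if any(s)]     # all nonzero vectors of F_2^3
print(bias(full, 3))                        # 1/7: a good (expanding) generating set
print(bias([(1, 0, 0), (1, 1, 0)], 3))      # 1.0: spans only a subgroup, no expansion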

Cite as

Akhil Jalan and Dana Moshkovitz. Near-Optimal Cayley Expanders for Abelian Groups. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 24:1-24:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{jalan_et_al:LIPIcs.FSTTCS.2021.24,
  author =	{Jalan, Akhil and Moshkovitz, Dana},
  title =	{{Near-Optimal Cayley Expanders for Abelian Groups}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{24:1--24:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.24},
  URN =		{urn:nbn:de:0030-drops-155359},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.24},
  annote =	{Keywords: Cayley graphs, Expander walks, Epsilon-biased sets, Derandomization}
}
Document
Matchings, Critical Nodes, and Popular Solutions

Authors: Telikepalli Kavitha


Abstract
We consider a matching problem in a marriage instance G. Every node has a strict preference order ranking its neighbors. There is a set C of prioritized or critical nodes and we are interested in only those matchings that match as many critical nodes as possible. Such matchings are useful in several applications and we call them critical matchings. A stable matching need not be critical. We consider a well-studied relaxation of stability called popularity. Our goal is to find a popular critical matching, i.e., a weak Condorcet winner within the set of critical matchings where nodes are voters. We show that popular critical matchings always exist in G and min-size/max-size such matchings can be efficiently computed.

Cite as

Telikepalli Kavitha. Matchings, Critical Nodes, and Popular Solutions. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 25:1-25:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kavitha:LIPIcs.FSTTCS.2021.25,
  author =	{Kavitha, Telikepalli},
  title =	{{Matchings, Critical Nodes, and Popular Solutions}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{25:1--25:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.25},
  URN =		{urn:nbn:de:0030-drops-155360},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.25},
  annote =	{Keywords: Bipartite graphs, Stable matchings, LP-duality}
}
Document
Fast and Exact Convex Hull Simplification

Authors: Georgiy Klimenko and Benjamin Raichel


Abstract
Given a point set P in the plane, we seek a subset Q ⊆ P, whose convex hull gives a smaller and thus simpler representation of the convex hull of P. Specifically, let cost(Q,P) denote the Hausdorff distance between the convex hulls CH(Q) and CH(P). Then given a value ε > 0 we seek the smallest subset Q ⊆ P such that cost(Q,P) ≤ ε. We also consider the dual version, where given an integer k, we seek the subset Q ⊆ P which minimizes cost(Q,P), such that |Q| ≤ k. For these problems, when P is in convex position, we respectively give an O(n log²n) time algorithm and an O(n log³n) time algorithm, where the latter running time holds with high probability. When there is no restriction on P, we show the problem can be reduced to APSP in an unweighted directed graph, yielding an O(n^2.5302) time algorithm when minimizing k and an O(min{n^2.5302, kn^2.376}) time algorithm when minimizing ε, using prior results for APSP. Finally, we show our near linear algorithms for convex position give 2-approximations for the general case.
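
To make the cost function concrete, here is a small Python sketch under simplifying assumptions (P in convex position and listed in counter-clockwise order, Q a subsequence of P in the same order, so CH(Q) is the polygon through Q and CH(Q) ⊆ CH(P)); then cost(Q,P), the Hausdorff distance between the two hulls, is the largest distance from a point of P to that polygon:

# Hypothetical illustration of cost(Q, P) for points in convex position.
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def cost(Q, P):
    """Largest distance from a point of P to the polygon through Q."""
    def dist_to_polygon(p):
        return min(point_segment_dist(p, Q[i], Q[(i + 1) % len(Q)]) for i in range(len(Q)))
    return max(dist_to_polygon(p) for p in P)

P = [(0, 0), (4, 0), (4, 3), (0, 3)]   # a rectangle, counter-clockwise
Q = [(0, 0), (4, 0), (0, 3)]           # drop one corner
print(cost(Q, P))                       # 2.4: the distance from (4, 3) to the triangle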

Cite as

Georgiy Klimenko and Benjamin Raichel. Fast and Exact Convex Hull Simplification. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 26:1-26:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{klimenko_et_al:LIPIcs.FSTTCS.2021.26,
  author =	{Klimenko, Georgiy and Raichel, Benjamin},
  title =	{{Fast and Exact Convex Hull Simplification}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{26:1--26:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.26},
  URN =		{urn:nbn:de:0030-drops-155373},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.26},
  annote =	{Keywords: Convex hull, coreset, exact algorithm}
}
Document
Lower Bounds and Improved Algorithms for Asymmetric Streaming Edit Distance and Longest Common Subsequence

Authors: Xin Li and Yu Zheng


Abstract
In this paper, we study edit distance (ED) and longest common subsequence (LCS) in the asymmetric streaming model, introduced by Saks and Seshadhri [Saks and Seshadhri, 2013]. As an intermediate model between the random access model and the streaming model, this model allows one to have streaming access to one string and random access to the other string. Meanwhile, ED and LCS are both fundamental problems that are often studied on large strings, thus the (asymmetric) streaming model is ideal for studying these problems. Our first main contribution is a systematic study of space lower bounds for ED and LCS in the asymmetric streaming model. Previously, there were no explicitly stated results in this context, although some lower bounds about LCS can be inferred from the lower bounds for longest increasing subsequence (LIS) in [Sun and Woodruff, 2007; Gál and Gopalan, 2010; Ergun and Jowhari, 2008]. Yet these bounds only hold for large alphabet sizes. In this paper, we develop several new techniques to handle ED in general and LCS for small alphabet size, thus establishing strong lower bounds for both problems. In particular, our lower bound for ED provides an exponential separation between edit distance and Hamming distance in the asymmetric streaming model. Our lower bounds also extend to LIS and longest non-decreasing subsequence (LNS) in the standard streaming model. Together with previous results, our bounds provide an almost complete picture for these two problems. As our second main contribution, we give improved algorithms for ED and LCS in the asymmetric streaming model. For ED, we improve the space complexity of the constant factor approximation algorithms in [Farhadi et al., 2020; Cheng et al., 2020] from Õ(n^δ/δ) to O((d^δ/δ) polylog(n)), where n is the length of each string and d is the edit distance between the two strings. For LCS, we give the first 1/2+ε approximation algorithm with space n^δ for any constant δ > 0, over a binary alphabet. Our work leaves a plethora of intriguing open questions, including establishing lower bounds and designing algorithms for a natural generalization of LIS and LNS, which we call longest non-decreasing subsequence with threshold (LNST).
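
For comparison with the streaming setting, the textbook random-access dynamic programs for both measures fit in a few lines of Python (reference baselines only, using a rolling array of linear size):

# Rolling-array DPs for edit distance and LCS length.
def edit_distance(x, y):
    n, m = len(x), len(y)
    dp = list(range(m + 1))
    for i in range(1, n + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, m + 1):
            cur = dp[j]
            dp[j] = prev if x[i - 1] == y[j - 1] else 1 + min(prev, cur, dp[j - 1])
            prev = cur
    return dp[m]

def lcs_length(x, y):
    n, m = len(x), len(y)
    dp = [0] * (m + 1)
    for i in range(1, n + 1):
        prev = 0
        for j in range(1, m + 1):
            cur = dp[j]
            dp[j] = prev + 1 if x[i - 1] == y[j - 1] else max(cur, dp[j - 1])
            prev = cur
    return dp[m]

print(edit_distance("kitten", "sitting"))   # 3
print(lcs_length("10110", "01101"))         # 4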

Cite as

Xin Li and Yu Zheng. Lower Bounds and Improved Algorithms for Asymmetric Streaming Edit Distance and Longest Common Subsequence. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 27:1-27:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.FSTTCS.2021.27,
  author =	{Li, Xin and Zheng, Yu},
  title =	{{Lower Bounds and Improved Algorithms for Asymmetric Streaming Edit Distance and Longest Common Subsequence}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{27:1--27:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.27},
  URN =		{urn:nbn:de:0030-drops-155381},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.27},
  annote =	{Keywords: Asymmetric Streaming Model, Edit Distance, Longest Common Subsequence, Space Lower Bound}
}
Document
An ETH-Tight Algorithm for Multi-Team Formation

Authors: Daniel Lokshtanov, Saket Saurabh, Subhash Suri, and Jie Xue


Abstract
In the Multi-Team Formation problem, we are given a ground set C of n candidates, each of which is characterized by a d-dimensional attribute vector in ℝ^d, and two positive integers α and β satisfying αβ ≤ n. The goal is to form α disjoint teams T₁,...,T_α ⊆ C, each of which consists of β candidates in C, such that the total score of the teams is maximized, where the score of a team T is the sum of the h_j maximum values of the j-th attributes of the candidates in T, for all j ∈ {1,...,d}. Our main result is a 2^{2^{O(d)}} n^{O(1)}-time algorithm for Multi-Team Formation. This bound is ETH-tight since a 2^{2^{d/c}} n^{O(1)}-time algorithm for any constant c > 12 can be shown to violate the Exponential Time Hypothesis (ETH). Our algorithm runs in polynomial time for all dimensions up to d = c log log n for a sufficiently small constant c > 0. Prior to our work, the existence of a polynomial time algorithm was an open problem even for d = 3.
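
The scoring rule is easy to state in code; a minimal Python sketch of the score of a single team (illustration only, with h the vector of thresholds h_j from the problem statement):

# Score of a team: for each attribute j, add up the h_j largest j-th coordinates.
def team_score(team, h):
    """team: list of d-dimensional attribute vectors; h: list of d integers h_j."""
    score = 0
    for j, hj in enumerate(h):
        top = sorted((cand[j] for cand in team), reverse=True)[:hj]
        score += sum(top)
    return score

team = [(5, 1), (3, 4), (2, 2)]
print(team_score(team, h=[1, 2]))   # 5 (best 1st attribute) + 4 + 2 (two best 2nd) = 11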

Cite as

Daniel Lokshtanov, Saket Saurabh, Subhash Suri, and Jie Xue. An ETH-Tight Algorithm for Multi-Team Formation. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 28:1-28:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{lokshtanov_et_al:LIPIcs.FSTTCS.2021.28,
  author =	{Lokshtanov, Daniel and Saurabh, Saket and Suri, Subhash and Xue, Jie},
  title =	{{An ETH-Tight Algorithm for Multi-Team Formation}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{28:1--28:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.28},
  URN =		{urn:nbn:de:0030-drops-155391},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.28},
  annote =	{Keywords: Team formation, Parameterized algorithms, Exponential Time Hypothesis}
}
Document
Dominating Set in Weakly Closed Graphs is Fixed Parameter Tractable

Authors: Daniel Lokshtanov and Vaishali Surianarayanan


Abstract
In the Dominating Set problem the input is a graph G and an integer k, the task is to determine whether there exists a vertex set S of size at most k so that every vertex not in S has at least one neighbor in S. We consider the parameterized complexity of the Dominating Set problem, parameterized by the solution size k, and the weak closure of the input graph G. Weak closure of graphs was recently introduced by Fox et al. [SIAM J. Comp. 2020 ] and captures sparseness and triadic closure properties found in real world graphs. A graph G is weakly c-closed if for every induced subgraph G' of G, there exists a vertex v ∈ V(G') such that every vertex u in V(G') which is non-adjacent to v has less than c common neighbors with v. The weak closure of G is the smallest integer γ such that G is weakly γ-closed. We give an algorithm for Dominating Set with running time k^O(γ² k³) n^O(1), resolving an open problem of Koana et al. [ISAAC 2020]. One of the ingredients of our algorithm is a proof that the VC-dimension of (the set system defined by the closed neighborhoods of the vertices of) a weakly γ-closed graph is upper bounded by 6γ. This result may find further applications in the study of weakly closed graphs.
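
The weak-closure definition can be checked directly on small graphs; a brute-force Python sketch (exponential in the number of vertices, for illustration only):

# G is weakly c-closed if every induced subgraph has a vertex v such that every
# non-neighbor u of v in that subgraph has fewer than c common neighbors with v.
from itertools import combinations

def is_weakly_c_closed(adj, c):
    """adj: dict mapping each vertex to the set of its neighbors."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            witness = any(
                all(len(adj[v] & adj[u] & s) < c
                    for u in s if u != v and u not in adj[v])
                for v in s
            )
            if not witness:
                return False
    return True

# In a 4-cycle, opposite (non-adjacent) vertices share exactly 2 neighbors.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_weakly_c_closed(c4, 3), is_weakly_c_closed(c4, 2))   # True False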

Cite as

Daniel Lokshtanov and Vaishali Surianarayanan. Dominating Set in Weakly Closed Graphs is Fixed Parameter Tractable. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 29:1-29:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{lokshtanov_et_al:LIPIcs.FSTTCS.2021.29,
  author =	{Lokshtanov, Daniel and Surianarayanan, Vaishali},
  title =	{{Dominating Set in Weakly Closed Graphs is Fixed Parameter Tractable}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{29:1--29:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.29},
  URN =		{urn:nbn:de:0030-drops-155404},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.29},
  annote =	{Keywords: Dominating Set, Weakly Closed Graphs, FPT, Domination Cores, VC-dimension}
}
Document
Popular Matchings in the Hospital-Residents Problem with Two-Sided Lower Quotas

Authors: Meghana Nasre, Prajakta Nimbhorkar, Keshav Ranjan, and Ankita Sarkar


Abstract
We consider the hospital-residents problem where both hospitals and residents can have lower quotas. The input is a bipartite graph G = (ℛ∪ℋ,E), where each vertex in ℛ∪ℋ has a strict preference ordering over its neighbors. The sets ℛ and ℋ denote the sets of residents and hospitals respectively. Each hospital has an upper and a lower quota denoting the maximum and minimum number of residents that can be assigned to it. Residents have an upper quota equal to one; however, there may be a requirement that some residents must not be left unassigned in the output matching. We call this the residents' lower quota. We show that whenever the set of matchings satisfying all the lower and upper quotas is non-empty, there always exists a matching that is popular among the matchings in this set. We give a polynomial-time algorithm to compute such a matching.

Cite as

Meghana Nasre, Prajakta Nimbhorkar, Keshav Ranjan, and Ankita Sarkar. Popular Matchings in the Hospital-Residents Problem with Two-Sided Lower Quotas. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 30:1-30:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{nasre_et_al:LIPIcs.FSTTCS.2021.30,
  author =	{Nasre, Meghana and Nimbhorkar, Prajakta and Ranjan, Keshav and Sarkar, Ankita},
  title =	{{Popular Matchings in the Hospital-Residents Problem with Two-Sided Lower Quotas}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{30:1--30:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.30},
  URN =		{urn:nbn:de:0030-drops-155419},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.30},
  annote =	{Keywords: Matching, Popularity, Lower quota, Preferences}
}
Document
Property B: Two-Coloring Non-Uniform Hypergraphs

Authors: Jaikumar Radhakrishnan and Aravind Srinivasan


Abstract
The following is a classical question of Erdős (Nordisk Matematisk Tidskrift, 1963) and of Erdős and Lovász (Colloquia Mathematica Societatis János Bolyai, vol. 10, 1975). Given a hypergraph ℱ with minimum edge-size k, what is the largest function g(k) such that if the expected number of monochromatic edges in ℱ is at most g(k) when the vertices of ℱ are colored red and blue randomly and independently, then we are guaranteed that ℱ is two-colorable? Duraj, Gutowski and Kozik (ICALP 2018) have shown that g(k) ≥ Ω(log k). On the other hand, if ℱ is k-uniform, the lower bound on g(k) is much higher: g(k) ≥ Ω(√{k / log k}) (Radhakrishnan and Srinivasan, Rand. Struct. Alg., 2000). In order to bridge this gap, we define a family of locally-almost-uniform hypergraphs, for which we show, via the randomized algorithm of Cherkashin and Kozik (Rand. Struct. Alg., 2015), that g(k) can be much higher than Ω(log k), e.g., 2^Ω(√{log k}) under suitable conditions.
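
The random experiment in the question is easy to simulate; a short Python sketch (illustration only): an edge with k vertices is monochromatic with probability 2^(1-k), so the expected number of monochromatic edges is the sum of 2^(1-|e|) over all edges e.

# Expected and sampled number of monochromatic edges under a uniform 2-coloring.
import random

def expected_monochromatic(edges):
    return sum(2.0 ** (1 - len(e)) for e in edges)

def sample_monochromatic(edges, vertices):
    coloring = {v: random.choice("RB") for v in vertices}
    return sum(1 for e in edges if len({coloring[v] for v in e}) == 1)

# The 7 lines of the Fano plane, each of size 3.
fano = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
print(expected_monochromatic(fano))                 # 7 * 2^(-2) = 1.75
print(sample_monochromatic(fano, range(7)))         # varies from run to run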

Cite as

Jaikumar Radhakrishnan and Aravind Srinivasan. Property B: Two-Coloring Non-Uniform Hypergraphs. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 31:1-31:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{radhakrishnan_et_al:LIPIcs.FSTTCS.2021.31,
  author =	{Radhakrishnan, Jaikumar and Srinivasan, Aravind},
  title =	{{Property B: Two-Coloring Non-Uniform Hypergraphs}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{31:1--31:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.31},
  URN =		{urn:nbn:de:0030-drops-155428},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.31},
  annote =	{Keywords: Hypergraph coloring, Property B}
}
Document
Harmonic Algorithms for Packing d-Dimensional Cuboids into Bins

Authors: Eklavya Sharma


Abstract
We explore approximation algorithms for the d-dimensional geometric bin packing problem (dBP). Caprara [Caprara, 2008] gave a harmonic-based algorithm for dBP having an asymptotic approximation ratio (AAR) of (T_∞)^{d-1} (where T_∞ ≈ 1.691). However, their algorithm doesn't allow items to be rotated. This is in contrast to some common applications of dBP, like packing boxes into shipping containers. We give approximation algorithms for dBP when items can be orthogonally rotated about all or a subset of axes. We first give a fast and simple harmonic-based algorithm having AAR T_∞^d. We next give a more sophisticated harmonic-based algorithm, which we call HGaP_k, having AAR (T_∞)^{d-1}(1+ε). This gives an AAR of roughly 2.860 + ε for 3BP with rotations, which improves upon the best-known AAR of 4.5. In addition, we study the multiple-choice bin packing problem that generalizes the rotational case. Here we are given n sets of d-dimensional cuboidal items and we have to choose exactly one item from each set and then pack the chosen items. Our algorithms also work for the multiple-choice bin packing problem. We also give fast and simple approximation algorithms for the multiple-choice versions of dD strip packing and dD geometric knapsack.
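
In its usual presentation, the constant T_∞ ≈ 1.691 is obtained from Sylvester's sequence t₁ = 2, t_{i+1} = t_i(t_i - 1) + 1, as T_∞ = Σ_i 1/(t_i - 1); a short numeric sketch in Python, for orientation only:

# Numerically approximate the harmonic bin packing constant T_infinity.
def t_infinity(terms=10):
    t, total = 2, 0.0
    for _ in range(terms):
        total += 1.0 / (t - 1)
        t = t * (t - 1) + 1
    return total

T = t_infinity()
print(T)        # approx. 1.6910302
print(T ** 2)   # approx. 2.8596, matching the ~2.860 AAR quoted for 3BP with rotations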

Cite as

Eklavya Sharma. Harmonic Algorithms for Packing d-Dimensional Cuboids into Bins. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 32:1-32:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{sharma:LIPIcs.FSTTCS.2021.32,
  author =	{Sharma, Eklavya},
  title =	{{Harmonic Algorithms for Packing d-Dimensional Cuboids into Bins}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{32:1--32:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.32},
  URN =		{urn:nbn:de:0030-drops-155432},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.32},
  annote =	{Keywords: Geometric bin packing}
}
Document
Resilience of Timed Systems

Authors: S. Akshay, Blaise Genest, Loïc Hélouët, S. Krishna, and Sparsa Roychowdhury


Abstract
This paper addresses reliability of timed systems in the setting of resilience, which considers the behaviors of a system when unspecified timing errors such as missed deadlines occur. Given a fault model that allows transitions to fire later than allowed by their guard, a system is universally resilient (or self-resilient) if, after a fault, it always returns to a timed behavior of the non-faulty system. It is existentially resilient if, after a fault, there exists a way to return to a timed behavior of the non-faulty system, that is, if there exists a controller which can guide the system back to a normal behavior. We show that universal resilience of timed automata is undecidable, while existential resilience is decidable, in EXPSPACE. To obtain better complexity bounds and decidability of universal resilience, we consider untimed resilience, as well as subclasses of timed automata.

Cite as

S. Akshay, Blaise Genest, Loïc Hélouët, S. Krishna, and Sparsa Roychowdhury. Resilience of Timed Systems. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 33:1-33:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{akshay_et_al:LIPIcs.FSTTCS.2021.33,
  author =	{Akshay, S. and Genest, Blaise and H\'{e}lou\"{e}t, Lo\"{i}c and Krishna, S. and Roychowdhury, Sparsa},
  title =	{{Resilience of Timed Systems}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{33:1--33:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.33},
  URN =		{urn:nbn:de:0030-drops-155442},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.33},
  annote =	{Keywords: Timed automata, Fault tolerance, Integer-resets, Resilience}
}
Document
On the Complexity of Intersection Non-emptiness for Star-Free Language Classes

Authors: Emmanuel Arrighi, Henning Fernau, Stefan Hoffmann, Markus Holzer, Ismaël Jecker, Mateus de Oliveira Oliveira, and Petra Wolf


Abstract
In the Intersection Non-emptiness problem, we are given a list of finite automata A_1, A_2,… , A_m over a common alphabet Σ as input, and the goal is to determine whether some string w ∈ Σ^* lies in the intersection of the languages accepted by the automata in the list. We analyze the complexity of the Intersection Non-emptiness problem under the promise that all input automata accept a language in some level of the dot-depth hierarchy, or some level of the Straubing-Thérien hierarchy. Automata accepting languages from the lowest levels of these hierarchies arise naturally in the context of model checking. We identify a dichotomy in the dot-depth hierarchy by showing that the problem is already NP-complete when all input automata accept languages of the levels B_0 or B_{1/2} and already PSPACE-hard when all automata accept a language from the level B_1. Conversely, we identify a tetrachotomy in the Straubing-Thérien hierarchy. More precisely, we show that the problem is in AC^0 when restricted to level L_0; complete for L or NL, depending on the input representation, when restricted to languages in the level L_{1/2}; NP-complete when the input is given as DFAs accepting a language in L_1 or L_{3/2}; and finally, PSPACE-complete when the input automata accept languages in level L_2 or higher. Moreover, we show that the proof technique used to show containment in NP for DFAs accepting languages in L_1 or L_{3/2} does not generalize to the context of NFAs. To prove this, we identify a family of languages that provide an exponential separation between the state complexity of general NFAs and that of partially ordered NFAs. To the best of our knowledge, this is the first superpolynomial separation between these two models of computation.
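
For a sense of the problem, the classical product construction decides Intersection Non-emptiness for DFAs by a BFS over tuples of states (illustration only; the state space is the full product, exponential in the number of automata, which is why the restricted language classes above matter):

# BFS on the product of DFAs; each DFA is (start, accepting_set, delta) with
# delta[(state, letter)] giving the successor state.
from collections import deque

def intersection_nonempty(automata, alphabet):
    start = tuple(a[0] for a in automata)
    seen, queue = {start}, deque([start])
    while queue:
        states = queue.popleft()
        if all(q in a[1] for q, a in zip(states, automata)):
            return True
        for letter in alphabet:
            nxt = tuple(a[2][(q, letter)] for q, a in zip(states, automata))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Two DFAs over {a, b}: "even number of a's" and "contains at least one b".
even_a = (0, {0}, {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1})
has_b  = (0, {1}, {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 1})
print(intersection_nonempty([even_a, has_b], "ab"))   # True, witnessed e.g. by "b"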

Cite as

Emmanuel Arrighi, Henning Fernau, Stefan Hoffmann, Markus Holzer, Ismaël Jecker, Mateus de Oliveira Oliveira, and Petra Wolf. On the Complexity of Intersection Non-emptiness for Star-Free Language Classes. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{arrighi_et_al:LIPIcs.FSTTCS.2021.34,
  author =	{Arrighi, Emmanuel and Fernau, Henning and Hoffmann, Stefan and Holzer, Markus and Jecker, Isma\"{e}l and de Oliveira Oliveira, Mateus and Wolf, Petra},
  title =	{{On the Complexity of Intersection Non-emptiness for Star-Free Language Classes}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{34:1--34:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.34},
  URN =		{urn:nbn:de:0030-drops-155456},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.34},
  annote =	{Keywords: Intersection Non-emptiness Problem, Star-Free Languages, Straubing-Th\'{e}rien Hierarchy, dot-depth Hierarchy, Commutative Languages, Complexity}
}
Document
Complexity of Coverability in Bounded Path Broadcast Networks

Authors: A. R. Balasubramanian


Abstract
Broadcast networks are a formalism of distributed computation that allow one to model networks of identical nodes communicating through message broadcasts over a communication topology that does not change over the course of executions. The parameterized verification problem for these networks amounts to proving correctness of a property for any number of nodes, and on all executions. Dually speaking, this problem asks for the existence of an execution of the broadcast network that violates a given property. One specific instance of parameterized verification is the coverability problem, which asks whether there is an execution of the network in which some node reaches a given state of the broadcast protocol. This problem was proven to be undecidable by Delzanno, Sangnier and Zavattaro (CONCUR 2010). In the same paper, the authors also prove that, if we additionally assume that the underlying communication topology has a bound on the longest path, then the coverability problem becomes decidable. In this paper, we provide complexity results for the above problem and prove that the coverability problem for bounded-path topologies is 𝐅_ε₀-complete, where 𝐅_ε₀ is a class in the fast-growing hierarchy of complexity classes. This solves an open problem of Haase, Schmitz and Schnoebelen (LMCS, Vol 10, Issue 4).

Cite as

A. R. Balasubramanian. Complexity of Coverability in Bounded Path Broadcast Networks. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 35:1-35:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{balasubramanian:LIPIcs.FSTTCS.2021.35,
  author =	{Balasubramanian, A. R.},
  title =	{{Complexity of Coverability in Bounded Path Broadcast Networks}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{35:1--35:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.35},
  URN =		{urn:nbn:de:0030-drops-155466},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.35},
  annote =	{Keywords: Parameterized verification, Bounded path networks, Fast-growing complexity classes}
}
Document
On Classical Decidable Logics Extended with Percentage Quantifiers and Arithmetics

Authors: Bartosz Bednarczyk, Maja Orłowska, Anna Pacanowska, and Tony Tan


Abstract
During the last decades, a lot of effort was put into identifying decidable fragments of first-order logic. Such efforts gave birth, among others, to the two-variable fragment and the guarded fragment, depending on the type of restriction imposed on formulae from the language. Despite the success of the mentioned logics in areas like formal verification and knowledge representation, such first-order fragments are too weak to express even the simplest statistical constraints, required for modelling of influence networks or in statistical reasoning. In this work we investigate the extensions of these classical decidable logics with percentage quantifiers, specifying how frequently a formula is satisfied in the intended model. We show, surprisingly, that all the mentioned decidable fragments become undecidable under such an extension, sharpening the existing results in the literature. Our negative results are supplemented by decidability of the two-variable guarded fragment with even more expressive counting, namely Presburger constraints. Our results can be applied to infer decidability of various modal and description logics, e.g. Presburger Modal Logics with Converse or ALCI, with expressive cardinality constraints.

Cite as

Bartosz Bednarczyk, Maja Orłowska, Anna Pacanowska, and Tony Tan. On Classical Decidable Logics Extended with Percentage Quantifiers and Arithmetics. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 36:1-36:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{bednarczyk_et_al:LIPIcs.FSTTCS.2021.36,
  author =	{Bednarczyk, Bartosz and Or{\l}owska, Maja and Pacanowska, Anna and Tan, Tony},
  title =	{{On Classical Decidable Logics Extended with Percentage Quantifiers and Arithmetics}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{36:1--36:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.36},
  URN =		{urn:nbn:de:0030-drops-155478},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.36},
  annote =	{Keywords: statistical reasoning, knowledge representation, satisfiability, fragments of first-order logic, guarded fragment, two-variable fragment, (un)decidability}
}
Document
Branching Automata and Pomset Automata

Authors: Nicolas Bedon


Abstract
We compare, in terms of expressive power, two notions of automata recognizing finite N-free pomsets: branching automata by Lodaya and Weil [Lodaya and Weil, 1998; Lodaya and Weil, 1998; Lodaya and Weil, 2000; Lodaya and Weil, 2001] and pomset automata by Kappé, Brunet, Luttik, Silva and Zanasi [Kappé et al., 2018]. In the general case, they are equivalent. We also consider sub-classes of both kinds of automata that we prove equivalent.

Cite as

Nicolas Bedon. Branching Automata and Pomset Automata. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 37:1-37:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{bedon:LIPIcs.FSTTCS.2021.37,
  author =	{Bedon, Nicolas},
  title =	{{Branching Automata and Pomset Automata}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{37:1--37:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.37},
  URN =		{urn:nbn:de:0030-drops-155486},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.37},
  annote =	{Keywords: Finite N-free Pomsets, Finite Series-Parallel Pomsets, Branching Automata, Pomset Automata, Series-Parallel Rational Languages}
}
Document
History Determinism vs. Good for Gameness in Quantitative Automata

Authors: Udi Boker and Karoliina Lehtinen


Abstract
Automata models between determinism and nondeterminism/alternations can retain some of the algorithmic properties of deterministic automata while enjoying some of the expressiveness and succinctness of nondeterminism. We study three closely related such models - history determinism, good for gameness and determinisability by pruning - on quantitative automata. While in the Boolean setting, history determinism and good for gameness coincide, we show that this is no longer the case in the quantitative setting: good for gameness is broader than history determinism, and coincides with a relaxed version of it, defined with respect to thresholds. We further identify criteria in which history determinism, which is generally broader than determinisability by pruning, coincides with it, which we then apply to typical quantitative automata types. As a key application of good for games and history deterministic automata is synthesis, we clarify the relationship between the two notions and various quantitative synthesis problems. We show that good-for-games automata are central for "global" (classical) synthesis, while "local" (good-enough) synthesis reduces to deciding whether a nondeterministic automaton is history deterministic.

Cite as

Udi Boker and Karoliina Lehtinen. History Determinism vs. Good for Gameness in Quantitative Automata. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 38:1-38:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{boker_et_al:LIPIcs.FSTTCS.2021.38,
  author =	{Boker, Udi and Lehtinen, Karoliina},
  title =	{{History Determinism vs. Good for Gameness in Quantitative Automata}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{38:1--38:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.38},
  URN =		{urn:nbn:de:0030-drops-155495},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.38},
  annote =	{Keywords: Good for games, history determinism, alternation, quantitative automata}
}
Document
Local First-Order Logic with Two Data Values

Authors: Benedikt Bollig, Arnaud Sangnier, and Olivier Stietel


Abstract
We study first-order logic over unordered structures whose elements carry two data values from an infinite domain. Data values can be compared wrt. equality so that the formalism is suitable to specify the input-output behavior of various distributed algorithms. As the logic is undecidable in general, we introduce a family of local fragments that restrict quantification to neighborhoods of a given reference point. Our main result establishes decidability of the satisfiability problem for one of these non-trivial local fragments. On the other hand, already slightly more general local logics turn out to be undecidable. Altogether, we draw a landscape of formalisms that are suitable for the specification of systems with data and open up new avenues for future research.

Cite as

Benedikt Bollig, Arnaud Sangnier, and Olivier Stietel. Local First-Order Logic with Two Data Values. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 39:1-39:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{bollig_et_al:LIPIcs.FSTTCS.2021.39,
  author =	{Bollig, Benedikt and Sangnier, Arnaud and Stietel, Olivier},
  title =	{{Local First-Order Logic with Two Data Values}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{39:1--39:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.39},
  URN =		{urn:nbn:de:0030-drops-155508},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.39},
  annote =	{Keywords: first-order logic, data values, specification of distributed algorithms}
}
Document
Diagrammatic Polyhedral Algebra

Authors: Filippo Bonchi, Alessandro Di Giorgio, and Paweł Sobociński


Abstract
We extend the theory of Interacting Hopf algebras with an order primitive, and give a sound and complete axiomatisation of the prop of polyhedral cones. Next, we axiomatise an affine extension and prove soundness and completeness for the prop of polyhedra.

Cite as

Filippo Bonchi, Alessandro Di Giorgio, and Paweł Sobociński. Diagrammatic Polyhedral Algebra. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 40:1-40:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{bonchi_et_al:LIPIcs.FSTTCS.2021.40,
  author =	{Bonchi, Filippo and Di Giorgio, Alessandro and Soboci\'{n}ski, Pawe{\l}},
  title =	{{Diagrammatic Polyhedral Algebra}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{40:1--40:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.40},
  URN =		{urn:nbn:de:0030-drops-155511},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.40},
  annote =	{Keywords: String diagrams, Polyhedral cones, Polyhedra}
}
Document
From Local to Global Determinacy in Concurrent Graph Games

Authors: Benjamin Bordais, Patricia Bouyer, and Stéphane Le Roux


Abstract
In general, finite concurrent two-player reachability games are only determined in a weak sense: the supremum probability to win can be approached via stochastic strategies, but cannot be realized. We introduce a class of concurrent games that are determined in a much stronger sense, and in a way, it is the largest class with this property. To this end, we introduce the notion of local interaction at a state of a graph game: it is a game form whose outcomes (i.e., the entries of a table) are the next states, which depend on the concurrent actions of the players. By definition, a game form is determined iff it always yields games that are determined via deterministic strategies when used as a local interaction in a Nature-free, one-shot reachability game. We show that if all the local interactions of a graph game with Borel objective are determined game forms, the game itself is determined: if Nature does not play, one player has a winning strategy; if Nature plays, both players have deterministic strategies that maximize the probability to win. This constitutes a clear-cut separation: either a game form behaves poorly already when used alone with basic objectives, or it behaves well even when used together with other well-behaved game forms and complex objectives. Existing results for positional and finite-memory determinacy in turn-based games are extended this way to concurrent games with determined local interactions (CG-DLI).

Cite as

Benjamin Bordais, Patricia Bouyer, and Stéphane Le Roux. From Local to Global Determinacy in Concurrent Graph Games. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 41:1-41:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{bordais_et_al:LIPIcs.FSTTCS.2021.41,
  author =	{Bordais, Benjamin and Bouyer, Patricia and Le Roux, St\'{e}phane},
  title =	{{From Local to Global Determinacy in Concurrent Graph Games}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{41:1--41:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.41},
  URN =		{urn:nbn:de:0030-drops-155522},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.41},
  annote =	{Keywords: Concurrent games, Game forms, Local interaction}
}
Document
Quantitative Verification on Product Graphs of Small Treewidth

Authors: Krishnendu Chatterjee, Rasmus Ibsen-Jensen, and Andreas Pavlogiannis


Abstract
Product graphs arise naturally in formal verification and program analysis. For example, the analysis of two concurrent threads requires the product of two component control-flow graphs, and for language inclusion of deterministic automata the product of two automata is constructed. In many cases, the component graphs have constant treewidth, e.g., when the input contains control-flow graphs of programs. We consider the algorithmic analysis of products of two constant-treewidth graphs with respect to three classic specification languages, namely, (a) algebraic properties, (b) mean-payoff properties, and (c) initial credit for energy properties. Our main contributions are as follows. Consider a graph G that is the product of two constant-treewidth graphs of size n each. First, given an idempotent semiring, we present an algorithm that computes the semiring transitive closure of G in time Õ(n⁴). Since the output has size Θ(n⁴), our algorithm is optimal (up to polylog factors). Second, given a mean-payoff objective, we present an O(n³)-time algorithm for deciding whether the value of a starting state is non-negative, improving the previously known O(n⁴) bound. Third, given an initial credit for energy objective, we present an O(n⁵)-time algorithm for computing the minimum initial credit for all nodes of G, improving the previously known O(n⁸) bound. At the heart of our approach lies an algorithm for the efficient construction of strongly-balanced tree decompositions of constant-treewidth graphs. Given a constant-treewidth graph G' of n nodes and a positive integer λ, our algorithm constructs a binary tree decomposition of G' of width O(λ) with the property that the size of each subtree decreases geometrically with rate (1/2 + 2^{-λ}).
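For context, the generic algebraic-path computation that the first result speeds up looks as follows. This is the textbook Floyd-Warshall-style closure over an idempotent semiring, written as a minimal Python sketch with names of our choosing and assuming the semiring is 0-closed (iterating a cycle never improves a path value). It takes cubically many semiring operations in the number of nodes and does not exploit treewidth, so on a product of two size-n graphs it would cost on the order of n⁶ operations rather than the Õ(n⁴) achieved in the paper.

def semiring_closure(weights, one, plus, times):
    # Floyd-Warshall-style transitive closure over an idempotent semiring,
    # assuming it is 0-closed, as in the Boolean semiring (reachability)
    # or the tropical semiring with non-negative weights (shortest paths).
    n = len(weights)
    d = [[weights[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        d[i][i] = plus(d[i][i], one)          # account for the empty path
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = plus(d[i][j], times(d[i][k], d[k][j]))
    return d

# Shortest paths in the tropical semiring (min, +):
INF = float("inf")
w = [[INF, 2, INF],
     [INF, INF, 3],
     [1, INF, INF]]
print(semiring_closure(w, 0, min, lambda a, b: a + b))
# [[0, 2, 5], [4, 0, 3], [1, 3, 0]]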

Cite as

Krishnendu Chatterjee, Rasmus Ibsen-Jensen, and Andreas Pavlogiannis. Quantitative Verification on Product Graphs of Small Treewidth. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 42:1-42:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{chatterjee_et_al:LIPIcs.FSTTCS.2021.42,
  author =	{Chatterjee, Krishnendu and Ibsen-Jensen, Rasmus and Pavlogiannis, Andreas},
  title =	{{Quantitative Verification on Product Graphs of Small Treewidth}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{42:1--42:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.42},
  URN =		{urn:nbn:de:0030-drops-155533},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.42},
  annote =	{Keywords: graph algorithms, algebraic paths, mean-payoff, initial credit for energy}
}
Document
Synthesizing Computable Functions from Rational Specifications over Infinite Words

Authors: Emmanuel Filiot and Sarah Winter


Abstract
The synthesis problem asks to automatically generate an algorithm, if one exists, from a specification of correct input-output pairs. In this paper, we consider the synthesis of computable functions of infinite words, for a classical Turing computability notion over infinite inputs. We consider specifications which are rational relations of infinite words, i.e., specifications defined by non-deterministic parity transducers. We prove that the synthesis problem of computable functions from rational specifications is undecidable. We provide an incomplete but sound reduction to a parity game, such that if Eve wins the game, then the rational specification is realizable by a computable function. We prove that this function is even computable by a deterministic two-way transducer. We provide a sufficient condition under which the latter game reduction is complete. This entails the decidability of the synthesis problem of computable functions, which we prove to be ExpTime-complete, for a large subclass of rational specifications, namely deterministic rational specifications. This subclass contains the class of automatic relations over infinite words, a yardstick in reactive synthesis.
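The reduction itself is beyond a short sketch, but the basic game-solving step that such reductions ultimately rest on is attractor computation in a turn-based game graph. The following Python sketch (encoding and names are ours, and it assumes every state has at least one successor) computes the set of states from which a given player can force a visit to a target set; it is a standard building block, not the paper's construction, which targets parity games.

def attractor(states, edges, owner, target, player):
    # edges: dict state -> list of successor states; owner: dict state -> 0 or 1.
    # Returns the states from which `player` can force reaching `target`.
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in attr:
                continue
            succs = edges[s]
            if owner[s] == player:
                ok = any(t in attr for t in succs)   # player picks a good successor
            else:
                ok = all(t in attr for t in succs)   # opponent cannot escape
            if ok:
                attr.add(s)
                changed = True
    return attr

# Tiny example: player 0 (Eve, say) owns s0 and wants to reach {goal}.
states = ["s0", "s1", "goal"]
edges = {"s0": ["s1", "goal"], "s1": ["s1"], "goal": ["goal"]}
owner = {"s0": 0, "s1": 1, "goal": 1}
print(attractor(states, edges, owner, {"goal"}, 0))   # {'s0', 'goal'}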

Cite as

Emmanuel Filiot and Sarah Winter. Synthesizing Computable Functions from Rational Specifications over Infinite Words. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 43:1-43:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{filiot_et_al:LIPIcs.FSTTCS.2021.43,
  author =	{Filiot, Emmanuel and Winter, Sarah},
  title =	{{Synthesizing Computable Functions from Rational Specifications over Infinite Words}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{43:1--43:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.43},
  URN =		{urn:nbn:de:0030-drops-155541},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.43},
  annote =	{Keywords: uniformization, synthesis, transducers, continuity, computability}
}
Document
Confluence of Conditional Rewriting in Logic Form

Authors: Raúl Gutiérrez, Salvador Lucas, and Miguel Vítores


Abstract
We characterize conditional rewriting as satisfiability in a Herbrand-like model of terms where variables are also included as fresh constant symbols extending the original signature. Confluence of conditional rewriting and joinability of conditional critical pairs are characterized similarly. Joinability of critical pairs is then translated into combinations of (in)feasibility problems which can be efficiently handled by a number of automatic tools. This permits a more efficient use of standard results for proving confluence of conditional term rewriting systems, most of them relying on auxiliary proofs of joinability of conditional critical pairs, perhaps with additional syntactical and (operational) termination requirements on the system. Our approach has been implemented in a new system: CONFident. Its ability to (dis)prove confluence of conditional term rewriting systems is witnessed by benchmarks comparing our tool with existing tools for similar purposes.

Cite as

Raúl Gutiérrez, Salvador Lucas, and Miguel Vítores. Confluence of Conditional Rewriting in Logic Form. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 44:1-44:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{gutierrez_et_al:LIPIcs.FSTTCS.2021.44,
  author =	{Guti\'{e}rrez, Ra\'{u}l and Lucas, Salvador and V{\'\i}tores, Miguel},
  title =	{{Confluence of Conditional Rewriting in Logic Form}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{44:1--44:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.44},
  URN =		{urn:nbn:de:0030-drops-155553},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.44},
  annote =	{Keywords: Confluence, Program analysis, Rewriting-based systems}
}
Document
On the Expressive Equivalence of TPTL in the Pointwise and Continuous Semantics

Authors: Raveendra Holla, Nabarun Deka, and Deepak D'Souza


Abstract
We consider a first-order logic with linear constraints interpreted in a pointwise and continuous manner over timed words. We show that the two interpretations of this logic coincide in terms of expressiveness, via an effective transformation of sentences from one logic to the other. As a consequence it follows that the pointwise and continuous semantics of the logic TPTL with the since operator also coincide. Along the way we exhibit a useful normal form for sentences in these logics.

Cite as

Raveendra Holla, Nabarun Deka, and Deepak D'Souza. On the Expressive Equivalence of TPTL in the Pointwise and Continuous Semantics. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 45:1-45:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{holla_et_al:LIPIcs.FSTTCS.2021.45,
  author =	{Holla, Raveendra and Deka, Nabarun and D'Souza, Deepak},
  title =	{{On the Expressive Equivalence of TPTL in the Pointwise and Continuous Semantics}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{45:1--45:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.45},
  URN =		{urn:nbn:de:0030-drops-155562},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.45},
  annote =	{Keywords: Real-Time Logics, First-Order Logics}
}
Document
Separating Regular Languages over Infinite Words with Respect to the Wagner Hierarchy

Authors: Christopher Hugenroth


Abstract
We investigate the separation problem for regular ω-languages with respect to the Wagner hierarchy where the input languages are given as deterministic Muller automata (DMA). We show that a minimal separating DMA can be computed in exponential time and that some languages require separators of exponential size. Further, we show that in this setting it can be decided in polynomial time whether a separator exists on a certain level of the Wagner hierarchy and that emptiness of the intersection of two languages given by DMAs can be decided in polynomial time. Finally, we show that separation can also be decided in polynomial time if the input languages are given as deterministic parity automata.
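As a small, self-contained building block related to the last claim, the following Python sketch decides nonemptiness of a single parity automaton under the min-even convention, with the transition graph abstracting away input letters. It implements the classical reachable-even-cycle test with names of our choosing; it is not the separation algorithm of the paper, which concerns two input automata and the Wagner hierarchy.

from collections import deque

def reachable(start, edges, allowed=None):
    # States reachable from `start`, optionally restricted to the set `allowed`.
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for t in edges.get(s, []):
            if (allowed is None or t in allowed) and t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

def parity_nonempty(states, edges, priority, init):
    # Nonempty iff some reachable cycle has even minimal priority, i.e. there is
    # a reachable state s of even priority p that can return to itself using
    # only states of priority >= p.
    reach_init = reachable(init, edges)
    for s in states:
        p = priority[s]
        if p % 2 != 0 or s not in reach_init:
            continue
        allowed = {t for t in states if priority[t] >= p}
        if any(s in reachable(t, edges, allowed) for t in edges.get(s, []) if t in allowed):
            return True
    return False

states = ["q0", "q1"]
edges = {"q0": ["q1"], "q1": ["q0"]}
priority = {"q0": 1, "q1": 2}
print(parity_nonempty(states, edges, priority, "q0"))   # False: the only cycle has odd minimal priority
priority = {"q0": 2, "q1": 2}
print(parity_nonempty(states, edges, priority, "q0"))   # True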

Cite as

Christopher Hugenroth. Separating Regular Languages over Infinite Words with Respect to the Wagner Hierarchy. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 46:1-46:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{hugenroth:LIPIcs.FSTTCS.2021.46,
  author =	{Hugenroth, Christopher},
  title =	{{Separating Regular Languages over Infinite Words with Respect to the Wagner Hierarchy}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{46:1--46:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.46},
  URN =		{urn:nbn:de:0030-drops-155574},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.46},
  annote =	{Keywords: Separation, Regular, Wagner Hierarchy, Muller Automata, Parity Automata, Product Automata, Membership}
}
Document
Normal Sequences with Non-Maximal Automatic Complexity

Authors: Liam Jordon and Philippe Moser


Abstract
This paper examines Automatic Complexity, a complexity notion introduced by Shallit and Wang in 2001 [Jeffrey O. Shallit and Ming-wei Wang, 2001]. We demonstrate that there exists a normal sequence T such that I(T) = 0 and S(T) ≤ 1/2, where I(T) and S(T) are the lower and upper automatic complexity rates of T respectively. We furthermore show that there exists a Champernowne sequence C, i.e. a sequence formed by concatenating all strings of length one followed by concatenating all strings of length two and so on, such that S(C) ≤ 2/3.
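For concreteness, a binary Champernowne-style sequence as described above can be generated as follows; this is a minimal Python sketch, with the enumeration within each length taken in lexicographic order and the helper name chosen by us.

from itertools import product

def champernowne_prefix(max_len):
    # Concatenate all binary strings of length 1, then all of length 2, and so on,
    # up to length max_len, each length block in lexicographic order.
    return "".join("".join(w) for n in range(1, max_len + 1)
                   for w in product("01", repeat=n))

print(champernowne_prefix(3))
# "0" "1", then "00" "01" "10" "11", then the eight length-3 strings, concatenated.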

Cite as

Liam Jordon and Philippe Moser. Normal Sequences with Non-Maximal Automatic Complexity. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 47:1-47:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{jordon_et_al:LIPIcs.FSTTCS.2021.47,
  author =	{Jordon, Liam and Moser, Philippe},
  title =	{{Normal Sequences with Non-Maximal Automatic Complexity}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{47:1--47:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.47},
  URN =		{urn:nbn:de:0030-drops-155580},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.47},
  annote =	{Keywords: Automatic Complexity, finite-state complexity, normal sequences, Champernowne sequences, de Bruijn strings, Kolmogorov complexity}
}
Document
Approximate Bisimulation Minimisation

Authors: Stefan Kiefer and Qiyi Tang


Abstract
We propose polynomial-time algorithms to minimise labelled Markov chains whose transition probabilities are not known exactly, have been perturbed, or can only be obtained by sampling. Our algorithms are based on a new notion of an approximate bisimulation quotient, obtained by lumping together states that are exactly bisimilar in a slightly perturbed system. We present experiments that show that our algorithms are able to recover the structure of the bisimulation quotient of the unperturbed system.
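To make the lumping idea concrete, here is a naive Python sketch (explicitly not the paper's polynomial-time algorithm): starting from the partition by label, blocks are split greedily whenever two states' aggregated transition probabilities into the current blocks differ by more than a tolerance eps. All names and the greedy clustering heuristic are our own choices for illustration.

def approx_lump(states, label, trans, eps):
    # trans[s] is a dict successor -> probability.  Refine the partition by label
    # until, within each block, all states agree up to eps (componentwise) on the
    # total probability of moving into each current block.  Order-dependent heuristic.
    blocks = {}
    for s in states:
        blocks.setdefault(label[s], []).append(s)
    partition = list(blocks.values())
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        def signature(s):
            sig = [0.0] * len(partition)
            for t, p in trans[s].items():
                sig[block_of[t]] += p
            return sig
        new_partition = []
        for b in partition:
            groups = []   # greedily attach each state to the first close-enough representative
            for s in b:
                for g in groups:
                    if all(abs(x - y) <= eps for x, y in zip(signature(s), signature(g[0]))):
                        g.append(s)
                        break
                else:
                    groups.append([s])
            if len(groups) > 1:
                changed = True
            new_partition.extend(groups)
        partition = new_partition
    return partition

# Two states whose transition probabilities differ only by a small perturbation get lumped:
states = ["a", "b", "t"]
label = {"a": "x", "b": "x", "t": "y"}
trans = {"a": {"t": 0.5, "a": 0.5}, "b": {"t": 0.51, "b": 0.49}, "t": {"t": 1.0}}
print(approx_lump(states, label, trans, eps=0.05))   # [['a', 'b'], ['t']]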

Cite as

Stefan Kiefer and Qiyi Tang. Approximate Bisimulation Minimisation. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 48:1-48:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kiefer_et_al:LIPIcs.FSTTCS.2021.48,
  author =	{Kiefer, Stefan and Tang, Qiyi},
  title =	{{Approximate Bisimulation Minimisation}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{48:1--48:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.48},
  URN =		{urn:nbn:de:0030-drops-155599},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.48},
  annote =	{Keywords: Markov chains, Behavioural metrics, Bisimulation}
}
Document
Simple Derivation Systems for Proving Sufficient Completeness of Non-Terminating Term Rewriting Systems

Authors: Kentaro Kikuchi and Takahito Aoto


Abstract
A term rewriting system (TRS) is said to be sufficiently complete when each function yields some value for any input. Proof methods for sufficient completeness of terminating TRSs have been well studied. In this paper, we introduce a simple derivation system for proving sufficient completeness of possibly non-terminating TRSs. The derivation system consists of rules to manipulate a set of guarded terms, and sufficient completeness of a TRS holds if there exists a successful derivation for each function symbol. We also show that variations of the derivation system are useful for proving special cases of local sufficient completeness of TRSs, which is a generalised notion of sufficient completeness.

Cite as

Kentaro Kikuchi and Takahito Aoto. Simple Derivation Systems for Proving Sufficient Completeness of Non-Terminating Term Rewriting Systems. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 49:1-49:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kikuchi_et_al:LIPIcs.FSTTCS.2021.49,
  author =	{Kikuchi, Kentaro and Aoto, Takahito},
  title =	{{Simple Derivation Systems for Proving Sufficient Completeness of Non-Terminating Term Rewriting Systems}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{49:1--49:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.49},
  URN =		{urn:nbn:de:0030-drops-155602},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.49},
  annote =	{Keywords: Term rewriting, Sufficient completeness, Local sufficient completeness, Non-termination, Derivation rule, Well-founded induction schema}
}
Document
Parikh Images of Register Automata

Authors: Sławomir Lasota and Mohnish Pattathurajan


Abstract
As has been recently shown, Parikh images of languages of nondeterministic one-register automata are rational (but not semilinear in general); it is still open whether the property extends to all register automata. We identify a subclass of nondeterministic register automata, called hierarchical register automata (HRA), with the following two properties: every rational language is recognised by an HRA; and the Parikh image of the language of every HRA is rational. In consequence, these two properties make HRAs an automata-theoretic characterisation of the languages of nondeterministic register automata with rational Parikh images.

Cite as

Sławomir Lasota and Mohnish Pattathurajan. Parikh Images of Register Automata. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 50:1-50:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{lasota_et_al:LIPIcs.FSTTCS.2021.50,
  author =	{Lasota, S{\l}awomir and Pattathurajan, Mohnish},
  title =	{{Parikh Images of Register Automata}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{50:1--50:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.50},
  URN =		{urn:nbn:de:0030-drops-155613},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.50},
  annote =	{Keywords: Sets with atoms, register automata, Parikh images, rational sets, hierarchical register automata}
}
Document
Concrete Categorical Model of a Quantum Circuit Description Language with Measurement

Authors: Dongho Lee, Valentin Perrelle, Benoît Valiron, and Zhaowei Xu


Abstract
In this paper, we introduce dynamic lifting to a quantum circuit-description language, following the Proto-Quipper language approach. Dynamic lifting allows programs to transfer the result of measuring quantum data (qubits) into classical data (booleans). We propose a type system and an operational semantics for the language, and we state safety properties. Next, we introduce a concrete categorical semantics for the proposed language, basing our approach on a recent model from Rios and Selinger for Proto-Quipper-M. Our approach is to construct, on top of a concrete category of circuits with measurements, a Kleisli category that captures, as a side effect, the action of retrieving classical content out of a quantum memory. We then show a soundness result for this semantics.

Cite as

Dongho Lee, Valentin Perrelle, Benoît Valiron, and Zhaowei Xu. Concrete Categorical Model of a Quantum Circuit Description Language with Measurement. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 51:1-51:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{lee_et_al:LIPIcs.FSTTCS.2021.51,
  author =	{Lee, Dongho and Perrelle, Valentin and Valiron, Beno\^{i}t and Xu, Zhaowei},
  title =	{{Concrete Categorical Model of a Quantum Circuit Description Language with Measurement}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{51:1--51:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.51},
  URN =		{urn:nbn:de:0030-drops-155627},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.51},
  annote =	{Keywords: Categorical semantics, Operational semantics, Quantum circuit description language}
}
Document
Linear-Time Temporal Logic with Team Semantics: Expressivity and Complexity

Authors: Jonni Virtema, Jana Hofmann, Bernd Finkbeiner, Juha Kontinen, and Fan Yang


Abstract
We study the expressivity and complexity of model checking of linear temporal logic with team semantics (TeamLTL). TeamLTL, despite being a purely modal logic, is capable of defining hyperproperties, i.e., properties which relate multiple execution traces. TeamLTL has been introduced quite recently and only a few results are known regarding its expressivity and its model checking problem. We relate the expressivity of TeamLTL to logics for hyperproperties obtained by extending LTL with trace and propositional quantifiers (HyperLTL and HyperQPTL). By doing so, we obtain a number of model checking results for TeamLTL and identify its undecidability frontier. In particular, we show decidability of model checking of the so-called left-flat fragment of any downward-closed TeamLTL extension. Moreover, we establish that the model checking problem of TeamLTL with Boolean disjunction and inclusion atoms is undecidable.

Cite as

Jonni Virtema, Jana Hofmann, Bernd Finkbeiner, Juha Kontinen, and Fan Yang. Linear-Time Temporal Logic with Team Semantics: Expressivity and Complexity. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 52:1-52:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{virtema_et_al:LIPIcs.FSTTCS.2021.52,
  author =	{Virtema, Jonni and Hofmann, Jana and Finkbeiner, Bernd and Kontinen, Juha and Yang, Fan},
  title =	{{Linear-Time Temporal Logic with Team Semantics: Expressivity and Complexity}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{52:1--52:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.52},
  URN =		{urn:nbn:de:0030-drops-155634},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.52},
  annote =	{Keywords: Linear temporal logic, Hyperproperties, Model Checking, Expressivity}
}
