Dagstuhl Seminar 26221
Knowledge Compilation in Artificial Intelligence, Databases, and Formal Methods
(May 25 – May 29, 2026)
Organizers
- Marcelo Arenas (PUC - Santiago de Chile, CL)
- Supratik Chakraborty (Indian Institute of Technology Bombay - Mumbai, IN)
- YooJung Choi (Arizona State University - Tempe, US)
- Stefan Mengel (CNRS, CRIL - Lens, FR)
Contact
- Andreas Dolzmann (for scientific matters)
- Christina Schwarz (for administrative matters)
Knowledge compilation was conceived as a sub-area of (symbolic) artificial intelligence, as a preprocessing approach to hard reasoning problems. Specifically, in settings where part of the input to a problem, generally encoded as a knowledge base, has to be queried repeatedly under resource constraints, the vision is to preprocess, or compile, this input into a form that allows efficient reasoning queries. A typical application area is configuration exploration, where the whole configuration space is compiled into a format that lets the user navigate it efficiently. It became clear to early researchers in the area that the central question was that of suitable data structures and representations to serve as the language for the compilation target. Thus, for roughly the last 25 years, knowledge compilation has mostly been concerned with the systematic study of encoding different forms of knowledge or data, e.g. Boolean knowledge bases, probability spaces, or representations of preferences.
It became clear early on that, across the spectrum of knowledge representation forms, there is an inherent tension between succinctness and computational efficiency of reasoning. On the one hand, there are succinct languages, like unrestricted Boolean formulas and circuits, that are generally hard to reason about. At the other end of the spectrum, one can encode knowledge as a table of all its values, which is extremely verbose but makes most reasoning problems very easy. Consequently, starting with the groundbreaking work of Darwiche and Marquis (Journal of Artificial Intelligence Research, 2002), research on knowledge compilation has aimed to systematically investigate points between these extremes by proposing and analyzing innovative data structures.
The most well-known of these are Ordered Binary Decision Diagrams (OBDDs) and circuits in Decomposable Negation Normal Form (DNNF), although there are many variants of these representation forms that have been studied intensively.
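To make the succinctness/tractability trade-off concrete, the sketch below builds a reduced OBDD for a small Boolean function via Shannon expansion and then counts its models in time linear in the diagram's size, a query that is intractable for unrestricted formulas in general. This is a minimal illustration, not part of the seminar material; the example function, variable order, and helper names are invented here.

```python
# A minimal OBDD sketch: internal nodes are tuples (var, low, high),
# terminals are the Python booleans True and False.

ORDER = ["x", "y", "z"]          # the fixed variable order

def f(a):                        # example function: (x AND y) OR z
    return (a["x"] and a["y"]) or a["z"]

unique = {}                      # unique table: enforces structural sharing

def mk(v, lo, hi):
    if lo == hi:                 # reduction rule: drop redundant tests
        return lo
    return unique.setdefault((v, lo, hi), (v, lo, hi))

def build(i, a):
    """Shannon-expand f along ORDER, producing a reduced OBDD."""
    if i == len(ORDER):
        return f(a)
    v = ORDER[i]
    return mk(v,
              build(i + 1, {**a, v: False}),
              build(i + 1, {**a, v: True}))

def count(node, i=0):
    """Model counting in time linear in the OBDD size."""
    if node is True:
        return 2 ** (len(ORDER) - i)      # remaining variables are free
    if node is False:
        return 0
    v, lo, hi = node
    j = ORDER.index(v)
    return 2 ** (j - i) * (count(lo, j + 1) + count(hi, j + 1))

root = build(0, {})
print(count(root))  # 5 models of (x AND y) OR z
```

Note how the unique table makes the (z, False, True) subgraph shared between both branches of the root: this sharing is exactly what can make decision diagrams exponentially more succinct than truth tables.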
Since the trade-off between succinctness of data structures and efficiency of reasoning over them is important in many areas beyond symbolic artificial intelligence, data structures similar to those studied in knowledge compilation have been studied in other fields like databases, formal methods and theoretical computer science, either independently or adopted from the knowledge compilation literature. Thus, knowledge compilation has, over the last few decades, become a considerably larger field with applications in many areas that, while loosely connected by their use of similar data structures, are otherwise quite disparate.
The aim of this Dagstuhl Seminar is to bring together researchers from different subfields of computer science who study or use techniques from knowledge compilation. In particular, we will discuss the following related sub-areas:
- Core Questions of Knowledge Compilation: As discussed above, the core questions of knowledge compilation concern whether and how different forms of knowledge can be compiled into specific representations such as OBDDs and the various variants of DNNF. These questions will be addressed from both a theoretical and a practical perspective.
- Machine Learning: Theory and techniques from knowledge compilation have enabled several recent developments in machine learning (ML). For example, probabilistic circuits (PCs), introduced as a unifying framework for many tractable probabilistic models, are circuit languages representing probability distributions; their trade-off between succinctness and tractable probabilistic reasoning can be characterized by circuit properties adopted directly from knowledge compilation. Beyond this, knowledge compilation plays an increasingly important role in explainable AI, an area that studies notions of explanation which help understand why a machine learning model makes a particular decision.
- Database Theory: Techniques from knowledge compilation have significantly contributed to advances in query answering algorithms, in particular for the enumeration of query answers. These algorithms often rely on succinct, factorized data representations similar to those studied in knowledge compilation. Such representations are also used to develop efficient algorithms that compute aggregates and complex analytics in a single pass over the factorized representation. Additionally, knowledge compilation has proven valuable in query answering over probabilistic databases, where uncertainty in the data makes computing the probability of query results computationally hard.
- Formal Methods: A central endeavor in formal methods is to design practically efficient algorithms for reasoning about formulas in various logical theories arising from verification, synthesis, certification, and related tasks. The role that the representation of formulas plays in the design of efficient reasoning algorithms has long been recognized; an early example is the use of Reduced Ordered Binary Decision Diagrams (ROBDDs) in formal verification and design. Knowledge compilation has therefore always been of interest to the formal methods community. In recent years, several interesting connections between knowledge compilation and important problems in formal methods have been discovered and explored.
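As a toy illustration of the probabilistic circuits mentioned in the machine-learning item above, the sketch below evaluates a tiny smooth and decomposable circuit: a mixture (sum node) of two products of Bernoulli leaves over binary variables A and B. These two properties make marginalization tractable: marginalizing a variable amounts to setting its leaf to 1. All variable names and parameters here are invented for the example.

```python
# A toy smooth, decomposable probabilistic circuit over binary A and B.

def leaf(var, p, ev):
    """Bernoulli leaf with parameter p; ev[var] is 0, 1, or None
    (None means the variable is marginalized out)."""
    if ev[var] is None:
        return 1.0            # sum over both values of a Bernoulli = 1
    return p if ev[var] == 1 else 1.0 - p

def circuit(ev):
    # Two product nodes with disjoint scopes (decomposability) ...
    prod1 = leaf("A", 0.9, ev) * leaf("B", 0.2, ev)
    prod2 = leaf("A", 0.1, ev) * leaf("B", 0.7, ev)
    # ... combined by a sum node whose children share scope (smoothness).
    return 0.6 * prod1 + 0.4 * prod2

print(circuit({"A": 1, "B": 1}))       # full evidence: P(A=1, B=1)
print(circuit({"A": 1, "B": None}))    # marginal: P(A=1) = 0.58
print(circuit({"A": None, "B": None})) # normalization check: 1.0
```

The same bottom-up pass answers full-evidence queries, marginals, and the normalization check; only the leaf values change.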
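To illustrate the factorized representations mentioned in the database-theory item, the following sketch groups two hypothetical relations R(a, b) and S(b, c) by their join value and computes COUNT(*) and SUM(c) over the join in a single pass over this factorization, without materializing the join result. The relation contents and names are made up for the example.

```python
from collections import defaultdict

# Hypothetical relations R(a, b) and S(b, c).
R = [(1, "x"), (2, "x"), (3, "y")]
S = [("x", 10), ("x", 20), ("y", 30)]

# Factorized view of the natural join of R and S: a union over join
# values b of the product (a-values with b) x (c-values with b).
a_by_b = defaultdict(list)
c_by_b = defaultdict(list)
for a, b in R:
    a_by_b[b].append(a)
for b, c in S:
    c_by_b[b].append(c)

# Aggregates in one pass over the factorization, join never materialized:
count = sum(len(a_by_b[b]) * len(c_by_b[b]) for b in a_by_b if b in c_by_b)
total = sum(len(a_by_b[b]) * sum(c_by_b[b]) for b in a_by_b if b in c_by_b)
print(count, total)  # 5 90
```

The materialized join here has 5 tuples, but the factorized pass touches each input tuple only once; on relations with many matching pairs, this gap can be exponential in the query size.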

Classification
- Artificial Intelligence
- Databases
- Logic in Computer Science
Keywords
- knowledge compilation
- explainable AI
- database theory
- automated reasoning
- tractable probabilistic reasoning