Dagstuhl Seminar 25061
Logic and Neural Networks
(Feb 02 – Feb 07, 2025)
Organizers
- Vaishak Belle (University of Edinburgh, GB)
- Michael Benedikt (University of Oxford, GB)
- Dana Drachsler Cohen (Technion - Haifa, IL)
- Daniel Neider (TU Dortmund, DE)
Contact
- Marsha Kleinbauer (for scientific matters)
- Jutka Gasiorowski (for administrative matters)
Shared Documents
- Dagstuhl Materials Page (Use personal credentials as created in DOOR to log in)
Summary
Logic and learning are central to Computer Science, and in particular to AI-related research. Already in his 1950 paper “Computing Machinery and Intelligence” [1], Alan Turing envisioned a combination of statistical (ab initio) machine learning and an “unemotional” symbolic language such as logic. The combination of logic and learning has received new impetus from the spectacular success of deep learning systems. As part of these developments, several key roles for logical rules have been identified: as a means of expressing safety properties that a network should satisfy; as a way of providing “weak supervision” that can be used during training to augment or substitute for direct supervision; and as a means of explaining properties of networks or the decisions they produce. With the identification of these roles, a number of core challenges have arisen: verifying logic-based properties of networks; enforcing logic-based properties during training; utilizing logic-based properties in tandem with traditional supervision to train networks; and producing logic-based explanations of neural network outcomes. Clearly, there is significant synergy among these challenges. The goal of this seminar was to bring together researchers from the various communities concerned with utilizing constraints in deep learning, and to create bridges between them via the exchange of ideas.
Design of the Seminar
The seminar focused on a set of interrelated topics connected to logic and neural networks:
- Verifying logical constraints on neural networks. Despite their success in various tasks, neural networks have been shown to be susceptible to attacks (e.g., adversarial attacks [2]) and prone to biased decisions (e.g., in Amazon’s systems). To understand the resilience of networks to these phenomena, it is crucial to prove that they satisfy safety properties such as local robustness and fairness. These properties are captured via logical constraints, defined either on specific inputs in a given dataset (e.g., local robustness) or universally over all inputs (e.g., fairness and global robustness). Many works have proposed verification systems for these properties [3], typically leveraging constraint solvers [4, 5] or static analysis [6, 7]; a minimal sketch of an interval-based robustness check appears after this list. The constraints themselves can stem from several motivations: security and safety, fairness, or interpretability. Despite active research on verifying these properties, existing approaches still do not scale to very deep networks, which are ubiquitous in practice. We believe it is feasible to push the analysis capabilities forward so that they apply to large and deep networks. This will have an impact both in academia and in industry, since it will increase users’ trust in practical neural-network-based systems.
- Enforcement of constraints on neural networks. Logical rules can encode important safety properties and prior knowledge that should be incorporated into the training of neural networks. For example, in a manufacturing setting, we may wish to require that an actuator for a robotic arm does not exceed some threshold (e.g., one that would cause the arm to move at a hazardous speed). Another example is a self-driving car, whose controller should be known to operate within a predefined set of constraints (e.g., the car should always come to a complete stop when facing a human). In such safety-critical domains, machine learning solutions must be guaranteed to operate within boundaries specified by logical, arithmetic, and geometric rules. Techniques include specialized loss functions [8], which can be augmented with additional layers within the neural architecture. These approaches compile the constraints into the loss function of the training algorithm by quantifying the extent to which the output of the network violates the constraints; a minimal sketch of such a constraint-penalty loss appears after this list. This is appealing because logical constraints are easy to elicit from people. However, the network is trained to minimize the loss function – which combines both data and constraints – rather than to guarantee satisfaction of the domain constraints, so enforcing constraints remains an important open problem.
- Training using logic and traditional supervision. A major impetus for a synthesis of logic and learning is the paucity of supervision. In many regimes explicit supervision is extremely limited, and synthetic data generation may be infeasible. A promising way to augment supervision is through the use of external knowledge; this approach has been used in domains as distinct as scene recognition [9] and parsing [10]. Approaches that integrate constraint-based supervision with traditional supervision have arisen simultaneously in many areas of artificial intelligence; a minimal sketch of such a combination appears after this list. While the focus of our seminar was constraints expressed in general-purpose logics, we also looked for connections with constraint-based approaches to learning from other areas, such as physics.
- Explaining neural networks via logic. A critical issue with black-box models, particularly neural networks, is understanding their decision boundaries. An important strategy employed in recent years is to extract decision trees, logical rules, and other deterministic machines from these networks [11, 12, 13, 14]. This can be seen as a strategy for post-hoc explanation [15]. Most rule-extraction methods use template-based approaches to explore patterns in pre-trained models, with a focus on characteristics and properties of entities such as people, places, or things. However, template-based approaches are sensitive to template formulation, highlighting the need to explore alternative strategies for probing pre-trained models; these are often based on a combination of techniques from Bayesian Structure Learning, Inductive Logic Programming [16], and Distillation [15] (a minimal distillation sketch appears after this list). Explanation and approximation via logics have also arisen in Graph Neural Networks [17]. An interesting phenomenon is that one of the languages used for explanation is Datalog, which is also prominent in the verification community. The ability to approximate networks by logics is closely related to attempts to understand the expressiveness of neural approaches in terms of logics [18, 19].
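The following is a minimal, illustrative sketch (in Python with NumPy) of the kind of interval-based analysis mentioned in the verification item above: it propagates an L-infinity ball around an input through a tiny ReLU network and checks whether the target class provably wins. The network weights, the epsilon, and the function names are invented for illustration; real verifiers such as those in [4, 5, 6, 7] are far more sophisticated.

```python
# Minimal sketch of interval bound propagation for local robustness.
# All weights, inputs, and epsilon values are illustrative assumptions.
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate an interval [lo, hi] through x -> W @ x + b.
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Tiny two-layer network with two output classes (hypothetical weights).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -0.5])
W2, b2 = np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([0.0, 0.0])

def certify_local_robustness(x, eps, target_class):
    lo, hi = x - eps, x + eps                 # L_inf ball around the input
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = interval_relu(lo, hi)
    lo, hi = interval_affine(lo, hi, W2, b2)
    # Robust if the target logit's lower bound beats every other logit's upper bound.
    others = [i for i in range(len(lo)) if i != target_class]
    return all(lo[target_class] > hi[i] for i in others)

print(certify_local_robustness(np.array([1.0, 0.2]), eps=0.05, target_class=0))
```

Interval bounds are coarse; the tools cited above tighten them with solver calls or more precise abstract domains, which is exactly where the scalability challenges noted in the verification item arise.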
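Below is a minimal sketch, assuming PyTorch, of compiling a simple arithmetic constraint (an output that must stay below a threshold, as in the robotic-arm example) into the training loss via a penalty term. The network, threshold, and weighting are hypothetical; published approaches such as the semantic loss [8] handle genuinely logical constraints over discrete outputs.

```python
# Minimal sketch: augment the data loss with a term quantifying constraint violation.
# The network, THRESHOLD, and LAMBDA are illustrative assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
mse = nn.MSELoss()

THRESHOLD = 1.0   # e.g., an actuator speed that must not be exceeded
LAMBDA = 10.0     # weight trading off data fit against constraint violation

def constraint_penalty(pred):
    # Hinge-style penalty: zero when pred <= THRESHOLD, grows linearly otherwise.
    return torch.relu(pred - THRESHOLD).mean()

def training_step(x, y):
    optimizer.zero_grad()
    pred = net(x)
    # The constraint is "compiled" into the loss: the data term is augmented
    # with a term measuring how strongly the output violates the constraint.
    loss = mse(pred, y) + LAMBDA * constraint_penalty(pred)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random data (for illustration only).
x = torch.randn(32, 4)
y = torch.rand(32, 1)  # targets within [0, 1], consistent with the threshold
print(training_step(x, y))
```

As noted in the enforcement item, such a penalty only discourages violations during training; it does not guarantee that the constraint holds at inference time.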
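The next sketch, again assuming PyTorch, illustrates combining scarce direct supervision with weak supervision from a logical rule: labeled examples contribute a cross-entropy term, while unlabeled examples contribute a penalty whenever the rule's conclusion is assigned low probability. The rule ("if x[0] > 0 the label is 1"), the data, and the weighting are invented for illustration and do not correspond to any specific cited system.

```python
# Minimal sketch: combine a supervised loss on few labeled points with a
# rule-based penalty on many unlabeled points. All details are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

def rule_penalty(x, logits):
    # Probability the network assigns to class 1.
    p1 = F.softmax(logits, dim=-1)[:, 1]
    applies = (x[:, 0] > 0).float()          # where the rule's premise holds
    return (applies * (1.0 - p1)).mean()     # penalize low P(class 1) there

def training_step(x_lab, y_lab, x_unlab):
    optimizer.zero_grad()
    sup = F.cross_entropy(net(x_lab), y_lab)       # few labeled examples
    weak = rule_penalty(x_unlab, net(x_unlab))     # many unlabeled examples
    loss = sup + 0.5 * weak
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage: 8 labeled points, 256 unlabeled points (random, for illustration).
x_lab, y_lab = torch.randn(8, 2), torch.randint(0, 2, (8,))
x_unlab = torch.randn(256, 2)
print(training_step(x_lab, y_lab, x_unlab))
```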
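Finally, a minimal sketch of post-hoc explanation by distillation, assuming scikit-learn: an interpretable surrogate (a shallow decision tree) is fitted to the predictions of a trained "black-box" network, and its fidelity to that network is measured. The MLP stands in for an arbitrary neural model and the data are synthetic; rule-extraction systems such as [11, 12, 13, 14] go well beyond this.

```python
# Minimal sketch of distillation-based explanation: fit a shallow decision tree
# to a trained network's predictions. Data and models are illustrative.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
black_box.fit(X, y)

# Query the network and train the surrogate on its answers, so the tree
# approximates the network's decision boundary, not the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the network it explains.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))
```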
Summary of Seminar Activities
The seminar was attended by 38 researchers from various communities, including logic, formal verification, machine learning, deep learning, program synthesis, graph neural networks, expressiveness, explainability, theorem proving, neural-symbolic learning, and databases. Participants ranged from graduate students and post-doctoral researchers to faculty members and industry experts. The seminar was conducted through talks and breakout sessions, with breaks for discussion among the attendees. Overall, there were 19 talks and two main breakout sessions. The talks included a range of presentations on recent advances in the interrelated fields of logic and neural networks, as previously discussed. Some talks also provided broader overviews of related areas, such as formal verification of neural networks and program synthesis. The first breakout session was divided into four groups based on the participants’ main areas of research: verification, expressivity, explainability, and learning with background knowledge and constraints. Each group discussed several topic-specific questions: (1) the open challenges, (2) the value proposition, (3) potential “killer applications” or teaching curricula, and (4) drafting a concise manifesto. The second breakout session was divided into three groups (based on participants’ choices), each focused on integrating interrelated topics: (1) verification and constraints, (2) explainability, expressiveness, and constraints, and (3) verification and explainability. Each group examined several issues concerning the interplay of these areas, including: (1) prior work, (2) open challenges, (3) real-world motivations and applications, and (4) short- and long-term project ideas.
Conclusion
We consider the seminar a success and believe it achieved several goals that will help strengthen connections among the fields of neural-network verification, logic, explainability, and expressivity. These include: (1) fostering links among the participating researchers, (2) generating a set of open challenges, goals, and future research directions, and (3) providing a more unified view of current approaches to these interrelated topics. We also hope the seminar will catalyze the further development of benchmarks for applying logic in neural networks. Finally, the seminar’s format – featuring talks, ample time for discussion, and breakout sessions – received positive feedback from participants.
References
- A. M. Turing: Computing machinery and intelligence. In Mind, vol. LIX (1950).
- Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, Rob Fergus: Intriguing properties of neural networks. In ICLR (2014).
- Linyi Li, Tao Xie, Bo Li: SoK: Certified robustness for deep neural networks. In IEEE Symposium on Security and Privacy (2023).
- Guy Katz, Clark W. Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer: Reluplex: a calculus for reasoning about deep neural networks. In Formal Methods in System Design (2022).
- Vincent Tjeng, Kai Yuanqing Xiao, Russ Tedrake: Evaluating robustness of neural networks with mixed integer programming. In ICLR (2019).
- Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin T. Vechev: AI2: safety and robustness certification of neural networks with abstract interpretation. In IEEE Symposium on Security and Privacy (2018).
- Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel: Efficient neural network robustness certification with general activation functions. In NeurIPS (2018).
- Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, Guy Van den Broeck: A semantic loss function for deep learning with symbolic knowledge. In ICML (2018).
- Eleonora Giunchiglia, Mihaela Catalina Stoian, Salman Khan, Fabio Cuzzolin, Thomas Lukasiewicz: ROAD-R: the autonomous driving dataset with logical requirements. In Machine Learning (2023).
- Chen Liang, Jonathan Berant, Quoc V. Le, Ken Forbus, Ni Lao: Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. In ACL (2017).
- Jan Ruben Zilke, Eneldo Loza Mencía, Frederik Janssen: DeepRED – rule extraction from deep neural networks. In DS (2016).
- Robert Andrews, Joachim Diederich, Alan B. Tickle: Survey and critique of techniques for extracting rules from trained artificial neural networks. In Knowledge-Based Systems (1995).
- Tameru Hailesilassie: Rule extraction algorithm for deep neural networks: A review. In CoRR abs/1610.05267 (2016).
- Hiroshi Tsukimoto: Extracting rules from trained neural networks. In IEEE Transactions on Neural Networks (2000).
- Vaishak Belle, Ioannis Papantonis: Principles and practice of explainable machine learning. In Frontiers in Big Data (2021).
- Stephen Muggleton, Luc De Raedt, David Poole, Ivan Bratko, Peter Flach, Katsumi Inoue, Ashwin Srinivasan: ILP turns 20. In Machine Learning (2012).
- David Tena Cucala, Bernardo Cuenca Grau, Boris Motik, Egor V. Kostylev: On the correspondence between monotonic max-sum GNNs and Datalog. In KR (2023).
- Floris Geerts, Juan L. Reutter: Expressiveness and approximation properties of graph neural networks. In ICLR (2022).
- David Chiang, Peter Cholak, Anand Pillay: Tighter bounds on the expressivity of transformer encoders. In ICML (2023).
Vaishak Belle, Michael Benedikt, Dana Drachsler Cohen, and Daniel Neider
Motivation
Logic and learning are central to Computer Science, and in particular to AI-related research. Already in his 1950 paper “Computing Machinery and Intelligence”, Alan Turing envisioned a combination of statistical (ab initio) machine learning and an “unemotional” symbolic language such as logic. The combination of logic and learning has received new impetus from the spectacular success of deep learning systems.
The goal of this Dagstuhl Seminar is to bring together researchers from various communities related to utilizing logical constraints in deep learning and to create bridges between them via the exchange of ideas.
The seminar will focus on a set of interrelated topics:
- Enforcement of constraints on neural networks. This topic concerns methods for training neural networks so that they satisfy formally stated requirements, a crucial aspect of AI safety.
- Verifying logical constraints on neural networks. Leveraging logic to formalize safety properties, and developing methods to prove that a network satisfies them. Examples of safety properties include local and global robustness to adversarial-example attacks as well as various fairness properties.
- Training using logic to supplement traditional supervision. Augmenting supervision via the use of external knowledge, which is crucial in settings where explicit supervision is extremely limited and synthetic data generation is infeasible. This approach has been utilized, for example, in scene recognition and parsing.
- Explanation and approximation via logic. Leveraging logical languages to explain a model’s behavior. This is crucial for black-box models of various network types, including Graph Neural Networks. The topic is closely related to characterizations of the expressiveness of neural models in terms of logic and descriptive complexity, a very active area of recent research.
This Dagstuhl Seminar aims not to study these areas as separate components, but to explore common techniques among them as well as connections to other communities in machine learning that share the same broad goals. For example, in looking at training using rules, we will investigate links with other weakly supervised learning approaches, such as training based on physical conservation laws. In looking at verification and enforcement, we will seek contact with those working more broadly on AI safety.
The high-level results expected to be produced include the following:
- Fostering links among researchers working on the application of learning to logical artifacts. This will include creating an understanding of the work in different applications, as well as an increased understanding of the formal connections between these applications.
- Generating a set of goals, challenges, and research directions for the application of logic to neural networks.
- Providing a more unified view of the current approaches to the interaction between neural networks and logic, and identifying gaps in the existing formalisms that attempt this synthesis.
- Creating bridges between researchers in computational logic and those in machine learning, and identifying ways in which enhanced interaction between the communities can continue.
- Serving as a catalyst for the further development of benchmarks related to the use of logic in neural networks.
Vaishak Belle, Michael Benedikt, Dana Drachsler Cohen, and Daniel Neider
Participants
- Pablo Barcelo (PUC - Santiago de Chile, CL) [dblp]
- Vaishak Belle (University of Edinburgh, GB) [dblp]
- Michael Benedikt (University of Oxford, GB) [dblp]
- Alexandra Bugariu (MPI-SWS - Kaiserslautern, DE)
- Luc De Raedt (KU Leuven, BE) [dblp]
- Dana Drachsler Cohen (Technion - Haifa, IL) [dblp]
- Sophie Fellenz (RPTU Kaiserslautern-Landau, DE) [dblp]
- Floris Geerts (University of Antwerp, BE) [dblp]
- Julien Girard-Satabin (CEA de Saclay - Gif-sur-Yvette, FR) [dblp]
- Eleonora Giunchiglia (Imperial College London, GB) [dblp]
- Dominik Hintersdorf (TU Darmstadt, DE) [dblp]
- Vivian Holzapfel (Leibniz Universität Hannover, DE) [dblp]
- Omri Isac (The Hebrew University of Jerusalem, IL)
- Matthias Lanzinger (TU Wien, AT) [dblp]
- Anji Liu (UCLA, US)
- Anna Lukina (TU Delft, NL) [dblp]
- Pierre-Jean Meyer (Gustave Eiffel University - Villeneuve d'Ascq, FR) [dblp]
- Emmanuel Müller (TU Dortmund, DE) [dblp]
- Daniel Neider (TU Dortmund, DE) [dblp]
- Ana Ozaki (University of Oslo, NO) [dblp]
- Martin Ritzert (Universität Göttingen, DE) [dblp]
- Patrick Schramowski (DFKI - Darmstadt, DE) [dblp]
- Mahmood Sharif (Tel Aviv University, IL)
- Gagandeep Singh (University of Illinois - Urbana-Champaign, US) [dblp]
- Armando Solar-Lezama (MIT - Cambridge, US) [dblp]
- Mihaela Stoian (University of Oxford, GB) [dblp]
- Mislav Stojanovic (TU Dortmund, DE)
- Lukas Struppek (TU Darmstadt, DE) [dblp]
- David Tena Cucala (Royal Holloway, University of London, GB) [dblp]
- Ashish Tiwari (Microsoft Corporation - Redmond, US) [dblp]
- Caterina Urban (INRIA & ENS Paris, FR) [dblp]
- Jan Van den Bussche (Hasselt University, BE) [dblp]
- Frank van Harmelen (VU Amsterdam, NL) [dblp]
- Emile van Krieken (University of Edinburgh, GB) [dblp]
- Jonni Virtema (University of Sheffield, GB) [dblp]
- Hongseok Yang (KAIST - Daejeon, KR) [dblp]
- Tom Yuviler (Technion - Haifa, IL)
- Zsolt Zombori (Alfréd Rényi Institute of Mathematics - Budapest, HU) [dblp]
Classification
- Logic in Computer Science
- Machine Learning
Keywords
- machine learning
- learning theory
- logic
- verification
- safety

Creative Commons BY 4.0
