Dagstuhl Seminar 23151
Normative Reasoning for AI
(Apr 10 – Apr 14, 2023)
Organizers
- Agata Ciabattoni (TU Wien, AT)
- John F. Horty (University of Maryland - College Park, US)
- Marija Slavkovik (University of Bergen, NO)
- Leon van der Torre (University of Luxembourg, LU)
Contact
- Marsha Kleinbauer (for scientific matters)
- Christina Schwarz (for administrative matters)
Normative reasoning is reasoning about normative matters, such as obligations, permissions, and the rights of individuals or groups. It is prevalent in both legal and ethical discourse, and it can – arguably, should – play a crucial role in the construction of autonomous agents. We often find it important to know whether specific norms apply in a given situation, to understand why and when they apply, and why some other norms do not apply. In most cases, our reasons are purely practical – we want to make the correct decision – but they can also be theoretical – as they are in theoretical ethics. Either way, the same questions are crucial in designing autonomous agents responsibly.
This Dagstuhl Seminar will bring together experts in computer science, logic, philosophy, ethics, and law with the overall goal of finding effective ways of embedding normative reasoning in AI systems. While the seminar aims to keep an eye on every aspect of normative reasoning in AI, it will focus on four topics in particular.
Normative reasoning for AI ethics. The first topic is concerned with the question of how the use of normative reasoning in existing fields like bioethics and AI & law can inspire the new area of AI ethics. Modern bioethics, for instance, has developed in response to the problems of applying high moral theory to concrete cases: the fact that we don’t know which ethical theory is true; the fact that it is often unclear how high-level ethical theories would resolve a complex case; and the fact that the principle of publicity demands that we justify the resolution of a problem in a way that most people can understand. Reacting to these problems, the field of bioethics has moved away from top-down applications of high moral theory toward alternative approaches with their own distinctive methods. These approaches are meant to be useful for reasoning about and resolving concrete cases even if we don’t know which ethical theory is true. Since AI ethics faces similar problems, we believe that a better understanding of approaches in bioethics holds much promise for future research in AI ethics.
Deontic explanations. The second topic is concerned with the use of formal methods in general, and of deontic logic and the theory of normative systems in particular, in providing deontic explanations, that is, answers to why-questions with deontic content, such as “Why must I wear a face mask?”, “Why am I forbidden to leave the house at night, while he is not?”, or “Why has the law of privacy been changed in this way?” Deontic explanations are called for in widely different contexts – including individual and institutional decision-making, policy-making, and retrospective justification of actions – and so they come in a wide variety of forms. Nevertheless, they are unified by their essentially practical nature.
Defeasible deontic logic and formal argumentation. The third topic of the seminar is concerned with the role of nonmonotonicity in deontic logic and the potential use of formal argumentation. In the area of deontic logic, normative reasoning is associated with a set of well-known benchmark examples and challenges, many of which have to do with the handling of contrary-to-duty scenarios and deontic conflicts. While a plethora of formal methods has been developed to account for contrary-to-duty reasoning and to handle deontic conflicts, many challenges remain open. One specific goal of the seminar is to reflect on the role of nonmonotonicity in deontic logic, as well as on the use of techniques from formal argumentation to define defeasible deontic logics that address the open challenges.
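A standard benchmark of this kind is Chisholm’s contrary-to-duty scenario. The rendering below, in Standard Deontic Logic (SDL), is one common formalization, given here only to illustrate why such scenarios are challenging: the four natural-language premises are intuitively consistent and mutually independent, yet SDL derives a deontic conflict from them.

```latex
% Chisholm's contrary-to-duty scenario, formalized in Standard Deontic Logic (SDL).
\begin{align*}
  &(1)\ O\,g                          && \text{Jones ought to go and assist his neighbours.}\\
  &(2)\ O(g \rightarrow t)            && \text{It ought to be that if he goes, he tells them he is coming.}\\
  &(3)\ \neg g \rightarrow O\,\neg t  && \text{If he does not go, he ought not to tell them.}\\
  &(4)\ \neg g                        && \text{He does not go.}
\end{align*}
% From (1) and (2), the K-schema yields O t; from (3) and (4), modus ponens yields O ~t.
% Together these violate the D-axiom of SDL, which rules out O t together with O ~t.
```

From (1) and (2), the K-schema yields an obligation to tell, while (3) and (4) yield an obligation not to tell; in SDL the two together violate the D-axiom, so the intuitively consistent premise set becomes inconsistent. Handling such cases without trivializing the premise set is precisely what defeasible deontic logics and argumentation-based techniques aim to achieve.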
From theory to tools. The fourth topic of the seminar concerns implementing and experimenting with normative reasoning. One of the themes we plan to discuss is the integration of normative reasoning techniques with reinforcement learning in the design of ethical autonomous agents (see the sketch below). Another is the automation of deontic explanations using LogiKEy and other frameworks.
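As a purely illustrative sketch of the first theme, and not of any specific approach to be presented at the seminar, the Python fragment below couples a simple norm monitor with tabular Q-learning: the monitor penalizes forbidden actions, so the learned policy reaches the task goal while complying with the norm. The toy environment, the norm set, and the penalty weight are all hypothetical placeholders.

```python
# Minimal, illustrative sketch: a norm monitor shapes the reward of a
# tabular Q-learning agent. Environment, norms, and penalty are hypothetical.
import random

random.seed(0)

STATES = ["corridor", "door", "room"]
ACTIONS = ["wait", "enter", "force_lock"]

# Toy normative system: state -> set of forbidden actions.
FORBIDDEN = {"door": {"force_lock"}}

def step(state, action):
    """Toy transition and task reward: the agent wants to reach the room."""
    if state == "corridor":
        return ("door", 0.0) if action == "enter" else ("corridor", 0.0)
    if state == "door":
        if action in ("enter", "force_lock"):
            return "room", 1.0          # task goal reached either way
        return "door", 0.0
    return "room", 0.0                  # absorbing goal state

def norm_penalty(state, action, weight=2.0):
    """Penalty produced by the norm monitor when a forbidden action is taken."""
    return -weight if action in FORBIDDEN.get(state, set()) else 0.0

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = "corridor"
        for _ in range(10):
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            shaped = reward + norm_penalty(state, action)   # coupling point
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (shaped + gamma * best_next - q[(state, action)])
            state = nxt
            if state == "room":
                break
    return q

if __name__ == "__main__":
    q = train()
    # With the penalty in place, the learned policy opens the door via "enter"
    # rather than the forbidden "force_lock", although both reach the goal.
    print(max(ACTIONS, key=lambda a: q[("door", a)]))
```

The point of the sketch is the single line where the task reward and the normative signal are combined; richer integrations (e.g., shielding or constrained policy optimization) replace that line with more sophisticated machinery.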
Participants
- Guillaume Aucher (University of Rennes, FR) [dblp]
- Kevin Baum (DFKI - Saarbrücken, DE) [dblp]
- Christoph Benzmüller (Universität Bamberg, DE) [dblp]
- Jan M. Broersen (Utrecht University, NL) [dblp]
- Pedro Cabalar (University of Coruña, ES) [dblp]
- Ilaria Canavotto (University of Maryland - College Park, US) [dblp]
- Agata Ciabattoni (TU Wien, AT) [dblp]
- Célia da Costa Pereira (Université Côte d’Azur - Sophia Antipolis, FR) [dblp]
- Mehdi Dastani (Utrecht University, NL) [dblp]
- Louise A. Dennis (University of Manchester, GB) [dblp]
- Frank Dignum (University of Umeå, SE) [dblp]
- Virginia Dignum (University of Umeå, SE) [dblp]
- Huimin Dong (Sun Yat-Sen University - Zhuhai, CN) [dblp]
- Thomas Eiter (TU Wien, AT) [dblp]
- Eleonora Giunchiglia (TU Wien, AT)
- Guido Governatori (Tarragindi, AU) [dblp]
- John F. Horty (University of Maryland - College Park, US) [dblp]
- Joris Hulstijn (University of Luxembourg, LU) [dblp]
- Aleks Knoks (University of Luxembourg, LU) [dblp]
- Emiliano Lorini (CNRS - Toulouse, FR) [dblp]
- Bertram F. Malle (Brown University - Providence, US) [dblp]
- Réka Markovich (University of Luxembourg, LU) [dblp]
- Eric Pacuit (University of Maryland - College Park, US) [dblp]
- Xavier Parent (TU Wien, AT) [dblp]
- Bijan Parsia (University of Manchester, GB) [dblp]
- Adrian Paschke (FU Berlin, DE) [dblp]
- Henry Prakken (Utrecht University, NL) [dblp]
- Antonino Rotolo (University of Bologna, IT) [dblp]
- Ken Satoh (National Institute of Informatics - Tokyo, JP) [dblp]
- Marija Slavkovik (University of Bergen, NO) [dblp]
- Kai Spiekermann (London School of Economics, GB) [dblp]
- Christian Straßer (Ruhr-Universität Bochum, DE)
- Leon van der Torre (University of Luxembourg, LU) [dblp]
Classification
- Artificial Intelligence
- Logic in Computer Science
- Multiagent Systems
Keywords
- deontic logic
- autonomous agents
- AI ethics
- deontic explanations