April 10–14, 2023, Dagstuhl Seminar 23151

Normative Reasoning for AI


Agata Ciabattoni (TU Wien, AT)
John F. Horty (University of Maryland – College Park, US)
Marija Slavkovik (University of Bergen, NO)
Leon van der Torre (University of Luxembourg, LU)

For information about this Dagstuhl Seminar, please contact

Christina Schwarz for administrative matters

Marsha Kleinbauer for scientific matters


Normative reasoning is reasoning about normative matters, such as obligations, permissions, and the rights of individuals or groups. It is prevalent in both legal and ethical discourse, and it can – arguably, should – play a crucial role in the construction of autonomous agents. We often find it important to know whether specific norms apply in a given situation, to understand why and when they apply, and why some other norms do not apply. In most cases, our reasons are purely practical – we want to make the correct decision – but they can also be theoretical – as they are in theoretical ethics. Either way, the same questions are crucial in designing autonomous agents responsibly.

This Dagstuhl Seminar will bring together experts in computer science, logic, philosophy, ethics, and law with the overall goal of finding effective ways of embedding normative reasoning in AI systems. While the seminar aims to keep every aspect of normative reasoning in AI in view, it will focus on four topics in particular.

Normative reasoning for AI ethics. The first topic is concerned with the question of how the use of normative reasoning in existing fields like bioethics and AI & law can inspire the new area of AI ethics. Modern bioethics, for instance, has developed in response to the problems with applying high moral theory to concrete cases: we do not know which ethical theory is true; it is often unclear how high-level ethical theories would resolve a complex case; and the principle of publicity demands that we justify the resolution of a problem in a way that most people can understand. Reacting to these problems, the field of bioethics has moved away from top-down applications of high moral theory toward alternative approaches with their own unique methods. These approaches are meant to be useful for reasoning about and resolving concrete cases, even if we don’t know which ethical theory is true. Since AI ethics faces similar problems, we believe that a better understanding of approaches in bioethics holds much promise for future research in AI ethics.

Deontic explanations. The second topic is concerned with the use of formal methods in general, and of deontic logic and the theory of normative systems in particular, to provide deontic explanations, that is, answers to why-questions with deontic content: questions like “Why must I wear a face mask?”, “Why am I forbidden to leave the house at night, while he is not?”, or “Why has the law of privacy been changed in this way?” Deontic explanations are called for in widely different contexts – including individual and institutional decision-making, policy-making, and retrospective justifications of actions – and so there is a wide variety of them. Nevertheless, they are unified by their essentially practical nature.
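To illustrate the kind of answer at stake, a why-question about an obligation can be answered by tracing the norms that ground it. The sketch below is a minimal illustration, not any particular system from the deontic logic literature; the norm names, conditions, and the `O(...)` notation for obligations are invented for this example.

```python
# Minimal sketch: a deontic explanation as the chain of norms whose
# conditions hold and that together yield a given obligation.
# Norm base and fact names are purely illustrative assumptions.

def explain_obligation(facts, norms, obligation):
    """Forward-chain over the norm base, recording which norms fire.
    Returns the trace of norm names if the obligation is derived,
    otherwise None."""
    derived = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in norms:
            if conclusion not in derived and all(c in derived for c in conditions):
                derived.add(conclusion)
                trace.append(name)
                changed = True
    return trace if obligation in derived else None

# Illustrative norm base for "Why must I wear a face mask?"
norms = [
    ("health-ordinance", ["indoors", "pandemic"], "O(wear_mask)"),
    ("pandemic-decree", ["who_declaration"], "pandemic"),
]
facts = ["indoors", "who_declaration"]

print(explain_obligation(facts, norms, "O(wear_mask)"))
```

Here the explanation is the list of norms that ground the obligation, in the order they applied: the decree establishes that a pandemic is in force, which in turn triggers the ordinance requiring a mask.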

Defeasible deontic logic and formal argumentation. The third topic of the seminar is concerned with the role of nonmonotonicity in deontic logic and the potential use of formal argumentation. In the area of deontic logic, normative reasoning is associated with a set of well-known benchmark examples and challenges, many of which have to do with the handling of contrary-to-duty scenarios and deontic conflicts. While a plethora of formal methods has been developed to account for contrary-to-duty reasoning and to handle deontic conflicts, many challenges remain open. One specific goal of the seminar is to reflect on the role of nonmonotonicity in deontic logic, as well as the use of techniques from formal argumentation to define defeasible deontic logics that would address the open challenges.
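The flavor of such defeasible reasoning can be conveyed with a toy example: prioritized rules where a higher-priority applicable rule defeats conflicting lower-priority ones, applied to a fence-style contrary-to-duty scenario. This is a hedged sketch only; the rule format, the priority scheme, and the `O(...)`/`P(...)` labels are assumptions made for illustration, not a specific system from the literature.

```python
# Toy sketch of prioritized defeasible rules. A rule applies when its
# condition holds in the current facts; applicable rules are considered
# in descending priority, and a rule is defeated if a higher-priority
# conclusion already drawn conflicts with it. All names are illustrative.

def conclude(facts, rules):
    """Return the surviving deontic conclusions for the given facts."""
    applicable = [r for r in rules if r["condition"] <= facts]
    applicable.sort(key=lambda r: r["priority"], reverse=True)
    conclusions = []
    for r in applicable:
        if not any(c in r["conflicts"] for c in conclusions):
            conclusions.append(r["conclusion"])
    return conclusions

rules = [
    # general prohibition: there ought to be no fence
    {"condition": set(), "conclusion": "O(no_fence)",
     "conflicts": {"P(fence)"}, "priority": 1},
    # exception: a sea view permits a fence, defeating the prohibition
    {"condition": {"sea_view"}, "conclusion": "P(fence)",
     "conflicts": set(), "priority": 2},
    # contrary-to-duty norm: if there is a fence, it ought to be white
    {"condition": {"fence"}, "conclusion": "O(white_fence)",
     "conflicts": set(), "priority": 3},
]

print(conclude(set(), rules))          # primary obligation only
print(conclude({"sea_view"}, rules))   # exception defeats the prohibition
print(conclude({"fence"}, rules))      # CTD obligation holds alongside it
```

The second call illustrates nonmonotonicity (adding a fact retracts a conclusion); the third illustrates the contrary-to-duty pattern, where the violated primary obligation and the secondary obligation it triggers coexist.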

From theory to tools. The fourth topic of the seminar concerns implementing and experimenting with normative reasoning. One of the themes we plan to discuss is the integration of normative reasoning techniques with reinforcement learning in the design of ethical autonomous agents. Another is the automation of deontic explanations using LogiKEy and other frameworks.
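One common way to combine a norm base with a learning agent is a normative “shield”: before the agent acts, actions the norms forbid in the current state are filtered out, and the learner chooses only among the remaining ones. The following is a minimal sketch under assumed representations; the norm encoding, state format, and action names are invented for illustration and do not reflect any specific system discussed at the seminar.

```python
# Minimal sketch of a normative shield around a learning agent.
# A norm is a pair (condition-on-state, forbidden-action); the shield
# removes forbidden actions before the policy chooses. Illustrative only.
import random

def forbidden(state, action, norms):
    """An action is forbidden if some norm's condition holds and bans it."""
    return any(cond(state) and banned == action for cond, banned in norms)

def shielded_step(state, actions, norms, policy):
    """Filter out forbidden actions, then let the policy pick."""
    allowed = [a for a in actions if not forbidden(state, a, norms)]
    return policy(allowed)

# Toy norm base: "never enter the red zone", "do not speed at night".
norms = [
    (lambda s: True, "enter_red_zone"),
    (lambda s: s.get("night"), "speed"),
]

actions = ["enter_red_zone", "speed", "wait"]
choice = shielded_step({"night": True}, actions, norms, random.choice)
print(choice)  # at night, only "wait" survives the shield
```

The same filter can wrap a learned policy in place of `random.choice`, so the agent explores and optimizes only within the norm-compliant action set.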

Motivation text license
  Creative Commons BY 4.0
  Agata Ciabattoni, Aleks Knoks, and Leon van der Torre


Classification

  • Artificial Intelligence
  • Logic In Computer Science
  • Multiagent Systems


Keywords

  • Deontic logic
  • Autonomous agents
  • AI ethics
  • Deontic explanations


All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the Dagstuhl Reports series. Together with the seminar's collector, the organizers compile a report that brings together the authors' contributions and supplements them with an overall summary.



Dagstuhl's Impact

Please let us know if a publication arises from your seminar. Such publications are listed separately in the Dagstuhl's Impact section and displayed on the ground floor of the library.


There also remains the possibility of publishing a comprehensive collection of peer-reviewed papers in the Dagstuhl Follow-Ups series.