

Dagstuhl Seminar 23151

Normative Reasoning for AI

(Apr 10 – Apr 14, 2023)


Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/23151

Organizers

Contact

Dagstuhl Reports

As part of the mandatory documentation, participants are asked to submit their talk abstracts, working group results, etc. for publication in our series Dagstuhl Reports via the Dagstuhl Reports Submission System.

  • Upload (Use personal credentials as created in DOOR to log in)

Dagstuhl Seminar Wiki

Shared Documents

Schedule

Motivation

Normative reasoning is reasoning about normative matters, such as obligations, permissions, and the rights of individuals or groups. It is prevalent in both legal and ethical discourse, and it can – arguably, should – play a crucial role in the construction of autonomous agents. We often find it important to know whether specific norms apply in a given situation, to understand why and when they apply, and why some other norms do not apply. In most cases, our reasons are purely practical – we want to make the correct decision – but they can also be theoretical – as they are in theoretical ethics. Either way, the same questions are crucial in designing autonomous agents responsibly.

This Dagstuhl Seminar will bring together experts in computer science, logic, philosophy, ethics, and law with the overall goal of finding effective ways of embedding normative reasoning in AI systems. While the seminar aims to keep every aspect of normative reasoning in AI in view, it will focus on four topics in particular.

Normative reasoning for AI ethics. The first topic is concerned with the question of how the use of normative reasoning in existing fields like bioethics and AI & law can inspire the new area of AI ethics. Modern bioethics, for instance, has developed in response to the problems with applying high moral theory to concrete cases: the fact that we don’t know which ethical theory is true; the fact that it’s often unclear how high-level ethical theories would resolve a complex case; and the fact that the principle of publicity demands that we justify the resolution of a problem in a way that most people can understand. Reacting to these problems, the field of bioethics has moved away from top-down applications of high moral theory toward alternative approaches with their own unique methods. These approaches are meant to be useful for reasoning about and resolving concrete cases, even if we don’t know which ethical theory is true. Since AI ethics faces similar problems, we believe that a better understanding of approaches in bioethics holds much promise for future research in AI ethics.

Deontic explanations. The second topic is concerned with the use of formal methods in general, and of deontic logic and the theory of normative systems in particular, in providing deontic explanations: answers to why-questions with deontic content, such as “Why must I wear a face mask?”, “Why am I forbidden to leave the house at night, while he is not?”, or “Why has the law of privacy been changed in this way?” Deontic explanations are called for in widely different contexts – including individual and institutional decision-making, policy-making, and retrospective justifications of actions – and so there is a wide variety of them. Nevertheless, they are unified by their essentially practical nature.
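One simple way to picture a deontic explanation is as a trace of the norms that apply to a situation. The following toy sketch is not from the seminar materials; the norm names and predicates are invented for illustration. It answers a why-question (“Why must I wear a face mask?”) by listing the norms whose conditions hold and that impose the obligation in question.

```python
# Toy sketch of a deontic explanation as a norm trace.
# All norm names and predicates below are invented for the example.
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    name: str              # identifier of the norm, e.g. a regulation section
    conditions: frozenset  # facts that trigger the norm
    obligation: str        # what the norm obliges

def explain_obligation(obligation, norms, facts):
    """Return the norms that impose `obligation` and whose conditions hold."""
    return [n for n in norms
            if n.obligation == obligation and n.conditions <= facts]

norms = [
    Norm("health-regulation-12", frozenset({"indoors", "pandemic"}), "wear_mask"),
    Norm("curfew-act-3", frozenset({"night", "lockdown"}), "stay_home"),
]
facts = {"indoors", "pandemic"}

for n in explain_obligation("wear_mask", norms, facts):
    print(f"You must wear a mask because {n.name} applies: "
          f"its conditions {sorted(n.conditions)} hold in this situation.")
```

Real deontic explanations are of course richer (they must handle exceptions, priorities, and conflicts), but the trace of applicable norms is the common core.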

Defeasible deontic logic and formal argumentation. The third topic of the seminar is concerned with the role of nonmonotonicity in deontic logic and the potential use of formal argumentation. In the area of deontic logic, normative reasoning is associated with a set of well-known benchmark examples and challenges, many of which have to do with the handling of contrary-to-duty scenarios and deontic conflicts. While a plethora of formal methods has been developed to account for contrary-to-duty reasoning and to handle deontic conflicts, many challenges remain open. One specific goal of the seminar is to reflect on the role of nonmonotonicity in deontic logic, as well as on the use of techniques from formal argumentation to define defeasible deontic logics that address the open challenges.
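The best-known contrary-to-duty benchmark is Chisholm's scenario. One common rendering in standard deontic logic (SDL), with $h$ for "you help your neighbours" and $t$ for "you tell them you are coming", is:

```latex
\begin{align*}
  &\text{(1)}\quad O\,h
      &&\text{you ought to help your neighbours}\\
  &\text{(2)}\quad O(h \rightarrow t)
      &&\text{it ought to be that if you help, you tell them}\\
  &\text{(3)}\quad \lnot h \rightarrow O\,\lnot t
      &&\text{if you do not help, you ought not tell them}\\
  &\text{(4)}\quad \lnot h
      &&\text{in fact, you do not help}
\end{align*}
```

From (1) and (2), the K-axiom of SDL yields $O\,t$; from (3) and (4), modus ponens yields $O\,\lnot t$; with the D-axiom these are jointly inconsistent, even though the four premises seem mutually consistent and independent. Cases of this kind are exactly what defeasible deontic logics are designed to handle gracefully.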

From theory to tools. The fourth topic of the seminar concerns implementing and experimenting with normative reasoning. One of the themes we plan to discuss is the integration of normative reasoning techniques with reinforcement learning in the design of ethical autonomous agents. Another is the automation of deontic explanations using LogiKEy and other frameworks.
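One common way to combine normative reasoning with reinforcement learning is to filter out norm-violating actions before the learned policy chooses (often called "shielding"). The sketch below is a minimal illustration under invented states, actions, and Q-values; it is not a specific approach endorsed by the seminar.

```python
# Minimal sketch of norm-constrained action selection ("shielding"):
# forbidden actions are removed before the agent chooses greedily.
# States, actions, and Q-values are invented for the example.

def forbidden(state, action):
    """A toy normative check: crossing is forbidden at a red light."""
    return state == "red_light" and action == "cross"

def choose_action(state, q_values, actions):
    """Pick the highest-valued action among those the norms permit."""
    permitted = [a for a in actions if not forbidden(state, a)]
    if not permitted:              # no compliant action: fall back
        permitted = list(actions)  # (a real system needs a policy here)
    return max(permitted, key=lambda a: q_values[(state, a)])

actions = ["cross", "wait"]
q_values = {("red_light", "cross"): 1.0,   # tempting but non-compliant
            ("red_light", "wait"): 0.2,
            ("green_light", "cross"): 1.0,
            ("green_light", "wait"): 0.1}

print(choose_action("red_light", q_values, actions))    # prints "wait"
print(choose_action("green_light", q_values, actions))  # prints "cross"
```

The design choice here is to keep the normative layer separate from the learned value function, so the agent's compliance does not depend on the rewards it happened to be trained on.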

Copyright Agata Ciabattoni, Aleks Knoks, and Leon van der Torre

Participants

Classification
  • Artificial Intelligence
  • Logic in Computer Science
  • Multiagent Systems

Keywords
  • deontic logic
  • autonomous agents
  • AI ethics
  • deontic explanations