https://www.dagstuhl.de/23371

September 10 – 15, 2023, Dagstuhl Seminar 23371

Roadmap for Responsible Robotics

Organizers

Michael Fisher (University of Manchester, GB)
Seth Lazar (Australian National University – Canberra, AU)
Marija Slavkovik (University of Bergen, NO)
Astrid Weiss (TU Wien, AT)

For information about this Dagstuhl Seminar, please contact

Simone Schilke for administrative matters

Michael Gerke for scientific matters

Motivation

Responsible Robotics is an appealing goal. It captures the idea of developing and deploying physical autonomous systems for the benefit of both individuals and society. However, although it is a popular target, there are as yet no robustly reliable routes to achieving Responsible Robotics, and indeed few compelling pictures of precisely what “responsibility” comprises in this context.

The aim of this Dagstuhl Seminar is to identify the key components of responsibility in this context and then, crucially, to describe how we might work towards Responsible Robotics in practice. We focus on four themes associated with Responsible Robotics (trust, fairness, predictability, understandability), which we will refine and extend as necessary. Understanding the interaction between these elements will be crucial to many advanced uses of autonomous robots, especially when they operate near humans. Many commentators on social robotics have confined their attention to naming concerns. Our seminar will go beyond criticism in two ways: it will aim to articulate attractive goals and to develop tractable pathways to their implementation in real-world systems.

Trust. Trust relations between people and technology are often best understood in terms of reliance as a property of the robot: we want to be able to trust technological systems in the sense that we can rely on them not to work against our interests. However, Social Robotics significantly increases the complexity of this trust relation, opening up more human-like dimensions of both our trust in robots and their perceived trustworthiness. Exploring human-robot trust relations can help Responsible Robotics translate and transfer requirements into system development.

Fairness. Within AI Ethics, fairness is seen both as a value to be aimed at in socio-technical systems that use AI and as a property of algorithms. Two issues of fairness are of main concern: fairness of representation and fairness of allocation. Both have been thoroughly examined in the context of machine learning, but relatively little explored for autonomous robotic systems. Our seminar will consider how to understand the value of fairness in Social Robotics, as well as what fairness means as a property of social robots.

Predictability. Reliability, as a property of the robotic system, is one of the most empirically studied trust concepts in human-robot relations. However, we require not only reliability but also predictability, both in terms of (a) the system's decision-making processes and (b) its future behaviour. If a robot is truly autonomous, we need clarity about exactly why its decisions are made as well as about how reliably they are made. We also address the changes that occur after a system is deployed, such as changes in context, capability, and effectiveness, and how these can affect not only predictability and reliability but also ethics and responsibility.

Understandability. A cornerstone of trust is transparency: it is much harder to use, and especially to trust, robotic systems that have opaque decision-making processes. Transparency is widely recognised as key, but it remains just the foundation; we require transparency, but also understandability, in interactions with our robotic systems. In the seminar we intend to untangle the different concepts involved in understandability and to discuss how each of the necessary components, such as transparency and explainability, can be measurably attained in the case of Social Robotics.

Addressing these research issues, which concern both the clarification and the interaction of trust, fairness, predictability, and understandability, as well as the practical routes to ensuring them within Responsible Robotics, will require a collaborative effort between computer scientists, roboticists, mathematicians, psychologists, and philosophers.

Motivation text license
  Creative Commons BY 4.0
  Michael Fisher, Seth Lazar, Marija Slavkovik, and Astrid Weiss

Dagstuhl-Seminar Series

Classification

  • Artificial Intelligence
  • Computers and Society
  • Robotics

Keywords

  • Robotics
  • Responsibility
  • Trust
  • Fairness
  • Predictability
  • Understandability
  • Ethics

Documentation

All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the Dagstuhl Reports series. Together with the Collector of the seminar, the organizers compile a report that summarizes the authors' contributions and adds an overall summary.

 


Dagstuhl's Impact

Please inform us if a publication arises from your seminar. Such publications are listed separately in the Dagstuhl's Impact section and displayed on the ground floor of the library.

Publications

There is also the ongoing possibility of publishing a comprehensive collection of peer-reviewed papers in the Dagstuhl Follow-Ups series.