Dagstuhl Seminar 23371

Roadmap for Responsible Robotics

(Sep 10 – Sep 15, 2023)


Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/23371

Organizers
  • Michael Fisher
  • Seth Lazar
  • Marija Slavkovik
  • Astrid Weiss

Coordinators
  • Anna Dobrosovestnova (TU Wien, AT)
  • Nick Schuster (Australian National University - Canberra, AU)
Motivation

Responsible Robotics is an appealing goal. It captures the idea of developing and deploying physical autonomous systems for the benefit of both individuals and society. However, popular as the target is, there are as yet no reliable routes to achieving Responsible Robotics, and indeed few compelling accounts of precisely what "responsibility" here comprises.

The aim of this Dagstuhl Seminar is to identify the key components of responsibility in this context and then, crucially, to describe how we might work towards Responsible Robotics in practice. We focus on four themes associated with Responsible Robotics (trust, fairness, predictability, understandability), which we will refine and extend as necessary. Understanding the interaction between these elements will be crucial to many advanced uses of autonomous robots, especially when they operate near humans. Many commentators on social robotics have confined their attention to naming concerns. Our seminar will go beyond criticism in two ways: it will articulate attractive goals and develop tractable pathways to their implementation in real-world systems.

Trust. Trust relations between people and technology are often best described in terms of reliance as a property of the robot: we want to be able to trust technological systems in the sense that we can rely on them not to work against our interests. However, Social Robotics significantly increases the complexity of this trust relation, opening up more human-like dimensions of both our trust in robots and their perceived trustworthiness. Exploring human-robot trust relations can help Responsible Robotics translate and transfer requirements into system development.

Fairness. Within AI Ethics, fairness is seen both as a value to be aimed at in socio-technical systems that use AI and as a property of algorithms. Two issues of fairness are of main concern: fairness of representation and fairness of allocation. Both have been thoroughly examined in the context of machine learning, but relatively little explored for autonomous robotic systems. Our seminar will consider how to understand the value of fairness in Social Robotics, as well as what fairness means as a property of social robots.

Predictability. Reliability, as a property of the robotic system, is one of the most empirically studied trust concepts in human-robot relations. However, we require not only reliability but predictability, both in terms of (a) the system's decision-making processes and (b) its future behaviour. If robots are truly autonomous, we need clarity about exactly why their decisions are made as well as how reliably they are made. We also address the changes that occur after deployment of a system, such as changes in context, capability, and effectiveness, and how these can affect not only predictability and reliability but also ethics and responsibility.

Understandability. A cornerstone of trust is transparency: it is much harder to use, and especially to trust, robotic systems that have opaque decision-making processes. Transparency is widely recognised as key but remains just the foundation: we require not only transparency but understandability in interactions with our robotic systems. In the seminar we intend to untangle the different concepts involved in understandability and discuss how each of the necessary components, such as transparency and explainability, can be measurably attained in the case of Social Robotics.

Research issues, concerning both the clarification and interaction of trust, fairness, predictability, and understandability, and the practical routes to ensuring these within Responsible Robotics, will involve a collaborative effort between computer scientists, roboticists, mathematicians, psychologists, and philosophers.

Copyright Michael Fisher, Seth Lazar, Marija Slavkovik, and Astrid Weiss


Related Seminars
  • Dagstuhl Seminar 16222: Engineering Moral Agents - from Human Morality to Artificial Morality (2016-05-29 - 2016-06-03)
  • Dagstuhl Seminar 19171: Ethics and Trust: Principles, Verification and Validation (2019-04-22 - 2019-04-26)

Classification
  • Artificial Intelligence
  • Computers and Society
  • Robotics

Keywords
  • Robotics
  • Responsibility
  • Trust
  • Fairness
  • Predictability
  • Understandability
  • Ethics