Dagstuhl Seminar 23371
Roadmap for Responsible Robotics
(Sep 10 – Sep 15, 2023)
Organizers
- Michael Fisher (University of Manchester, GB)
- Seth Lazar (Australian National University - Canberra, AU)
- Marija Slavkovik (University of Bergen, NO)
- Astrid Weiss (TU Wien, AT)
Coordinators
- Anna Dobrosovestnova (TU Wien, AT)
- Nick Schuster (Australian National University - Canberra, AU)
Contact
- Michael Gerke (for scientific matters)
- Simone Schilke (for administrative matters)
Dagstuhl Reports
As part of the mandatory documentation, participants are asked to submit their talk abstracts, working group results, etc. for publication in our series Dagstuhl Reports via the Dagstuhl Reports Submission System.
Responsible Robotics is an appealing goal. It captures the idea of developing and deploying physical autonomous systems for the benefit of both individuals and society. However, popular as the goal is, there are as yet no robust, reliable routes to achieving Responsible Robotics, and few compelling accounts of precisely what “responsibility” here comprises.
The aim of this Dagstuhl Seminar is to identify the key components of responsibility in this context and then, crucially, to describe how we might work towards Responsible Robotics in practice. We focus on four themes associated with Responsible Robotics (trust, fairness, predictability, understandability), which we will refine and extend as necessary. Understanding the interaction between these elements will be crucial to many advanced uses of autonomous robots, especially when they operate near humans. Many commentators on social robotics have confined their attention to naming concerns. Our seminar will go beyond criticism in two ways: it will articulate attractive goals to aim at, and it will develop tractable pathways to implementing them in real-world systems.
Trust. Trust relations between people and technology are often best described in terms of reliance as a property of the robot: we want to be able to trust technological systems in the sense that we can rely on them not to work against our interests. However, Social Robotics significantly increases the complexity of this trust relation, opening up more human-like dimensions of both our trust in robots and their perceived trustworthiness. Exploring human-robot trust relations can be useful for Responsible Robotics, helping to translate and transfer trust requirements into system development.
Fairness. Within AI Ethics, fairness is seen both as a value to be aimed at in socio-technical systems that use AI and as a property of algorithms. Two issues of fairness are of main concern: fairness of representation and fairness of allocation. Both have been thoroughly examined in the context of machine learning, but remain relatively unexplored for autonomous robotic systems. Our seminar will consider how to understand the value of fairness in Social Robotics, as well as what fairness means as a property of social robots.
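To make “fairness as a property of algorithms” concrete, the sketch below computes a demographic-parity gap for allocation decisions. It is a minimal illustration in Python; the group labels, the assistance log, and the function name are invented for this example and are not drawn from the seminar itself.

```python
# Minimal, hypothetical sketch: fairness of allocation as a measurable property.
# The demographic-parity gap is the largest difference in the rate at which a
# benefit (here, "the robot assisted this person") is granted across groups.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group_label, assisted: bool) pairs."""
    totals, assisted = defaultdict(int), defaultdict(int)
    for group, was_assisted in decisions:
        totals[group] += 1
        assisted[group] += int(was_assisted)
    rates = {g: assisted[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Invented example: the robot assisted 8/10 people in group A but only 5/10 in group B.
log = [("A", True)] * 8 + [("A", False)] * 2 + [("B", True)] * 5 + [("B", False)] * 5
print(f"{demographic_parity_gap(log):.2f}")  # 0.30 -- one possible, contestable signal of unfair allocation
```

Demographic parity is only one of several mutually incompatible fairness criteria; which measure, if any, is appropriate for a given social robot is part of what the seminar sets out to clarify.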
Predictability. Reliability, as a property of the robotic system, is one of the most empirically studied trust concepts in human-robot relations. However, we require not only reliability but also predictability, both in terms of (a) the system's decision-making processes and (b) its future behaviour. If a robot is truly autonomous, we need clarity about exactly why its decisions are made as well as about how reliably they are made. We also address the changes that occur after a system is deployed, such as changes in context, capability, and effectiveness, and how these can affect not only predictability and reliability but also ethics and responsibility.
Understandability. A cornerstone of trust is transparency: it is much harder to use, and especially to trust, robotic systems that have opaque decision-making processes. Transparency is widely recognised as being key but remains just the foundation. We require not only transparency but also understandability in interactions with our robotic systems. In the seminar we intend to untangle the different concepts involved in understandability and to discuss how each of the necessary components, such as transparency and explainability, can be measurably attained in the case of Social Robotics.
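As one illustration of transparency as a checkable property rather than a slogan, the sketch below pairs each decision of a toy navigation controller with a human-readable rationale that can be logged and audited. The controller, thresholds, and scenario are assumptions made purely for illustration, not part of any seminar outcome.

```python
# Minimal, hypothetical sketch: every decision carries a human-readable rationale,
# so the system's behaviour can be explained and audited after the fact.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str      # what the robot does
    rationale: str   # why, in terms a user or auditor can follow

def navigate(distance_to_person_m: float, nominal_speed_mps: float) -> Decision:
    # Invented thresholds, purely for illustration.
    if distance_to_person_m < 1.0:
        return Decision("stop", f"person {distance_to_person_m:.1f} m ahead, inside the 1.0 m safety margin")
    if distance_to_person_m < 3.0:
        return Decision("slow", f"person within 3.0 m; limiting speed to {nominal_speed_mps / 2:.1f} m/s")
    return Decision("proceed", "no person within 3.0 m; continuing at nominal speed")

for d in (0.6, 2.2, 7.5):
    decision = navigate(d, nominal_speed_mps=1.2)
    print(decision.action, "-", decision.rationale)
```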
Addressing these research issues, which concern both the clarification and interaction of trust, fairness, predictability, and understandability and the practical routes to ensuring them within Responsible Robotics, will require a collaborative effort among computer scientists, roboticists, mathematicians, psychologists, and philosophers.
Participants
- Dejanira Araiza-Illan (Johnson & Johnson - Singapore, SG) [dblp]
- Kevin Baum (DFKI - Saarbrücken, DE) [dblp]
- Helen Beebee (University of Leeds, GB)
- Raja Chatila (Sorbonne University - Paris, FR) [dblp]
- Sarah Christensen (University of Leeds, GB)
- Simon Coghlan (The University of Melbourne, AU) [dblp]
- Emily Collins (University of Manchester, GB) [dblp]
- Alcino Cunha (University of Minho - Braga, PT & INESC TEC - Porto, PT) [dblp]
- Kate Devitt (Queensland University of Technology - Brisbane, AU) [dblp]
- Anna Dobrosovestnova (TU Wien, AT) [dblp]
- Hein Duijf (LMU München, DE) [dblp]
- Vanessa Evers (University of Twente - Enschede, NL) [dblp]
- Michael Fisher (University of Manchester, GB) [dblp]
- Nico Hochgeschwender (Hochschule Bonn-Rhein-Sieg, DE) [dblp]
- Nadin Kokciyan (University of Edinburgh, GB) [dblp]
- Severin Lemaignan (PAL Robotics - Barcelona, ES) [dblp]
- Sara Ljungblad (University of Gothenburg, SE & Chalmers University of Technology - Göteborg, SE)
- Martin Magnusson (Örebro University, SE) [dblp]
- Masoumeh Mansouri (University of Birmingham, GB) [dblp]
- Michael Milford (Queensland University of Technology - Brisbane, AU) [dblp]
- AJung Moon (McGill University - Montreal, CA) [dblp]
- Thomas Michael Powers (University of Delaware - Newark, US) [dblp]
- Daniel Fernando Preciado Vanegas (Free University of Amsterdam, NL) [dblp]
- Francisco Javier Rodríguez Lera (University of León, ES) [dblp]
- Pericle Salvini (EPFL - Lausanne, CH) [dblp]
- Teresa Scantamburlo (University of Venice, IT) [dblp]
- Nick Schuster (Australian National University - Canberra, AU)
- Marija Slavkovik (University of Bergen, NO) [dblp]
- Ufuk Topcu (University of Texas - Austin, US) [dblp]
- Andrzej Wasowski (IT University of Copenhagen, DK) [dblp]
- Yi Yang (KU Leuven, BE) [dblp]
Classification
- Artificial Intelligence
- Computers and Society
- Robotics
Keywords
- Robotics
- Responsibility
- Trust
- Fairness
- Predictability
- Understandability
- Ethics