April 22 – 26, 2019, Dagstuhl Seminar 19171

Ethics and Trust: Principles, Verification and Validation


Michael Fisher (University of Liverpool, GB)
Christian List (London School of Economics, GB)
Marija Slavkovik (University of Bergen, NO)
Astrid Weiss (TU Wien, AT)


Dagstuhl Report, Volume 9, Issue 4


Academics, engineers, and the public at large are all wary of autonomous systems, particularly robots, drones, "driver-less" cars, and the like. Robots will share our physical space, so how will this change us? With the predictions of roboticists in hand, we can paint portraits of how these technical advances will lead to new experiences and how these experiences may change the ways we function in society. Two key issues will dominate once robot technologies have advanced further and yielded new ways for us and robots to share the world: (1) will robots behave ethically, i.e. as we would want them to, and (2) can we trust them to act to our benefit? It is these barriers concerning ethics and trust, more than any engineering issues, that are holding back the widespread development and use of autonomous systems. One of the hardest challenges in robotics is to reliably determine desirable and undesirable behaviours for robots.

We are currently undergoing another technology-led transformation of our society, driven by the outsourcing of decisions to intelligent, and increasingly autonomous, systems. These systems may be software or embodied units that share our environment, and the decisions they make have a direct impact on our lives. With this power to make decisions comes responsibility for their impact: legal, ethical, and personal. But how can we ensure that these artificial decision-makers can be trusted to make safe and ethical decisions, especially as the responsibility placed on them increases?
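One recurring idea in machine ethics for making such decisions inspectable is to separate the decision-maker from an explicit, auditable set of ethical constraints that filters its candidate actions. The sketch below is a toy illustration of that separation, not a technique endorsed by the seminar; all names, thresholds, and actions are invented for the example.

```python
# Toy sketch of a rule-based "ethical filter" that vets an autonomous
# agent's candidate actions. All names and numbers here are illustrative
# assumptions, not part of the seminar report.

from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Action:
    name: str
    harm_risk: float   # estimated probability of harming a human
    benefit: float     # estimated benefit to the user

# A constraint maps an action to True if the action is permissible.
Constraint = Callable[[Action], bool]

def ethical_filter(candidates: List[Action],
                   constraints: List[Constraint]) -> List[Action]:
    """Keep only those candidate actions that every constraint permits.

    Because the constraints are explicit and separate from the planner,
    they can be inspected, audited, and (in principle) formally verified
    independently of how the candidates were generated.
    """
    return [a for a in candidates if all(c(a) for c in constraints)]

# Example constraints, stated explicitly so they can be audited:
no_serious_harm: Constraint = lambda a: a.harm_risk < 0.01
must_benefit: Constraint = lambda a: a.benefit > 0

candidates = [
    Action("fetch medication", harm_risk=0.001, benefit=0.9),
    Action("speed through corridor", harm_risk=0.2, benefit=0.3),
    Action("idle", harm_risk=0.0, benefit=0.0),
]

permitted = ethical_filter(candidates, [no_serious_harm, must_benefit])
print([a.name for a in permitted])  # prints ['fetch medication']
```

The point of the sketch is architectural: making the normative rules first-class, separate objects is one precondition for the kind of verification and validation the seminar discusses.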

The related previous Dagstuhl Seminar 16222, "Engineering Moral Agents: From Human Morality to Artificial Morality" (2016), highlighted further important areas to be explored, specifically:

  • the extension of 'ethics' to also address issues of 'trust';
  • the practical problems of implementing ethical and trustworthy autonomous machines;
  • the new verification and validation techniques that will be required to assess these dimensions.

We therefore concluded that the area would benefit from a follow-up seminar that broadens the scope to Human-Robot Interaction (HRI) and (social) robotics research.

We conducted a four-day seminar (one day shorter than usual because of Easter) with 35 participants from diverse academic backgrounds, including AI, philosophy, social epistemology, Human-Robot Interaction, (social) robotics, logic, linguistics, political science, and computer science. The first day of the seminar was dedicated to seven invited 20-minute talks that served as tutorials. Given the highly interdisciplinary nature of the seminar, participants needed to be brought up to speed quickly on the state of the art in disciplines other than their own; the tutorials also helped develop a common language among the researchers present. After the tutorials, all participants had the chance to introduce their seminar-related research in 5-minute contributed talks, which served as a concise way to present oneself and raise topics for discussion.

Based on these inputs, four topics were derived and explored further in working groups throughout the rest of the seminar: (1) change of trust, including challenges and methods to foster and repair trust; (2) towards artificial moral agency; (3) how do we build practical systems involving ethics and trust? (two sub-groups); and (4) the broader context of trust in HRI: the discrepancy between the expectations and capabilities of autonomous machines. This report summarizes some of the highlights of those discussions and includes abstracts of the tutorials and of some of the contributed talks. Ethical and trustworthy autonomous systems will remain an important topic in the coming years. We consider it essential to continue these cross-disciplinary efforts, above all because the seminar revealed that the "interactional perspective" of the "human-in-the-loop" is so far underrepresented in the discussions, and that broadening the scope to scholars of Science and Technology Studies (STS) and the sociology of technology would also be relevant.

Summary text license
  Creative Commons BY 3.0 Unported license
  Astrid Weiss, Michael Fisher, Marija Slavkovik, and Christian List



Classification

  • Artificial Intelligence / Robotics
  • Society / Human-computer Interaction
  • Verification / Logic


Keywords

  • Verification
  • Artificial Morality
  • Social Robotics
  • Machine Ethics
  • Autonomous Systems
  • Explainable AI
  • Safety
  • Trust
  • Mathematical Philosophy
  • Robot Ethics


The Dagstuhl Reports series documents all Dagstuhl Seminars and Dagstuhl Perspectives Workshops. Together with the seminar's collector, the organizers compile a report that summarizes the authors' contributions and complements them with an overall summary.



Dagstuhl's Impact

Please inform us when a publication arises from your seminar. Such publications are listed separately under Dagstuhl's Impact and displayed on the ground floor of the library.


It also remains possible to publish a comprehensive collection of peer-reviewed papers in the Dagstuhl Follow-Ups series.