- A Defeasible Logic Implementation of Ethical Reasoning : article in First International Workshop on Computational Machine Ethics (CME-2021) - Dennis, Louise A.; Perea del Olmo, Cristina - Aachen : CEUR, 2021. - 6 pp.
- Big data justice : a case for regulating the global information commons : article in press in "Journal of Politics" - Spiekermann, Kai; Slavny, Adam; Axelsen, David V.; Lawford-Smith, Holly - Chicago : Univ. of Chicago, 2020. - 38 pp. - (Journal of Politics ; in press).
- Taxonomy of Trust-Relevant Failures and Mitigation Strategies : article in HRI '20 : Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction - Tolmeijer, Suzanne; Weiss, Astrid; Hanheide, Marc; Lindner, Felix; Tielman, Myrthe L.; Dixon, Clare; Powers, Thomas M. - New York : ACM, 2020. - pp. 3-12.
- Trust and the discrepancy between expectations and actual capabilities of social robots : article in press in D. Zhang and B. Wei (Eds.), Human-robot interaction: Control, analysis, and design - Malle, Bertram F.; Fischer, Kerstin; Young, James E.; Moon, AJung; Collins, Emily C. - New York : Cambridge Scholars Publishing, 2020. - 23 pp.
Artificial morality, also called machine ethics, is an emerging field in artificial intelligence that explores how autonomous systems can be endowed with sensitivity and respect for the legal, social, and ethical norms of human society. Academics, engineers, and the public at large are all wary of autonomous systems, particularly robots, drones, “driverless” cars, etc. Robots will share our physical space, so how will this change us? With the predictions of roboticists in hand, we can paint portraits of how these technical advances will lead to new experiences and how these experiences may change the ways we function in society. Two key issues dominate once robot technologies have advanced and yielded new ways in which we and robots share the world:
- will robots behave ethically, i.e. as we would want them to; and
- can we trust them to act to our benefit?
It is these barriers concerning ethics and trust, rather than any engineering issues, that are holding back the development and use of autonomous systems. One of the hardest challenges in robotics is reliably determining desirable and undesirable behaviours for robots. Our aim here is to advance work in these areas by bringing together a range of disciplines that can bear on these problems.
Some of us organised Dagstuhl Seminar 16222, Engineering Moral Agents: From Human Morality to Artificial Morality, in 2016 with the goal of initiating a conversation between philosophers studying ethics, robotics researchers developing novel autonomous machines, and computer scientists studying AI and reasoning. That seminar provided a clearer understanding of the issues and opened several avenues for future collaboration. However, it also highlighted further important areas to be explored, specifically:
- the extension of ‘ethics’ to also address issues of ‘trust’;
- the practical problems of implementing ethical and trustworthy autonomous machines; and
- the new verification and validation techniques that will be required to assess these dimensions.
We expect the seminar to:
- Give researchers across the contributing disciplines an integrated overview of current research in machine ethics and trustworthy robotics from the artificial intelligence side and of relevant areas of philosophy and psychology.
- Open up a communication channel among researchers tackling ethics and trust, bridging the computer science/humanities/social-science divide in these fields.
- Identify the central research questions and challenges concerning (i) the definition and operationalisation of the concept of ethics and trust in autonomous systems; (ii) the formalisation and algorithmization of theories of ethics and trust; and (iii) the relationships between ethics and trust in both human and non-human systems.
- Identify existing and potential societal consequences of these systems. What are the risks, what are the opportunities, and what are beneficial use cases for these systems?
Artificial ethics and trust between humans and autonomous entities both bring together many disciplines that hold a vast amount of relevant knowledge and expertise, but that are often inaccessible to one another and insufficiently develop their mutual synergies. Researchers need to communicate their experiences, research interests, and knowledge to one another in order to move forward. We plan the seminar as a combination of three formats: tutorials, contributed talks, and discussion sessions.
- Ethik und Vertrauen – Prinzipien und Überprüfung moralischer Computer (Ethics and Trust – Principles and Verification of Moral Computers)
Press release in German
Academics, engineers, and the public at large, are all wary of autonomous systems, particularly robots, drones, "driver-less" cars, etc. Robots will share our physical space, and so how will this change us? With the predictions of roboticists in hand, we can paint portraits of how these technical advances will lead to new experiences and how these experiences may change the ways we function in society. Two key issues are dominant once robot technologies have advanced further and yielded new ways in which we and robots share the world: (1) will robots behave ethically, i.e. as we would want them to, and (2) can we trust them to act to our benefit. It is more these barriers concerning ethics and trust than any engineering issues that are holding back the widespread development and use of autonomous systems. One of the hardest challenges in robotics is to reliably determine desirable and undesirable behaviours for robots. We are currently undergoing another technology-led transformation in our society driven by the outsourcing of decisions to intelligent, and increasingly autonomous, systems. These systems may be software or embodied units that share our environment. The decisions they make have a direct impact on our lives. With this power to make decisions comes the responsibility for the impact of these decisions - legal, ethical and personal. But how can we ensure that these artificial decision-makers can be trusted to make safe and ethical decisions, especially as the responsibility placed on them increases?
The related previous Dagstuhl Seminar 16222, Engineering Moral Agents: From Human Morality to Artificial Morality, held in 2016, highlighted further important areas to be explored, specifically:
- the extension of 'ethics' to also address issues of 'trust';
- the practical problems of implementing ethical and trustworthy autonomous machines;
- the new verification and validation techniques that will be required to assess these dimensions.
Thus, we concluded that the area would benefit from a follow-up seminar that broadens the scope to Human-Robot Interaction (HRI) and (social) robotics research.
We conducted a four-day seminar (one day shorter than usual due to Easter) with 35 participants from diverse academic backgrounds, including AI, philosophy, social epistemology, Human-Robot Interaction, (social) robotics, logic, linguistics, political science, and computer science. The first day of the seminar was dedicated to seven invited 20-minute talks that served as tutorials. Given the highly interdisciplinary nature of the seminar, participants needed to be brought up to speed quickly with the state of the art in disciplines other than their own. Moreover, these tutorials were intended to help develop a common language among the researchers at the seminar. After the tutorials, all participants had the chance to introduce their seminar-related research in 5-minute contributed talks, which served as a concise way to present oneself and introduce topics for discussion.
Based on these inputs, four topics were derived and further explored in working groups through the rest of the seminar: (1) change of trust, including challenges and methods to foster and repair trust; (2) towards artificial moral agency; (3) how do we build practical systems involving ethics and trust? (two sub-groups); and (4) the broader context of trust in HRI: the discrepancy between expectations and capabilities of autonomous machines. This report summarizes some of the highlights of those discussions and includes abstracts of the tutorials and some of the contributed talks. Ethical and trustworthy autonomous systems will continue to be an important topic in the coming years. We consider it essential to continue these cross-disciplinary efforts, above all because the seminar revealed that the "interactional perspective" of the "human-in-the-loop" is so far underrepresented in the discussions, and that broadening the scope to STS (Science and Technology Studies) and sociology-of-technology scholars would also be relevant.
- Andrea Aler Tubella (University of Umeå, SE) [dblp]
- Einar Duenger Bøhn (University of Agder, NO)
- Jan M. Broersen (Utrecht University, NL) [dblp]
- Raja Chatila (Sorbonne University - Paris, FR) [dblp]
- Emily Collins (University of Liverpool, GB) [dblp]
- Louise A. Dennis (University of Liverpool, GB) [dblp]
- Franz Dietrich (Paris School of Economics & CNRS, FR) [dblp]
- Clare Dixon (University of Liverpool, GB) [dblp]
- Hein Duijf (Free University Amsterdam, NL) [dblp]
- Abeer Dyoub (University of L'Aquila, IT) [dblp]
- Sjur K. Dyrkolbotn (West. Norway Univ. of Applied Sciences - Bergen, NO) [dblp]
- Kerstin I. Eder (University of Bristol, GB) [dblp]
- Kerstin Fischer (University of Southern Denmark - Sonderborg, DK) [dblp]
- Michael Fisher (University of Liverpool, GB) [dblp]
- Marc Hanheide (University of Lincoln, GB) [dblp]
- Holger Hermanns (Universität des Saarlandes, DE) [dblp]
- John F. Horty (University of Maryland - College Park, US) [dblp]
- Maximilian Köhl (Universität des Saarlandes, DE) [dblp]
- Robert Lieck (EPFL - Lausanne, CH) [dblp]
- Felix Lindner (Universität Freiburg, DE) [dblp]
- Christian List (London School of Economics, GB) [dblp]
- Bertram F. Malle (Brown University - Providence, US) [dblp]
- Andreas Matthias (Lingnan University - Hong Kong, HK) [dblp]
- AJung Moon (Open Roboethics Institute - Vancouver, CA) [dblp]
- Marcus Pivato (University of Cergy-Pontoise, FR) [dblp]
- Thomas Michael Powers (University of Delaware - Newark, US) [dblp]
- Teresa Scantamburlo (University of Venice, IT) [dblp]
- Munindar P. Singh (North Carolina State University - Raleigh, US) [dblp]
- Marija Slavkovik (University of Bergen, NO) [dblp]
- Kai Spiekermann (London School of Economics, GB) [dblp]
- Myrthe Tielman (TU Delft, NL) [dblp]
- Suzanne Tolmeijer (Universität Zürich, CH) [dblp]
- Leon van der Torre (University of Luxembourg, LU) [dblp]
- Astrid Weiss (TU Wien, AT) [dblp]
- James E. Young (University of Manitoba - Winnipeg, CA) [dblp]
- artificial intelligence / robotics
- society / human-computer interaction
- verification / logic
- Artificial Morality
- Social Robotics
- Machine Ethics
- Autonomous Systems
- Explainable AI
- Mathematical Philosophy
- Robot Ethics