May 29 – June 3, 2016, Dagstuhl Seminar 16222
Engineering Moral Agents - from Human Morality to Artificial Morality
Marija Slavkovik (University of Bergen, NO)
Artificial morality, also called "machine ethics", is an emerging field in artificial intelligence that explores how artificial agents can be enhanced with sensitivity to and respect for the legal, social, and ethical norms of human society. This field is also concerned with the possibility and necessity of transferring the responsibility for the decisions and actions of the artificial agents from their designers onto the agents themselves. Additional challenging tasks include, but are not limited to: the identification of (un)desired ethical behaviour in artificial agents and its adjustment; the certification and verification of the artificial agents' ethical capacities; the identification of the adequate level of responsibility of an artificial agent; the dependence between the responsibility and the level of autonomy that an artificial agent possesses; and the place of artificial agents within our societal, legal, and ethical normative systems.
Artificial morality has become increasingly salient since the early years of this century, though its origins are older. Isaac Asimov famously proposed three laws of robotics, requiring that, first, robots must not harm humans or allow them to be harmed; second, robots must obey human orders provided this does not conflict with the first law; and third, robots must protect themselves provided this does not conflict with the first two laws.
Although there has been some discussion and analysis of possible approaches to artificial morality in computer science and related fields, the "algorithmization" and adaptation of ethical systems developed for human beings is both an open research problem and a difficult engineering challenge. At the same time, formally and mathematically oriented approaches to ethics are attracting the interest of an increasing number of researchers, including in philosophy. As this area is still in its infancy, we thought it could benefit from an "incubator event" such as an interdisciplinary Dagstuhl seminar. We conducted a five-day seminar with twenty-six participants from diverse academic backgrounds, including robotics, automated systems, philosophy, law, security, and political science. The first part of the seminar was dedicated to facilitating cross-disciplinary communication by giving researchers across the contributing disciplines an integrated overview of current research in machine morality on the artificial-intelligence side, and of relevant areas of philosophy on the moral-philosophy, action-theoretic, and social-scientific side. We accomplished this through tutorials and brief self-introductory talks. The second part of the seminar was dedicated to discussions around two key topics: how to formalise ethical theories and reasoning, and how to implement ethical reasoning. This report summarises some of the highlights of those discussions and includes the abstracts of the tutorials and some of the self-introductory talks. We also summarise our conclusions and observations from the seminar.
Although scientists without a philosophical background tend to have a general view of moral philosophy, a formal grounding in it and the ability to pinpoint its key advances and central works cannot be taken for granted. Kevin Baum from Saarland University presented a project currently in progress at his university, in which he is involved, on teaching formal ethics to computer-science students. The course material attracted great interest from the computer-science participants of the seminar. As a first step, a good catalyst for cooperation between computer science and moral philosophy would be a comprehensive database of moral-dilemma examples from the literature that could serve as benchmarks when formalising and implementing moral reasoning.
The formalisation of moral theories for the purpose of using them as a base for implementing moral reasoning in machines, and artificial autonomous entities in general, was met with great enthusiasm among non-computer scientists. Such work gives a unique opportunity to test the robustness of moral theories.
It is generally recognised that there exist two core approaches to artificial morality: explicitly constraining the potentially immoral actions of the AI system; and training the AI system to recognise and resolve morally challenging situations and actions. The first, constraint-based approach consists in finding a set of rules and guidelines that the artificial intentional entity has to follow, or that we can use to pre-check and constrain its actions. By contrast, training approaches consist in applying techniques such as machine learning to "teach" an artificial intentional entity to recognise morally problematic situations and to resolve conflicts, much as people are educated by their carers and community to become moral agents. Hybrid approaches combining both methods were also considered.
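To make the constraint-based approach concrete, the following is a minimal, hypothetical sketch of an "ethical governor" that pre-checks an agent's candidate actions against an explicit, ordered set of rules before execution. All names and rules here are illustrative assumptions for exposition (loosely in the spirit of Asimov-style laws), not a design discussed at the seminar.

```python
# Hypothetical sketch: constraint-based ("symbolic") pre-checking of actions.
# Actions are plain dictionaries; rules are predicates that an action must satisfy.

def is_permissible(action, rules):
    """Return True iff the action satisfies every rule."""
    return all(rule(action) for rule in rules)

def choose_action(candidates, rules):
    """Return the first candidate action permitted by all rules, or None."""
    for action in candidates:
        if is_permissible(action, rules):
            return action
    return None  # no morally permissible action available

# Illustrative rules, ordered by priority (assumed for this example):
rules = [
    lambda a: not a.get("harms_human", False),  # never harm a human
    lambda a: a.get("obeys_order", True),       # obey orders where recorded
]

candidates = [
    {"name": "push", "harms_human": True},   # forbidden by the first rule
    {"name": "warn", "harms_human": False},  # passes both rules
]

chosen = choose_action(candidates, rules)
print(chosen["name"])  # -> warn
```

One advantage of this style, as noted in the discussions below, is that because the rules are explicit symbolic objects, the governor itself is amenable to formal verification.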
It emerged that a clear advantage of constraining the potentially immoral actions of the entity, or the "symbolic approach" to ethical reasoning, is the possibility of using formal verification to test that the reasoning works as intended. If the learning approach is used, the learning should happen before the autonomous system is deployed, so that its moral behaviour can be tested. Unfortunately, the machine-learning community was severely under-represented at the seminar, and more effort should be devoted to including them in future discussions. The discussions also revealed that implementing moral reasoning in autonomous systems opens up many questions regarding the level of assurance that should be given to users of such systems, as well as the level of transparency into the moral-reasoning software that should be afforded to users, regulators, governments, and so on.
Machine ethics is a topic that will continue to develop in the coming years, particularly with many industries preparing to launch autonomous systems into our societies in the next five years. It is essential to continue open cross-disciplinary discussions to ensure that the moral reasoning implemented in those systems is designed by experts with a deep understanding of the topic, rather than by individual companies without such experts' input. It was our impression as organisers, perhaps immodest, that the seminar advanced the field of machine ethics and opened new communication channels. We therefore hope to propose a second seminar in 2018 on the same topic, drawing on the experience and lessons gained here, to continue the discussion and the flow of cross-disciplinary collaboration.
Creative Commons BY 3.0 Unported license
Michael Fisher, Christian List, Marija Slavkovik, and Alan F. T. Winfield
- Artificial Intelligence / Robotics
- Semantics / Formal Methods
- Verification / Logic
- Artificial Morality
- Machine Ethics
- Computational Morality
- Autonomous Systems
- Intelligent Systems
- Formal Ethics
- Mathematical Philosophy
- Robot Ethics