29.05.16 - 03.06.16, Seminar 16222

Engineering Moral Agents - from Human Morality to Artificial Morality

The following text appeared on our web pages prior to the seminar, and was included as part of the invitation.

Motivation

Context-aware, autonomous, and intelligent systems are becoming a presence in our society and are increasingly involved in decisions that affect our lives. Humanity has developed formal legal and informal moral and societal norms to govern its own social interactions. There exist no similar regulatory structures that can be applied by non-human agents. Artificial morality, also called machine ethics, is an emerging discipline within artificial intelligence concerned with the problem of designing artificial agents that behave as moral agents, i.e., adhere to moral, legal, and social norms.

Most work in artificial morality to date has been exploratory and speculative. The hard research questions in artificial morality are yet to be identified. Among these questions are: How can the “ethics” of moral machines be formalized, “quantified”, qualified, validated, verified, and modified? How can regulatory structures be built that address (un)ethical machine behavior? What are the wider societal, legal, and economic implications of introducing such machines into our society? It is evident that close interdisciplinary collaboration is necessary. Since robots and artificial beings entered fiction long before they became a reality, the views of people outside artificial intelligence research on the current and future limits of artificial intelligence are often distorted by science fiction and the popular media. Without awareness of the state of the art in artificial intelligence and engineering, it is therefore impossible to assess the necessary and sufficient conditions for an artificial intentional entity to be considered an artificial moral agent, and it remains challenging even with a deep understanding of that state of the art.

We expect the seminar to give researchers across the contributing disciplines an integrated overview of current research in machine morality and related topics. We hope to open a cross-disciplinary communication channel among researchers tackling artificial morality. We intend to work towards identifying the central research questions and challenges concerning

  1. the definition and operationalization of the concept of moral agency, as it applies to human and non-human systems;
  2. the formalization and algorithmization of ethical theories;
  3. the formal verifiability of machine ethics; and
  4. the regulatory structures that should govern the role of artificial agents and machines in our society.