

Dagstuhl Seminar 18171

Normative Multi-Agent Systems

( Apr 22 – Apr 27, 2018 )


Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/18171

Motivation

This seminar is the fifth edition in a series that started in 2007. The aim is to bring together researchers from various scientific disciplines, such as computer science, artificial intelligence, philosophy, law, cognitive science and the social sciences, to discuss the emerging topic of the responsibility of autonomous systems.

The seminar will build on the results of the previous editions, where the concepts of norm, norm compliance, norm violation, and norm enforcement were discussed and analysed. While the previous editions concentrated on how norms can be enforced on multi-agent systems to ensure compliant behaviour, or to respond to violating behaviour, this edition will focus solely on the concept of responsibility.

Autonomous software systems and multi-agent systems (MAS) in open environments require methodologies, models and tools to analyse and develop flexible control and coordination mechanisms – without them, it is not possible to steer the behaviour and interaction of such systems and to ensure important overall properties.

Normative Multi-Agent Systems is an established research area focusing on how norms can be used to control and coordinate autonomous systems and multi-agent systems without restricting the autonomy of the involved systems. Such control and coordination mechanisms allow autonomous systems to violate norms, but respond to norm violations by means of various sanctioning mechanisms.
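The enforcement style described above – permit violations, then sanction them – can be illustrated with a minimal sketch. All names here (the `Norm` and `Agent` classes, the `enforce` function, the speeding example) are illustrative assumptions, not artifacts of the seminar:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    name: str
    forbidden: Callable[[str], bool]  # True if the action violates the norm
    sanction: int                     # utility penalty applied on violation

@dataclass
class Agent:
    name: str
    utility: int = 0

def enforce(agent: Agent, action: str, norms: list) -> list:
    """Let the action happen (preserving autonomy), but sanction
    every norm the action violates."""
    violated = [n for n in norms if n.forbidden(action)]
    for n in violated:
        agent.utility -= n.sanction
    return [n.name for n in violated]

norms = [Norm("no-speeding", lambda a: a == "speed", sanction=3)]
driver = Agent("car-1")
print(enforce(driver, "speed", norms))  # ['no-speeding']
print(driver.utility)                   # -3
```

The point of the design is that the norm layer never blocks the agent's choice; it only attaches consequences, which is what distinguishes regimentation from the sanction-based enforcement discussed here.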

This seminar focuses on how the responsibility of autonomous systems and multi-agent systems can be defined, modelled, analysed and computed. The participants come from the disciplines mentioned above and reflect the interdisciplinary nature of the seminar.

Copyright Mehdi Dastani, Jürgen Dix, Harko Verhagen, and Serena Villata

Summary

The multi-disciplinary workshop on Normative Multi-Agent Systems attracted leading international scholars from different research fields (e.g. theoretical computer science, programming languages, cognitive sciences, law, and social sciences).

The seminar was a blend of talks, discussions and group work. It began on the first day with short "teaser talks" (10+5 minutes) related to the main topic of norms and responsibility, given by almost every participant. The talks were meant to be inspiring and thought-provoking, channelling ideas for the following days. While some missed the established procedure with longer talks, the new format was overall very well received and allowed many different thoughts and concepts to be presented and discussed in a relatively short time.

Four working groups formed at the end of the first day, on the norm-related topics of responsibility, new logics, ethics/values, and (machine) learning.

The aim of the group sessions, on the second and fourth day, was to get a shared understanding of the specific topics and to identify future research possibilities. Each group reported back in a plenary session at the end of each group work day, where the groups also tried to establish interconnections between them.

Responsibility. This group discussed how to grasp the very abstract concept of responsibility. A large part of the discussion was dedicated to the formalisation of responsibility, and many (vastly different) assumptions were laid out. The problem of "delegating responsibility" was discussed with special intensity. The group (being by far the largest one) later split up to discuss different notions of responsibility on the basis of selected examples. A working paper was produced and is included in this report under the responsibility section.

New logics. The aim of this group was to find out how to tackle norms and responsibility in terms of logics, especially how new logics for this task could be devised.

Ethics/values. This group discussed the more ethics-oriented aspects of normative systems. Values provide an additional layer for normative reasoning, e.g. "how acceptable is it to violate a given norm?" The group produced a draft of a paper on "The Value(s) of Water", connecting NorMAS to the AI for Good initiative. Work is planned to continue during 2018, resulting in a paper for publication, e.g. in Communications of the ACM or a similar outlet.

(Machine) Learning. The learning group discussed the opportunity of integrating norms and responsibility into machine learning procedures. As these procedures are usually opaque, this presents a notable challenge. For example, the input data for learning has to be pre-processed to obtain a normatively acting system. Also, the learned sub-symbolic system should be enhanced with "regular" symbolic reasoning, which can be better regulated by norms and analysed for responsibility.
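The hybrid architecture sketched by the learning group – an opaque learned component vetted by a symbolic norm layer – might look like the following. The policy, states, actions and norm encoding are toy stand-ins chosen for illustration, not anything produced at the seminar:

```python
def learned_policy(state: str) -> list:
    # Stand-in for an opaque sub-symbolic model: returns candidate
    # actions ordered by the model's preference.
    return {"intersection": ["run_light", "brake", "honk"]}.get(state, ["wait"])

def permitted(action: str, norms: set) -> bool:
    # Symbolic check: an action is permitted unless a norm forbids it.
    return f"forbidden({action})" not in norms

def act(state: str, norms: set) -> str:
    # Take the most-preferred learned action that the norms permit,
    # falling back to a safe no-op if every candidate is forbidden.
    for action in learned_policy(state):
        if permitted(action, norms):
            return action
    return "noop"

norms = {"forbidden(run_light)"}
print(act("intersection", norms))  # brake
```

Keeping the norm check in a separate symbolic layer, rather than inside the learned model, is what makes the overall system's behaviour regulable and auditable for responsibility, as the group's discussion suggests.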

The fourth day was further enriched by a brainstorming session to identify possible applications. The subsequent clustering revealed the following topics:

  • transport, e.g. smart grid/home, intelligent cars,
  • tools, e.g. for autonomous service composition, legal reasoning, or supporting software/requirements engineering,
  • climate & agriculture, e.g. agents negotiating fertilizer and water use, or an app that helps monitoring personal climate-affecting activities,
  • societies, e.g. norms improving sustainability, monitoring of online forums for bad behavior or hate speech detection,
  • security, e.g. protecting personal freedom by dynamically analysing normative consequences of law proposals, monitoring a company's compliance with EU regulations, improving access to restricted access datasets, or making societies resilient for data surveillance by means of contract negotiations,
  • health, e.g. ethical decision-making, norms for improving personal health and fitness, defining wellbeing by norms, handling of patient/health data, and a big interest in healthcare robots,
  • energy, e.g. modelling energy security with norms, managing air quality, observing long-term consequences, agents monitoring (personal) energy use to identify bad behavior, or regulating industrial relations or the energy and material footprint.

The application areas were discussed in a plenary session and formed the input to the discussion on future plans for the NorMAS community. Several conferences were identified as targets for proposals for a NorMAS-related workshop as part of the event. The community sees many relevant application areas, not least in autonomous internet services and physical agents such as robots, vehicles and drones, where social reasoning will be of the utmost importance. Bringing the work from NorMAS to these areas will be highly beneficial to the communities involved.

Copyright Mehdi Dastani, Jürgen Dix, and Harko Verhagen

Participants
  • Tobias Ahlbrecht (TU Clausthal, DE) [dblp]
  • Natasha Alechina (University of Nottingham, GB) [dblp]
  • Kevin D. Ashley (University of Pittsburgh, US) [dblp]
  • Matteo Baldoni (University of Turin, IT) [dblp]
  • Christoph Benzmüller (FU Berlin, DE) [dblp]
  • Célia da Costa Pereira (Laboratoire I3S - Sophia Antipolis, FR) [dblp]
  • Mehdi Dastani (Utrecht University, NL) [dblp]
  • Tiago de Lima (CNRS - Lens, FR) [dblp]
  • Davide Dell'Anna (Utrecht University, NL) [dblp]
  • Jürgen Dix (TU Clausthal, DE) [dblp]
  • Ali Farjami (University of Luxembourg, LU) [dblp]
  • Dov M. Gabbay (King's College London, GB) [dblp]
  • Aditya K. Ghose (University of Wollongong, AU) [dblp]
  • Matthias Grabmair (Carnegie Mellon University - Pittsburgh, US) [dblp]
  • Joris Hulstijn (Tilburg University, NL) [dblp]
  • Wojtek Jamroga (Polish Academy of Sciences - Warsaw, PL) [dblp]
  • Özgür Kafali (University of Kent - Canterbury, GB) [dblp]
  • Sabrina Kirrane (Wirtschaftsuniversität Wien, AT) [dblp]
  • Brian Logan (University of Nottingham, GB) [dblp]
  • Emiliano Lorini (University of Toulouse, FR) [dblp]
  • Martin Neumann (Jacobs University Bremen, DE) [dblp]
  • Pablo Noriega (IIIA - CSIC - Barcelona, ES) [dblp]
  • Julian Padget (University of Bath, GB) [dblp]
  • Adrian Paschke (FU Berlin, DE) [dblp]
  • Nicolas Payette (LABSS - ISTC - CNR - Rome, IT) [dblp]
  • Ken Satoh (National Institute of Informatics - Tokyo, JP) [dblp]
  • Matthias Scheutz (Tufts University - Medford, US) [dblp]
  • Viviane Torres da Silva (IBM Research - Rio de Janeiro, BR) [dblp]
  • Leon van der Torre (University of Luxembourg, LU) [dblp]
  • Harko Verhagen (Stockholm University, SE) [dblp]
  • Douglas Walton (University of Windsor, CA) [dblp]
  • Michael Winikoff (University of Otago, NZ) [dblp]
  • Vahid Yazdanpanah (University of Twente, NL) [dblp]

Related Seminars
  • Dagstuhl Seminar 07122: Normative Multi-agent Systems (2007-03-18 - 2007-03-23) (Details)
  • Dagstuhl Seminar 09121: Normative Multi-Agent Systems (2009-03-15 - 2009-03-20) (Details)
  • Dagstuhl Seminar 12111: Normative Multi-Agent Systems (2012-03-11 - 2012-03-16) (Details)
  • Dagstuhl Seminar 15131: Normative Multi-Agent Systems (2015-03-22 - 2015-03-27) (Details)

Classification
  • artificial intelligence / robotics
  • semantics / formal methods
  • software engineering

Keywords
  • Responsibility
  • Autonomous Systems
  • Norm-based systems
  • Control and Coordination