Dagstuhl Seminar 19112

Engineering Reliable Multiagent Systems

(March 10 – 15, 2019)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/19112

Organizers
  • Jürgen Dix (TU Clausthal, DE)
  • Brian Logan (University of Nottingham, GB)
  • Michael Winikoff (University of Otago, NZ)



Motivation

There is increasing interest in the application of autonomous systems technology in areas such as driverless cars, UAVs, manufacturing, healthcare, personal assistants, etc. Robotics and Autonomous Systems have been identified as one of the Eight Great Technologies with the potential to revolutionise our economy and society. For example, it has been claimed that the “economic, cultural, environmental and social impacts and benefits [of autonomous systems] will be unprecedented”.

However, widespread deployment and the consequent benefits of autonomous systems will only be achieved if they can be shown to operate reliably. Reliable operation is essential for public and regulatory acceptance, as well as the myriad of societal changes necessary for their widespread deployment (e.g., liability insurance).

Autonomous systems can be viewed as a particular kind of (multi)agent system, where the focus is on the problem of achieving flexible intelligent behaviour in dynamic and unpredictable environments. Demonstrating that such a system will operate reliably is an extremely challenging problem. The potential “behaviour space” of many systems (e.g., robots for care of the elderly) is vastly larger than that addressed by current approaches to engineering reliable systems. Multiagent/autonomous systems are implicitly expected to be able to “do the right thing” in the face of conflicting objectives and in complex, ill-structured environments. Addressing these challenges cannot be achieved by incremental improvements to existing software engineering and verification methodologies, but will require step changes in how we specify, engineer, test and verify systems.

The expertise necessary for building reliable autonomous systems is currently distributed across a number of communities including:

  • software engineering focussed on autonomous systems, e.g., robots, intelligent agents and multiagent systems;
  • software verification; and
  • subareas of AI, such as ethics and machine learning, needed to deal with (self-)adaptive systems.

In addition, the development of a research agenda should be informed by real industrial applications of autonomous systems.

This Dagstuhl Seminar should be seen as a first step towards establishing a new research agenda for engineering reliable autonomous systems: clarifying the problem, its properties, and their implications for solutions.

Copyright Jürgen Dix, Brian Logan, and Michael Winikoff

Summary

The multi-disciplinary workshop on Reliable Multiagent Systems attracted 26 leading international scholars from different research fields, including theoretical computer science, engineering multiagent systems, machine learning, and ethics in artificial intelligence.

This seminar can be seen as a first step towards establishing a new research agenda for engineering reliable autonomous systems: clarifying the problem, its properties, and their implications for solutions.

In order to move towards a true cross-community research agenda for addressing the overarching challenge of engineering reliable autonomous systems, we chose a slightly different organization than usual: the seminar consisted of (short) talks (days 1 and 2) and extensive discussions and dedicated group work (days 3–5).

The first two days opened with two longer tutorials (45 minutes each), followed by short "teaser talks" (10+5 minutes) related to the main topic of reliable multiagent systems. Almost all participants gave their view of the topic and highlighted possible contributions. The talks were meant to be less "conference-style" and more inspiring and thought-provoking.

At the end of the second day, we established four working groups to dive deeper into the following questions:

  1. What (detailed) process can be used to identify properties that a particular reliable autonomous system or MAS needs to satisfy?
  2. How can we engineer reliable autonomous systems that include learning?
  3. How can we engineer reliable autonomous systems that include human-machine interaction (including human-software teamwork)?
  4. How can we engineer reliable autonomous systems comprising multiple agents (considering teamwork, collaboration, competitiveness, swarms, ...)?

The groups met on Wednesday and Thursday for extensive discussions and reported back intermediate results in plenary sessions. Participants were encouraged to move between groups to enrich them with their expertise. The seminar concluded on Friday morning with a general discussion where all groups presented their results.

We summarise below the key results from the four discussion groups.

Identifying properties: This group discussed the challenge of identifying requirement properties to be verified. It focused in particular on autonomous systems that replace humans in domains that are subject to regulation, since these are most likely to require and benefit from formal verification.

The group articulated the following arguments:

  • That autonomous systems be viewed in terms of three layers: a continuous control layer at the bottom, a "regulatory" layer in the middle, and an "ethical" layer at the top. The distinction between the regulatory and ethical layers is that the former deals with the expected normal behaviour (e.g. following the standard rules of the domain), whereas the latter deals with reasoning in situations where the rules need to be broken, for example, breaking a road rule given appropriate justification (see the sketch after this list).
  • That for these sorts of systems we can derive verification properties by considering the licensing that is used for humans and how human skills and capabilities are assessed, as well as relevant human capabilities that are assumed, and relevant laws and regulations. The group sketched a high-level process for identifying requirement properties by considering these factors.
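
To make the layering concrete, here is a minimal Python sketch of the three-layer view described above. All names (ControlLayer, RegulatoryLayer, EthicalLayer, the road-traffic observations) are hypothetical illustrations, not part of the group's proposal:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Action:
        """A proposed command; rule-breaking actions carry a justification."""
        name: str
        justification: Optional[str] = None

    class ControlLayer:
        """Bottom layer: continuous control (placeholder logic)."""
        def propose(self, obs: dict) -> Action:
            # Hypothetical rule: brake when an obstacle is close.
            if obs.get("obstacle_distance_m", 100.0) < 5.0:
                return Action("brake")
            return Action("keep_lane")

    class RegulatoryLayer:
        """Middle layer: checks actions against the normal rules of the domain."""
        def permitted(self, action: Action, obs: dict) -> bool:
            # Hypothetical road rule: crossing a solid line is normally forbidden.
            return action.name != "cross_solid_line"

    class EthicalLayer:
        """Top layer: decides when a rule may justifiably be broken."""
        def override(self, action: Action, obs: dict) -> Optional[Action]:
            if obs.get("collision_imminent", False):
                return Action("cross_solid_line",
                              justification="avoid imminent collision")
            return None

    def decide(obs: dict) -> Action:
        """Control proposes; ethics may override with a justified exception;
        otherwise the regulatory layer filters the proposal."""
        action = ControlLayer().propose(obs)
        exception = EthicalLayer().override(action, obs)
        if exception is not None:
            return exception  # rule-breaking, but with an explicit justification
        if RegulatoryLayer().permitted(action, obs):
            return action
        return Action("stop", justification="no permitted action available")

    print(decide({"obstacle_distance_m": 3.0, "collision_imminent": True}).name)
    # -> cross_solid_line

A design point worth noting in this sketch: the ethical layer may return a rule-breaking action, but only together with an explicit justification, which keeps such decisions auditable.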

The group considered a range of domains, showing for each how these points would apply.

These ideas were developed into a draft paper during the workshop, and work on this paper has continued subsequently.

Learning in reliable autonomous systems: The second group was concerned with methods for engineering reliable autonomous systems that involve learning.

The notion of sufficient reliability varies from domain to domain. For example, in the planning of telecommunication networks there are simulators that are trusted to be a good model of reality, so the simulation rules could be used for formal verification. In other domains, such as autonomous driving, there is no established trusted model of reality. Even assuming a formal model exists and safety properties can be formulated in temporal logic, challenges remain: complex models with a large state space and hybrid continuous and discrete behavior can make formal verification intractable, especially when the learned policies are equally complex. On the other hand, learning methods (e.g. reinforcement learning) often "discover" key strategies that do not depend on all details of the system. The group therefore discussed ideas for abstracting/discretizing transition systems based on learned policies.
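
As an illustration of the abstraction idea, the following Python sketch discretizes a continuous state space into grid cells, samples trajectories under a (hypothetical) learned policy to collect an abstract transition relation, and checks a safety property ("always not unsafe") by reachability over the abstraction. The `policy` and `env_step` interfaces are assumptions for illustration, not from any particular library:

    from collections import defaultdict, deque

    def abstract_state(state, cell_size=0.5):
        """Map a continuous state (a tuple of floats) to a discrete grid cell."""
        return tuple(int(x // cell_size) for x in state)

    def build_abstraction(policy, env_step, initial_states, horizon=50):
        """Sample trajectories under a learned policy and collect the induced
        abstract transition relation over grid cells. Assumed interfaces:
        policy(state) -> action, env_step(state, action) -> next_state."""
        transitions = defaultdict(set)
        init_cells = set()
        for s in initial_states:
            init_cells.add(abstract_state(s))
            for _ in range(horizon):
                s_next = env_step(s, policy(s))
                transitions[abstract_state(s)].add(abstract_state(s_next))
                s = s_next
        return init_cells, transitions

    def violates_safety(init_cells, transitions, unsafe):
        """Check 'always not unsafe' on the abstraction by reachability."""
        frontier, seen = deque(init_cells), set(init_cells)
        while frontier:
            cell = frontier.popleft()
            if unsafe(cell):
                return True  # some sampled behaviour reaches an unsafe region
            for nxt in transitions.get(cell, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

    # Toy usage: a 1-D system drifting right under a constant "learned" policy.
    policy = lambda s: 1.0
    env_step = lambda s, a: (s[0] + 0.1 * a,)
    init, trans = build_abstraction(policy, env_step, initial_states=[(0.0,)])
    print(violates_safety(init, trans, unsafe=lambda cell: cell[0] >= 8))  # True

Note that because this abstraction is built from sampled trajectories, a negative answer is only evidence of safety, not a proof; a sound over-approximation of the transition relation would be needed for formal guarantees.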

Human-machine interaction in reliable autonomous systems: The third group focused on how to engineer reliable human-agent interaction.

For that, the first step was to carve out what it means for human-machine communication to be reliable. Values and norms are clearly involved. Drawing on human communication, being truthful, staying up to date with relevant knowledge, and honouring commitments are major ingredients. Another important building block is transparency: is it always clear which values are in play, what the agent's purpose is, or what happens with the collected data? The desired result would be reliable outcomes, e.g. through reliably following a protocol, effective communication, and arriving at a shared understanding. A number of tools and methods to achieve this were identified: stakeholder analysis, plan patterns/activity diagrams, interaction design patterns, appropriate human training, and explainability (i.e. explainable AI) were among the most prominent engineering solutions. This group concluded its discussions early, and its members then distributed themselves among the other groups.
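
One way to make "reliably following a protocol" checkable at runtime is a simple conformance monitor over the protocol's state machine. The Python sketch below uses a hypothetical request/inform conversation protocol; the message types and states are invented for illustration:

    # Hypothetical request/inform protocol between a human and an agent:
    # state -> {message type -> next state}.
    PROTOCOL = {
        "start":   {"request": "pending"},
        "pending": {"clarify": "pending",   # agent may ask for clarification
                    "inform": "done",       # agent answers the request
                    "failure": "done"},     # agent honestly reports failure
        "done":    {},
    }

    class ProtocolMonitor:
        """Tracks one conversation and flags out-of-order messages, making
        protocol violations observable instead of silently mishandled."""
        def __init__(self):
            self.state = "start"

        def on_message(self, msg_type: str) -> bool:
            next_state = PROTOCOL[self.state].get(msg_type)
            if next_state is None:
                return False  # violation, e.g. 'inform' before any 'request'
            self.state = next_state
            return True

    m = ProtocolMonitor()
    assert m.on_message("request")     # ok: opens the conversation
    assert m.on_message("clarify")     # ok: agent asks for clarification
    assert m.on_message("inform")      # ok: request answered
    assert not m.on_message("inform")  # violation: conversation already closed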

Multiple agents in reliable autonomous systems: The fourth group focussed on the challenges of ensuring reliability in systems comprising multiple, possibly heterogeneous, agents interacting in complex ways.

A number of issues emerged from the discussion, including what it means for a multiagent system to be "collectively reliable", and how the reliability (or otherwise) of individual agents and of the coordination mechanisms through which they interact relates to the collective reliability of the system as a whole. These issues were broken down into more specific engineering challenges:

  • which languages should be used to express collective reliability properties (closely related to the discussion of the first group), and how such properties should be verified;
  • how to engineer reliable coordination mechanisms when we have only partial access to the states of agents (see the sketch after this list);
  • how to decompose and/or distribute the monitoring and control of individual agents (and associated failure recovery) necessary for reliable coordination;
  • how to do all of the above in systems where agents learn (closely related to the discussion of the second group); and
  • how to allocate responsibility to individual agents when behaviour is not reliable.
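
To illustrate the partial-access challenge, here is a minimal Python sketch of a three-valued check of a hypothetical collective property from partial agent reports; the quorum property and the report format are invented for illustration:

    from typing import Dict, Optional

    def collectively_reliable(reports: Dict[str, Optional[bool]],
                              quorum: int) -> Optional[bool]:
        """Three-valued check of a hypothetical collective property:
        'at least `quorum` agents are in a healthy local state'. Each agent
        exposes only a boolean summary of its state, or None if unreachable,
        modelling partial access to agent states."""
        healthy = sum(1 for v in reports.values() if v is True)
        unknown = sum(1 for v in reports.values() if v is None)
        if healthy >= quorum:
            return True    # enough confirmed healthy agents
        if healthy + unknown < quorum:
            return False   # even counting unknowns as healthy, quorum fails
        return None        # verdict depends on the unreachable agents

    # Five agents, one unreachable, quorum of four: undecided.
    reports = {"a1": True, "a2": True, "a3": True, "a4": None, "a5": False}
    print(collectively_reliable(reports, quorum=4))  # -> None

The three-valued verdict (True/False/None) mirrors a common idea in runtime verification: when some agents are unobservable, a monitor should report "undecided" rather than guess.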

A more detailed research agenda for engineering reliable multiagent systems is in preparation, which we plan to publish as a "position paper" in a journal special issue arising from the seminar.

Copyright Jürgen Dix, Brian Logan, and Michael Winikoff

Participants
  • Tobias Ahlbrecht (TU Clausthal, DE) [dblp]
  • Stefano V. Albrecht (University of Edinburgh, GB) [dblp]
  • Natasha Alechina (University of Nottingham, GB) [dblp]
  • Rem Collier (University College Dublin, IE) [dblp]
  • Mehdi Dastani (Utrecht University, NL) [dblp]
  • Louise A. Dennis (University of Liverpool, GB) [dblp]
  • Frank Dignum (Utrecht University, NL) [dblp]
  • Virginia Dignum (University of Umeå, SE) [dblp]
  • Jürgen Dix (TU Clausthal, DE) [dblp]
  • Niklas Fiekas (TU Clausthal, DE) [dblp]
  • Michael Fisher (University of Liverpool, GB) [dblp]
  • Koen V. Hindriks (University of Amsterdam, NL) [dblp]
  • Alexander Birch Jensen (Technical University of Denmark - Lyngby, DK) [dblp]
  • Malte S. Kließ (TU Delft, NL) [dblp]
  • Yves Lesperance (York University - Toronto, CA) [dblp]
  • Brian Logan (University of Nottingham, GB) [dblp]
  • Viviana Mascardi (University of Genova, IT) [dblp]
  • Ann Nowé (Free University of Brussels, BE) [dblp]
  • Alessandro Ricci (Università di Bologna, IT) [dblp]
  • Kristin Yvonne Rozier (Iowa State University, US) [dblp]
  • Holger Schlingloff (HU Berlin, DE) [dblp]
  • Marija Slavkovik (University of Bergen, NO) [dblp]
  • Kagan Tumer (Oregon State University, US) [dblp]
  • Michael Winikoff (University of Otago, NZ) [dblp]
  • Neil Yorke-Smith (TU Delft, NL) [dblp]

Classification
  • semantics / formal methods
  • software engineering
  • verification / logic

Keywords
  • agent-oriented programming
  • software and verification methodologies
  • multi agent systems
  • reliability