March 10–15, 2019, Dagstuhl Seminar 19112

Engineering Reliable Multiagent Systems


Jürgen Dix (TU Clausthal, DE)
Brian Logan (University of Nottingham, GB)
Michael Winikoff (University of Otago, NZ)


Dagstuhl Report, Volume 9, Issue 3


The multi-disciplinary workshop on Reliable Multiagent Systems attracted 26 leading international scholars from different research fields, including theoretical computer science, engineering multiagent systems, machine learning, and ethics in artificial intelligence.

This seminar can be seen as a first step towards establishing a new research agenda for engineering reliable autonomous systems: clarifying the problem, its properties, and their implications for solutions.

In order to move towards a true cross-community research agenda for addressing the overarching challenge of engineering reliable autonomous systems, we chose a slightly different organization than usual: the seminar consisted of (short) talks (days 1 and 2) and extensive discussions and dedicated group work (days 3–5).

The first two days opened with two longer tutorials (45 minutes each), followed by short "teaser talks" (10+5 minutes) related to the main topic of reliable multiagent systems. Almost all participants gave their view of the topic and highlighted possible contributions. The talks were meant to be less "conference-style" and more inspiring and thought-provoking.

At the end of the second day, we established four working groups to dive deeper into the following questions:

  1. What (detailed) process can be used to identify properties that a particular reliable autonomous system or MAS needs to satisfy?
  2. How can we engineer reliable autonomous systems that include learning?
  3. How can we engineer reliable autonomous systems that include human-machine interaction (including human-software teamwork)?
  4. How can we engineer reliable autonomous systems comprising multiple agents (considering teamwork, collaboration, competitiveness, swarms, ...)?

The groups met on Wednesday and Thursday for extensive discussions and reported back intermediate results in plenary sessions. Participants were encouraged to move between groups to enrich them with their expertise. The seminar concluded on Friday morning with a general discussion where all groups presented their results.

We summarise below the key results from the four discussion groups.

Identifying properties: This group discussed the challenge of identifying requirement properties to be verified. It focused in particular on autonomous systems that replace humans in domains that are subject to regulation, since these are most likely to require and benefit from formal verification.

The group articulated the following arguments:

  • That autonomous systems be viewed in terms of three layers: a continuous control layer at the bottom, a "regulatory" layer in the middle, and an "ethical" layer at the top. The distinction between the regulatory and ethical layers is that the former deals with the expected normal behaviour (e.g. following the standard rules of the domain), whereas the latter deals with reasoning in situations where the rules need to be broken, for example, breaking a road rule given appropriate justification.
  • That for these sorts of systems we can derive verification properties by considering the licensing that is used for humans and how human skills and capabilities are assessed, as well as relevant human capabilities that are assumed, and relevant laws and regulations. The group sketched a high-level process for identifying requirement properties by considering these factors.
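The layered view above can be sketched in code. The following is a minimal illustration, not a design from the seminar: all rule sets, action names, and the driving scenario are hypothetical assumptions chosen only to show how a regulatory check and an ethical override might be composed.

```python
# Hypothetical sketch of the three-layer view: a control layer proposes
# actions, a regulatory layer checks them against the domain's normal
# rules, and an ethical layer may authorise a justified rule violation.
# All names and rules here are illustrative, not taken from the report.

def control_layer(situation):
    # Continuous control: propose an action for the current situation.
    return "cross_solid_line" if situation.get("obstacle") else "keep_lane"

def regulatory_layer(action):
    # Expected normal behaviour: the standard rules of the domain.
    road_rules = {"keep_lane", "slow_down", "stop"}
    return action in road_rules

def ethical_layer(action, situation):
    # Reasoning about when a rule may justifiably be broken,
    # e.g. crossing a solid line to avoid a collision.
    return action == "cross_solid_line" and bool(situation.get("obstacle"))

def decide(situation):
    action = control_layer(situation)
    if regulatory_layer(action):
        return action
    if ethical_layer(action, situation):
        return action  # rule broken with justification
    return "stop"      # safe fallback

print(decide({"obstacle": True}))   # cross_solid_line (justified violation)
print(decide({"obstacle": False}))  # keep_lane
```

The point of the sketch is the ordering: the ethical layer is only consulted when the regulatory layer rejects an action, mirroring the report's framing of ethics as reasoning about when rules need to be broken.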

The group considered a range of domains, for each one showing how these points would apply.

These ideas were developed into a draft paper during the workshop, and work on this paper has continued subsequently.

Learning in reliable autonomous systems: The second group was concerned with methods for engineering reliable autonomous systems that involve learning.

The notion of sufficient reliability varies from domain to domain. For example, in the planning of telecommunication networks there are simulators that are trusted to be a good model of reality; hence the simulation rules could be used for formal verification. In other domains, such as autonomous driving, there is no established trusted model of reality. Even assuming a formal model exists and safety properties can be formulated in temporal logic, challenges remain: complex models with a large state space and hybrid continuous and discrete behaviour can make formal verification intractable, especially when the learned policies are equally complex. On the other hand, learning methods (e.g. reinforcement learning) often "discover" key strategies that do not depend on all details of the system. The group discussed ideas for abstracting/discretizing transition systems based on learned policies.
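The abstraction idea can be made concrete with a toy sketch. The following is an illustrative assumption, not a method proposed at the seminar: a fixed (stand-in for learned) policy drives a one-dimensional continuous system, its trajectories are discretized into intervals to obtain a finite transition system, and a safety property is then checked by simple reachability over the abstraction.

```python
# Illustrative sketch: abstract a continuous system under a fixed
# ("learned") policy into a finite transition system, then check a
# safety property by reachability. All details are toy assumptions.

def policy(x):
    # Stand-in for a learned controller: push the state toward 0.
    return -0.5 * x

def step(x):
    # Closed-loop dynamics under the policy.
    return x + policy(x)

def abstract(x, width=1.0):
    # Discretize the continuous state into an interval index.
    return int(x // width)

def build_abstraction(samples, width=1.0, horizon=20):
    # Collect abstract transitions observed along sampled trajectories.
    transitions = set()
    for x in samples:
        for _ in range(horizon):
            nxt = step(x)
            transitions.add((abstract(x, width), abstract(nxt, width)))
            x = nxt
    return transitions

def reachable(transitions, init):
    # Breadth-first exploration of the finite abstract system.
    frontier, seen = {init}, {init}
    while frontier:
        s = frontier.pop()
        for (a, b) in transitions:
            if a == s and b not in seen:
                seen.add(b)
                frontier.add(b)
    return seen

trans = build_abstraction([4.0, -4.0, 2.5])
# Safety property: abstract states with |s| > 10 are "unsafe".
bad = {s for s in reachable(trans, abstract(4.0)) if abs(s) > 10}
print(bad)  # set() -- no unsafe state is reachable in the abstraction
```

Note the caveat the group's discussion implies: the abstraction here is built from sampled trajectories, so it under-approximates the real transition relation; a sound verification result would require an over-approximating abstraction instead.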

Human-machine interaction in reliable autonomous systems: The third group focused on how to engineer reliable human-agent interaction.

For that, the first step was to carve out what it means for human-machine communication to be reliable. Values and norms are certainly involved. Drawing on human communication, being truthful, staying up to date with relevant knowledge, and honouring commitments are major parts. Another important building block is transparency: is it always clear which values are in play, what the agent's purpose is, or what happens with the collected data? The desired result would be reliable outcomes, e.g. through reliably following a protocol, effective communication, and reaching a shared understanding. A number of tools and methods to achieve this were identified: stakeholder analysis, plan patterns/activity diagrams, interaction design patterns, appropriate human training, and explainability (i.e. explainable AI) were among the most prominent engineering solutions. This group concluded its discussions early, and its members then joined the other groups.

Multiple agents in reliable autonomous systems: The fourth group focused on the challenges of ensuring reliability in systems comprising multiple, possibly heterogeneous, agents interacting in complex ways.

A number of issues emerged from the discussion, including what it means for a multiagent system to be "collectively reliable", and how the collective reliability of the system as a whole relates to the reliability (or otherwise) of the individual agents and of the coordination mechanisms through which they interact. These issues were broken down into more specific engineering challenges, including:

  • which languages should be used to express collective reliability properties (closely related to the discussion of the first group), and how such properties should be verified;
  • how to engineer reliable coordination mechanisms when we have only partial access to the states of agents;
  • how to decompose and/or distribute the monitoring and control of individual agents (and associated failure recovery) necessary for reliable coordination;
  • how to do all of the above in systems where agents learn (closely related to the discussion of the second group); and, finally,
  • how to allocate responsibility to individual agents when behaviour is not reliable.
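The challenge of monitoring with only partial access to agent states can be sketched as follows. This is a hypothetical illustration, not a design from the group: each agent exposes a local monitor over its externally visible outputs only, and a coordinator combines the local verdicts into a verdict on a (conjunctively decomposed) collective property. The contract and all names are invented for the example.

```python
# Illustrative sketch: distributed monitoring of a collective property
# with only partial access to agent state. Each local monitor sees an
# agent's visible outputs, not its internals; the collective verdict is
# a conjunction of local verdicts. All names here are assumptions.

from typing import Callable, Dict

class LocalMonitor:
    """Observes one agent's visible outputs, not its internal state."""
    def __init__(self, check: Callable[[dict], bool]):
        self.check = check
        self.ok = True

    def observe(self, visible_output: dict) -> None:
        # A single violation makes the local verdict permanently False.
        self.ok = self.ok and self.check(visible_output)

def collective_verdict(monitors: Dict[str, LocalMonitor]) -> bool:
    # Collective reliability here: every agent's observed behaviour
    # satisfies its local contract (a conjunctive decomposition).
    return all(m.ok for m in monitors.values())

# Example contract: each agent must reply within its deadline.
monitors = {
    "a1": LocalMonitor(lambda o: o["reply_ms"] <= 100),
    "a2": LocalMonitor(lambda o: o["reply_ms"] <= 100),
}
monitors["a1"].observe({"reply_ms": 40})
monitors["a2"].observe({"reply_ms": 250})  # a2 violates its contract
print(collective_verdict(monitors))  # False
```

The simplification is deliberate: many collective properties (e.g. fairness of a joint outcome) do not decompose conjunctively over individual contracts, which is exactly where the engineering challenges identified above begin.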

A more detailed research agenda for engineering reliable multiagent systems is in preparation, which we plan to publish as a "position paper" in a journal special issue arising from the work at the workshop.

Summary text license
  Creative Commons BY 3.0 Unported license
  Jürgen Dix, Brian Logan, and Michael Winikoff


  • Semantics / Formal Methods
  • Software Engineering
  • Verification / Logic


  • Agent-oriented programming
  • Software and verification methodologies
  • Multi agent systems
  • Reliability


All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the Dagstuhl Reports series. Together with the seminar's collector, the organizers compile a report that summarises the authors' contributions and adds an executive summary.



Dagstuhl's Impact

Please inform us if a publication arises from your seminar. Such publications are listed separately under Dagstuhl's Impact and displayed on the ground floor of the library.


There is also the option of publishing a comprehensive collection of peer-reviewed papers in the Dagstuhl Follow-Ups series.