March 10–15, 2019, Dagstuhl Seminar 19112

Engineering Reliable Multiagent Systems


Jürgen Dix (TU Clausthal, DE)
Brian Logan (University of Nottingham, GB)
Michael Winikoff (University of Otago, NZ)



There is increasing interest in the application of autonomous systems technology in areas such as driverless cars, UAVs, manufacturing, healthcare, personal assistants, etc. Robotics and Autonomous Systems have been identified as one of the Eight Great Technologies with the potential to revolutionise our economy and society. For example, it has been claimed that the "economic, cultural, environmental and social impacts and benefits [of autonomous systems] will be unprecedented".

However, widespread deployment of autonomous systems, and the benefits that follow from it, will only be achieved if such systems can be shown to operate reliably. Reliable operation is essential for public and regulatory acceptance, as well as for the myriad societal changes necessary for widespread deployment (e.g., liability insurance).

Autonomous systems can be viewed as a particular kind of (multi)agent system, where the focus is on the problem of achieving flexible intelligent behaviour in dynamic and unpredictable environments. Demonstrating that such a system will operate reliably is an extremely challenging problem. The potential “behaviour space” of many systems (e.g., robots for care of the elderly) is vastly larger than that addressed by current approaches to engineering reliable systems. Multiagent/autonomous systems are implicitly expected to be able to “do the right thing” in the face of conflicting objectives and in complex, ill-structured environments. Addressing these challenges cannot be achieved by incremental improvements to existing software engineering and verification methodologies, but will require step changes in how we specify, engineer, test and verify systems.

The expertise necessary for building reliable autonomous systems is currently distributed across a number of communities including:

  • software engineering focussed on autonomous systems, e.g., robots, intelligent agents and multiagent systems;
  • software verification; and
  • subareas of AI, such as machine ethics and machine learning, to deal with (self-)adaptive systems.

In addition, the development of a research agenda should be informed by real industrial applications of autonomous systems.

This Dagstuhl Seminar is intended as a first step towards establishing a new research agenda for engineering reliable autonomous systems: clarifying the problem, its properties, and their implications for solutions.

  Creative Commons BY 3.0 DE
  Jürgen Dix, Brian Logan, and Michael Winikoff


  • Semantics / Formal Methods
  • Software Engineering
  • Verification / Logic


  • Agent-oriented programming
  • Software and verification methodologies
  • Multiagent systems
  • Reliability


In the series Dagstuhl Reports each Dagstuhl Seminar and Dagstuhl Perspectives Workshop is documented. The seminar organizers, in cooperation with the collector, prepare a report that includes contributions from the participants' talks together with a summary of the seminar.




Furthermore, a comprehensive peer-reviewed collection of research papers can be published in the series Dagstuhl Follow-Ups.

Dagstuhl's Impact

Please inform us when a publication appears as a result of your seminar. Such publications are listed in the category Dagstuhl's Impact and are presented on a special shelf on the ground floor of the library.
