

Dagstuhl Seminar 24151

Methods and Tools for the Engineering and Assurance of Safe Autonomous Systems

( Apr 07 – Apr 12, 2024 )


Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/24151

Organizers

Contact

Dagstuhl Reports

As part of the mandatory documentation, participants are asked to submit their talk abstracts, working group results, etc. for publication in our series Dagstuhl Reports via the Dagstuhl Reports Submission System.

  • Upload (Use personal credentials as created in DOOR to log in)

Dagstuhl Seminar Wiki

Shared Documents

Schedule

Motivation

Autonomous systems are intended to operate without human intervention over prolonged time periods, perceive their operating environment, and adapt to changes – while pursuing defined goals or generating new ones. The perception functions process the inputs of various sensors and generate an internal model of the operating environment. By relying on this model, the decision functions plan and execute the actions required to achieve the goals of the mission.

To make an autonomous system safe, engineers must ensure that the perception functions build a sufficiently accurate model of the environment, i.e., that perception and the establishment of a context for prediction are reliable. They must also ensure that the planned actions are safe, i.e., that decisions do not result in actions that endanger humans or other agents in the operating environment.

Both sensing and decision-making usually rely on Artificial Intelligence (AI), in particular Machine Learning (ML). The problem of safe AI has received significant research and industrial attention over the last few years, but the approaches taken by the safety and ML communities have diverged. Moreover, it has become clear that the safety assurance problem cannot be resolved by improving ML algorithms alone. Hence, the research communities should collaborate on methods and tools that enable a holistic approach to the safety of autonomous systems. It is increasingly acknowledged that work is needed on ML methods, e.g., explainability to make algorithms transparent, predictable updates (learning without forgetting), and other areas. This should be complemented by a systems approach that enables safe autonomy through an integration of dedicated architectural, modelling, verification and validation, and assurance methods.

Clearly, the engineering and assurance of safe autonomous systems require fundamental research that goes well beyond efforts aimed at near-term industrial deployment. In particular, open research problems include building a robust world model; creating resilient architectures that enable graceful degradation and fail-operational behavior; providing safety assurance for high-consequence, long-tail events; and establishing ways to measure and regulate the safety of learning-enabled systems. To develop a holistic view of the safety of autonomous systems, we plan to discuss, systematize, and integrate these problems during the seminar.

This Dagstuhl Seminar aims to bring together researchers and practitioners from safety engineering, systems and software engineering, modelling, verification and validation, machine learning, robotics, and autonomous systems. Its goals are to identify the state of the art and the key research and industrial challenges in engineering safe autonomous systems, and to define a research roadmap for safe autonomy.

Copyright Ignacio J. Alvarez, Philip Koopman, John McDermid, Mario Trapp, and Elena Troubitsyna

Participants

Classification
  • Artificial Intelligence
  • Logic in Computer Science
  • Software Engineering

Keywords
  • safety-critical autonomous systems
  • software engineering
  • simulation-based verification and validation
  • safety assurance
  • AI