Dagstuhl-Seminar 23492
Model Learning for Improved Trustworthiness in Autonomous Systems
(December 3 – 8, 2023)
Organizers
- Ellen Enkel (Universität Duisburg-Essen, DE)
- Nils Jansen (Ruhr-Universität Bochum, DE)
- Mohammad Reza Mousavi (King's College London, GB)
- Kristin Yvonne Rozier (Iowa State University - Ames, US)
Contact
- Marsha Kleinbauer (for scientific matters)
- Simone Schilke (for administrative matters)
Shared Documents
- Dagstuhl Materials Page
Program
This report documents the program and the outcomes of Dagstuhl Seminar 23492, "Model Learning for Improved Trustworthiness in Autonomous Systems". Autonomous systems increasingly enter our everyday life, so there is a strong need for safety, correctness, trust, and explainability. Well-defined models with clear semantics offer a convenient way to address these requirements, and the area of model learning provides a structured way to obtain such models from data. However, autonomous systems operate in the real world and pose challenges that go beyond the state of the art in model learning. The technical challenges addressed in the seminar were system evolution and adaptation, learning heterogeneous models (covering aspects such as discrete and continuous behaviours, and stochastic and epistemic uncertainty), and compositional learning. Our vision is that model learning is a key enabler for overcoming the lack of specifications and models that bottlenecks many typical applications; the seminar therefore addressed fundamental challenges whose solutions would enable impact in a number of application areas.

The seminar brought together experts in (1) the domain of trust and technology acceptance, (2) the technical methods of model learning, and (3) the applications of model learning in robotics and autonomous systems. The first area includes domain experts in technology management, psychology, and trust. Technical methods include automata learning, synthesis of logical specifications, statistical model learning, machine learning, system identification, and process mining. Application expertise covers validation and verification, transparency and trust, and explainability, as well as applications in robotics (planning, physical design, and validation) and autonomous systems.

With this seminar, we actively encouraged interaction between established experts and young researchers in the interdisciplinary areas of artificial intelligence, software engineering, autonomous systems, and human factors, from both academia and industry. Content-wise, we emphasized two directions: model learning techniques for AI-enabled autonomous systems, including recent techniques for learning models of evolving and variability-intensive systems; and the application of model learning to increase transparency and trust in robotics and autonomous systems.
We discussed the following technical research questions during the seminar:
- How can we efficiently learn about system evolution and adaptation?
- How can we learn heterogeneous models, possibly by separating orthogonal concerns?
- How can we scale model learning?
During the discussions, four research themes emerged, and a working group formed around each; the groups captured their discussions in scientific papers. The following are short abstracts of the papers, all currently in development.
Working group 1: Foundations of Learned Model Validation in the Age of AI
Models serve as the fundamental basis for the design, synthesis, verification, and implementation of software systems, yet before we can use a model for any of these purposes, we must validate it against the expectations on the system and/or against the real behavior of the system. In many development paradigms, an emerging trend is to move away from entirely human-designed models to models learned using automated techniques. We contribute a concrete roadmap, comprising a range of popular components, for validating learned behavioral models. We pinpoint the current limits of model validation, provide insight into the reasons behind these limitations, and identify challenges that should serve as targets for future research. Using a cruise controller as a running example, with different model learning techniques applied to it, we show how the guarantees derived from these techniques interplay with the validation challenges.
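As a toy illustration of the learn-then-validate workflow discussed above (our sketch, not the working group's roadmap), the following Python snippet fits a linear behavioral model of a cruise-controller-like plant from simulated traces and then validates its one-step predictions against held-out behavior. The dynamics, noise level, and error threshold are all hypothetical.

```python
# Hypothetical sketch: learn a behavioral model of a cruise-controller-like
# plant from traces, then validate it against held-out behavior.
import numpy as np

rng = np.random.default_rng(0)

def true_system(v, u):
    # The hidden plant: the learner sees it only through input/output traces.
    return 0.9 * v + 0.5 * u + rng.normal(0.0, 0.01)

def simulate(n_steps):
    v, trace = 0.0, []
    for _ in range(n_steps):
        u = rng.uniform(-1.0, 1.0)            # throttle input
        v_next = true_system(v, u)
        trace.append((v, u, v_next))
        v = v_next
    return np.array(trace)

# Model learning: least-squares fit of v[t+1] ~ a*v[t] + b*u[t].
train = simulate(200)
a, b = np.linalg.lstsq(train[:, :2], train[:, 2], rcond=None)[0]

# Model validation: check one-step predictions on unseen traces.
test = simulate(100)
err = np.max(np.abs(a * test[:, 0] + b * test[:, 1] - test[:, 2]))
print(f"learned a={a:.3f}, b={b:.3f}, max one-step error={err:.4f}")
assert err < 0.05, "learned model fails validation against observed behavior"
```

Even in this toy setting, validation is only as strong as the held-out data: the assertion checks one-step prediction error on sampled traces, not behavior on all inputs, which is precisely the gap between learned-model guarantees and system-level validation that the roadmap targets.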
Working group 2: Mental Models, Human Models, Trust
Transposing the notion of interpersonal trust into the field of computer science leads to the assumption that a high level of trust might also be pivotal in human-computer interaction (and even in interactions between two autonomous systems), since it enables the trustor (whether human or non-human) to make better predictions about the trustee. However, whereas humans possess an inherent "trust module," non-human agents lack such a component. Addressing this challenge, the paper proposes a framework that formalizes the trust relationship between a human and an autonomous system, aimed at creating more trustworthy human-computer interactions.
Working group 3: Research Agenda for Active Automata Learning
We conduct a survey of active automata learning methods, focusing on the different application scenarios (application domains, environments, and desirable guarantees) and the overarching goals that stem from them. We identify the challenges involved in achieving these goals, organize a short bibliographic study highlighting the state of the art and the technical challenges derived from the general goals, and offer initial answers to these challenges.
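To make the setting of these methods concrete, below is a minimal sketch (ours, not from the survey) of active automata learning in the minimally adequate teacher (MAT) framework: an L*-style learner in the Maler-Pnueli variant that infers a DFA via membership queries and an approximate, testing-based equivalence oracle. The target language (strings over {a, b} with an even number of 'a's) and all parameters are illustrative.

```python
# Minimal sketch of active automata learning (L*-style, Maler-Pnueli variant).
import random
from itertools import product

ALPHABET = "ab"

def member(w):
    # Membership query to the hidden system under learning (hypothetical target).
    return w.count("a") % 2 == 0

def find_counterexample(hyp, max_len=8, n_tests=2000):
    # Approximate equivalence query via random conformance testing.
    rng = random.Random(1)
    for _ in range(n_tests):
        w = "".join(rng.choice(ALPHABET) for _ in range(rng.randint(0, max_len)))
        if hyp(w) != member(w):
            return w
    return None

def lstar():
    S, E = [""], [""]  # access strings and distinguishing suffixes
    row = lambda s: tuple(member(s + e) for e in E)
    while True:
        # Close the observation table: every one-letter extension of an
        # access string must match the row of some known access string.
        reps = {row(s): s for s in S}
        changed = True
        while changed:
            changed = False
            for s, c in product(list(S), ALPHABET):
                if row(s + c) not in reps:
                    S.append(s + c)
                    reps[row(s + c)] = s + c
                    changed = True
        # Hypothesis DFA: states are rows; acceptance is the entry for suffix "".
        def hyp(w, reps=dict(reps)):
            state = reps[row("")]
            for c in w:
                state = reps[row(state + c)]
            return member(state)
        cex = find_counterexample(hyp)
        if cex is None:
            return hyp, S, E
        # Counterexample handling: add all suffixes of cex as new columns.
        E.extend(cex[i:] for i in range(len(cex)) if cex[i:] not in E)

hyp, states, suffixes = lstar()
print(len(states), "states;", hyp("abba"), hyp("ba"))  # 2 states; True False
```

On this example target, the learner converges to a two-state hypothesis after closing the initial observation table. Note that the testing-based oracle only approximates true equivalence; which guarantees an equivalence oracle can realistically provide is exactly the kind of distinction the survey's application scenarios draw.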
Working group 4: Dynamic Interaction of Trust and Trustworthiness in AI-Enabled Systems
Trust is a user-centered notion, while trustworthiness pertains to properties of the system; the two dynamically influence and interact with each other. We focus on AI-enabled systems, where establishing trustworthiness is particularly challenging. In this paper, we propose a framework for assessing trust and trustworthiness as well as their alignment (calibration), and we investigate factors that can influence them. We present two case studies to illustrate our framework and derive recommendations based on the insights we gain.
Besides the engaging discussions, the four working groups focused on writing scientific articles capturing their thoughts and insights. The four resulting articles will be submitted in 2024 to a special issue on Trust and Trustworthiness in Autonomous Systems of the International Journal on Software Tools for Technology Transfer (STTT). Additionally, the organizers of this Dagstuhl Seminar will organize a track at the ISoLA conference in October/November 2024 in Greece to deepen the discussion with the Dagstuhl attendees and with additional experts on this topic. Post-conference proceedings will document the insights gained.
Motivation
Autonomous systems increasingly enter our everyday life. Consequently, there is a strong need for safety, correctness, trust, and explainability. Well-defined models with clear semantics pose a convenient way to address these requirements. The area of model learning provides a structured way to obtain models from data. However, autonomous systems operate in the real world and pose challenges that go beyond the state of the art in model learning.
The technical challenges addressed in this Dagstuhl Seminar are system evolution and adaptation, learning heterogeneous models (addressing aspects such as discrete and continuous behaviors, and stochastic and epistemic uncertainty), and compositional learning. Our vision is that model learning is a key enabler for solving the bottleneck caused by the lack of specifications and models in many typical applications; hence, our seminar will address fundamental challenges to enable impact in a number of application areas.
We will bring together experts in (1) the domain of robotics and autonomous systems, (2) the technical methods of model learning, and (3) the applications of model learning. These include domain experts in robotics (planning, physical design, and validation) and autonomous systems. Technical methods include automata learning, synthesis of logical specifications, statistical model learning, machine learning, system identification, and process mining. Application expertise covers validation and verification, transparency and trust, and explainability.
With this Dagstuhl Seminar, we want to actively encourage the interaction between experts and young researchers in the interdisciplinary areas of artificial intelligence, software engineering, autonomous systems, and human factors, from both academia and industry. Content-wise, we emphasize the following directions:
- model learning techniques for AI-enabled autonomous systems: This involves recent techniques for learning models of evolving and variability-intensive systems;
- application of model learning to increase transparency and trust in robotics and autonomous systems.
We identify the following technical and multi-disciplinary research questions:
- How can we efficiently learn about system evolution and adaptation?
- How can we learn heterogeneous models, possibly by separating orthogonal concerns?
- How can we scale model learning?
- How can adaptive model learning be used to focus the validation and verification effort in evolving systems?
- How can learned models contribute to trust in autonomous systems?
- What types of models can be used to provide understandable explanations for AI-enabled and autonomous systems?
Participants
- Alessandro Abate (University of Oxford, GB)
- Wolfgang Ahrendt (Chalmers University of Technology - Göteborg, SE)
- Bernhard K. Aichernig (TU Graz, AT)
- Thomas Arts (QuviQ AB - Gothenburg, SE)
- Thorsten Berger (Ruhr-Universität Bochum, DE)
- Jyotirmoy Deshmukh (USC - Los Angeles, US)
- Monique Dittrich (CARIAD - Berlin, DE)
- Ellen Enkel (Universität Duisburg-Essen, DE)
- Sophie Fortz (University of Namur, BE)
- Fatemeh Ghassemi Esfahani (University of Tehran, IR)
- Heiko Hecht (Universität Mainz, DE)
- Leo Henry (University College London, GB)
- Robert M. Hierons (University of Sheffield, GB)
- Falk Howar (TU Dortmund, DE)
- Nils Jansen (Ruhr-Universität Bochum, DE)
- Effie Lai-Chong Law (Durham University, GB)
- Magnus Liebherr (Universität Duisburg-Essen, DE)
- Mohammad Reza Mousavi (King's College London, GB)
- Thomas Neele (TU Eindhoven, NL)
- Nicola Paoletti (King's College London, GB)
- Jurriaan Rot (Radboud University Nijmegen, NL)
- Kristin Yvonne Rozier (Iowa State University - Ames, US)
- Matteo Sammartino (Royal Holloway, University of London, GB)
- Philipp Sieberg (Schotte Automotive GmbH & Co KG - Hattingen, DE)
- Bernhard Steffen (TU Dortmund, DE)
- Marnix Suilen (Radboud University Nijmegen, NL)
- Neil Walkinshaw (University of Sheffield, GB)
Classification
- Computers and Society
- Logic in Computer Science
- Machine Learning
Keywords
- Autonomous Systems
- Machine Learning
- Artificial Intelligence
- Formal Methods
- Automata Learning
- Software Evolution
- Trust
- Technology Acceptance
- Safety
- Self-Adaptive Systems
- Cyber-physical Systems
- Safety-critical Systems