https://www.dagstuhl.de/18121
March 18–23, 2018, Dagstuhl Seminar 18121
Machine Learning and Model Checking Join Forces
Organizers
Nils Jansen (Radboud University Nijmegen, NL)
Joost-Pieter Katoen (RWTH Aachen, DE)
Pushmeet Kohli (Google DeepMind – London, GB)
Jan Kretinsky (TU München, DE)
Documents
Dagstuhl Report, Volume 8, Issue 3
Summary
This Dagstuhl Seminar aimed to bring together researchers from the fields of machine learning and model checking. Growing application areas for machine learning, such as autonomous driving, require unsafe behaviors to be excluded or at least made highly unlikely. An important question is therefore how confidence in system behaviors obtained from machine learning can be transferred to formal verification. Conversely, industrial use of model checking still suffers from scalability issues in large applications. Leveraging the capability of machine learning to assess large data sets will help to enable the verification of more realistic systems.
Based on the concrete discussions and input from all participants, we identified the following topics as key challenges in combining the fields of machine learning and model checking.
- Safety Verification of Deep Neural Networks
- Formal Program Synthesis and Analysis using Machine Learning
- Representation of Strategies and Controllers
- Explainable Artificial Intelligence
- Challenges for Machine Learning in Motion Planning
- Guarantees on Reinforcement Learning in Verification
- Social and Legal Issues in Artificial Intelligence
- Exploiting Weaknesses in Reinforcement Learning


Classification
- Artificial Intelligence / Robotics
- Semantics / Formal Methods
- Verification / Logic
Keywords
- Machine Learning
- Artificial Intelligence
- Formal Methods
- Formal Verification
- Logics
- Cyber-physical Systems
- Quantitative Verification
- Model Checking
- Safety-critical Systems