http://www.dagstuhl.de/18121

March 18–23, 2018, Dagstuhl Seminar 18121

Machine Learning and Model Checking Join Forces

Organizers

Nils Jansen (Radboud University Nijmegen, NL)
Joost-Pieter Katoen (RWTH Aachen, DE)
Pushmeet Kohli (Google DeepMind – London, GB)
Jan Kretinsky (TU München, DE)

For support, please contact

Jutka Gasiorowski for administrative matters

Andreas Dolzmann for scientific matters

Documents

Dagstuhl Seminar Schedule

Motivation

Machine learning. A prominent field related to artificial intelligence (AI) is that of machine learning (ML). Very broadly, machine learning describes algorithms that independently learn from data observed in possibly unknown environments. Applications include, to mention only a few, autonomous systems, computer vision, and video games. In the real world, such environments are not well defined in advance and carry the risk of taking safety-critical actions along the way to obtaining optimal strategies.

Model checking. With a rapidly growing number of computer-controlled systems, the need to ensure that systems behave correctly is present everywhere. Formal verification is a research area that aims to provide guarantees on system behavior. In particular, model checking is a technique that proves or disproves properties of a system by an exhaustive exploration of the state space of a system model. Naturally, such verification techniques require a well-defined model of the real-world system.
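
As a loose illustration of the exhaustive state-space exploration behind model checking, the sketch below checks a simple safety property ("no bad state is reachable") on a small, purely hypothetical transition system via breadth-first search; the states, transitions, and property are invented for illustration and are not tied to any particular model checker.

  # Minimal sketch: explicit-state model checking of a safety property,
  # "no state labeled as bad is reachable from the initial state".
  # The transition system below is a hypothetical toy example.
  from collections import deque

  transitions = {                     # state -> set of successor states
      "init": {"idle", "busy"},
      "idle": {"busy"},
      "busy": {"idle", "error"},
      "error": {"error"},
  }
  bad_states = {"error"}

  def check_safety(initial):
      """Breadth-first exploration of all reachable states.
      Returns True if no bad state is reachable, False otherwise."""
      visited, queue = {initial}, deque([initial])
      while queue:
          state = queue.popleft()
          if state in bad_states:
              return False            # counterexample: a bad state is reachable
          for succ in transitions.get(state, ()):
              if succ not in visited:
                  visited.add(succ)
                  queue.append(succ)
      return True                     # exhaustive exploration, property holds

  print(check_safety("init"))         # prints False: "error" is reachable via "busy"

A real model checker operates on symbolic or compressed state representations and richer temporal properties, but the exhaustive exploration above is the core idea referred to here.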

This Dagstuhl Seminar aims to bring together researchers working in the fields of machine learning and model checking. The spirit of the seminar shall be "Machine Learning and Model Checking Join Forces". We believe that an in-depth understanding of each other's problems and solutions will help to identify new research topics on the one hand, and to make progress on current problems on the other. Growing application areas for machine learning, such as autonomous driving, require unsafe behaviors to be excluded or at least made sufficiently unlikely. An important question, then, is how confidence in system behaviors obtained from machine learning can be transferred into formal guarantees. Vice versa, industrial use of model checking still suffers from scalability issues for large applications. Leveraging the capability of machine learning to assess large data sets will help to enable the verification of more realistic systems. We emphasize the following directions for discussion:

  • Safety-related issues for machine learning
  • Explainability in AI and in model checking
  • Scalability and applicability of formal verification

We identify the following concrete research questions:

  • What does formal verification actually mean for system models that are not well-defined and subject to change? What kind of guarantees can or should be given?
  • How can models and guarantees be robustified against further samples/observations?
  • Many validity problems in machine learning point to the opportunity of employing human feedback. How can such feedback be incorporated into a verification scheme?
  • Can we use formal verification to train machine-learning models by identifying corner cases in the form of counterexamples?
  • Towards explainable AI, can the verification of probabilistic programs improve human understanding of verification and machine-learning results?
  • How can machine learning help to improve scalability of verification techniques?
  • How can machine learning be employed in verification while still providing hard guarantees?
  • Can machine-learning classifiers learn abstractions appropriate for verification analysis?
  • Can machine learning help to interpret verification results? Can error traces or synthesized controllers be mined so that they are presentable in an understandable way?

License
  Creative Commons BY 3.0 DE
  Nils Jansen, Joost-Pieter Katoen, Pushmeet Kohli, and Jan Kretinsky

Classification

  • Artificial Intelligence / Robotics
  • Semantics / Formal Methods
  • Verification / Logic

Keywords

  • Machine learning
  • Artificial Intelligence
  • Formal Methods
  • Formal Verification
  • Logics
  • Cyber-physical Systems
  • Quantitative Verification
  • Model Checking
  • Safety-critical Systems

Book exhibition

Books from the participants of the current Seminar 

Book exhibition in the library, ground floor, during the seminar week.

Documentation

In the series Dagstuhl Reports each Dagstuhl Seminar and Dagstuhl Perspectives Workshop is documented. The seminar organizers, in cooperation with the collector, prepare a report that includes contributions from the participants' talks together with a summary of the seminar.

 


Publications

Furthermore, a comprehensive peer-reviewed collection of research papers can be published in the series Dagstuhl Follow-Ups.

Dagstuhl's Impact

Please inform us when a publication has appeared as a result of your seminar. These publications are listed in the category Dagstuhl's Impact and are presented on a special shelf on the ground floor of the library.
