http://www.dagstuhl.de/18121

March 18 – 23, 2018, Dagstuhl Seminar 18121

Machine Learning and Model Checking Join Forces

Organizers

Nils Jansen (Radboud University Nijmegen, NL)
Joost-Pieter Katoen (RWTH Aachen, DE)
Pushmeet Kohli (Google DeepMind – London, GB)
Jan Kretinsky (TU München, DE)

Information about this Dagstuhl Seminar is provided by

Jutka Gasiorowski for administrative matters

Andreas Dolzmann for scientific matters

Documents

Schedule of the Dagstuhl Seminar (upload)

Motivation

Machine learning. A prominent field within artificial intelligence (AI) is machine learning (ML). Broadly speaking, machine learning comprises algorithms that learn autonomously from data observed in possibly unknown environments. Applications include, to mention only a few, autonomous systems, computer vision, and video games. In the real world, such ill-defined environments carry the risk that safety-critical actions are taken along the way to obtaining optimal strategies.

Model checking. With a rapidly growing number of computer-controlled systems, the need to ensure that systems are correct is ubiquitous. Formal verification is a research area that aims to provide guarantees on system behavior. In particular, model checking is a technique that proves or disproves properties of a system by rigorously exploring the state space of a system model. Naturally, such verification techniques require well-defined models of the real-world system.
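
As a purely illustrative aside, the following sketch shows the core idea behind such state-space exploration: breadth-first search over the reachable states of a small, hypothetical transition system, checking a safety property and returning a counterexample path if a bad state is found. The toy system and property are assumptions made for this example only.

  # Minimal sketch of explicit-state model checking for a safety property
  # ("no bad state is reachable") via breadth-first exploration.
  from collections import deque

  def check_safety(initial, successors, is_bad):
      """Return a counterexample path to a bad state, or None if the property holds."""
      visited = {initial}
      queue = deque([(initial, (initial,))])
      while queue:
          state, path = queue.popleft()
          if is_bad(state):
              return path                      # safety property violated
          for succ in successors(state):
              if succ not in visited:
                  visited.add(succ)
                  queue.append((succ, path + (succ,)))
      return None                              # property holds on all reachable states

  # Hypothetical toy system: a bounded counter that may increment or reset.
  result = check_safety(initial=0,
                        successors=lambda s: [(s + 1) % 8, 0],
                        is_bad=lambda s: s > 5)
  if result:
      print("property violated, counterexample:", result)
  else:
      print("property holds")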

This Dagstuhl Seminar aims at bringing together researchers working in the fields of machine learning and model checking. The spirit of the seminar shall be "Machine Learning and Model Checking Join Forces". We believe that an in-depth understanding of each other's problems and solutions will help to identify new research topics on the one hand, and to address current problems on the other. Growing application areas for machine learning, such as autonomous driving, require the exclusion, or at least the likely avoidance, of unsafe behavior. An important question is then how confidence in system behaviors obtained from machine learning can be transferred to formal verification. Vice versa, industrial use of model checking still suffers from scalability issues for large applications. Leveraging the capabilities of machine learning to assess large data sets will help to enable verification of more realistic systems. We emphasize the following directions for discussion:

  • Safety-related issues for machine learning
  • Explainability in AI and in model checking
  • Scalability and applicability of formal verification

We identify the following concrete research questions:

  • What does formal verification actually mean for system models that are not well-defined and subject to change? What kind of guarantees can or should be given?
  • How can models and guarantees be robustified against further samples/observations?
  • Many validity problems in machine learning point to the opportunity of employing human feedback. How can such feedback be incorporated into a verification scheme?
  • Can we use formal verification to train machine learning models by identifying corner cases in the form of counterexamples? (A sketch of one such loop follows this list.)
  • Towards explainable AI, can verification of probabilistic programs improve human understanding of verification and machine learning results?
  • How can machine learning help to improve scalability of verification techniques?
  • How can machine learning be employed in verification for hard guarantees?
  • Can machine-learning classifiers learn abstractions appropriate for verification analysis?
  • Can machine learning help to interpret verification results? Can error traces or synthesized controllers be mined so that they are presentable in an understandable way?
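
One possible, purely illustrative reading of the counterexample question above is a counterexample-guided training loop: a verifier that produces counterexamples feeds them back into the training data until no violation is found or a budget is exhausted. The functions train and verify below are hypothetical placeholders, not an existing API.

  # Illustrative counterexample-guided training loop (all names hypothetical).
  # train(data) is assumed to return a model; verify(model) is assumed to return
  # None if the desired property holds, or a violating input otherwise.
  def counterexample_guided_training(train, verify, data, max_rounds=10):
      model = train(data)
      for _ in range(max_rounds):
          counterexample = verify(model)
          if counterexample is None:
              return model, True               # verified within the given budget
          data = data + [counterexample]       # corner case becomes a training sample
          model = train(data)
      return model, False                      # no guarantee after max_rounds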

License
  Creative Commons BY 3.0 DE
  Nils Jansen, Joost-Pieter Katoen, Pushmeet Kohli, and Jan Kretinsky

Classification

  • Artificial Intelligence / Robotics
  • Semantics / Formal Methods
  • Verification / Logic

Keywords

  • Machine learning
  • Artificial Intelligence
  • Formal Methods
  • Formal Verification
  • Logics
  • Cyber-physical Systems
  • Quantitative Verification
  • Model Checking
  • Safety-critical Systems

Book Exhibition

Books by the participants

Book exhibition on the ground floor of the library

(only during the week of the seminar).

Documentation

All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the series Dagstuhl Reports. Together with the Collector of the seminar, the organizers compile a report that summarizes the authors' contributions and supplements them with an overall summary.

 

Download the overview flyer (PDF).

Publications

Furthermore, there is the possibility to publish a comprehensive collection of peer-reviewed papers in the series Dagstuhl Follow-Ups.

Dagstuhl's Impact

Please inform us when a publication arises from your seminar. Such publications are listed separately in the category Dagstuhl's Impact and presented on the ground floor of the library.