Dagstuhl Seminar 19452

Machine Learning Meets Visualization to Make Artificial Intelligence Interpretable

( Nov 03 – Nov 08, 2019 )

Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/19452

Organizers
  • Enrico Bertini (NYU - Brooklyn, US)
  • Peer-Timo Bremer (LLNL - Livermore, US)
  • Daniela Oelke (Siemens AG - München, DE)
  • Jayaraman Thiagarajan (LLNL - Livermore, US)

Motivation

The recent advances in machine learning have led to unprecedented successes in areas such as computer vision, natural language processing, or medicine. In the future, these technologies promise to revolutionize science and technology by producing everything from self-driving cars to new insights into large-scale scientific experiments. However, one of the fundamental challenges in making this vision a reality is that, while it is possible today to create working solutions for complex tasks, the resulting systems are typically black boxes. This means one does not know how or why a certain decision has been reached, whether the input data was sufficient and unbiased, or how robust and reliable the system might be for new data. These challenges are often summarized as a lack of “interpretability” of the models, which leads to a lack of trust in the solutions, which in turn limits or even prevents applications from fully exploiting the potential benefits of machine learning.

In response, the machine learning community, as well as virtually all application areas, have seen a rapid expansion of research efforts in interpretability and related topics. In the process, visualization, or more generally interactive systems, have become a key component of these efforts, since they provide one avenue to exploit expert intuition and hypothesis-driven exploration. However, due to the unprecedented speed with which the field is currently progressing, it is difficult for the various communities to maintain a cohesive picture of the state of the art and the open challenges, especially given the extreme diversity of research areas affected. This has led to a certain fragmentation, in which differing application areas, terminology, and disparate communities can obscure the common goals and research objectives.
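
As a concrete, if simplified, illustration of what such interpretability methods do, the Python sketch below explains a single prediction of a black-box classifier by fitting a locally weighted linear surrogate to perturbed inputs, in the spirit of LIME-style explanations. The synthetic data, the random-forest "black box", and all parameter choices are illustrative assumptions, not methods prescribed by this seminar.

    # Minimal sketch (illustrative assumptions throughout): explain one
    # prediction of a black-box model with a locally weighted linear surrogate.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import Ridge

    # A stand-in "black box": a random forest trained on synthetic data.
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    # Perturb the instance of interest and record the black-box outputs.
    x0 = X[0]
    rng = np.random.default_rng(0)
    perturbations = x0 + rng.normal(scale=0.5, size=(1000, X.shape[1]))
    probs = black_box.predict_proba(perturbations)[:, 1]

    # Weight nearby perturbations more strongly and fit a linear surrogate;
    # its coefficients serve as local feature attributions for x0.
    weights = np.exp(-np.linalg.norm(perturbations - x0, axis=1) ** 2)
    surrogate = Ridge(alpha=1.0).fit(perturbations, probs, sample_weight=weights)
    print("Local feature attributions:", surrogate.coef_.round(3))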

This Dagstuhl Seminar aims to alleviate this problem by bringing together various stakeholders to jointly discuss needs, characterize open research challenges, and propose a joint research agenda. In particular, there appear to be three groups of stakeholders: application experts with unmet needs and practical problems; machine learning researchers who are the main source of theoretical advances; and visualization and HCI experts who can devise intuitive representations and exploration frameworks for practical solutions. The goal of this seminar is to bring all three communities together in order to: 1) Assemble an overview of existing approaches and research directions; 2) Understand shared research challenges and current gaps; and 3) Formulate a joint research agenda to guide research in this critical area.

Furthermore, we expect the personal connections, which are the hallmark of all Dagstuhl Seminars, to act as a force multiplier for future research.

Copyright Enrico Bertini, Peer-Timo Bremer, Daniela Oelke, and Jayaraman Thiagarajan

Summary

The recent advances in machine learning (ML) have led to unprecedented successes in areas such as computer vision and natural language processing. In the future, these technologies promise to revolutionize everything ranging from science and engineering to social studies and policy making. However, one of the fundamental challenges in making these technologies useful, usable, reliable, and trustworthy is that they are all driven by extremely complex models for which it is impossible to derive simple (closed-form) descriptions and explanations. Mapping decisions from a learned model to human perception and understanding of the world is very challenging. Consequently, a detailed understanding of the behavior of these AI systems remains elusive, thus making it difficult (and sometimes impossible) to distinguish between actual knowledge and artifacts in the data presented to a model. This fundamental limitation should be addressed in order to support model optimization, understand risks, disseminate decisions and findings, and, most importantly, promote trust.

While this grand challenge can be partially addressed by designing novel theoretical techniques to validate and reason about models and data, in practice these techniques are often grossly insufficient due to our inability to translate the requirements of real-world applications into tractable mathematical formulations. For example, concerns about AI systems (e.g., biases) are intimately connected to human factors such as how information is perceived and cognitive biases. This crucial gap has given rise to the field of interpretable machine learning, which at its core is concerned with providing a human user with a better understanding of the model's logic and behavior. In recent years, the machine learning community, as well as virtually all application areas, have seen a rapid expansion of research efforts in interpretability and related topics. In the process, visualization, or more generally interactive systems, have become a key component of these efforts, since they provide one avenue to exploit expert intuition and hypothesis-driven exploration. However, due to the unprecedented speed with which the field is currently progressing, it is difficult for the various communities to maintain a cohesive picture of the state of the art and the open challenges, especially given the extreme diversity of the research areas involved.
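
As a minimal sketch of how a post-hoc interpretability method can be combined with a simple visual encoding for hypothesis-driven exploration, the Python snippet below computes permutation feature importance for a standard scikit-learn classifier and displays the scores as a bar chart. The dataset, the model, and the plotting choices are illustrative assumptions rather than techniques endorsed by the seminar.

    # Minimal sketch (illustrative assumptions): pair a post-hoc interpretability
    # method (permutation feature importance) with a simple visual encoding.
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # How much does shuffling each feature hurt held-out accuracy?
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    # Rank the features and plot them so a domain expert can inspect the model.
    order = result.importances_mean.argsort()
    plt.barh(range(len(order)), result.importances_mean[order])
    plt.yticks(range(len(order)), [data.feature_names[i] for i in order])
    plt.xlabel("Mean decrease in accuracy when feature is permuted")
    plt.tight_layout()
    plt.show()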

The focus of this Dagstuhl Seminar was to convene various stakeholders to jointly discuss needs, characterize open research challenges, and propose a joint research agenda. In particular, three groups of stakeholders were engaged in this seminar: application experts with unmet needs and practical problems; machine learning researchers who are the main source of theoretical advances; and visualization and HCI experts who can devise intuitive representations and exploration frameworks for practical solutions. Through this seminar, the participants discussed the state of practice, identified crucial gaps and research challenges, and formulated a joint research agenda to guide research in interpretable ML.

Program Overview

The main goal of this Dagstuhl Seminar was to discuss the current state and future research directions of interpretable machine learning. Because the seminar brought together two different scientific communities, the machine learning community and the visualization community, we started by discussing and defining important terms and concepts of the field. Afterwards, we split up into working groups to collect answers to the following questions: "Who needs interpretable machine learning? For what task is it needed? Why is it needed?". This step was then followed by a series of application lightning talks (please refer to the abstracts below for details).

On the second day, we had two overview talks, one covering the machine learning perspective on interpretability and the other the visualization perspective. Afterwards, we formed working groups to collect research challenges from the presented applications and beyond.

The third day was dedicated to clustering the research challenges into priority research directions. The following priority research directions were identified:

  • Interpreting Learned Features and Learning Interpretable Features
  • Evaluation of Interpretability Methods
  • Evaluation and Model Comparison with Interpretable Machine Learning
  • Uncertainty
  • Visual Encoding and Interactivity
  • Interpretability Methods
  • Human-Centered Design

On Thursday, the priority research directions were further detailed in working groups. We held two rounds of working groups, in which three and four priority research directions, respectively, were discussed in parallel according to the following aspects: problem statement, sub-challenges, example applications, and related priority research directions. Furthermore, all research challenges were mapped onto descriptive axes of the problem space and the solution space.

On the last day, we designed an overview diagram that helps communicate the results to the larger scientific community.

Copyright Enrico Bertini, Peer-Timo Bremer, Daniela Oelke, and Jayaraman Thiagarajan

Participants
  • Rushil Anirudh (LLNL - Livermore, US) [dblp]
  • Enrico Bertini (NYU - Brooklyn, US) [dblp]
  • Alexander Binder (Singapore University of Technology and Design, SG) [dblp]
  • Peer-Timo Bremer (LLNL - Livermore, US) [dblp]
  • Mennatallah El-Assady (Universität Konstanz, DE) [dblp]
  • Sorelle Friedler (Haverford College, US) [dblp]
  • Beatrice Gobbo (Polytechnic University of Milan, IT)
  • Nikou Guennemann (Siemens AG - München, DE) [dblp]
  • Nathan Hodas (Pacific Northwest National Lab. - Richland, US) [dblp]
  • Daniel A. Keim (Universität Konstanz, DE) [dblp]
  • Been Kim (Google Brain - Mountain View, US) [dblp]
  • Gordon Kindlmann (University of Chicago, US) [dblp]
  • Sebastian Lapuschkin (Fraunhofer-Institut - Berlin, DE) [dblp]
  • Heike Leitte (TU Kaiserslautern, DE) [dblp]
  • Yao Ming (HKUST - Kowloon, HK) [dblp]
  • Elisabeth Moore (Los Alamos National Laboratory, US) [dblp]
  • Daniela Oelke (Siemens AG - München, DE) [dblp]
  • Steve Petruzza (University of Utah - Salt Lake City, US) [dblp]
  • Maria Riveiro (Univ. of Skövde, SE & Univ. of Jönköping, SE) [dblp]
  • Carlos E. Scheidegger (University of Arizona - Tucson, US) [dblp]
  • Sarah Schulz (Ada Health - Berlin, DE) [dblp]
  • Hendrik Strobelt (MIT-IBM Watson AI Lab - Cambridge, US) [dblp]
  • Simone Stumpf (City, University of London, GB) [dblp]
  • Jayaraman Thiagarajan (LLNL - Livermore, US) [dblp]
  • Jarke J. van Wijk (TU Eindhoven, NL) [dblp]

Classification
  • artificial intelligence / robotics
  • society / human-computer interaction

Keywords
  • Visualization
  • Machine Learning
  • Interpretability