Dagstuhl Seminar 22351

Interactive Visualization for Fostering Trust in ML

(Aug 28 – Sep 2, 2022)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/22351

Organizers
  • Polo Chau (Georgia Institute of Technology - Atlanta, US)
  • Alex Endert (Georgia Institute of Technology - Atlanta, US)
  • Daniel A. Keim (Universität Konstanz, DE)
  • Daniela Oelke (Hochschule Offenburg, DE)



Summary

Artificial intelligence (AI), and in particular machine learning (ML) algorithms, are of increasing importance in many application areas. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing humans' trust in these systems, are still largely missing. All major industrial players, including Google, Microsoft, and Apple, have become aware of this gap and have recently published some form of guidelines for the use of AI.

While the level of trust in AI systems clearly depends not only on technical factors but on many others, including sociological and psychological ones, interactive visualization is one of the technologies with strong potential to increase trust in AI systems. In our Dagstuhl Seminar, we discussed the requirements for trustworthy AI systems, including sociological and psychological aspects, as well as the technological possibilities that interactive visualizations provide to increase human trust in AI. As a first step, we identified the factors influencing the organizational, sociological, and psychological aspects of AI and partitioned them into relationship-based and evidence-based aspects. Next, we collected measures that may be used to approximate these aspects, such as interaction logs, eye tracking, and EEG. We also discussed mechanisms to calibrate trust and their potential misuse. Finally, we considered the role that visualizations play in increasing trust in AI systems. This includes questions such as: Which mechanisms exist to make AI systems trustworthy? How can interactive visualizations contribute? Under which circumstances are interactive visualizations the decisive factor for enabling responsible AI? And what research challenges, in machine learning or in interactive visualization, still have to be solved to leverage this potential in real-world applications?
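To make the notion of trust calibration more concrete, the following minimal sketch shows one way an interaction log could be used to approximate it. The Interaction record, the log format, and the over-/under-reliance proxy are illustrative assumptions of this write-up, not measures or tooling agreed on at the seminar.

    # Minimal sketch (Python): approximating trust calibration from an
    # interaction log. All names below are hypothetical.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Interaction:
        model_confidence: float  # confidence the model reported for its suggestion
        model_was_correct: bool  # ground truth, known after the fact
        user_accepted: bool      # whether the user followed the suggestion

    def reliance_rates(log: List[Interaction]) -> Tuple[float, float]:
        # Over-reliance: accepting suggestions that turned out to be wrong.
        over = sum(1 for i in log if i.user_accepted and not i.model_was_correct)
        # Under-reliance: rejecting suggestions that turned out to be right.
        under = sum(1 for i in log if not i.user_accepted and i.model_was_correct)
        n = len(log) or 1  # avoid division by zero on an empty log
        return over / n, under / n

    # Three logged decisions from a hypothetical labeling session.
    session = [
        Interaction(0.92, True, True),    # justified acceptance
        Interaction(0.88, False, True),   # over-reliance
        Interaction(0.40, True, False),   # under-reliance
    ]
    print(reliance_rates(session))  # -> (0.333..., 0.333...)

Well-calibrated trust would keep both rates low; in the same spirit, richer signals such as eye tracking or EEG could replace or complement the acceptance log.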

The seminar started with four keynote talks by experts in cognitive psychology, sociology, AI, and visualization, providing participants with diverse perspectives that helped seed discussion topics. The participants then formed six smaller groups to discuss the individual topics to be worked on during the rest of the week. These groups collectively came up with a longer list of potential topics surrounding trust and machine learning. This list was voted on in the plenum and distilled into the following four breakout groups: (1) Good practices and evil knobs in machine learning; (2) Evaluation, measures, and metrics for trust in ML; (3) Interaction, expectations, and dimension reduction; and (4) Definitions, taxonomy, and relationships of trust in ML.

The outcome of this seminar is a better understanding of which aspects of trust have to be considered in fostering trust in AI systems, and of how interactive visualizations can help foster that trust by making the systems more understandable and responsible. This will encourage innovative research and help start joint research projects tackling the issue. Concrete outcomes are drafts of position papers describing the findings of the seminar and, in particular, the research challenges it identified.

Copyright Polo Chau, Alex Endert, Daniel A. Keim, and Daniela Oelke

Motivation

Artificial intelligence, and in particular machine learning algorithms, are of increasing importance in many application areas. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing humans' trust in these systems, are still largely missing. All major industrial players, including Google, Microsoft, and Apple, have become aware of this gap and have recently published some form of guidelines for the use of AI.

While the level of trust in AI systems clearly depends not only on technical factors but on many others, including sociological and psychological ones, interactive visualization is one of the technologies with strong potential to increase trust in AI systems. In our Dagstuhl Seminar, we want to comprehensively discuss the requirements for trustworthy AI systems, including sociological and psychological aspects, as well as the technological possibilities that interactive visualizations provide to increase human trust in AI. As a first step, we will identify the factors influencing the organizational, sociological, and psychological aspects of AI. Next, we will illuminate the role that visualizations play in increasing trust in AI systems. This includes questions such as: Which mechanisms exist to make AI systems trustworthy? How can interactive visualizations contribute? Under which circumstances are interactive visualizations the decisive factor for enabling responsible AI? And what research challenges, in machine learning or in interactive visualization, still have to be solved to leverage this potential in real-world applications?

The planned outcome of this seminar is a better understanding of how interactive visualizations can help foster trust in artificial intelligence systems by making them more understandable and responsible. This should encourage innovative research and help start joint research projects tackling the issue. Concrete outcomes may be a position paper describing the research challenges identified in the seminar or a special issue featuring interactive visualizations for fostering trust in AI.

Copyright Polo Chau, Alex Endert, Daniel A. Keim, and Daniela Oelke

Participants
  • Gennady Andrienko (Fraunhofer IAIS - Sankt Augustin, DE) [dblp]
  • Natalia V. Andrienko (Fraunhofer IAIS - Sankt Augustin, DE) [dblp]
  • Emma Beauxis-Aussalet (VU University Amsterdam, NL) [dblp]
  • Michael Behrisch (Utrecht University, NL) [dblp]
  • Rita Borgo (King's College London, GB) [dblp]
  • Simone Braun (Hochschule Offenburg, DE)
  • Peer-Timo Bremer (LLNL - Livermore, US) [dblp]
  • Polo Chau (Georgia Institute of Technology - Atlanta, US) [dblp]
  • David S. Ebert (University of Oklahoma - Norman, US) [dblp]
  • Mennatallah El-Assady (ETH Zürich, CH) [dblp]
  • Alex Endert (Georgia Institute of Technology - Atlanta, US) [dblp]
  • Brian D. Fisher (Simon Fraser University - Surrey, CA) [dblp]
  • Barbara Hammer (Universität Bielefeld, DE) [dblp]
  • Daniel A. Keim (Universität Konstanz, DE) [dblp]
  • Steffen Koch (Universität Stuttgart, DE) [dblp]
  • Jörn Kohlhammer (Fraunhofer IGD - Darmstadt, DE) [dblp]
  • Rafael M. Martins (Linnaeus University - Växjö, SE) [dblp]
  • Laura Matzen (Sandia National Labs - Albuquerque, US)
  • Daniela Oelke (Hochschule Offenburg, DE) [dblp]
  • Jaakko Peltonen (Tampere University of Technology, FI) [dblp]
  • Adam Perer (Carnegie Mellon University - Pittsburgh, US) [dblp]
  • Maria Riveiro (Jönköping University, SE) [dblp]
  • Tobias Schreck (TU Graz, AT) [dblp]
  • Harald Schupp (Universität Konstanz, DE)
  • Hendrik Strobelt (MIT-IBM Watson AI Lab - Cambridge, US) [dblp]
  • Alexandru C. Telea (Utrecht University, NL) [dblp]
  • Stef Van den Elzen (TU Eindhoven, NL) [dblp]
  • Michel Verleysen (University of Louvain, BE) [dblp]
  • Emily Wall (Emory University - Atlanta, US) [dblp]

Related Seminars
  • Dagstuhl Seminar 20382: Interactive Visualization for Fostering Trust in AI (2020-09-13 – 2020-09-16)

Classification
  • Artificial Intelligence
  • Computers and Society
  • Human-Computer Interaction

Keywords
  • Interactive visualization
  • artificial intelligence
  • machine learning
  • trust
  • responsibility
  • understandability
  • accountability
  • explainability
  • fairness