
Dagstuhl Seminar 20382

Interactive Visualization for Fostering Trust in AI

(September 13 – 16, 2020)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/20382

Organizers
  • Polo Chau
  • Alex Endert
  • Daniel A. Keim
  • Daniela Oelke

Motivation

Artificial intelligence, and in particular machine learning, is of increasing importance in many application areas. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results - all crucial for increasing humans' trust in these systems - are still largely missing. All major industrial players, including Google, Microsoft, Apple, and SAP, have become aware of this gap and have recently published some form of guidelines for the use of AI. Interactive visualization is one of the technologies with strong potential to increase trust in AI systems.

In our seminar, we want to discuss the requirements for trustworthy AI systems as well as the technological possibilities that interactive visualizations provide for increasing human trust in AI. As a first step, we will identify the factors that help to increase users' trust in AI systems. This involves a discussion of their understandability (interpretable, explainable, intelligible, etc.) as well as their responsibility (accountable, transparent, fair, unbiased, etc.), since these factors drive the design and development of interactive interfaces and AI models that ensure trust. Next, the role that visualizations play in increasing trust in AI systems will be illuminated. This includes questions such as: Which mechanisms exist to make AI systems trustworthy? How can interactive visualizations contribute? Under which circumstances are interactive visualizations the decisive factor for enabling responsible AI? And what research challenges still have to be solved – in the area of machine learning or interactive visualization – to leverage this potential in real-world applications?

The planned outcome of this seminar is a better understanding of how interactive visualizations can help to foster trust in artificial intelligence systems by making them more understandable and responsible. This should encourage innovative research and help to initiate joint research projects tackling the issue. Concrete outcomes may include a position paper describing the research challenges identified during the seminar or a special issue featuring interactive visualizations for fostering trust in AI.

Copyright Polo Chau, Alex Endert, Daniel A. Keim, and Daniela Oelke

Summary

Artificial Intelligence (AI) and other computational processes increasingly influence decisions across a wide range of applications, including healthcare, vehicle navigation, and data science. This Dagstuhl Seminar reflected on some of the challenges inherent in the goal of increasing the interpretability of these systems and, where applicable, increasing the trust people place in them when making decisions. The seminar participants discussed the complexity of trust itself: the concept is multi-faceted and likely cannot be fully defined by researchers in technology and computer science alone. We discussed an interdisciplinary research agenda, as well as a manifesto that should help frame this direction going forward.

Copyright Polo Chau, Alex Endert, Daniel A. Keim, and Daniela Oelke

Participants
On Site
  • Michael Behrisch (Utrecht University, NL) [dblp]
  • Rita Borgo (King's College London, GB) [dblp]
  • Mennatallah El-Assady (Universität Konstanz, DE) [dblp]
  • Daniel A. Keim (Universität Konstanz, DE) [dblp]
  • Jörn Kohlhammer (Fraunhofer IGD - Darmstadt, DE) [dblp]
  • Daniela Oelke (Hochschule Offenburg, DE) [dblp]
  • Maria Riveiro (Jönköping University, SE) [dblp]
  • Tobias Schreck (TU Graz, AT) [dblp]
  • Jarke J. van Wijk (TU Eindhoven, NL) [dblp]
Remote
  • Emma Beauxis-Aussalet (VU University Amsterdam, NL) [dblp]
  • David S. Ebert (Purdue University - West Lafayette, US) [dblp]
  • Jaakko Peltonen (Tampere University of Technology, FI) [dblp]
  • Hendrik Strobelt (MIT-IBM Watson AI Lab - Cambridge, US) [dblp]

Related Seminars
  • Dagstuhl Seminar 22351: Interactive Visualization for Fostering Trust in ML (August 28 – September 2, 2022)

Classification
  • Graphics
  • Human-Computer Interaction
  • Machine Learning

Keywords
  • interactive visualization
  • machine learning
  • trust
  • responsibility
  • understandability