Dagstuhl Seminar 20271

Transparency by Design (Postponed)

(Jun 28 – Jul 03, 2020)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/20271

Replaced by
Dagstuhl Seminar 21231: Transparency by Design (2021-06-06 - 2021-06-10)

Organizers
  • Casey Dugan (IBM Research - Cambridge, US)
  • Judy Kay (The University of Sydney, AU)
  • Tsvi Kuflik (Haifa University, IL)
  • Michael Rovatsos (University of Edinburgh, GB)

Motivation

As AI technologies make impressive advances and become increasingly widely adopted in real-world domains, the debate around the ethical implications of AI has gained significant momentum over the last few years. Much of this debate has focused on fairness, accountability, and transparency as key elements of ethical AI, a complex of properties commonly captured by the shorthand “FAT” (“Fairness, Accountability, and Transparency”). However, the notion of transparency, closely linked to explainability and interpretability, has largely eluded systematic treatment within computer science. Even though transparency is a prerequisite for instilling trust in AI technologies, for example when demonstrating that a system is fair or accountable, no concrete theoretical frameworks for transparency have been defined, nor have practical, general methodologies been proposed to embed transparency in the design of these systems.

The purpose of this Dagstuhl Seminar will be to initiate a debate around these theoretical foundations and practical methodologies, with the overall aim of laying the foundations for a “transparency by design” framework: a systems development methodology that integrates transparency into all stages of the software development process. Addressing this challenge will involve bringing together researchers from Artificial Intelligence, Human-Computer Interaction, and Software Engineering, as well as ethics specialists from the humanities and social sciences. The seminar will explore questions such as:

  • What sorts of explanations are users looking for (or may be helpful for them) in a certain type of system, and how should these be generated and presented to them?
  • Can software code be designed or augmented to provide information about internal processing without revealing commercially sensitive information?
  • How should agile software development methodologies be extended to make transparency to relevant stakeholders a priority without adding complexity to the process?
  • How can properties of AI systems that are of interest be expressed in languages that lend themselves to formal verification or quantitative analysis?
  • What kinds of interfaces can support people in scrutinising the operation of AI algorithms and tracking the ways this informs decision making?
  • How can traditional software testing methodologies be extended to validate “ethical” properties of AI systems that stakeholders are interested in? (A minimal illustrative sketch of this idea follows the list.)
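
To make the last question above more concrete, here is a minimal, purely illustrative sketch, not a result of the seminar, of how an “ethical” property such as demographic parity could be phrased as an ordinary unit test and run alongside conventional software tests. The model, the evaluation data, the group attribute, the helper names (demographic_parity_gap, toy_predict), and the 0.1 tolerance are all hypothetical placeholders.

from typing import Callable, Dict, List, Sequence


def demographic_parity_gap(
    predict: Callable[[Sequence[dict]], List[int]],
    records: Sequence[dict],
    group_key: str,
) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    predictions = predict(records)
    rates: Dict[str, float] = {}
    for group in {r[group_key] for r in records}:
        members = [p for p, r in zip(predictions, records) if r[group_key] == group]
        rates[group] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())


def toy_predict(records: Sequence[dict]) -> List[int]:
    # Stand-in for the model under test: approve whenever income exceeds 50.
    return [1 if r["income"] > 50 else 0 for r in records]


def test_demographic_parity() -> None:
    # Toy evaluation set; a real project would use a frozen, representative sample.
    evaluation_set = [
        {"income": 60, "group": "A"},
        {"income": 40, "group": "A"},
        {"income": 70, "group": "B"},
        {"income": 45, "group": "B"},
    ]
    gap = demographic_parity_gap(toy_predict, evaluation_set, "group")
    # The 0.1 tolerance is an arbitrary illustrative threshold that stakeholders
    # would need to agree on for a real system.
    assert gap <= 0.1, f"demographic parity gap {gap:.2f} exceeds tolerance"


if __name__ == "__main__":
    test_demographic_parity()
    print("demographic parity check passed")

Such a check could be picked up by a standard test runner such as pytest, or run directly as a script, which is the point of the sketch: the ethical property becomes just another test in the existing development workflow.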

Discussion of questions like these will help refine our understanding of the types of transparency that can be provided, and participants will work towards concrete methodological guidelines for delivering such transparency. The seminar will also explore the trade-offs involved, the limitations (and, indeed, the potential downsides) of pursuing full transparency, and the options that should be offered to users when transparency cannot be provided in ways that make sense to them and engender trust.

The first three days of the seminar will combine presentations by experts from different areas, given in response to a set of challenge scenarios shared with participants prior to the event, with group discussions around the problems and possible solutions that arise from different approaches and perspectives. The fourth day will be devoted to a design workshop that synthesises these insights into a framework, with the latter part of this workshop and the final day used to start work on a joint white paper on “transparency by design”.

Copyright Casey Dugan, Judy Kay, Tsvi Kuflik, and Michael Rovatsos

Participants
  • Casey Dugan (IBM Research - Cambridge, US) [dblp]
  • Judy Kay (The University of Sydney, AU) [dblp]
  • Tsvi Kuflik (Haifa University, IL) [dblp]
  • Michael Rovatsos (University of Edinburgh, GB) [dblp]

Classification
  • artificial intelligence / robotics
  • society / human-computer interaction
  • software engineering

Keywords
  • Algorithmic transparency
  • fairness
  • accountability
  • AI ethics
  • computers and society
  • Artificial Intelligence
  • Software Engineering
  • Human-Computer Interaction
  • machine learning
  • software methodologies
  • user modelling
  • intelligent user interfaces