https://www.dagstuhl.de/21231

June 6 – 10, 2021, Dagstuhl Seminar 21231

Transparency by Design

Organizers

Casey Dugan (IBM Research – Cambridge, US)
Judy Kay (The University of Sydney, AU)
Tsvi Kuflik (Haifa University, IL)
Michael Rovatsos (University of Edinburgh, GB)

For information about this Dagstuhl Seminar, please contact

Dagstuhl Service Team

Documents

Dagstuhl Report, Volume 11, Issue 5
Motivation text
List of participants
Shared documents

Summary

As AI technologies are witnessing impressive advances and becoming increasingly widely adopted in real-world domains, the debate around the ethical implications of AI has gained significant momentum over the last few years. Much of this debate has focused on fairness, accountability, transparency and ethics, giving rise to "Fairness, Accountability and Transparency" (FAT or FAccT) as a common shorthand for this complex of properties, regarded as key elements of ethical AI.

However, the notion of transparency - closely linked to terms such as explainability, accountability, and interpretability - has not yet been given a holistic treatment within computer science. Although transparency is a prerequisite for instilling trust in AI technologies, there is a gap in our understanding of how to create systems with the required transparency, from capturing transparency requirements all the way through to concrete design and implementation methodologies. When it comes to, for example, demonstrating that a system is fair or accountable, we lack usable theoretical frameworks for transparency. More generally, there are no practical methodologies for the design of transparent systems.

The purpose of this Dagstuhl Seminar was to initiate a debate around theoretical foundations and practical methodologies with the overall aim of laying the foundations for a "Transparency by Design" framework, i.e. a framework for systems development that integrates transparency in all stages of the software development process.

To address this challenge, we brought together researchers with expertise in Artificial Intelligence, Human-Computer Interaction, and Software Engineering, but also considered it essential to invite experts from the humanities, law and social sciences, who would bring an interdisciplinary dimension to the seminar and enable us to investigate the cognitive, social, and legal aspects of transparency.

As a consequence of the Covid-19 pandemic, the seminar had to be carried out in a virtual, online format. To accommodate the time zones of participants from different parts of the world, two three-hour sessions were scheduled each day, and participant groups of roughly equal size were re-shuffled each day so that every attendee had opportunities to interact with all other participants whenever the time difference between their locations made this possible. Each session consisted of plenary talks and discussion as well as work in small groups, with discussions and outcomes captured in shared documents that were edited jointly by the groups attending the different sessions each day.

The seminar was planned to progress gradually from building a shared understanding of the problem space among participants on the first day, to mapping out the state of the art and identifying gaps in their respective areas of expertise on the second and third days.

To do this, the groups identified questions that stakeholders in different domains may need to be able to answer in a transparent system; here we relied on participants to choose domains they were familiar with and considered important. To identify the state of the art in these areas, the group sessions on the second and third days were devoted to mapping out current practice and research and identifying the gaps that need to be addressed.

The two sessions on each day considered these in terms of four aspects: data collection techniques, software development methodologies, AI techniques and user interfaces.

Finally, the last day was dedicated to consolidating the results towards creating a framework for designing transparent systems. This began with each of the parallel groups considering a different aspect: the motivation for why transparency is important; the challenges posed by current algorithmic systems; transparency-enhancing technologies; a Transparency by Design methodology; and, finally, the road ahead.

The work that began with the small-group discussions and summaries continued in follow-up meetings of each group. After the seminar, the organisers have led the effort to integrate all of these results, with the aim of producing a joint publication.

Summary text license
  Creative Commons BY 4.0
  Judy Kay, Tsvi Kuflik, and Michael Rovatsos

Classification

  • Artificial Intelligence / Robotics
  • Society / Human-computer Interaction
  • Software Engineering

Keywords

  • Algorithmic transparency
  • Fairness
  • Accountability
  • AI ethics
  • Computers and society
  • Artificial Intelligence
  • Software Engineering
  • Human-Computer Interaction
  • Machine learning
  • Software methodologies
  • User modelling
  • Intelligent user interfaces

Documentation

All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the Dagstuhl Reports series. Together with the collector of the seminar, the organizers compile a report that summarizes the authors' contributions and complements them with an overall summary.


Download the overview flyer (PDF).

Dagstuhl's Impact

Please inform us if a publication arises from your seminar. Such publications are listed separately in the Dagstuhl's Impact section and presented on the ground floor of the library.

Publications

There is also the ongoing possibility of publishing a comprehensive collection of peer-reviewed papers in the Dagstuhl Follow-Ups series.