June 28 – July 3, 2020, Dagstuhl Seminar 20271

RESCHEDULED Transparency by Design

Due to the Covid-19 pandemic, this seminar was rescheduled to June 6–10, 2021 (Seminar 21231).


Casey Dugan (IBM Research – Cambridge, US)
Judy Kay (The University of Sydney, AU)
Tsvi Kuflik (Haifa University, IL)
Michael Rovatsos (University of Edinburgh, GB)

For information about this Dagstuhl Seminar, please contact the

Dagstuhl Service Team


As AI technologies witness impressive advances and become increasingly widely adopted in real-world domains, the debate around the ethical implications of AI has gained significant momentum over the last few years. Much of this debate has focused on fairness, accountability, and transparency (commonly abbreviated as FAT) as key elements of ethical AI. However, the notion of transparency – closely linked to explainability and interpretability – has largely eluded systematic treatment within computer science. Although transparency is a prerequisite for instilling trust in AI technologies, for example when demonstrating that a system is fair or accountable, no concrete theoretical frameworks for transparency have been defined, and no general practical methodologies have been proposed for embedding transparency in the design of such systems.

The purpose of this Dagstuhl Seminar is to initiate a debate around these theoretical foundations and practical methodologies, with the overall aim of laying the groundwork for a “transparency by design” framework: a systems development methodology that integrates transparency into all stages of the software development process. Addressing this challenge will involve bringing together researchers from Artificial Intelligence, Human-Computer Interaction, and Software Engineering, as well as ethics specialists from the humanities and social sciences. The seminar will explore questions such as:

  • What sorts of explanations are users looking for (or might be helpful to them) in a given type of system, and how should these explanations be generated and presented?
  • Can software code be designed or augmented to provide information about internal processing without revealing commercially sensitive information?
  • How should agile software development methodologies be extended to make transparency to relevant stakeholders a priority without adding complexity to the process?
  • How can properties of AI systems that are of interest be expressed in languages that lend themselves to formal verification or quantitative analysis?
  • What kinds of interfaces can support people in scrutinising the operation of AI algorithms and tracking the ways this informs decision making?
  • How can traditional software testing methodologies be extended to validate “ethical” properties of AI systems stakeholders are interested in?

Discussing questions like these will help refine our understanding of the types of transparency that can be provided, and participants will work towards concrete methodological guidelines for delivering such transparency. The seminar will also explore the trade-offs involved, the limitations (and indeed potential downsides) of achieving full transparency, and the options to offer users when transparency cannot be provided in ways that make sense to them and engender trust.

The first three days of the seminar will mix presentations from experts in different areas, given in response to a set of challenge scenarios shared with participants prior to the event, with group discussions around the problems and possible solutions that arise from different approaches and perspectives. The fourth day will be devoted to a design workshop to synthesise insights into a framework, with the latter part of this workshop and the final day used to start work on a joint white paper on “transparency by design”.

Motivation text license
  Creative Commons BY 3.0 DE
  Casey Dugan, Judy Kay, Tsvi Kuflik, and Michael Rovatsos

Related Dagstuhl Seminar

Classification

  • Artificial Intelligence / Robotics
  • Society / Human-computer Interaction
  • Software Engineering

Keywords

  • Algorithmic transparency
  • Fairness
  • Accountability
  • AI ethics
  • Computers and society
  • Artificial Intelligence
  • Software Engineering
  • Human-Computer Interaction
  • Machine learning
  • Software methodologies
  • User modelling
  • Intelligent user interfaces


All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the series Dagstuhl Reports. Together with the seminar's collector, the organizers compile a report that summarizes the authors' contributions and adds an overall summary.



Dagstuhl's Impact

Please let us know if a publication arises from your seminar. Such publications are listed separately by us in the section Dagstuhl's Impact and presented on the ground floor of the library.


There is also the option of publishing a comprehensive collection of peer-reviewed papers in the series Dagstuhl Follow-Ups.