https://www.dagstuhl.de/20271

June 28 – July 3, 2020, Dagstuhl Seminar 20271

Transparency by Design

Organizers

Casey Dugan (IBM TJ Watson Research Center – Cambridge, US)
Judy Kay (The University of Sydney, AU)
Tsvi Kuflik (Haifa University, IL)
Michael Rovatsos (University of Edinburgh, GB)

For support, please contact

Annette Beyer for administrative matters

Michael Gerke for scientific matters

Motivation

As AI technologies make impressive advances and are increasingly adopted in real-world domains, the debate around the ethical implications of AI has gained significant momentum over the last few years. Much of this debate has focused on fairness, accountability, and transparency (giving rise to “Fairness, Accountability, and Transparency” (FAT) as a common shorthand for this complex of properties) as key elements of ethical AI. However, the notion of transparency – closely linked to explainability and interpretability – has largely eluded systematic treatment within computer science. Although transparency is a prerequisite to instilling trust in AI technologies, for example when demonstrating that a system is fair or accountable, no concrete theoretical frameworks for transparency have been defined, and no general practical methodologies have been proposed to embed transparency in the design of these systems.

The purpose of this Dagstuhl Seminar will be to initiate a debate around these theoretical foundations and practical methodologies with the overall aim of laying the foundations for a “transparency by design” framework – a framework for systems development methodology that integrates transparency into all stages of the software development process. Addressing this challenge will involve bringing together researchers from Artificial Intelligence, Human-Computer Interaction, and Software Engineering, as well as ethics specialists from the humanities and social sciences. The seminar will explore questions such as:

  • What sorts of explanations are users looking for (or may be helpful for them) in a certain type of system, and how should these be generated and presented to them?
  • Can software code be designed or augmented to provide information about internal processing without revealing commercially sensitive information?
  • How should agile software development methodologies be extended to make transparency to relevant stakeholders a priority without adding complexity to the process?
  • How can properties of AI systems that are of interest be expressed in languages that lend themselves to formal verification or quantitative analysis?
  • What kinds of interfaces can support people in scrutinising the operation of AI algorithms and tracking the ways this informs decision making?
  • How can traditional software testing methodologies be extended to validate “ethical” properties of AI systems stakeholders are interested in?

Discussion of questions like these will help refine our understanding of the types of transparency that can be provided, and participants will work towards concrete methodological guidelines for delivering such transparency. The seminar will also explore the trade-offs involved, the limitations (and, indeed, the potential downsides) of achieving full transparency, and the options to make available to users when transparency cannot be supported in ways that make sense to them and engender trust.

The first three days of the seminar will combine presentations from experts in different areas, responding to a set of challenge scenarios shared with participants before the event, with group discussions of the problems and possible solutions that arise from different approaches and perspectives. The fourth day will be devoted to a design workshop to synthesise these insights into a framework, with the latter part of this workshop and the final day used to start work on a joint white paper on “transparency by design”.

License
  Creative Commons BY 3.0 DE
  Casey Dugan, Judy Kay, Tsvi Kuflik, and Michael Rovatsos

Classification

  • Artificial Intelligence / Robotics
  • Society / Human-computer Interaction
  • Software Engineering

Keywords

  • Algorithmic transparency
  • Fairness
  • Accountability
  • AI ethics
  • Computers and society
  • Artificial Intelligence
  • Software Engineering
  • Human-Computer Interaction
  • Machine learning
  • Software methodologies
  • User modelling
  • Intelligent user interfaces

Documentation

Each Dagstuhl Seminar and Dagstuhl Perspectives Workshop is documented in the series Dagstuhl Reports. The seminar organizers, in cooperation with the collector, prepare a report that includes contributions from the participants' talks together with a summary of the seminar.


Publications

Furthermore, a comprehensive peer-reviewed collection of research papers can be published in the series Dagstuhl Follow-Ups.

Dagstuhl's Impact

Please inform us when a publication resulting from your seminar appears. Such publications are listed in the category Dagstuhl's Impact and are presented on a special shelf on the ground floor of the library.
