- Reflections on the EU's AI Act and How We Could Make It Even Better : article - Haataja, Meeri; Bryson, Joanna J. - Competition Policy International, 2022. - 15 pp.
- Transnational digital governance and its impact on artificial intelligence : chapter in "The Oxford Handbook of AI Governance", ISBN 9780197579329 (preprint version) - Dempsey, Mark Matthew; MacBride, Keegan; Haataja, Meeri; Bryson, Joanna J. - Oxford : Oxford University Press, 2022. - 21 pp.
- What costs should we expect from the EU's AI Act? : article - Haataja, Meeri; Bryson, Joanna J. - SocArXiv, 2022. - 6 pp.
As AI technologies witness impressive advances and become increasingly widely adopted in real-world domains, the debate around the ethical implications of AI has gained significant momentum over the last few years. Much of this debate has focused on fairness, accountability, and transparency (giving rise to "Fairness, Accountability, and Transparency", or FAT, as a common label for this complex of properties) as key elements of ethical AI. However, the notion of transparency, closely linked to explainability and interpretability, has largely eluded systematic treatment within computer science. Although transparency is a prerequisite for instilling trust in AI technologies, for example when demonstrating that a system is fair or accountable, no concrete theoretical frameworks for transparency have been defined, nor have general practical methodologies been proposed for embedding transparency in the design of these systems.
The purpose of this Dagstuhl Seminar will be to initiate a debate around these theoretical foundations and practical methodologies, with the overall aim of laying the groundwork for a "transparency by design" framework: a systems development methodology that integrates transparency into all stages of the software development process. Addressing this challenge will involve bringing together researchers from Artificial Intelligence, Human-Computer Interaction, and Software Engineering, as well as ethics specialists from the humanities and social sciences. The seminar will explore questions such as:
- What sorts of explanations are users looking for (or might be helpful to them) in a given type of system, and how should these explanations be generated and presented?
- Can software code be designed or augmented to provide information about internal processing without revealing commercially sensitive information?
- How should agile software development methodologies be extended to make transparency to relevant stakeholders a priority without adding complexity to the process?
- How can properties of AI systems that are of interest be expressed in languages that lend themselves to formal verification or quantitative analysis?
- What kinds of interfaces can support people in scrutinising the operation of AI algorithms and tracking the ways this informs decision making?
- How can traditional software testing methodologies be extended to validate the "ethical" properties of AI systems that stakeholders are interested in?
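As a concrete illustration of the last question, a testing methodology extended in this direction might run automated checks of statistical fairness properties alongside conventional unit tests. The sketch below is a minimal, hypothetical example rather than a method proposed at the seminar; the demographic-parity metric, the toy data, and the tolerance threshold are all illustrative assumptions:

```python
# Hypothetical sketch of a unit-style "fairness" check, assuming binary
# decisions and a single protected attribute; data and tolerance are made up.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between any two groups."""
    counts = {}
    for decision, group in zip(decisions, groups):
        total, positive = counts.get(group, (0, 0))
        counts[group] = (total + 1, positive + (1 if decision else 0))
    rates = [positive / total for total, positive in counts.values()]
    return max(rates) - min(rates)

# Example "test": flag the system if the gap exceeds a chosen tolerance.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]          # system outputs on a test set
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]  # protected attribute
gap = demographic_parity_gap(decisions, groups)  # here: 0.75 - 0.25 = 0.5
assert gap <= 0.5, f"demographic parity gap too large: {gap:.2f}"
```

In practice such a check would run over a representative evaluation dataset with a domain-appropriate tolerance, and demographic parity is only one of several (mutually incompatible) fairness criteria a stakeholder might ask about.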
Discussion of questions like these will help refine our understanding of the types of transparency that can be provided, and participants will work towards concrete methodological guidelines for delivering such transparency. The seminar will also explore the trade-offs involved, the limitations of (and, indeed, the potential downsides to) achieving full transparency, and the options to make available to users when transparency cannot be supported in ways that make sense to them and engender trust.
The first three days of the seminar will combine presentations from experts in different areas, given in response to a set of challenge scenarios shared with participants prior to the event, with group discussions around the problems and possible solutions that arise from different approaches and perspectives. The fourth day will be devoted to a design workshop to synthesise insights into a framework, with the latter part of this workshop and the final day used to start work on a joint white paper on "transparency by design".
As AI technologies are witnessing impressive advances and becoming increasingly widely adopted in real-world domains, the debate around the ethical implications of AI has gained significant momentum over the last few years. Much of this debate has focused on fairness, accountability, and transparency, giving rise to "Fairness, Accountability and Transparency" (FAT, or FAccT) as a common label for this complex of properties, seen as key elements of ethical AI.
However, the notion of transparency, closely linked to terms like explainability, accountability, and interpretability, has not yet been given a holistic treatment within computer science. Despite the fact that transparency is a prerequisite for instilling trust in AI technologies, there is a gap in understanding of how to create systems with the required transparency, from capturing their transparency requirements all the way through to concrete design and implementation methodologies. When it comes to, for example, demonstrating that a system is fair or accountable, we lack usable theoretical frameworks for transparency. More generally, there are no general practical methodologies for the design of transparent systems.
The purpose of this Dagstuhl Seminar was to initiate a debate around theoretical foundations and practical methodologies, with the overall aim of laying the foundations for a "Transparency by Design" framework, i.e., a systems development framework that integrates transparency into all stages of the software development process.
To address this challenge, we brought together researchers with expertise in Artificial Intelligence, Human-Computer Interaction, and Software Engineering, but also considered it essential to invite experts from the humanities, law, and the social sciences, bringing an interdisciplinary dimension to the seminar and allowing it to investigate the cognitive, social, and legal aspects of transparency.
As a consequence of the Covid-19 pandemic, the seminar had to be carried out in a virtual, online format. To accommodate the time zones of participants from different parts of the world, two three-hour sessions were scheduled each day, with participant groups of roughly equal size re-shuffled daily so that every attendee had opportunities to interact with all other participants whenever the time difference between their locations made this possible. Each session consisted of plenary talks and discussion as well as work in small groups, with discussions and outcomes captured in shared documents edited jointly by the groups attending the different sessions each day.
The seminar was planned to progress gradually, from building a shared understanding of the problem space among participants on the first day, to mapping out the state of the art and identifying gaps in their respective areas of expertise on the second and third days.
To do this, the groups identified questions that stakeholders in different domains may need to be able to answer about a transparent system, relying on participants to choose domains they were familiar with and considered important. To establish the state of the art in these areas, the group sessions on the second and third days were devoted to mapping out current practice and research and identifying the gaps that need to be addressed.
The two sessions on each day considered these in terms of four aspects: data collection techniques, software development methodologies, AI techniques and user interfaces.
Finally, the last day was dedicated to consolidating the results towards creating a framework for designing transparent systems. This began with each of the parallel groups considering different aspects: Motivating why transparency is important; challenges posed by current algorithmic systems; transparency-enhancing technologies; a transparency by design methodology; and, finally, the road ahead.
The work that began with the small-group discussions and summaries continued in follow-up meetings of each group. The organisers have led the effort to integrate all of these contributions after the seminar, with the aim of producing a joint publication.
- Elisabeth André (Universität Augsburg, DE)
- Bettina Berendt (TU Berlin, DE)
- Nehal Bhuta (University of Edinburgh, GB)
- Maria Bielikova (KInIT - Bratislava, SK)
- Veronika Bogina (University of Haifa, IL)
- Joanna J. Bryson (Hertie School of Governance - Berlin, DE)
- Robin Burke (University of Colorado - Boulder, US)
- Aylin Caliskan (George Washington University - Washington, DC, US)
- Carlos Castillo (UPF - Barcelona, ES)
- Ewa Cepil (UPJPII - Krakow, PL)
- Cristina Conati (University of British Columbia - Vancouver, CA)
- Aviva de Groot (Tilburg University, NL)
- Gianluca Demartini (The University of Queensland - Brisbane, AU)
- Virginia Dignum (University of Umeå, SE)
- Fausto Giunchiglia (University of Trento, IT)
- Riccardo Guidotti (University of Pisa, IT)
- Meeri Haataja (Saidot Ltd - Espoo, FI)
- Judy Kay (The University of Sydney, AU)
- Styliani Kleanthous (Open University of Cyprus - Nicosia, CY)
- Ansgar Koene (EY Global - London, GB)
- Joshua A. Kroll (Naval Postgraduate School - Monterey, US)
- Antonio Krüger (DFKI - Saarbrücken, DE)
- Tsvi Kuflik (University of Haifa, IL)
- Bob Kummerfeld (The University of Sydney, AU)
- Loizos Michael (Open University of Cyprus - Nicosia, CY)
- Antonija Mitrovic (University of Canterbury - Christchurch, NZ)
- Kalia Orphanou (Open University of Cyprus - Nicosia, CY)
- Jahna Otterbacher (Open University of Cyprus - Nicosia, CY)
- Anna Perini (CIT - FBK - Povo, IT)
- Lena Podoletz (University of Edinburgh, GB)
- Iris Reinhartz-Berger (University of Haifa, IL)
- Paolo Rosso (Technical University of Valencia, ES)
- Michael Rovatsos (University of Edinburgh, GB)
- Avital Shulner-Tal (University of Haifa, IL)
- Alison Smith-Renner (University of Maryland - College Park, US)
- Andreas Theodorou (University of Umeå, SE)
- Vincent Wade (Trinity College Dublin, IE)
- Emine Yilmaz (University College London, GB)
- artificial intelligence / robotics
- society / human-computer interaction
- software engineering
- algorithmic transparency
- AI ethics
- computers and society
- artificial intelligence
- software engineering
- human-computer interaction
- machine learning
- software methodologies
- user modelling
- intelligent user interfaces