Dagstuhl Seminar 24121
Trustworthiness and Responsibility in AI – Causality, Learning, and Verification
(Mar 17 – Mar 22, 2024)
Organizers
- Vaishak Belle (University of Edinburgh, GB)
- Hana Chockler (King's College London, GB)
- Sriraam Natarajan (University of Texas at Dallas - Richardson, US)
- Shannon Vallor (University of Edinburgh, GB)
- Kush R. Varshney (IBM Research - Yorktown Heights, US)
- Joost Vennekens (KU Leuven, BE)
Contact
- Marsha Kleinbauer (for scientific matters)
- Simone Schilke (for administrative matters)
How can we trust autonomous computer-based systems? As such systems are increasingly deployed in safety-critical environments while interoperating with humans, this question is rapidly gaining importance. This Dagstuhl Seminar aims to address it by bringing together an interdisciplinary group of researchers from Artificial Intelligence (AI), Machine Learning (ML), Robotics (ROB), hardware and software verification (VER), Software Engineering (SE), and the Social Sciences (SS), who can provide different and complementary perspectives on responsibility and correctness in the design of algorithms, interfaces, and development methodologies in AI.
The purpose of the seminar will be to initiate a debate around both theoretical foundations and practical methodologies for a "Trustworthiness & Responsibility in AI" framework that integrates quantifiable responsibility and verifiable correctness into all stages of the software engineering process. Such a framework would allow governance and regulatory practices to be viewed not merely as rules and regulations imposed from afar, but as an integrative process of dialogue and discovery aimed at understanding why an autonomous system might fail, and at helping designers and regulators address such failures through proactive governance.
In particular, we will consider how to reason about responsibility, blame, and the causal factors affecting the trustworthiness of a system. More practically, we will ask what tools we can provide to regulators, verification and validation professionals, and system designers to help them clarify the intent and content of regulations down to a machine-interpretable form. Existing regulations are necessarily vague and dependent on human interpretation, so we will ask: How should they be made precise and quantifiable? What is lost in the process of quantification? How do we address factors that are qualitative in nature, and integrate such concerns into an engineering regime?
In addressing these questions, the seminar will benefit from extensive discussions between AI, ML, ROB, SE, and SS researchers who have experience with the ethical, societal, and legal aspects of AI, complex AI systems, software engineering for AI systems, and the causal analysis of counterexamples and software faults.
As the main outcome of this Dagstuhl Seminar, we plan to create one or more blueprints of a "Trustworthiness & Responsibility in AI" framework, grounded in causality and verification. These will be immediately useful as guidelines and can form the foundation for a white paper. Specifically, we will seek to produce a report detailing what we consider to be the gaps in formal research around responsibility; this report will inform future research and the preparation of new experiments, papers, and project proposals that help close these gaps. We also hope that this initial material will lead to a proposal for an open workshop at a major international conference, organised by participants of the seminar. In collaboration with other interested participants, the organisers will also endeavour to produce a magazine-style article (for AI Magazine, IEEE Intelligent Systems, or similar outlets) summarising the results of the workshop and giving an overview of the research challenges that come out of it.
Participants
- Nadisha-Marie Aliman (Utrecht University, NL)
- Emma Beauxis-Aussalet (VU Amsterdam, NL)
- Sander Beckers (University of Amsterdam, NL)
- Vaishak Belle (University of Edinburgh, GB)
- Jan M. Broersen (Utrecht University, NL)
- Georgiana Caltais (University of Twente - Enschede, NL)
- Hana Chockler (King's College London, GB)
- Jens Claßen (Roskilde University, DK)
- Sjur K. Dyrkolbotn (West. Norway Univ. of Applied Sciences - Bergen, NO)
- Yanai Elazar (AI2 - Seattle, US)
- Esra Erdem (Sabanci University - Istanbul, TR)
- Michael Fisher (University of Manchester, GB)
- Sarah Alice Gaggl (TU Dresden, DE)
- Leilani H. Gilpin (University of California - Santa Cruz, US)
- Gregor Goessler (INRIA - Grenoble, FR)
- Joseph Y. Halpern (Cornell University - Ithaca, US)
- Till Hofmann (RWTH Aachen University, DE)
- David Jensen (University of Massachusetts - Amherst, US)
- Leon Kester (TNO Netherlands - The Hague, NL)
- Ekaterina Komendantskaya (Heriot-Watt University - Edinburgh, GB)
- Stefan Leue (Universität Konstanz, DE)
- Joshua Loftus (London School of Economics and Political Science, GB)
- Mohammad Reza Mousavi (King's College London, GB)
- Giuseppe Primiero (University of Milan, IT)
- Ajitha Rajan (University of Edinburgh, GB)
- Subramanian Ramamoorthy (University of Edinburgh, GB)
- Kilian Rückschloß (LMU München, DE)
- Judith Simon (Universität Hamburg, DE)
- Luke Stark (University of Western Ontario - London, CA)
- Daniel Susser (Cornell University - Ithaca, US)
- Shannon Vallor (University of Edinburgh, GB)
- Kush R. Varshney (IBM Research - Yorktown Heights, US)
- Joost Vennekens (KU Leuven, BE)
- Felix Weitkämper (LMU München, DE)
Classification
- Artificial Intelligence
- Machine Learning
Keywords
- artificial intelligence
- machine learning
- causality
- responsible AI
- verification