Accountability in the context of software is an emerging area that has attracted interest in fields as disparate as computing and information science, philosophy, political science, and law and policy studies. Autonomous agents are increasingly shifting into safety-critical aspects of society and our everyday lives. Autonomously driven vehicles share the roads with us. Computerized reasoning aids healthcare providers in disease diagnosis, recruiters in hiring, and even adjudicators in assessing pretrial flight risk. While a common goal of these autonomous agents is to assist and improve human lives, the harsh reality is that the opposite occurs far too frequently. Car accidents involving autonomously driven vehicles have been fatal, and bias-vulnerable autonomous decision-making has led to discriminatory assessments and the abuse of individuals. As a result, accountability has begun to be investigated in areas including the AI, machine learning, control, systems development, data management, software engineering, programming languages, human factors/human-computer interaction, and formal verification communities.
Understanding accountability, and the place of formal computing tools in assigning it, has been recognized by funding agencies as an important research topic. Significant legal reforms concerning the liability of autonomous systems are currently under way: the proposed EU Artificial Intelligence Act; reforms to the EU Machinery Directive and Product Liability Directive; standardization processes around software and accountability at ISO, IEEE, CEN/CENELEC, US NIST, and ETSI, among others; a range of proposals in the US Congress around reforms to intermediary liability and recommender systems; and proposals from the White House Office of Science and Technology Policy, alongside many other ongoing revisions to existing policies and recommendations for new policies around the world.
The goal of this Dagstuhl Seminar is to integrate the perspectives of many different communities, convening a disparate group of experts to discuss open problems and potential future directions across a variety of domains with a focus on accountability. We have highlighted three main areas for this discussion:
- Justifying Assurance: Opportunities and Limits of Formal Tools for Accountability in Sociotechnical Context
- The Role(s) of Computing and Infrastructures in Accountability for Software
- New Formal Specification Languages and Modeling Techniques for Accountability
In addition to scientific contributions, we expect this seminar to serve as a forum for cross-disciplinary discussions that would not otherwise take place. The seminar will forge new conversations between communities, with a focus on bridging sociotechnical research and formal methods. Another goal of the seminar is to enable synergies between the different fields attacking the problem of accountability, so as to discuss and improve upon the state of the art and practice. As a result of the seminar, we plan to produce a report that can act as a body of knowledge for researchers joining this newly forming community.
- Computers and Society
- Multiagent Systems
- Other Computer Science
- autonomous systems
- decision making