https://www.dagstuhl.de/21381

September 19–24, 2021, Dagstuhl Seminar 21381

Conversational Agent as Trustworthy Autonomous System (Trust-CA)

Organizers

Asbjørn Følstad (SINTEF – Oslo, NO)
Jonathan Grudin (Microsoft Research – Redmond, US)
Effie Lai-Chong Law (University of Leicester, GB)
Björn Schuller (Universität Augsburg, DE)

For support, please contact

Jutka Gasiorowski for administrative matters

Shida Kunz for scientific matters

Documents

Dagstuhl Seminar Schedule

Motivation

A conversational agent (CA) is a burgeoning computing technology that enables users to access services and information by interacting with computers through natural language, in a way that emulates human-human dialogue, be it textual, vocal, visual, or gestural (1). CAs may also be referred to as, for example, chatbots, virtual agents, conversational computing, or dialogue systems, and encompass services such as voice-based assistants, open-domain agents for social chatter, agents for prolonged training, coaching, or companionship, and chatbots for customer interactions.

Recent years have witnessed a surge of CA usage across many sectors, largely attributable to recent advances in AI and Machine Learning (ML) technologies. Beyond AI/ML, other fields critical to the development of CAs include Human-Computer Interaction (HCI), design, linguistics, communication science, philosophy, psychology, and sociology. Research on CAs is thus inherently multidisciplinary and multifaceted, and it is not surprising that several strands of CA research have been launched by different communities with varied foci. More conversations among these conversation-oriented research communities should take place to consolidate their diverse understandings of the complex issues pertaining to CAs. Among these issues, the trustworthiness of and trust in CAs should be high on the research agenda. Our seminar Trust-CA aims to enable such conversations.

CAs, like many other AI/ML-infused autonomous systems, need to gain the trust of their users in order to be deployed effectively. First, however, we need to ensure that such systems are trustworthy: persuading users to trust a non-trustworthy CA is grossly unethical. Conversely, failing to convince users to trust a trustworthy CA that is beneficial to their wellbeing can be detrimental, given that a lack of trust leads to low adoption or outright rejection of a system. A deep understanding of how trust is initially built and how it evolves in human-human interaction (HHI) can shed light on the trust journey in human-automation interaction (HAI) (2). This calls for a multidisciplinary analytical framework, which is lacking but much needed to inform the design of trustworthy autonomous systems such as CAs.

Autonomous systems support a range of autonomy levels, recognising the need for a human in the loop (HiL) (3) to execute specific tasks. To ensure seamless and robust human-system integration, a trust-based relationship must be built (4). There have been attempts to develop methodological frameworks for designing HiL solutions for autonomous systems, but more needs to be done.

Overall, conversational agents as trustworthy autonomous systems face several key challenges:

  • How do we develop trustworthy conversational agents?
  • How do we build people’s trust in them?
  • How do we optimise human and conversational agent collaboration?

The overall goal of this Dagstuhl Seminar is to bring together researchers and practitioners currently engaged in the diverse communities related to conversational agents to explore these three challenges of maximising the trustworthiness of and trust in CAs as AI/ML-driven autonomous systems, an issue of increasing significance given the widespread use of CAs in every sector of life, and to chart a roadmap for future research on CAs.

  1. Følstad, A., & Brandtzaeg, P. B. (2020). Users' experiences with chatbots: Findings from a questionnaire study. Quality and User Experience, 5.
  2. Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human–human and human–automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277–301.
  3. Farooq, U., & Grudin, J. (2016). Human-computer integration. Interactions, 23(6), 26–32.
  4. Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660.

Motivation text license
  Creative Commons BY 3.0 DE
  Effie Lai-Chong Law, Asbjørn Følstad, Jonathan Grudin, and Björn Schuller

Classification

  • Artificial Intelligence
  • Human-Computer Interaction
  • Robotics

Keywords

  • Conversational agents
  • Trust
  • Trustworthiness
  • Human-chatbot collaboration
  • Voice emotion

Documentation

Each Dagstuhl Seminar and Dagstuhl Perspectives Workshop is documented in the series Dagstuhl Reports. The seminar organizers, in cooperation with the collector, prepare a report that includes contributions from the participants' talks together with a summary of the seminar.



Publications

Furthermore, a comprehensive peer-reviewed collection of research papers can be published in the series Dagstuhl Follow-Ups.

Dagstuhl's Impact

Please inform us when a publication is published as a result of your seminar. These publications are listed in the category Dagstuhl's Impact and are presented on a special shelf on the ground floor of the library.