19. – 24. September 2021, Dagstuhl-Seminar 21381

Conversational Agent as Trustworthy Autonomous System (Trust-CA)


Asbjørn Følstad (SINTEF – Oslo, NO)
Jonathan Grudin (Microsoft – Redmond, US)
Effie Lai-Chong Law (University of Leicester, GB)
Björn Schuller (Universität Augsburg, DE)

For information about this Dagstuhl Seminar, please contact

Jutka Gasiorowski for administrative matters

Shida Kunz for scientific matters

Dagstuhl Reports

We ask participants to support us with the necessary documentation and to submit abstracts of their talks, results from working groups, etc. for publication in our series Dagstuhl Reports via our
Dagstuhl Reports Submission System.


Shared Documents
Dagstuhl Seminar Wiki

(Please use your personal DOOR credentials to log in)


Conversational agents (CAs) are a burgeoning computing technology that enables users to access services and information by interacting with computers through natural language, in a way that emulates human-human dialogue, be it textual, vocal, visual or gestural(1). CAs may also be referred to as, for example, chatbots, virtual agents, conversational computing, or dialogue systems, and encompass services such as voice-based assistants, open-domain agents for social chatter, agents for prolonged training, coaching or companionship, and chatbots for customer interactions.

Recent years have witnessed a surge in CA usage across many sectors, attributable to recent advances in AI/Machine Learning (ML) technologies. Beyond AI/ML, other fields critical for the development of CAs include Human-Computer Interaction (HCI), design, linguistics, communication science, philosophy, psychology, and sociology. Research on CAs is thus inherently multidisciplinary and multifaceted, and it is not surprising that several strands of CA research have been launched by different communities with varied foci. More conversations among these conversation-oriented research communities should take place to consolidate diverse understandings of the complex issues pertaining to CAs. Among these issues, the trustworthiness of and trust in CAs should be high on the research agenda. Our seminar Trust-CA aims to enable such conversations.

CAs, like many other AI/ML-infused autonomous systems, need to gain the trust of their users in order to be deployed effectively. First, however, we need to ensure that such systems are trustworthy: persuading users to trust a non-trustworthy CA is grossly unethical. Conversely, failing to convince users to trust a trustworthy CA that benefits their wellbeing can be detrimental, given that a lack of trust leads to low adoption or outright rejection of a system. A deep understanding of how trust is initially built and then evolves in human-human interaction (HHI) can shed light on the trust journey in human-automation interaction (HAI)(2). This calls for a multidisciplinary analytical framework, which is lacking but much needed to inform the design of trustworthy autonomous systems such as CAs.

Autonomous systems support a range of autonomy levels, recognising the need for a human in the loop (HiL)(3) to execute specific tasks. To ensure seamless and robust human-system integration, a trust-based relationship must be built(4). There have been attempts to develop methodological frameworks for designing HiL solutions for autonomous systems, but more needs to be done.

Overall, conversational agents as trustworthy autonomous systems face several key challenges:

  • How do we develop trustworthy conversational agents?
  • How do we build people’s trust in them?
  • How do we optimise human and conversational agent collaboration?

The overall goal of this Dagstuhl Seminar is to bring together researchers and practitioners currently engaged in the diverse communities related to conversational agents, to explore the three main challenges of maximising the trustworthiness of and trust in CAs as AI/ML-driven autonomous systems – an issue of growing significance given the widespread use of CAs in every sector of life – and to chart a roadmap for future research on CAs.

1. Følstad, A., & Brandtzaeg, P. B. (2020). Users' experiences with chatbots: Findings from a questionnaire study. Quality and User Experience, 5.
  2. Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human–human and human–automation trust: an integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277-301.
3. Farooq, U., & Grudin, J. (2016). Human-computer integration. Interactions, 23(6), 26-32.
  4. Glikson, E., & Woolley, A. W. (2020). Human trust in Artificial Intelligence: Review of empirical research. Academy of Management Annals.

Motivation text license
  Creative Commons BY 3.0 DE
  Effie Lai-Chong Law, Asbjørn Følstad, Jonathan Grudin, and Björn Schuller


  • Artificial Intelligence
  • Human-Computer Interaction
  • Robotics


  • Conversational agents
  • Trust
  • Trustworthiness
  • Human-chatbot collaboration
  • Voice emotion


All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the series Dagstuhl Reports. Together with the seminar's collector, the organizers compile a report that summarises the authors' contributions and complements them with an overall summary.


Download the overview flyer (PDF).


There is also the option of publishing a comprehensive collection of peer-reviewed papers in the series Dagstuhl Follow-Ups.

Dagstuhl's Impact

Please let us know if a publication arises from your seminar. Such publications are listed separately in the section Dagstuhl's Impact and displayed on the ground floor of the library.