Dagstuhl Seminar 21381

Conversational Agent as Trustworthy Autonomous System (Trust-CA)

(September 19 – September 24, 2021)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/21381

Organizers
  • Effie Lai-Chong Law (Durham University, GB)
  • Asbjørn Følstad (SINTEF - Oslo, NO)
  • Jonathan Grudin (Microsoft - Redmond, US)
  • Björn Schuller (Universität Augsburg, DE)


Motivation

A conversational agent (CA) is a burgeoning computing technology that enables users to access services and information by interacting with computers through natural language in a way that emulates human-human dialogue, be it textual, vocal, visual, or gestural [1]. CAs may also be referred to as, for example, chatbots, virtual agents, conversational computing, or dialogue systems, and encompass services such as voice-based assistants, open-domain agents for social chatter, agents for prolonged training, coaching, or companionship, and chatbots for customer interactions.

Recent years have witnessed a surge of CA usage in many sectors, attributable to recent advances in AI and Machine Learning (ML) technologies. In addition to AI/ML, other fields critical for the development of CAs include Human-Computer Interaction (HCI), design, linguistics, communication science, philosophy, psychology, and sociology. Research on CAs is inherently multidisciplinary and multifaceted. Hence, it is not surprising that several strands of research activities on CAs have been launched by different communities with varied foci. More conversations among the conversation-oriented research communities should take place to consolidate diverse understandings of the complex issues pertaining to CAs. Among these issues, the trustworthiness of, and trust in, CAs should be high on the research agenda. Our seminar Trust-CA aims to enable such conversations.

CAs, like many other AI/ML-infused autonomous systems, need to gain the trust of their users in order to be deployed effectively. Nevertheless, we first need to ensure that such systems are trustworthy. Persuading users to trust a non-trustworthy CA is grossly unethical. Conversely, failing to convince users to trust a trustworthy CA that is beneficial to their wellbeing can be detrimental, given that a lack of trust leads to low adoption or outright rejection of a system. A deep understanding of how trust is initially built and how it evolves in human-human interaction (HHI) can shed light on the trust journey in human-automation interaction (HAI) [2]. This calls for a multidisciplinary analytical framework, which is lacking but much needed to inform the design of trustworthy autonomous systems such as CAs.

Autonomous systems support a range of autonomy levels, recognising the need for a human in the loop (HiL) [3] to execute specific tasks. To ensure seamless and robust human-system integration, a trust-based relationship must be built [4]. There have been attempts to develop methodological frameworks for designing HiL solutions for autonomous systems, but more needs to be done.
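To make the HiL idea concrete, the following is a minimal sketch of one common pattern: a confidence-gated hand-off, in which an agent answers autonomously only when its self-estimated confidence clears a threshold and otherwise defers to a human operator. All names and the threshold value are illustrative assumptions, not part of the seminar materials or any specific framework discussed above.

# Illustrative sketch only: a confidence-gated human-in-the-loop (HiL) hand-off.
# All names and the 0.75 threshold are assumptions for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAnswer:
    text: str
    confidence: float  # agent's self-estimated confidence in [0, 1]

def answer_with_hil(
    query: str,
    agent: Callable[[str], AgentAnswer],
    ask_human: Callable[[str], str],
    threshold: float = 0.75,
) -> str:
    """Return the agent's answer when it is confident enough,
    otherwise escalate the query to a human operator."""
    candidate = agent(query)
    if candidate.confidence >= threshold:
        return candidate.text
    # Low confidence: keep the human in the loop instead of guessing.
    return ask_human(query)

def _demo_agent(query: str) -> AgentAnswer:
    # A stand-in agent that is deliberately unsure, forcing escalation.
    return AgentAnswer(text=f"Automated answer to: {query}", confidence=0.4)

def _demo_human(query: str) -> str:
    return f"Human operator's answer to: {query}"

print(answer_with_hil("Can I get a refund?", _demo_agent, _demo_human))
# -> Human operator's answer to: Can I get a refund?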

Overall, conversational agents as trustworthy autonomous systems face several key challenges:

  • How do we develop trustworthy conversational agents?
  • How do we build people’s trust in them?
  • How do we optimise human and conversational agent collaboration?

The overall goal of this Dagstuhl Seminar is to bring together researchers and practitioners currently engaged in diverse communities related to CAs, to explore the three main challenges in maximising the trustworthiness of and trust in CAs as AI/ML-driven autonomous systems – an issue deemed increasingly significant given the widespread use of CAs in every sector of life – and to chart a roadmap for future research on CAs.

  1. Følstad, A., & Brandtzaeg, P. B. (2020). Users' experiences with chatbots: Findings from a questionnaire study. Quality and User Experience, 5.
  2. Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human–human and human–automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277-301.
  3. Farooq, U., & Grudin, J. (2016). Human-computer integration. Interactions, 23(6), 26-32.
  4. Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals.
Copyright Effie Lai-Chong Law, Asbjørn Følstad, Jonathan Grudin, and Björn Schuller

Summary

The overall goal of Dagstuhl Seminar 21381 "Conversational Agent as Trustworthy Autonomous System" (Trust-CA) was to bring together researchers and practitioners currently engaged in diverse communities related to conversational agents (CAs), to explore challenges in maximising the trustworthiness of and trust in CAs as AI-driven autonomous systems – an issue deemed increasingly significant given their widespread use in every sector of life – and to chart a roadmap for future CA research. The three main challenges we identified were:

  • How do we develop trustworthy conversational agents?
  • How do we build people's trust in them?
  • How do we optimise human and conversational agent collaboration?

The Seminar Trust-CA took place on 19–24 September 2021 in hybrid mode. Of the 50 invitees, 19 attended in person and the rest joined online from all over the world, including Brazil, Canada, France, Germany, Greece, Ireland, the Netherlands, Norway, Poland, South Korea, Sweden, Switzerland, the UK, and the USA.

The four-day scientific programme started by unpacking the notion of "trust in conversational agents" in a panel discussion, in which each of the four seminar organisers expressed their views on the notion. Jonathan Grudin presented a list of ten species of trust that can be applied to conversational agents, for instance, "Trust that a CA will correctly interpret my question or request; will deliver relevant, reliable, useful information." Asbjørn Følstad first presented an overview of the six themes derived from a pre-Seminar survey (details are in the Overview of Working Groups) and then described his recent work on the effect of a conversational agent's human likeness on trust. Björn Schuller presented factors influencing trust in humans, such as being reliable, ethical, moral, and charismatic, and in conversational agents, such as being explainable, interpretable, and transparent; he also discussed how to measure trust reliably and the danger of overtrust. Effie Law discussed the notion of trust with reference to multidisciplinary theories of trust (e.g. psychological, social, historical), moving beyond the use of questionnaires to evaluate trust, and identifying applications where agents are of high practical value. Several attendees commented on the ideas shared, e.g. on the elusiveness of trust.

The scientific programme comprised two major parts - Talks and Breakout Groups. There were altogether 20 talks, covering a range of topics (see Abstracts); nine were delivered in person and the rest online. There were six Breakout Groups, each discussing one of the six themes: Group 1 - Scope of Trust in CA; Group 2 - Impact of CA; Group 3 - Ethics of CA; Group 4 - AI and Technical Development; Group 5 - Definition, Conceptualisation and Measurement of Trust; Group 6 - Interaction Design of CA. Groups 1, 3, and 4 had one team each, whereas Groups 2, 5, and 6 had two teams each. To ease collaboration, individual teams were either in-person or online (except for Group 4, which was in hybrid mode). Each group had three two-hour working sessions. In the evening, each group reported progress and invited feedback to shape subsequent sessions.

The group discussions led to intriguing insights that contributed to addressing the main challenges listed above and stimulated future collaborations (see the Workgroup Reports). Here we highlight one key insight from each group. Group 1 developed a dynamic model of trust with three stages, Build-Maintain-Repair, which evolve over time (see the illustrative sketch after this paragraph). Group 2 drafted a code of ethics for trustworthy conversational agents with eight provisions. Group 3 explored the ethical challenge of transparency from the perspective of conversational disclosure. Group 4 called for increased collaboration across research communities and industries to strengthen the technological basis for trust in conversational agents. Group 5 proposed a framework for integrating measurements of trusting beliefs and trusting behaviour. Group 6 analysed several aspects of multimodality to understand their possible effects on trust in conversational agents. Apart from the scientific programme, the Seminar organised several social events, including after-dinner wine and cheese gatherings, a hike to a nearby historic site, and a music event.
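As a concrete reading of Group 1's insight, here is a minimal sketch of how a Build-Maintain-Repair trust lifecycle could be represented as a simple state machine. The stage names follow the report above, but the transition events and the code structure are illustrative assumptions, not the group's actual model.

# Illustrative sketch of a Build-Maintain-Repair trust lifecycle as a simple
# state machine. Stage names come from the report; the events and transitions
# are assumptions for illustration only.

from enum import Enum, auto

class TrustStage(Enum):
    BUILD = auto()     # initial trust formation
    MAINTAIN = auto()  # trust calibrated through ongoing interaction
    REPAIR = auto()    # trust recovery after a violation

# event -> {current stage: next stage}
TRANSITIONS = {
    "positive_interaction": {TrustStage.BUILD: TrustStage.MAINTAIN,
                             TrustStage.REPAIR: TrustStage.MAINTAIN},
    "trust_violation":      {TrustStage.BUILD: TrustStage.REPAIR,
                             TrustStage.MAINTAIN: TrustStage.REPAIR},
}

def step(stage: TrustStage, event: str) -> TrustStage:
    """Advance the trust lifecycle; unknown events leave the stage unchanged."""
    return TRANSITIONS.get(event, {}).get(stage, stage)

# Example: trust is built, violated, then repaired over time.
stage = TrustStage.BUILD
for event in ["positive_interaction", "trust_violation", "positive_interaction"]:
    stage = step(stage, event)
    print(event, "->", stage.name)
# -> positive_interaction -> MAINTAIN
# -> trust_violation -> REPAIR
# -> positive_interaction -> MAINTAIN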

Overall, our Dagstuhl Seminar Trust-CA was considered a success. The major outputs were derived from the pre-Seminar survey (six research themes and a recommended reading list), the twenty talks, and the six multi-session breakout groups. Thanks must go to all attendees for their enthusiastic involvement in analysing various aspects of the burgeoning topic of conversational agents. Of course, the Seminar could only take place thanks to the generosity of Schloss Dagstuhl – Leibniz Center for Informatics. The efficiency and friendliness of the scientific and administrative staff of Schloss Dagstuhl were much appreciated by the organisers and all attendees.

Copyright Effie Lai-Chong Law, Asbjørn Følstad, Jonathan Grudin, and Björn Schuller

Participants
On Site
  • Elisabeth André (Universität Augsburg, DE) [dblp]
  • Oliver Bendel (FH Nordwestschweiz - Windisch, CH) [dblp]
  • Leigh Clark (Swansea University, GB) [dblp]
  • Asbjørn Følstad (SINTEF - Oslo, NO) [dblp]
  • Frode Guribye (University of Bergen, NO) [dblp]
  • Sebastian Hobert (Georg August Universität - Göttingen, DE) [dblp]
  • Andreas Kilian (Universität des Saarlandes - Saarbrücken, DE)
  • Dimosthenis Kontogiorgos (KTH Royal Institute of Technology - Stockholm, SE) [dblp]
  • Matthias Kraus (Universität Ulm, DE) [dblp]
  • Guy Laban (University of Glasgow, GB) [dblp]
  • Effie Lai-Chong Law (Durham University, GB) [dblp]
  • Minha Lee (TU Eindhoven, NL) [dblp]
  • Clayton Lewis (University of Colorado - Boulder, US) [dblp]
  • Birthe Nesset (Heriot-Watt University - Edinburgh, GB) [dblp]
  • Catherine Pelachaud (Sorbonne University - Paris, FR) [dblp]
  • Martin Porcheron (Swansea University, GB) [dblp]
  • Stefan Schaffer (DFKI - Berlin, DE) [dblp]
  • Ryan Schuetzler (Brigham Young University - Provo, US) [dblp]
  • Björn Schuller (Universität Augsburg, DE) [dblp]
  • Eren Yildiz (University of Umeå, SE) [dblp]
Remote
  • Theo Araujo (University of Amsterdam, NL) [dblp]
  • Susan Brennan (Stony Brook University, US) [dblp]
  • Heloisa Candello (IBM Research - Sao Paulo, BR) [dblp]
  • Ana Paula Chaves (Federal University of Technology - Paraná, BR) [dblp]
  • Cristina Conati (University of British Columbia - Vancouver, CA) [dblp]
  • Benjamin Cowan (University College - Dublin, IE) [dblp]
  • Laurence Devillers (CNRS - Orsay, FR & Sorbonne University - Paris, FR) [dblp]
  • Jasper Feine (KIT - Karlsruher Institut für Technologie, DE) [dblp]
  • Jonathan Grudin (Microsoft - Redmond, US) [dblp]
  • Evelien Heyselaar (Radboud University Nijmegen, NL) [dblp]
  • Soomin Kim (Seoul National University, KR) [dblp]
  • Stefan Kopp (Universität Bielefeld, DE) [dblp]
  • Yi-Chieh Lee (NTT - Kyoto, JP) [dblp]
  • Oliver Lemon (Heriot-Watt University - Edinburgh, GB) [dblp]
  • Q. Vera Liao (IBM TJ Watson Research Center - White Plains, US) [dblp]
  • Christine Liebrecht (Tilburg University, NL) [dblp]
  • Roger K. Moore (University of Sheffield, GB) [dblp]
  • Stefan Morana (Universität des Saarlandes - Saarbrücken, DE) [dblp]
  • Cosmin Munteanu (University of Toronto Mississauga, CA) [dblp]
  • Ana Paiva (INESC-ID - Porto Salvo, PT) [dblp]
  • Symeon Papadopoulos (CERTH - Thessaloniki, GR) [dblp]
  • Caroline Peters (Universität des Saarlandes - Saarbrücken, DE)
  • Rolf Pfister (Cognostics - Pullach, DE)
  • Olivier Pietquin (Google - Paris, FR) [dblp]
  • Aleksandra Przegalinska (Kozminski University, PL) [dblp]
  • Elayne Ruane (University College Dublin, IE) [dblp]
  • Marita Skjuve (SINTEF - Oslo, NO) [dblp]
  • Cameron Taylor (Google - London, GB) [dblp]
  • Ricardo Usbeck (Universität Hamburg, DE) [dblp]
  • Margot van der Goot (University of Amsterdam, NL) [dblp]
  • Dakuo Wang (IBM T.J. Watson Research Center - Yorktown Heights, US) [dblp]
  • Saskia Wita (Universität des Saarlandes - Saarbrücken, DE)
  • Levi Witbaard (OBI4wan - Zaandam, NL)
  • Zhou Yu (Columbia University - New York, US) [dblp]
  • Michelle X. Zhou (Juji Inc. - Saratoga, US) [dblp]

Classification
  • Artificial Intelligence
  • Human-Computer Interaction
  • Robotics

Keywords
  • Conversational agents
  • Trust
  • Trustworthiness
  • Human-chatbot collaboration
  • Voice emotion