Dagstuhl Seminar 26461

Technology, Trust, Truth: The Role of Responsible AI in Combating Misinformation

(Nov 08 – Nov 13, 2026)

Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/26461

Organizers
  • Kalina Bontcheva (University of Sheffield, GB)
  • Gianluca Demartini (University of Queensland - Brisbane, AU)
  • Iryna Gurevych (TU Darmstadt, DE)
  • Arkaitz Zubiaga (Queen Mary University of London, GB)

Motivation

The increasingly dominant role of Artificial Intelligence (AI) in content moderation, fact-checking, and misinformation mitigation has introduced new challenges and opportunities for research at the intersection of AI, information retrieval, computational social science, and ethics. The rapid expansion of Large Language Models (LLMs), retrieval-augmented generation (RAG) systems, and multimodal AI has enabled sophisticated misinformation detection techniques, but these technologies have also created new risks, including the mass production of synthetic misinformation, algorithmic biases, and unintended societal consequences. Addressing these challenges requires an interdisciplinary approach that considers the technical, human, and institutional dimensions of AI-driven misinformation governance.

As major platforms and policymakers, particularly in the U.S., scale back their efforts to combat misinformation, this seminar will explore how research can proactively develop strategies to mitigate misinformation in an environment where key industry players may deprioritize or disregard the issue. With a focus on independent, research-driven solutions, discussions will center on advancing technical methodologies, understanding how people interact with misinformation, and defining the role of academia in shaping effective, long-term countermeasures.

Technical Foundations of Misinformation Detection.

Advances in natural language processing, machine learning, explainable AI, fact-verification models, and RAG systems have improved misinformation identification, yet key issues remain. Algorithmic biases in training data can influence detection performance, leading to disparities across languages, communities, and political perspectives. Adversarial manipulation techniques challenge the robustness of AI-based moderation, requiring more adaptive and context-aware models. As noted above, generative AI has also introduced new challenges, since it is a powerful tool for producing highly convincing fake content, including deepfake images and videos as well as AI-generated propaganda.
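
As a concrete illustration of such a pipeline, the minimal Python sketch below matches a claim against a small evidence corpus with TF-IDF retrieval and applies a naive similarity threshold where a real system would use a trained entailment model. The evidence snippets, the 0.2 threshold, and the verify function are illustrative assumptions, not artifacts of the seminar.

    # A toy retrieval-augmented verification step. Real systems would use
    # learned dense retrievers and a trained entailment (NLI) model instead
    # of TF-IDF similarity and a fixed threshold.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical fact-check snippets standing in for an evidence index.
    EVIDENCE = [
        "Officials confirmed the photo was taken in 2015, not during the storm.",
        "The 2024 study found no link between the treatment and the side effect.",
        "The quote attributed to the minister does not appear in any transcript.",
    ]

    def verify(claim, evidence, threshold=0.2):
        """Retrieve the closest evidence passage and score the claim against it."""
        vectorizer = TfidfVectorizer().fit(evidence + [claim])
        sims = cosine_similarity(vectorizer.transform([claim]),
                                 vectorizer.transform(evidence))[0]
        best = int(sims.argmax())
        # Toy verdict: an NLI model would decide support / refute / not enough info.
        verdict = "needs review" if sims[best] >= threshold else "no evidence found"
        return evidence[best], float(sims[best]), verdict

    print(verify("A viral photo shows flooding during this week's storm", EVIDENCE))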

Human and Societal Perspective.

Misinformation is deeply intertwined with human behavior, cognitive biases, and the dynamics of online discourse. Research in computational social science and psychology has highlighted how misinformation spreads, how people engage with fact-checking interventions, and how trust in information sources is formed. AI-driven fact-checking and content moderation systems must account for user interaction patterns, resistance to correction, and the psychological effects of misinformation exposure. Human-in-the-loop approaches, such as crowdsourced fact-checking, expert validation, and AI-assisted journalistic workflows, provide valuable countermeasures but introduce their own challenges, including scalability, reliability, and potential biases.
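
To make one such countermeasure concrete, the Python sketch below aggregates crowd fact-checking labels with a reliability-weighted vote and escalates low-agreement items to expert review. The worker weights and the 0.7 escalation threshold are hypothetical placeholders, not a method proposed by the organizers.

    # A toy human-in-the-loop triage step: reliability-weighted voting over
    # crowd labels, with contested items escalated to expert review.
    from collections import defaultdict

    def aggregate(labels, weights, escalate_below=0.7):
        """labels: worker -> 'true'/'false'; weights: worker -> reliability in (0, 1]."""
        scores = defaultdict(float)
        for worker, label in labels.items():
            scores[label] += weights.get(worker, 1.0)
        label, score = max(scores.items(), key=lambda kv: kv[1])
        confidence = score / sum(scores.values())
        # Contested items go to experts instead of being decided automatically.
        if confidence < escalate_below:
            return "expert review", confidence
        return label, confidence

    # Two reliable workers outvote one weaker worker: the weighted vote
    # settles on 'false' with ~0.77 confidence, so no escalation occurs.
    print(aggregate({"w1": "false", "w2": "false", "w3": "true"},
                    {"w1": 0.9, "w2": 0.8, "w3": 0.5}))

The weighting reflects the scalability/reliability trade-off noted above: inexpensive crowd judgments handle the bulk of items, while expert time is reserved for cases the crowd cannot settle.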

Translating Misinformation Detection to Practice.

The development of AI-driven misinformation mitigation tools is primarily shaped by academic research, which focuses on misinformation detection, bias mitigation, and explainability. Unlike industry, where priorities often center on scalability, automation, and engagement metrics, academic research provides a foundation for independent, transparent, and ethical approaches to combating misinformation. However, research institutions often grapple with the challenge of translating theoretical advances into practical, real-world deployment, particularly in an environment where key industry players deprioritize misinformation moderation. This Dagstuhl Seminar will explore how academia can take a leading role in developing misinformation countermeasures, fostering open research collaborations, and ensuring that scientific progress translates into actionable societal impact despite the shifting priorities of major platforms.

Copyright Kalina Bontcheva, Gianluca Demartini, Iryna Gurevych, and Arkaitz Zubiaga

Classification
  • Artificial Intelligence
  • Information Retrieval

Keywords
  • misinformation
  • artificial intelligence