Dagstuhl Seminar 21402

Digital Disinformation: Taxonomy, Impact, Mitigation, and Regulation

(Oct 03 – Oct 06, 2021)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/21402

Organizers

  • Claude Kirchner (CNPEN/CCNE & Inria)
  • Ninja Marnau (CISPA)
  • Franziska Roesner (University of Washington)

Motivation

The phenomenon of intentionally false, misleading information is as old as human communication. The ubiquitous digital disinformation and targeted disinformation campaigns that we are experiencing globally are, however, new. Scientific and technological progress, together with today's worldwide technical platforms, enables and supports the creation and spread of disinformation at vastly increased speed and scale. Researchers have observed attempted disinformation campaigns by political parties, private actors, and foreign states in all of the recent elections, including the US presidential election and the Brexit referendum in 2016. New phenomena such as deep fakes could further erode trust in public information, while micro-targeting undermines the concepts of a shared reality and a digital public sphere.

Technological and self-regulatory mitigation approaches by the platforms have so far proved insufficient to address the problem. Meanwhile, national policy makers have initiated regulatory responses to the phenomenon, including prohibitions of unverified accounts and of automated agents on social platforms, transparency requirements for targeted political ads, and even the criminalization of disinformation and measures restricting nationwide access to social platforms. All of these regulations may have detrimental effects on freedom of speech, access to information, and online privacy, as well as psychological chilling effects on user behaviour and on users' trust in information sources.

This Dagstuhl Seminar brings together researchers and practitioners from several scientific disciplines who are currently observing, analysing, and trying to mitigate the phenomenon from different angles: informatics, psychology and cognitive science, social and communication science, political science, journalism, ethics, and law. This research expertise should be combined to better understand and holistically address the complex global phenomenon of digital disinformation. Bringing together researchers and practitioners from multiple disciplines and various countries will allow us to share and discuss multidisciplinary approaches, methods, scientific results, and experiences from varying national and cultural contexts.

The intimacy of the Dagstuhl venue is perfect for constructive communication and interdisciplinary exchange. We plan to work on a common taxonomy and to share data and results in order to identify the most pressing research questions, propose new developments, and initiate interdisciplinary follow-up projects. The multidisciplinary exchange will also enable us to assess different proposed technological, regulatory, and social responses to disinformation with regard to their benefits and shortcomings. These discussions shall foster future interdisciplinary work and publications on new mitigation concepts, user education, or policy briefs targeted at policy makers and social platforms.

Copyright Claude Kirchner, Ninja Marnau, and Franziska Roesner

Summary

Dagstuhl Seminar #21402 on Digital Disinformation took place on October 4-6, 2021. The seminar was initially planned by Claude Kirchner (CNPEN/CCNE & Inria), Ninja Marnau (CISPA), and Franziska Roesner (University of Washington); it was then co-led by Kirchner and Roesner, who wrote this report with input from other seminar participants. The seminar had originally been planned for June 2020 but was postponed due to the COVID-19 pandemic. It was held in a hybrid format, with some participants on-site in Dagstuhl and most others joining remotely via the video conferencing system Zoom.

In order to maximize discussion and allow the interests of the group to drive the direction of the seminar, we did not plan for formal talks. Participants were asked to prepare a single-slide, few-minute introduction to their research interests and methodologies related to digital disinformation, and a "burning question" they have in the space.

Participants included the following individuals, spanning a range of expertise from computer science to law:

  • Esma Aïmeur (University of Montréal, Canada)
  • Jos Baeten (CWI Amsterdam, Netherlands)
  • Asia Biega (Max Planck Institute for Security and Privacy, Germany)
  • Camille Darche (CNPEN, Inria and Université Paris Nanterre, France)
  • Sébastien Gambs (Université du Québec à Montréal, Canada)
  • Krishna Gummadi (Max Planck Institute for Software Systems, Germany)
  • Claude Kirchner (CNPEN/CCNE and Inria, France)
  • Vladimir Kropotov (Trend Micro, Russia)
  • Jean-Yves Marion (Lorraine University, France)
  • Evangelos Markatos (University of Crete, Greece)
  • Fil Menczer (Indiana University, USA)
  • Trisha Meyer (Vrije Universiteit Brussel, Belgium)
  • Franziska Roesner (University of Washington, USA)
  • Kavé Salamatian (University of Savoie, France)
  • Juliette Sénéchal (University of Lille, France)
  • Dimitrios Serpanos (University of Patras, Greece)
  • Serena Villata (CNRS, France)

Based on our preliminary discussions, we identified four topics of interest to many seminar participants: trustworthiness algorithms (i.e., how to build systems that assess trust automatically), friction as a technique in platform design (e.g., allowing people to take time and step back when consuming information on social media), the ethics of interventions (e.g., the ethics of blocking or content moderation), and how to educate users (e.g., without creating over-skepticism). We then structured the rest of the seminar around four deep-dive conversations on these topics, described in the subsequent sections of this report. Due to the relatively small size of the gathering, and most participants' broad interest in all four topics, we did not break out into smaller discussion groups but rather continued to discuss as a full group.
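
As a purely illustrative sketch (not a method proposed or discussed at the seminar), a "trustworthiness algorithm" of the kind named above could combine a handful of signals about a post into a single score, and a low score could then trigger a friction prompt before resharing. All signal names, weights, and the threshold in the following Python sketch are assumptions invented for this example:

# Illustrative sketch only: a toy trustworthiness score plus a friction prompt.
# Signal names, weights, and the threshold are assumptions, not a system
# discussed at the seminar.
from dataclasses import dataclass

@dataclass
class Post:
    source_reputation: float     # 0.0 (unknown source) .. 1.0 (well-established source)
    account_verified: bool       # whether the posting account is verified
    confirming_fact_checks: int  # independent fact-checks supporting the claim

def trust_score(post: Post) -> float:
    """Combine a few signals into a score in [0, 1]; the weights are arbitrary."""
    score = 0.5 * post.source_reputation
    score += 0.2 if post.account_verified else 0.0
    score += min(0.3, 0.1 * post.confirming_fact_checks)
    return score

def share_with_friction(post: Post, threshold: float = 0.5) -> bool:
    """Add friction: ask the user to pause and confirm before resharing a low-scoring post."""
    if trust_score(post) >= threshold:
        return True  # no friction needed, share immediately
    answer = input("This post scores low on trust signals. Share anyway? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    example = Post(source_reputation=0.2, account_verified=False, confirming_fact_checks=0)
    print("trust score:", round(trust_score(example), 2))
    print("shared:", share_with_friction(example))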

Copyright Claude Kirchner and Franziska Roesner

Participants
On-site:
  • Camille Darche (French Nat. Pilot Committee for Dig. Ethics - Paris, FR)
  • Lynda Hardman (CWI - Amsterdam, NL & Utrecht University, NL) [dblp]
  • Claude Kirchner (INRIA - Le Chesnay, FR) [dblp]
  • Jean-Yves Marion (CNRS - Nancy, FR) [dblp]
Remote:
  • Esma Aimeur (University of Montreal, CA) [dblp]
  • Jos Baeten (CWI - Amsterdam, NL)
  • Asia Biega (MPI-SP - Bochum, DE) [dblp]
  • Kalina Bontcheva (University of Sheffield, GB) [dblp]
  • Sébastien Gambs (University of Montreal, CA) [dblp]
  • Vladimir Kropotov (Trend Micro - Garching, DE) [dblp]
  • Evangelos Markatos (FORTH - Heraklion, GR) [dblp]
  • Filippo Menczer (Indiana University - Bloomington, US) [dblp]
  • Trisha Meyer (Free University of Brussels, BE) [dblp]
  • Franziska Roesner (University of Washington - Seattle, US) [dblp]
  • Kavé Salamatian (University of Savoie - Annecy le Vieux, FR) [dblp]
  • Juliette Sénéchal (University of Lille, FR)
  • Dimitrios Serpanos (ATHENA Research Center - Patras, GR) [dblp]
  • Peggy Valcke (KU Leuven, BE) [dblp]
  • Serena Villata (Université Côte d’Azur - Sophia Antipolis, FR) [dblp]

Classification
  • artificial intelligence / robotics
  • society / human-computer interaction
  • world wide web / internet

Keywords
  • disinformation
  • fake news
  • artificial intelligence
  • trust in media
  • geopolitics