Dagstuhl-Seminar 25022
Towards a Multidisciplinary Vision for Culturally Inclusive Generative AI
(Jan 06 – Jan 09, 2025)
Organizers
- Asia Biega (MPI-SP - Bochum, DE)
- Georgina Born (University College London, GB)
- Fernando Diaz (Carnegie Mellon University - Pittsburgh, US)
- Mary L. Gray (Microsoft New England R&D Center - Cambridge, US)
- Rida Qadri (Google - San Francisco, US)
Contact
- Michael Gerke (for scientific matters)
- Jutka Gasiorowski (for administrative matters)
Shared Documents
- Dagstuhl Materials Page (Use personal credentials as created in DOOR to log in)
Motivation
Recent years have seen rapid development and widespread adoption of generative AI systems that algorithmically model human creativity and decision-making. This technological shift has profound implications for how cultural artifacts like music, news, literature, and film are produced and consumed. At the same time, these technologies exhibit Western-centrism in their training and evaluation data, their definitions of "success", and their evaluation methods. As a result, generative AI systems, while arguably growing more reliable at producing sensible prose and images, show a recognizable pattern of failing to generate output representative and inclusive of non-Western norms, values, and perspectives. For example, recent research and media reports have found that models are less competent at generating culturally significant material outside a Western point of view, frequently omit non-Western cultural knowledge from their outputs, and perpetuate Western stereotypes in generated content. Addressing these failures and their broader impact is crucial to prevent globally launched generative AI tools from becoming vehicles that reinforce Western-centric cultural norms, values, and modes of production and distribution, thereby further exacerbating global inequities.
The urgent need for a seminar on these topics was highlighted in the first-of-its-kind 2022 NeurIPS workshop on "AI and Culture", which brought together researchers from computer science, the humanities, and the social sciences at the premier conference of AI researchers and practitioners. Conversations that emerged at this workshop pointed to how building culturally sensitive, responsive, and accountable AI systems will require researchers and engineers to include diverse disciplinary voices, community expertise, and cultural knowledge in AI research and development. Such efforts to recognize and incorporate myriad cultural contexts into AI systems are often siloed within disciplines and, as a result, are disjointed and limited in their impact on technological design. In particular, there are no cohesive frameworks to help researchers fold nuanced cultural analyses and situated knowledge into generative AI models. There is therefore a critical, currently unmet need to break down disciplinary silos and create coherent interdisciplinary conceptual foundations for novel, culturally sensitive generative AI research and practices. The most promising areas in need of interdisciplinary collaborative research include: 1) new approaches to data collection; 2) interdisciplinary frameworks and methods for model development and deployment; and 3) new techniques that integrate and distinguish the value of qualitative and computational approaches to evaluation. We also see the need to develop new interdisciplinary methods, crossing qualitative and quantitative approaches, to study the societal impacts of generative AI. As generative AI research is currently confined mainly to industry-academic collaborations, we further aim to broaden the contextual and institutional perspectives brought to these challenges beyond academia to include voices from civil society and impacted communities, a goal the seminar achieved.
Program
The seminar lasted 2.5 days. As our goal was to create an interdisciplinary space for discussion, we had 28 participants with backgrounds in multiple disciplines and sectors. Participants included experts in computer science, data science, machine learning (ML), information retrieval (IR), natural language processing (NLP), human-computer interaction (HCI), responsible artificial intelligence (AI), social computing, critical data studies, music and ethnomusicology, anthropology, history, political philosophy, science and technology studies (STS), media studies, communication, and architecture. The seminar also included contributions from filmmakers and the creative industries. The participant pool reflected the broad spectrum of perspectives and expertise on language, culture, and cultural production necessary to advance the dialogue on AI and culture.
To encourage participants to come to the seminar prepared with preliminary reflections on the topic, we asked them to complete a round of preparatory work two months before the seminar. This consisted of sharing a paper that participants had written or found fruitful in their current work in order to introduce themselves to the rest of the group and explain their way of approaching questions of AI and culture. We also asked each participant to reflect on a series of questions: 1) How are you thinking about the term "culture" in the context of artificial intelligence? 2) What is a provocation or critical question you would like to share regarding the intersection of AI and culture? And, 3) Where would you like to see the field of AI and culture head next?
The first day of the seminar was dedicated to sparking discussion and allowing participants to get to know each other’s perspectives on the seminar topic. Recognizing that most of the invited scholars do not regularly cross paths at a single-disciplinary home conference, the first exercise of the day was a series of "speed dating" rounds, in which participants rotated through ten-minute introductory conversations with at least three other participants. In Round 1, participants shared basic information about their disciplinary training and home institutions, along with what they hoped to gain in understanding the interdisciplinary challenges and opportunities in the emerging field of generative AI and cultural diversity. In Round 2, participants shared the next project they were working on or their dream project in this space. In Round 3, they discussed examples of AI failures that illustrated their thinking on the seminar topic. These discussions helped identify initial key areas for advancing the field.
Once participants had a sense of the breadth and depth of expertise in the room, we shifted to the first substantive programming component. This took the form of three panels of three speakers each, with each speaker offering a 5-minute "firestarter" provocation, followed by an open group discussion. The nine speakers were selected on the basis of the submitted preparatory work: ahead of the seminar, the organizers thematically coded the received documents to assemble the panels, aiming to put participants into multi- and interdisciplinary dialogue early on. The firestarter presentations were followed by a short, individual reflective writing session, in which participants could document their questions and reactions to the discussions and contribute to a collective note-taking document.
On the second day of the seminar, participants collectively proposed themes they wanted to discuss further and then broke into small groups on the chosen themes. The small group discussions were followed by share-out sessions, which generated provocations and seeded potential future collaborative projects among participants. As noted above, we used the themes and points of friction from Day 1’s firestarter discussions and individual reflections to brainstorm, and then thematically coded the questions and clusters of discussion that had the most consensus. After spending the morning consolidating themes, we converged on three clusters for small group discussion: Discussion Cluster 1: Power, Future, History; Discussion Cluster 2: Interdisciplinarity in Computer Science Cultures; and Discussion Cluster 3: Culture Encodability. Each Discussion Cluster is described in the full report.
On the third day of the seminar, participants discussed the next steps in small groups.
Outcomes / Planned outcomes
- Agenda-Setting Project
  This initiative aims to establish a shared, nuanced understanding of AI, culture, and technology, moving beyond simplistic definitions. The group will produce a document for funders like the NSF, articulating the challenges and relevance of culturally inclusive AI. This document will influence funding priorities and foster interdisciplinary research, serving as a foundation for broader engagement with policymakers and the public.
- Meta-Metadata Project
  This research project focuses on developing and implementing new approaches to metadata creation and management, fostering culturally rich datasets through open-source, collaborative models. Key planned outcomes include a course and hackathon exploring nuanced metadata encoding, and the creation of a network of scholars dedicated to this work. The project also explores leveraging existing platforms like Wikimedia to host and manage detailed, culturally diverse metadata, addressing challenges like image metadata and incentivizing scholarly participation.
- Project on Integrating Qualitative and Quantitative Methods for AI Evaluations
  This future project addresses the critical need for robust methodologies that integrate qualitative and quantitative data in AI evaluation. It aims to develop frameworks that translate qualitative insights into concrete algorithmic interventions without losing critical nuances. The research will explore methods like "fictions" or imagined scenarios to anticipate potential consequences and guide development, moving beyond the limitations of current practices that rely on small user groups or subjective judgments.
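To make the Meta-Metadata Project's notion of "nuanced metadata encoding" concrete, here is a minimal, purely illustrative sketch of a culturally annotated metadata record. All field names and values are assumptions for illustration, not a proposed standard or an output of the seminar:

```python
# Hypothetical sketch of a culturally rich metadata record.
# Every field name here is an illustrative assumption, not a standard.

def make_record(title, creator, origin_community, context_notes, licenses):
    """Bundle an artifact's descriptive metadata with situated cultural context."""
    return {
        "title": title,                      # titles keyed by language code
        "creator": creator,                  # individual or collective attribution
        "origin_community": origin_community,  # community the artifact belongs to
        "context": context_notes,            # situated knowledge: use, meaning, protocols
        "licenses": licenses,                # open or community-defined terms
    }

record = make_record(
    title={"en": "Wedding song"},            # further languages could be added
    creator="community archive (illustrative)",
    origin_community="(to be supplied by contributors)",
    context_notes=["performed at weddings", "regional variants exist"],
    licenses=["CC BY 4.0"],
)
print(sorted(record))
```

The point of the sketch is only that cultural context (`origin_community`, `context`) sits alongside, rather than outside, conventional descriptive fields; how such records would actually be hosted (e.g., on Wikimedia platforms) remains an open question for the project.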
Beyond these tangible projects, the seminar achieved a significant shift in perspectives. Computer scientists gained a deeper appreciation for the complexities of culture, while social scientists and humanities scholars refined their critiques through a clearer understanding of AI's potential. This cross-disciplinary dialogue led to a richer understanding of the multi-layered relationship between AI and culture, moving beyond simplified encodings and benchmarks. Participants also valued the seminar's global representation, moving beyond US/EU-centric viewpoints. The seminar generated significant momentum, energizing participants and sparking new collaborative research directions. As one computer scientist noted, "The questions I came in with are very different from the questions I’m leaving with... I find that the questions I leave with are much richer - and harder." Similarly, a social scientist expressed, "As a non-technical person the seminar was incredibly insightful to better understand what the state of the art currently is, what the possibilities and limitations for culturally sensitive interventions in these systems may be."
Participants consistently highlighted the need for a second iteration of the seminar, emphasizing the value of continued multidisciplinary spaces for collaboration. They left with a richer set of concerns and vocabularies, anticipating that this assemblage would transform individual disciplinary research and lead to numerous joint collaborations. The seminar was described as "creatively fortifying and vitalizing," creating meaningful connections and inspiring participants to push forward in the pursuit of culturally inclusive AI.
Rida Qadri, Asia Biega, Georgina Born, Fernando Diaz, and Mary L. Gray
Generative AI systems are rapidly being integrated into global systems of cultural communication, consumption, and production. As these technologies shape our cultures, we urgently need conceptual foundations for investigating the cultural inclusivity of generative AI pipelines (from data collection, to model development and deployment, to evaluation), as well as methods to study the varying societal and cultural impacts of generative AI.
This Dagstuhl Seminar aims to bring together scholars and practitioners from computer science, the social sciences, the tech industry, and the creative industries to discuss the cultural implications of generative AI and to find paths toward building generative AI that is responsive to the diverse needs of individuals, groups, and societies around the world. Together, seminar participants will build shared language and frameworks for reshaping the technical and social architectures of generative AI.
The seminar will be structured along three main dimensions for interdisciplinary discussions:
- Examining the cultural values being currently centered in generative AI.
- Studying the possibilities and risks of encoding cultural knowledge into generative AI technologies.
- Understanding the cultural impact of these technologies.
We aim to build a network committed to understanding and designing a culturally-attuned generative AI and to lay the foundation for an interdisciplinary research and practice agenda on global inclusion and generative AI.
Asia J. Biega, Georgina Born, Fernando Diaz, Mary L. Gray, and Rida Qadri
- Virgilio Almeida (Federal University of Minas Gerais-Belo Horizonte, BR) [dblp]
- Elisabeth André (Universität Augsburg, DE) [dblp]
- Naveen Bagalkot (Manipal Academy of Higher Education - Bangalore, IN)
- Kalika Bali (Microsoft Research India - Bangalore, IN) [dblp]
- Asia Biega (MPI-SP - Bochum, DE) [dblp]
- Tobias Blanke (University of Amsterdam, NL)
- Georgina Born (University College London, GB) [dblp]
- Anita Say Chan (University of Illinois at Urbana-Champaign, US)
- Marc Cheong (The University of Melbourne, AU)
- Beth Coleman (University of Toronto, CA)
- Catherine d'Ignazio (MIT - Cambridge, US) [dblp]
- Hal Daumé III (University of Maryland - College Park, US) [dblp]
- Fernando Diaz (Carnegie Mellon University - Pittsburgh, US) [dblp]
- Giovanna Fontenelle (Wikimedia - Sao Paulo, BR)
- Tarleton Gillespie (Microsoft New England R&D Center - Cambridge, US) [dblp]
- Mary L. Gray (Microsoft New England R&D Center - Cambridge, US) [dblp]
- Huma Gupta (MIT - Cambridge, US)
- Sara Hooker (Cohere For AI - Toronto, CA) [dblp]
- Maurice Jones (Concordia University - Montreal, CA) [dblp]
- Emanuel Moss (Intel - Santa Clara, US) [dblp]
- Maryam Mustafa (LUMS - Lahore, PK) [dblp]
- Alice Oh (KAIST - Daejeon, KR) [dblp]
- Rida Qadri (Google - San Francisco, US) [dblp]
- Noopur Raval (University of California at Los Angeles, US)
- Darci Sprengel (King's College - London, GB)
- Molly Steenson (American Swedish Institute - Minneapolis, US & Carnegie Mellon University - Pittsburgh, US) [dblp]
- Harini Suresh (Brown University - Providence, US)
- Moira Weigel (Harvard University - Cambridge, US)
Classification
- Artificial Intelligence
- Computers and Society
- Human-Computer Interaction
Keywords
- generative artificial intelligence
- cultural inclusion
- creativity
- social impact of AI
- Global South

Creative Commons BY 4.0
