Dagstuhl Seminar 25301
Linguistics and Language Models: What Can They Learn from Each Other?
(Jul 20 – Jul 25, 2025)
Organizers
- Anna Rogers (IT University of Copenhagen, DK)
- Nathan Schneider (Georgetown University - Washington, DC, US)
- Bonnie Webber (University of Edinburgh, GB)
Contact
- Michael Gerke (for scientific matters)
- Christina Schwarz (for administrative matters)
In the little over a year since the release of ChatGPT, language models (LMs) have stirred concerns in government over the possibility that citizens will come to believe the textual and spoken output of such models. Similarly, they have caused panic in education, forcing a rethink of what students are learning and how to assess it. Of concern to us here is whether LMs spell the end of computational and/or cognitive models of human language learning and language use. Does the practical success of LMs mean that computational linguistics (and perhaps even linguistics itself) is no longer relevant? Or are we missing problems with LMs that computational linguistics (and linguistics more generally) could help us both recognize and surmount?
To have any hope of answering big questions about this technology, we need to foster interdisciplinary conversations and collaborations across machine learning, NLP, linguistics, and cognitive science. This Dagstuhl Seminar aims to facilitate such exchanges among senior experts and rising stars. In particular, the seminar poses five key questions for discussion:
- What evidence, if any, do LMs provide about human language, world knowledge, and/or cognition?
- How can LMs be used as tools for empirical research in linguistics?
- How can linguistics be brought to bear on interpreting the operation of LMs?
- How can linguistically oriented perspectives enhance or complement LMs for greater reliability and robustness?
- What is the appropriate framing of LM functionality for scientists and the public?
Seminar outcomes could include joint publications that advance scientific and public understanding of language and LMs and that set the agenda for the next generation of research and development in NLP, linguistics, and cognitive science.

Participants
- Omri Abend
- David Adelani
- Antonios Anastasopoulos
- Gašper Beguš
- Verena Blaschke
- Ryan Cotterell
- Marie-Catherine de Marneffe
- Katherine Demuth
- A. Seza Doğruöz
- Robert Frank
- Juan Luis Gastaldi
- Adele Goldberg
- Aurélie Herbelot
- Yu-Yin Hsu
- Mark Johnson
- Najoung Kim
- Alessandro Lenci
- Lori Levin
- Roger Levy
- Xixian Liao
- Kyle Mahowald
- Tom McCoy
- Joakim Nivre
- Alexis M. Palmer
- Christopher Potts
- Jakob Prange
- Siva Reddy
- Philip Resnik
- Anna Rogers
- Rachel Rudinger
- Gözde Gül Şahin
- Asad Sayeed
- Nathan Schneider
- Noah A. Smith
- Mark Steedman
- Tiago Torrent
- Bonnie Webber
- Ethan Wilcox
- Adina Williams
- Amir Zeldes
Classification
- Computation and Language
Keywords
- Linguistic theory
- Cognitive modelling
- Language models