Dagstuhl Seminar 26481
Deep Learning on Structured Data: Expressivity, Generalization, and Convergence
(Nov 22 – Nov 27, 2026)
Organizers
- Floris Geerts (University of Antwerp, BE)
- Christopher Morris (RWTH Aachen, DE)
- Balder ten Cate (University of Amsterdam, NL)
- Jonni Virtema (University of Sheffield, GB)
- Ellen Vitercik (Stanford University, US)
Contact
- Michael Gerke (for scientific matters)
- Susanne Bach-Bernhard (for administrative matters)
Learning from structured data such as graphs and relational structures has become a central challenge in modern machine learning. Graph Neural Networks (GNNs) and related architectures have emerged as powerful tools for this setting and have enabled progress in areas such as molecular modeling, knowledge graphs, recommendation systems, and scientific discovery. Their growing practical importance has also led to a surge of theoretical work aimed at understanding their capabilities and limitations. Recent research has connected the expressive power of GNNs to tools from combinatorics, graph isomorphism, and fragments of first-order logic, providing a principled view of what these models can represent.
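One concrete instance of this connection: standard message-passing GNNs can distinguish two graphs only if the 1-dimensional Weisfeiler-Leman (color refinement) test does. The following self-contained Python sketch (all names are illustrative, not taken from any library) implements color refinement and shows a classic failure case, two triangles versus a six-cycle:

    from collections import Counter

    def wl_refine(adj):
        """1-WL color refinement; adj maps each node to its neighbor list."""
        nodes = sorted(adj)
        colors = {v: 0 for v in nodes}  # start from a uniform coloring
        for _ in range(len(nodes)):
            # New color = (own color, multiset of neighbor colors).
            sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                   for v in nodes}
            # Compress signatures to small integers for the next round.
            palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
            new_colors = {v: palette[sig[v]] for v in nodes}
            if new_colors == colors:  # coloring is stable
                break
            colors = new_colors
        return colors

    def wl_distinguishes(adj1, adj2):
        """Refine the disjoint union so colors are comparable across graphs."""
        off = max(adj1) + 1
        union = dict(adj1)
        union.update({v + off: [u + off for u in adj2[v]] for v in adj2})
        colors = wl_refine(union)
        h1 = Counter(colors[v] for v in adj1)
        h2 = Counter(colors[v + off] for v in adj2)
        return h1 != h2  # differing color histograms certify non-isomorphism

    two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                     3: [4, 5], 4: [3, 5], 5: [3, 4]}
    six_cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    # 1-WL, and hence any standard message-passing GNN, cannot tell
    # these non-isomorphic graphs apart:
    print(wl_distinguishes(two_triangles, six_cycle))  # False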
Despite this progress, our theoretical understanding remains incomplete. Existing results on expressiveness and generalization are often relatively coarse: they primarily capture graph structure while abstracting away many architectural and training choices that are crucial in practice. Moreover, much of the current theory is developed in restricted regimes, for example, assuming graphs of fixed size, and therefore does not fully reflect the settings encountered in real applications. Similarly, although initial work has begun to analyze the training dynamics of neural networks and GNNs, the interaction between stochastic gradient descent, graph structure, and architectural design is still poorly understood. Recent perspectives such as algorithmic alignment, which view neural networks as implementations of computational procedures, suggest promising directions for improving sample efficiency and generalization, yet the broader theoretical relationship between alignment and learning performance remains largely unexplored.
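As a point of reference for these questions, the following numpy sketch spells out the kind of model under discussion: a single message-passing layer that mean-aggregates neighbor features and applies a shared learned map. The function name, the mean-aggregation rule, and the ReLU activation are illustrative assumptions, not choices prescribed by the seminar text:

    import numpy as np

    def message_passing_layer(A, H, W):
        """One message-passing layer.
        A: (n, n) adjacency matrix, H: (n, d) node features, W: (d, d') weights.
        """
        A_hat = A + np.eye(A.shape[0])          # add self-loops
        deg = A_hat.sum(axis=1, keepdims=True)  # neighborhood sizes
        H_agg = (A_hat @ H) / deg               # mean over each neighborhood
        return np.maximum(H_agg @ W, 0.0)       # shared linear map + ReLU

    rng = np.random.default_rng(0)
    A = np.array([[0., 1., 1.],
                  [1., 0., 0.],
                  [1., 0., 0.]])                # a 3-node star graph
    H = rng.standard_normal((3, 4))             # initial node features
    W = rng.standard_normal((4, 8))             # learned parameters
    print(message_passing_layer(A, H, W).shape)  # (3, 8)

Stacking such layers and training W by stochastic gradient descent yields exactly the coupled system of graph structure, architecture, and optimization whose dynamics the paragraph above describes as poorly understood.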
This Dagstuhl Seminar aims to bring together researchers from machine learning, theoretical computer science, logic, and the mathematics of deep learning to develop a deeper and more unified understanding of neural architectures for structured data. By fostering dialogue between communities that traditionally study expressiveness, learning theory, and optimization dynamics separately, the seminar seeks to identify new mathematical tools and conceptual frameworks for analyzing modern graph neural architectures and their training processes. Ultimately, the goal is to advance a more fine-grained theory that explains empirical observations and helps guide the design of future models for structured data.
Floris Geerts, Christopher Morris, Balder ten Cate, Jonni Virtema, and Ellen Vitercik
Classification
- Artificial Intelligence
- Logic in Computer Science
- Machine Learning
Keywords
- Graph neural networks
- Expressiveness
- Training and optimization
- Learning theory
- Generalization

Creative Commons BY 4.0
