Dagstuhl Seminar 26191
Large Language Models for Evolutionary Computation
(May 03 – May 08, 2026)
Organizers
- Zhaochun Ren (Leiden University, NL)
- Roman Senkerik (Tomas Bata University in Zlín, CZ)
- Niki van Stein (Leiden University, NL)
- Qingfu Zhang (City University of Hong Kong, HK)
Contact
- Marsha Kleinbauer (for scientific matters)
- Jutka Gasiorowski (for administrative matters)
Large Language Models (LLMs) have recently made extraordinary strides in generative AI, demonstrating remarkable abilities in tasks ranging from code completion to automated content creation. These advances also create exciting possibilities for the field of Evolutionary Computation (EC), where algorithm design and adaptation have traditionally relied on manual engineering of operators such as selection, mutation, and crossover. With LLMs, we now have a novel mechanism to automatically generate, refine, and optimize evolutionary algorithms themselves.
This seminar brings together researchers and practitioners from both Evolutionary Computation and Large Language Models to explore how the two fields can benefit from one another. By focusing on the direction of LLMs for Evolutionary Computation, we will examine in depth the potential of LLMs to augment, improve, or create new metaheuristics. Core topics include:
- Algorithm Synthesis and Refinement: Using LLMs to propose variants of evolutionary operators, adapt them to different problem classes, and iteratively improve algorithmic performance. For example, evolutionary techniques such as mutation, crossover, and selection can themselves be leveraged to make the LLM-driven search for new metaheuristics as efficient as possible.
- Benchmarking and Evaluation: Developing robust metrics and methodologies for assessing LLM-generated algorithms—particularly in ensuring consistent performance, reproducibility, and fair comparisons against established baselines as well as assessing their novelty with respect to the state-of-the-art.
- Practical Applications: Highlighting domains such as combinatorial optimization and complex engineering problems where LLM-driven evolutionary methods can yield innovative, real-world solutions. Examples include engineering design, automotive crashworthiness optimization, materials science, and many others.
- LLMs as EC: Exploring how LLMs can act as optimization engines themselves, leveraging in-context learning and generative capabilities to guide solution sampling and adaptation.
- Open-Source Tools and Frameworks: An overview and presentation of available or recently introduced open-source tools and frameworks suitable for the iterative refinement, design, and benchmarking of LLM-EC algorithms.
- Ethical and Regulatory Considerations: Addressing responsible use of generative AI, with special attention to maintaining transparency, fairness, and compliance under evolving guidelines such as the EU Artificial Intelligence Act.
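To make the "LLMs as EC" idea above concrete, the following is a minimal sketch of an LLM-in-the-loop evolutionary search. It assumes a hypothetical `llm_propose` function standing in for a real model call; here it is stubbed with a random perturbation so the loop is runnable, but in a real system it would prompt an LLM with the best solutions found so far (in-context learning) and ask for a new candidate.

```python
import random

# Toy objective to minimize: the sphere function (sum of squares).
def objective(x):
    return sum(xi * xi for xi in x)

# Placeholder for an LLM call (hypothetical). A real system would format
# the parent solutions into a prompt and parse the model's reply; here we
# stub it with a small Gaussian perturbation of a randomly chosen parent.
def llm_propose(parents, rng):
    parent = rng.choice(parents)
    return [xi + rng.gauss(0.0, 0.3) for xi in parent]

def llm_guided_search(dim=3, pop_size=8, generations=30, seed=0):
    rng = random.Random(seed)
    # Random initial population in [-5, 5]^dim.
    population = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better half as "in-context examples".
        population.sort(key=objective)
        parents = population[: pop_size // 2]
        # Variation: the (stubbed) LLM proposes new candidates.
        children = [llm_propose(parents, rng)
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=objective)

best = llm_guided_search()
print(objective(best))
```

The same skeleton covers the "Algorithm Synthesis and Refinement" topic if the individuals are programs (e.g., heuristic source code) rather than numeric vectors, with the LLM performing mutation and crossover on code and a benchmark suite supplying the objective.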
By bringing together experts in LLM technologies, evolutionary optimization, machine learning, and related fields, this Dagstuhl Seminar aims to (1) consolidate the emerging understanding of how best to harness LLMs in EC and other related areas, (2) foster new collaborative projects around LLM-powered algorithm design, and (3) produce actionable guidelines for benchmarking and future research directions. Above all, the seminar will serve as a catalyst for the next generation of optimization methods in which evolutionary techniques and advanced language models combine to tackle some of the most challenging problems in science and industry.

Classification
- Artificial Intelligence
- Machine Learning
- Neural and Evolutionary Computing
Keywords
- large language models
- automated algorithm design
- evolutionary computation
- optimization
- iterative optimization heuristics