February 21–26, 2021, Dagstuhl Seminar 21081

Challenges in Benchmarking Optimization Heuristics


Anne Auger (INRIA Saclay – Palaiseau, FR)
Peter A. N. Bosman (CWI – Amsterdam, NL)
Pascal Kerschke (Universität Münster, DE)
Darrell Whitley (Colorado State University – Fort Collins, US)

For information about this Dagstuhl Seminar, please contact

Annette Beyer for administrative matters

Shida Kunz for scientific matters




Benchmarking, i.e., determining how well and/or how fast algorithms solve particular classes of problems, is a cornerstone of research on optimization algorithms. It is crucial in helping practitioners choose the best algorithm for a real-world application. Although such applications (e.g., from energy, healthcare, or logistics) pose many challenging optimization problems, algorithmic design research typically evaluates performance on sets of artificial problems. These problems are often designed from a theoretical perspective to study specific properties of algorithm and problem classes. However, some benchmarks lack any connection to real-world problems. Consequently, extrapolating meaningful implications about the performance of these algorithms on real-world applications can be difficult. It is therefore critical to ensure that well-known, easily accessible, and frequently used benchmarks are not only unbiased (w.r.t. structural properties) but also aligned with real-world applications.

On the other hand, thousands of papers are published each year claiming to have developed a better optimization tool for some application domain. Yet the overwhelming majority of these enhancements are never adequately tested. Indeed, in many cases, the only individuals who will ever execute and evaluate a new algorithm are the authors who created it. This is a source of inefficiency in how research is conducted and evaluated in the field of optimization, and it can only be resolved through proper benchmarking.

In this context, the present Dagstuhl Seminar is centered on benchmarking. It brings together a select group of international experts with different backgrounds and from various application areas (e.g., computer science, machine learning, engineering, statistics, mathematics, operations research, medicine, as well as industrial applications) with the overall objective of

  1. analyzing, comparing and improving the general understanding of the status quo and caveats of benchmarking in different subdomains of optimization and related research fields, and
  2. developing principles for improving our current benchmarks, as well as for designing new, real-world relevant, benchmark problems.

The seminar proposes to address challenges that the current state of benchmarking still faces throughout various areas of optimization-related research. The following (non-exhaustive) list of topics will be at the core of the seminar:

  • the integration and/or inclusion of (ideally, scalable) real-world problems
  • a benchmark’s capability to expose the generality vs. specificity trade-off of optimization heuristics
  • approaches to measure and demonstrate algorithmic performance
  • ways to measure problem characteristics by means of (problem-specific) features
  • novel visualization methods, which improve our understanding of the optimization problem itself and/or the behavior of the optimization heuristic that is operating on the problem
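To make the performance-measurement topic above concrete, the following is a minimal sketch (not part of the seminar material; the test function, search domain, and evaluation budget are illustrative assumptions) of a fixed-budget benchmark: two simple heuristics are run repeatedly on the same problem, and the best objective value reached within the budget is recorded per run.

```python
import random

def sphere(x):
    # Classic separable test function; global minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def random_search(f, dim, budget, rng):
    # Baseline: sample uniformly in [-5, 5]^dim, keep the best value seen.
    best = float("inf")
    for _ in range(budget):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(best, f(x))
    return best

def hill_climber(f, dim, budget, rng, step=0.5):
    # (1+1)-style local search: accept a Gaussian perturbation if it
    # does not worsen the objective value.
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(budget - 1):
        y = [xi + rng.gauss(0, step) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
    return fx

if __name__ == "__main__":
    dim, budget, runs = 5, 1000, 10
    for name, algo in [("random search", random_search),
                       ("hill climber", hill_climber)]:
        # Seeding each run makes the comparison reproducible.
        results = [algo(sphere, dim, budget, random.Random(run))
                   for run in range(runs)]
        print(f"{name}: mean best value = {sum(results) / runs:.4f}")
```

Even this toy setup illustrates two recurring design questions from the list above: the choice of performance measure (here, fixed-budget best value rather than time-to-target) and the need for repeated, seeded runs before any claim about one heuristic outperforming another.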

To foster fruitful and productive collaboration among all participants, this seminar will follow a highly interactive format (complemented by a very limited number of invited presentations). The main focus will be on spotlight discussions and breakout sessions, with sufficient time reserved for informal discussions.

Motivation text license
  Creative Commons BY 3.0 DE
  Anne Auger, Peter A. N. Bosman, Pascal Kerschke, and Darrell Whitley


Classification
  • Data Structures And Algorithms
  • Neural And Evolutionary Computing
  • Performance


Keywords
  • Benchmarking
  • Optimization
  • Real-world applications
  • Design of search heuristics
  • Understanding problem complexity


All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the Dagstuhl Reports series. Together with the seminar's collector, the organizers compile a report that summarizes the authors' contributions and adds an overall summary.




There is also the option of publishing a comprehensive collection of peer-reviewed papers in the Dagstuhl Follow-Ups series.

Dagstuhl's Impact

Please let us know if a publication arises from your seminar. Such publications are listed separately in the Dagstuhl's Impact section and displayed on the ground floor of the library.