Dagstuhl Seminar 17191

Theory of Randomized Optimization Heuristics

(May 7 – May 12, 2017)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/17191

Organizers
  • Carola Doerr (CNRS & UPMC, Paris, FR)
  • Christian Igel (University of Copenhagen, DK)
  • Lothar Thiele (ETH Zürich, CH)
  • Xin Yao (University of Birmingham, GB)


Motivation

Randomized search and optimization heuristics such as evolutionary algorithms, ant colony optimization, particle swarm optimization, and simulated annealing have become established problem solvers. They have been applied successfully to a wide range of real-world applications, and they are applicable to problems that are non-continuous, multi-modal, and/or noisy, as well as to multi-objective and dynamic optimization tasks.

The goal of the seminar is to advance the theory underlying randomized search and optimization heuristics in order to gain a better understanding of the algorithms and to develop new and more powerful methods in a principled way. The seminar will cover all important streams of research in the theory of randomized search and optimization heuristics with a focus on selected emergent topics, such as dynamic optimization problems and precise runtime bounds.

Our seminar invites researchers from the machine learning community working on Bayesian optimization and Monte Carlo methods for optimization. We look forward to discussing the similarities and differences of the approaches and to bridging the gap between the different communities involved.

The seminar continues the successful series of “Theory of Evolutionary Algorithms” Dagstuhl seminars, where the change in title reflects the development of the research field toward a broader range of heuristics.


Summary

Randomized search and optimization heuristics such as evolutionary algorithms, ant colony optimization, particle swarm optimization, and simulated annealing have become established problem solvers. They have been applied successfully to a wide range of real-world applications, and they are applicable to problems that are non-continuous, multi-modal, and/or noisy, as well as to multi-objective and dynamic optimization tasks. The theory of randomized optimization heuristics aims at providing mathematically founded insights into the working principles of these general-purpose problem solvers, and at developing new and more powerful heuristic optimization methods in a principled way. The seminar covered several important streams in this research discipline. Among other topics, extended discussions were held on the advantages of population-based heuristics and of non-static parameter choices, on optimization problems with constraints, and on existing and possible connections to research in machine learning.

The seminar continues to be one of the key stimulators of novel ideas, tools, and approaches in the theory of randomized optimization heuristics. Accordingly, the acceptance rate for invitations has remained very high.

Topics

Research on the theory of randomized optimization heuristics is as broad as the applicability of these methods. The seminar succeeded in covering the various theoretical approaches. There was a focus on important cross-cutting topics, which we briefly outline in the following. One of the most prominent research areas in the theory of randomized optimization heuristics deals with runtime and convergence analysis, aiming at proving bounds on the speed of convergence to an optimal solution. Typical questions concern the advantages of certain algorithmic choices (a small code sketch contrasting two of these choices follows the list below), such as

  • the size of the memory (population),
  • the usage of different sampling strategies (variation of previously sampled search points, in particular via mutation of one previously evaluated solution candidate and recombination of two or more previous search points), and
  • the selection strategies (e.g., elitist selection which never discards a best-so-far solution vs. the non-elitist Boltzmann strategies found in Simulated Annealing, SSWM, and the Metropolis algorithm).
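
To make these algorithmic choices concrete, here is a minimal, purely illustrative Python sketch (not taken from any seminar talk) that contrasts elitist selection with a Boltzmann-type Metropolis acceptance rule on the classic OneMax benchmark; the function names, the fixed temperature, and all parameter values are assumptions chosen for illustration.

    import math
    import random

    def one_max(x):
        """Classic toy benchmark: number of ones in the bit string."""
        return sum(x)

    def mutate(x, p):
        """Standard bit mutation: flip each bit independently with probability p."""
        return [1 - b if random.random() < p else b for b in x]

    def run(n=50, steps=5000, elitist=True, temperature=1.0):
        """(1+1)-style search on OneMax.

        elitist=True  -> never accept a worse offspring (elitist selection).
        elitist=False -> Metropolis/Boltzmann acceptance: a worse offspring is
                         accepted with probability exp(-delta / temperature).
        """
        x = [random.randint(0, 1) for _ in range(n)]
        fx = one_max(x)
        for _ in range(steps):
            y = mutate(x, 1.0 / n)
            fy = one_max(y)
            delta = fx - fy  # positive if the offspring is worse
            if fy >= fx or (not elitist and random.random() < math.exp(-delta / temperature)):
                x, fx = y, fy
        return fx

    if __name__ == "__main__":
        print("elitist:   ", run(elitist=True))
        print("metropolis:", run(elitist=False))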

One of the most relevant objectives in empirical and theoretical works on randomized optimization heuristics is to determine the best parameter settings for the above-described algorithmic components. Given the complex interactions between the parameter values, this parameter tuning task is a highly challenging one. It is further complicated by the observation that for most problems the optimal parameter settings change during the optimization process, thus asking for parameter control mechanisms that adjust the parameter value to the current state of the optimization. Identifying such reasonable (and possibly provably optimal) ways to update the parameter choices has been one of the intensively discussed topics of the seminar. Significant progress towards a better understanding of different parameter update schemes has been obtained in the last few years, as has been demonstrated by several talks, for example on self-adaptive and self-adjusting parameter updates as well as on estimation of distribution algorithms. Among other results, several connections to related questions in machine learning have been made, motivating the organizers to include machine learning as a focus topic of this seminar.
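
As a concrete but hedged illustration of such a parameter control mechanism, the following sketch equips a (1+1) EA with a simple success-based, self-adjusting mutation rate; the multiplicative update factors and the clamping bounds are illustrative assumptions, not a scheme analyzed at the seminar.

    import random

    def self_adjusting_one_plus_one_ea(n=100, budget=20000):
        """(1+1) EA with a success-based, self-adjusting mutation rate.

        Illustrative rule in the spirit of success-based parameter control:
        increase the mutation rate after an improving step, decrease it
        slightly otherwise, clamped to the interval [1/n^2, 1/2].
        """
        x = [random.randint(0, 1) for _ in range(n)]
        fx = sum(x)                      # fitness = OneMax value
        p = 1.0 / n                      # the controlled parameter
        up, down = 1.5, 1.5 ** 0.25      # asymmetric update strengths (assumed)
        for _ in range(budget):
            y = [1 - b if random.random() < p else b for b in x]
            fy = sum(y)
            if fy > fx:                  # success: search more aggressively
                x, fx, p = y, fy, min(0.5, p * up)
            else:                        # failure: become more conservative
                p = max(1.0 / n ** 2, p / down)
            if fx == n:
                break
        return fx, p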

Randomized search heuristics are currently very popular in general machine learning in the form of Bayesian optimization. However, there has been little connection between the research in Bayesian optimization and the established work on randomized search heuristics, and the seminar was a step to change this. The first talk of the seminar was an extended introduction to Bayesian optimization by Matthew W. Hoffman from Google DeepMind, a leading expert in the field. The talk set the stage for informed discussions on similarities and differences between the methods, and on potential synergies between the research fields. Thompson sampling, an important algorithm in Bayesian optimization, was revisited in the talk by Jonathan Shapiro on dueling bandit problems, which demonstrated randomized search heuristics in a scenario of high commercial relevance. A common application of randomized search heuristics in general machine learning is model selection, for example finding a tailored structure of a neural network. This was addressed in the talk by Olivier Teytaud from Google Brain, who discussed model selection heuristics for large-scale machine learning systems. Randomized search heuristics are also successfully used for reinforcement learning. Arina Buzdalova presented work in which the connection is the other way round: ideas from reinforcement learning are used to improve randomized optimization (by controlling the choice of objectives).
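
Since Thompson sampling plays a central role in the Bayesian optimization discussion above, the following short sketch shows the algorithm in its standard Bernoulli-bandit form with Beta posteriors; this is a generic textbook version, not the dueling-bandit variant from Jonathan Shapiro's talk, and all names and parameters are illustrative.

    import random

    def thompson_sampling(arm_probs, rounds=10000):
        """Thompson sampling for Bernoulli bandits with Beta(1, 1) priors.

        Each round: sample a success probability for every arm from its
        posterior, play the arm with the largest sample, update its posterior.
        """
        k = len(arm_probs)
        alpha = [1] * k   # Beta alpha parameters (1 + observed successes)
        beta = [1] * k    # Beta beta parameters (1 + observed failures)
        total_reward = 0
        for _ in range(rounds):
            samples = [random.betavariate(alpha[i], beta[i]) for i in range(k)]
            arm = max(range(k), key=lambda i: samples[i])
            reward = 1 if random.random() < arm_probs[arm] else 0
            alpha[arm] += reward
            beta[arm] += 1 - reward
            total_reward += reward
        return total_reward

    # Example with three arms of unknown success probability:
    # print(thompson_sampling([0.2, 0.5, 0.7]))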

Another intensively discussed topic, highly relevant in both discrete and continuous optimization, was constrained optimization. Here the main research questions concern the different ways to model constrained problems in black-box settings, as well as suitable algorithmic approaches. In addition to a number of theoretical results on constrained optimization, the need for a well-designed benchmark suite was discussed. As a result of one of the breakout sessions of the previous Dagstuhl Seminar 15211 on Theory of Evolutionary Algorithms, Dimo Brockhoff presented the recent extension of the COCO benchmark set (http://coco.gforge.inria.fr/doku.php) to constrained optimization. Dirk Arnold presented work indicating that this extension of COCO is very timely and much needed in the randomized search heuristics community. Furthermore, another breakout session was held this year on constrained optimization, organized by Frank Neumann, focusing on the different ways to model soft and hard constraints in discrete black-box optimization.
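
As a small, purely illustrative sketch of the soft- versus hard-constraint modelling alternatives mentioned above, the two functions below contrast a penalty-based and a repair-based treatment of a weight limit in a knapsack-style black-box problem; the penalty weight, the repair strategy, and the problem itself are assumptions made for illustration.

    import random

    def penalized_fitness(x, values, weights, capacity, penalty=100.0):
        """Soft-constraint handling: total value minus a penalty that grows
        with the amount by which the weight limit is violated."""
        value = sum(v for v, b in zip(values, x) if b)
        weight = sum(w for w, b in zip(weights, x) if b)
        violation = max(0, weight - capacity)
        return value - penalty * violation

    def repair(x, weights, capacity):
        """Hard-constraint handling by repair: drop random items until the
        solution satisfies the weight limit."""
        x = list(x)
        while sum(w for w, b in zip(weights, x) if b) > capacity:
            packed = [i for i, b in enumerate(x) if b]
            x[random.choice(packed)] = 0
        return x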

Organization

The seminar schedule offered good flexibility for the participants to propose talks and discussions of different lengths. In total, 29 talks of 10–30 minutes each were held in the plenary room. These plenary talks were complemented by an introductory tutorial on Bayesian optimization by Matthew W. Hoffman on Monday morning and by 7 breakout sessions on various topics, including methodology-oriented discussions on the applicability of drift analysis in continuous domains and on how to interpret the CMA-ES in the framework of information geometry optimization, as well as problem-driven brainstorming on constrained optimization, the role of diversity in heuristic search, preference-based selection, and estimation of distribution algorithms. Another breakout session was devoted to discussing the importance and possible obstacles of bringing theory- and practice-driven research in heuristic optimization closer together. The breakout sessions were held on Tuesday, Wednesday, and Thursday afternoon, and all witnessed high attendance rates. All talks and breakout sessions are summarized in Sections 3 and 4 of the present report.

We would like to express our gratitude to the Dagstuhl staff and all participants for making this Dagstuhl Seminar 17191 on Theory of Randomized Optimization Heuristics such a successful event, which has been a pleasure to organize.

Copyright Carola Doerr, Christian Igel, Lothar Thiele, and Xin Yao

Participants
  • Youhei Akimoto (Shinshu University - Nagano, JP) [dblp]
  • Dirk Arnold (Dalhousie University - Halifax, CA) [dblp]
  • Anne Auger (INRIA RandOpf Team, FR) [dblp]
  • Thomas Bäck (Leiden University, NL) [dblp]
  • Hans-Georg Beyer (Fachhochschule Vorarlberg - Dornbirn, AT) [dblp]
  • Dimo Brockhoff (INRIA Saclay - Île-de-France, FR) [dblp]
  • Maxim Buzdalov (ITMO University - St. Petersburg, RU) [dblp]
  • Arina Buzdalova (ITMO University - St. Petersburg, RU) [dblp]
  • Francisco Chicano (University of Málaga, ES) [dblp]
  • Benjamin Doerr (Ecole Polytechnique - Palaiseau, FR) [dblp]
  • Carola Doerr (CNRS & UPMC, Paris, FR) [dblp]
  • Anton V. Eremeev (Sobolev Institute of Mathematics - Novosibirsk, RU) [dblp]
  • Carlos M. Fonseca (University of Coimbra, PT) [dblp]
  • Tobias Friedrich (Hasso-Plattner-Institut - Potsdam, DE) [dblp]
  • Christian Gießen (Technical University of Denmark - Lyngby, DK) [dblp]
  • Tobias Glasmachers (Ruhr-Universität Bochum, DE) [dblp]
  • Nikolaus Hansen (INRIA Saclay - Île-de-France, FR) [dblp]
  • Matthew W. Hoffman (Google DeepMind - London, GB) [dblp]
  • Christian Igel (University of Copenhagen, DK) [dblp]
  • Thomas Jansen (Aberystwyth University, GB) [dblp]
  • Martin S. Krejca (Hasso-Plattner-Institut - Potsdam, DE) [dblp]
  • William B. Langdon (University College London, GB) [dblp]
  • Per Kristian Lehre (University of Birmingham, GB) [dblp]
  • Johannes Lengler (ETH Zürich, CH) [dblp]
  • Frank Neumann (University of Adelaide, AU) [dblp]
  • Pietro S. Oliveto (University of Sheffield, GB) [dblp]
  • Adam Prugel-Bennett (University of Southampton, GB) [dblp]
  • Jonathan Rowe (University of Birmingham, GB) [dblp]
  • Günter Rudolph (TU Dortmund, DE) [dblp]
  • Jonathan L. Shapiro (University of Manchester, GB) [dblp]
  • Ofer M. Shir (Tel-Hai College - Upper Galilee, IL) [dblp]
  • Dirk Sudholt (University of Sheffield, GB) [dblp]
  • Andrew M. Sutton (Hasso-Plattner-Institut - Potsdam, DE) [dblp]
  • Olivier Teytaud (Google Switzerland - Zürich, CH) [dblp]
  • Lothar Thiele (ETH Zürich, CH) [dblp]
  • Heike Trautmann (Universität Münster, DE) [dblp]
  • Carsten Witt (Technical University of Denmark - Lyngby, DK) [dblp]
  • Jing Yang (Ecole Polytechnique - Palaiseau, FR) [dblp]
  • Xin Yao (University of Birmingham, GB) [dblp]
  • Christine Zarges (Aberystwyth University, GB) [dblp]

Related Seminars
  • Dagstuhl Seminar 00071: Theory of Evolutionary Algorithms (2000-02-13 - 2000-02-18) (Details)
  • Dagstuhl Seminar 02031: Theory of Evolutionary Algorithms (2002-01-13 - 2002-01-18) (Details)
  • Dagstuhl Seminar 04081: Theory of Evolutionary Algorithms (2004-02-15 - 2004-02-20) (Details)
  • Dagstuhl Seminar 06061: Theory of Evolutionary Algorithms (2006-02-05 - 2006-02-10) (Details)
  • Dagstuhl Seminar 08051: Theory of Evolutionary Algorithms (2008-01-27 - 2008-02-01) (Details)
  • Dagstuhl Seminar 10361: Theory of Evolutionary Algorithms (2010-09-05 - 2010-09-10) (Details)
  • Dagstuhl Seminar 13271: Theory of Evolutionary Algorithms (2013-06-30 - 2013-07-05) (Details)
  • Dagstuhl Seminar 15211: Theory of Evolutionary Algorithms (2015-05-17 - 2015-05-22) (Details)
  • Dagstuhl Seminar 19431: Theory of Randomized Optimization Heuristics (2019-10-20 - 2019-10-25) (Details)
  • Dagstuhl Seminar 22081: Theory of Randomized Optimization Heuristics (2022-02-20 - 2022-02-25) (Details)
  • Dagstuhl Seminar 24271: Theory of Randomized Optimization Heuristics (2024-06-30 - 2024-07-05) (Details)

Classification
  • soft computing / evolutionary algorithms

Keywords
  • soft computing
  • evolutionary algorithms
  • machine learning
  • algorithms and complexity
  • optimization