

Dagstuhl-Seminar 13321

Reinforcement Learning

(Aug 04 – Aug 09, 2013)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/13321

Organizers
  • Peter Auer (Montan-Universität Leoben, AT)
  • Marcus Hutter (Australian National University, AU)
  • Laurent Orseau (AgroParisTech - Paris, FR)



Programm

Motivation

Reinforcement Learning has become one of the main fields of Machine Learning and continues to grow in size, interest, and results (it ranked first in the number of submissions and accepted papers at ICML 2012, despite being co-located with EWRL 2012; see http://hunch.net/?p=2517). This Dagstuhl Seminar will address the issues most relevant to Machine Learning scientists today and will also serve as the 11th European Workshop on Reinforcement Learning (EWRL 2013; see http://ewrl.wordpress.com/ewrl11-2013/). Taking advantage of the special format of Dagstuhl Seminars, the program will include presentations on the current state of Reinforcement Learning, explorations of new and revised research approaches, and numerous opportunities for open discussion. We expect a number of young researchers and scientists from industry to join us.

The aim of EWRL 2013 is to address the big questions: Where are we today? Where are we heading, and where do we want to go? What should we do to get there? In keeping with this aim, the broad goal of this Dagstuhl Seminar is to give an overview of the current state of Reinforcement Learning, to identify new trends, and to anticipate future ones. There have been meaningful recent advances in both the practical and theoretical aspects of RL: ever larger problems can be tackled, and ever better guarantees for RL algorithms are being discovered; algorithms are becoming more general and can handle more complex problems; deep neural network architectures may have an interesting impact on RL, for empirical results, feature use and extraction, option learning, or hierarchical RL; and universal RL gives a fresh view on agents and intelligence through the lens of RL. With this in mind, the seminar will cover the usual topics as well as a range of others, which may include (but are not limited to):

  • RL theory
  • RL benchmark problems and empirical RL
  • Real-world applications of RL
  • Function approximation and kernel methods in RL
  • Features, options, hierarchical RL
  • Exploration vs. exploitation trade-off
  • Bayesian RL
  • Multi-Agent RL
  • Knowledge representation and injection in RL
  • Universal RL
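Among these topics, the exploration vs. exploitation trade-off admits a compact illustration. The following is a minimal sketch of an epsilon-greedy agent on a stationary multi-armed bandit; it is purely illustrative (the reward means, noise model, and parameter values are assumptions, not anything discussed at the seminar):

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=10000, seed=0):
    """Minimal epsilon-greedy agent for a stationary multi-armed bandit.

    With probability epsilon the agent explores (pulls a random arm);
    otherwise it exploits the arm with the highest empirical mean so far.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms          # number of pulls per arm
    estimates = [0.0] * n_arms     # empirical mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                           # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = true_means[arm] + rng.gauss(0.0, 1.0)  # noisy reward (assumed Gaussian)
        counts[arm] += 1
        # incremental update of the empirical mean
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts, estimates

counts, estimates = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

After enough steps, the best arm (here the one with mean 0.8) dominates the pull counts, while the forced exploration keeps the estimates of the other arms from going stale.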

The seminar will feature invited presentations by experts as well as contributed talks, with post-workshop proceedings, and will leave ample time for open discussion. Additionally, there will be a poster session. More information about the venue and the format of the proceedings will follow.

To help the organizers shape the seminar program, we invite every participant to submit an abstract of their proposed contribution, which might be a technical talk, an overview, or a comment on interesting problems or important points of RL to be discussed in the context of either a talk or a poster. Selected abstracts will be considered for oral presentation or for the poster session, and will be included in the post-workshop proceedings. Abstracts should be submitted through EasyChair (https://www.easychair.org/conferences/?conf=ewrl11) by April 15, 2013.


Summary

Reinforcement Learning (RL) has become a very active field of machine learning, and this Dagstuhl Seminar aimed to give researchers a broad view of the current state of the field, to exchange ideas across topics, and to present and discuss new trends in RL. It gathered 38 researchers. Each day was largely dedicated to one or a few topics, including in particular: the exploration/exploitation dilemma, function approximation and policy search, universal RL, partially observable Markov decision processes (POMDPs), inverse RL, and multi-objective RL.

In contrast to previous EWRL events, this year several short tutorials and overviews were presented. It appeared that researchers are now interested in bringing RL to more general and more realistic settings, in particular by relaxing the Markovian assumption, for example so that RL becomes applicable to robots and to a broader class of industrial applications. This trend is consistent with the observed growth of interest in policy search and universal RL. It may also explain why the traditional treatment of the exploration/exploitation dilemma received less attention than expected.
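The summary above mentions partially observable MDPs as one route beyond the Markovian assumption: the agent cannot observe the state directly and must instead track a belief over states. As a minimal illustration (the two-state example, names, and probabilities below are invented for exposition), the standard Bayesian belief update b'(s') ∝ O(o | s', a) Σ_s T(s' | s, a) b(s) can be sketched as:

```python
def belief_update(belief, action, observation, T, O):
    """Exact Bayesian belief update for a discrete POMDP.

    belief: dict mapping state -> probability
    T: T[(s, a)] is a dict mapping s' -> P(s' | s, a)
    O: O[(s2, a)] is a dict mapping o -> P(o | s2, a)
    """
    new_belief = {}
    successors = {s2 for s in belief for s2 in T[(s, action)]}
    for s2 in successors:
        # predicted probability of landing in s2, before seeing the observation
        prior = sum(T[(s, action)].get(s2, 0.0) * belief[s] for s in belief)
        # weight the prediction by the likelihood of the observation
        new_belief[s2] = O[(s2, action)].get(observation, 0.0) * prior
    norm = sum(new_belief.values())
    if norm == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return {s2: p / norm for s2, p in new_belief.items()}

# Tiny 2-state example: observation "x" is strong evidence for state "A".
T = {("A", "go"): {"A": 0.5, "B": 0.5}, ("B", "go"): {"A": 0.5, "B": 0.5}}
O = {("A", "go"): {"x": 0.9, "y": 0.1}, ("B", "go"): {"x": 0.1, "y": 0.9}}
b = belief_update({"A": 0.5, "B": 0.5}, "go", "x", T, O)  # belief concentrates on "A"
```

Policies for a POMDP are then functions of this belief rather than of the hidden state, which is one precise sense in which the Markov assumption on observations is relaxed.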

Copyright Peter Auer, Marcus Hutter, and Laurent Orseau

Participants
  • Peter Auer (Montan-Universität Leoben, AT) [dblp]
  • Manuel Blum (Universität Freiburg, DE) [dblp]
  • Robert Busa-Fekete (Universität Marburg, DE) [dblp]
  • Yann Chevaleyre (University of Paris North, FR) [dblp]
  • Marc Deisenroth (TU Darmstadt, DE) [dblp]
  • Thomas G. Dietterich (Oregon State University, US) [dblp]
  • Christos Dimitrakakis (EPFL - Lausanne, CH) [dblp]
  • Lutz Frommberger (Universität Bremen, DE) [dblp]
  • Jens Garstka (FernUniversität in Hagen, DE) [dblp]
  • Mohammad Ghavamzadeh (INRIA - University of Lille 1, FR) [dblp]
  • Marcus Hutter (Australian National University, AU) [dblp]
  • Rico Jonschkowski (TU Berlin, DE) [dblp]
  • Petar Kormushev (Italian Institute of Technology - Genova, IT) [dblp]
  • Tor Lattimore (Australian National University, AU) [dblp]
  • Alessandro Lazaric (INRIA - University of Lille 1, FR) [dblp]
  • Timothy Mann (Technion - Haifa, IL) [dblp]
  • Jan Hendrik Metzen (Universität Bremen, DE) [dblp]
  • Gergely Neu (Budapest University of Technology & Economics, HU) [dblp]
  • Gerhard Neumann (TU Darmstadt, DE) [dblp]
  • Ann Nowé (Free University of Brussels, BE) [dblp]
  • Laurent Orseau (AgroParisTech - Paris, FR) [dblp]
  • Ronald Ortner (Montan-Universität Leoben, AT) [dblp]
  • Joelle Pineau (McGill University - Montreal, CA) [dblp]
  • Doina Precup (McGill University - Montreal, CA) [dblp]
  • Mark B. Ring (IDSIA - Manno, CH) [dblp]
  • Manuela Ruiz-Montiel (University of Malaga, ES) [dblp]
  • Scott Sanner (NICTA - Canberra, AU) [dblp]
  • Nils T. Siebel (Hochschule für Technik und Wirtschaft - Berlin, DE) [dblp]
  • David Silver (University College London, GB) [dblp]
  • Orhan Sönmez (Bogaziçi University - Istanbul, TR) [dblp]
  • Peter Sunehag (Australian National University, AU) [dblp]
  • Richard S. Sutton (University of Alberta - Edmonton, CA) [dblp]
  • Csaba Szepesvári (University of Alberta - Edmonton, CA) [dblp]
  • William Uther (Google - Sydney, AU) [dblp]
  • Martijn van Otterlo (Radboud University Nijmegen, NL) [dblp]
  • Joel Veness (University of Alberta - Edmonton, CA) [dblp]
  • Jeremy L. Wyatt (University of Birmingham, GB) [dblp]

Classification
  • artificial intelligence / robotics

Keywords
  • Artificial Intelligence
  • Machine Learning
  • Markov Decision Processes
  • Planning