https://www.dagstuhl.de/13321
August 4–9, 2013, Dagstuhl Seminar 13321
Reinforcement Learning
Organizers
Peter Auer (Montan-Universität Leoben, AT)
Marcus Hutter (Australian National University, AU)
Laurent Orseau (AgroParisTech – Paris, FR)
Documents
Dagstuhl Report, Volume 3, Issue 8
Summary
Reinforcement Learning (RL) is becoming a very active field of machine learning. This Dagstuhl Seminar, which gathered 38 researchers, aimed to give participants a broad view of the current state of the field, to exchange cross-topic ideas, and to present and discuss new trends in RL. Each day was roughly dedicated to one or a few topics, in particular: the exploration/exploitation dilemma, function approximation and policy search, universal RL, partially observable Markov decision processes (POMDPs), inverse RL, and multi-objective RL.

This year, in contrast to previous EWRL events, several short tutorials and overviews were presented. It emerged that researchers are increasingly interested in bringing RL to more general and more realistic settings, in particular by relaxing the Markovian assumption, so that methods become applicable to robots and to a broader class of industrial applications. This trend is consistent with the observed growth of interest in policy search and universal RL, and it may also explain why the traditional treatment of the exploration/exploitation dilemma received less attention than expected.


Classification
- Artificial Intelligence / Robotics
Keywords
- Artificial Intelligence
- Machine Learning
- Markov Decision Processes
- Planning