Relative Entropy Policy Search
Conference Paper, Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI '10), pp. 1607-1612, July 2010
Abstract
Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, policy search has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems.
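The abstract only sketches the idea of bounding the information loss between successive policy updates. The toy Python snippet below (not the authors' implementation) illustrates one common reading of such a KL-bounded update: samples are reweighted exponentially in their Bellman errors, with a temperature obtained by minimizing a convex dual. The Bellman errors, the bound `epsilon`, and the one-dimensional dual used here are simplifying assumptions for illustration only.

```python
# Minimal sketch of a KL-bounded (relative-entropy-constrained) sample reweighting,
# assuming Bellman errors delta(s, a) have already been computed for a batch of samples.
# In the full method the value-function parameters are optimized jointly with eta;
# here only the temperature eta is optimized, to keep the example short.

import numpy as np
from scipy.optimize import minimize

epsilon = 0.1                      # bound on the KL divergence to the old sample distribution
delta = np.random.randn(500)       # stand-in for Bellman errors of the sampled (s, a) pairs

def dual(log_eta):
    """Sample-based dual g(eta) = eta * epsilon + eta * log mean exp(delta / eta)."""
    eta = np.exp(log_eta[0])       # optimize log(eta) so that eta stays positive
    shift = (delta / eta).max()    # shift for a numerically stable log-sum-exp
    return eta * epsilon + eta * (np.log(np.mean(np.exp(delta / eta - shift))) + shift)

res = minimize(dual, x0=np.array([0.0]))
eta = np.exp(res.x[0])

# Weights of the new sample distribution relative to the old one; a weighted
# maximum-likelihood fit under these weights would give the next parametric policy.
weights = np.exp((delta - delta.max()) / eta)
weights /= weights.sum()
print(f"eta = {eta:.3f}, effective sample size = {1.0 / np.sum(weights**2):.1f}")
```

A small KL bound `epsilon` yields a large temperature `eta` and nearly uniform weights (a conservative update), while a large bound concentrates the weights on the highest-error samples, which is exactly the greedy behavior the bound is meant to temper.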
BibTeX
@conference{Peters-2010-107896,
author = {Peters, J. and Muelling, K. and Altun, Y.},
title = {Relative Entropy Policy Search},
booktitle = {Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI '10)},
year = {2010},
month = {July},
pages = {1607 - 1612},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.