Memory-based Reinforcement Learning: Efficient Computation with Prioritized Sweeping
Conference Paper, Proceedings of (NeurIPS) Neural Information Processing Systems, pp. 263-270, November 1992
Abstract
We present a new algorithm, Prioritized Sweeping, for efficient prediction and control of stochastic Markov systems. Incremental learning methods such as Temporal Differencing and Q-learning have fast real-time performance. Classical methods are slower, but more accurate, because they make full use of the observations. Prioritized Sweeping aims for the best of both worlds. It uses all previous experiences both to prioritize important dynamic programming sweeps and to guide the exploration of state-space. We compare Prioritized Sweeping with other reinforcement learning schemes for a number of different stochastic optimal control problems. It successfully solves large state-space real-time problems with which other methods have difficulty.
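The core idea sketched in the abstract, ordering dynamic programming backups by their expected effect on the value function, can be illustrated with a short example. The following Python is a minimal sketch, not the authors' implementation: the tabular model (transitions, rewards, predecessors) and all parameter names (gamma, theta, max_backups) are assumptions introduced for this illustration.

import heapq
import itertools

# Hypothetical tabular model for illustration (not the paper's code):
#   transitions[s][a] -> {s': visit count}   (learned transition counts)
#   rewards[(s, a)]   -> mean observed reward for taking a in s
#   predecessors[s]   -> set of states with some action reaching s
def prioritized_sweeping(V, transitions, rewards, predecessors,
                         gamma=0.95, theta=1e-4, max_backups=1000):
    """Run value backups in order of largest expected value change."""
    tie = itertools.count()   # tiebreaker so the heap never compares states
    pq = []                   # max-heap via negated priorities

    def backup(s):
        # One full Bellman backup of V at s under the learned model.
        best = float('-inf')
        for a, succ in transitions[s].items():
            total = sum(succ.values())
            q = rewards[(s, a)] + gamma * sum(
                n / total * V[sp] for sp, n in succ.items())
            best = max(best, q)
        return best

    def push(s, priority):
        if priority > theta:
            heapq.heappush(pq, (-priority, next(tie), s))

    # Seed the queue with every modeled state's current Bellman error.
    for s in transitions:
        push(s, abs(backup(s) - V[s]))

    for _ in range(max_backups):
        if not pq:
            break
        _, _, s = heapq.heappop(pq)
        delta = backup(s) - V[s]   # recompute: stale entries become no-ops
        V[s] += delta
        # The change at s propagates backwards: each predecessor's priority
        # is bounded by its transition probability into s times |delta|.
        for p in predecessors[s]:
            prob = max(
                (succ.get(s, 0) / sum(succ.values())
                 for succ in transitions[p].values()),
                default=0.0)
            push(p, prob * abs(delta))
    return V

Stale queue entries are tolerated rather than removed: because each pop recomputes the backup, an out-of-date entry costs one extra backup and leaves the value function unchanged, which keeps the bookkeeping simple.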
BibTeX
@conference{Moore-1992-15876,
  author    = {Andrew Moore and C. G. Atkeson},
  title     = {Memory-based Reinforcement Learning: Efficient Computation with Prioritized Sweeping},
  booktitle = {Proceedings of (NeurIPS) Neural Information Processing Systems},
  year      = {1992},
  month     = {November},
  editor    = {S. J. Hanson and J. D. Cowan and C. L. Giles},
  pages     = {263--270},
  publisher = {Morgan Kaufmann},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.