Agnostic System Identification for Model-Based Reinforcement Learning
Abstract
A fundamental problem in control is to learn a model of a system from observations that is useful for controller synthesis. To provide good performance guarantees, existing methods must assume that the real system is in the class of models considered during learning. We present an iterative method with strong guarantees even in the agnostic case where the system is not in the class. In particular, we show that any no-regret online learning algorithm can be used to obtain a near-optimal policy, provided some model in the class achieves low training error and we have access to a good exploration distribution. Our approach applies to both discrete and continuous domains. We demonstrate its efficacy and scalability on a challenging helicopter domain from the literature.
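The iterative scheme described in the abstract alternates between fitting a model to all data gathered so far and planning a policy in the fitted model, with data collected under a mixture of the current policy and an exploration distribution. Below is a minimal, hypothetical sketch of that loop on a toy tabular MDP; the MDP, the mixing weight, and Follow-the-Leader as the no-regret learner (here, refitting empirical transition counts) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Toy sketch of iterative agnostic system identification (assumed setup):
# (1) fit a model to all data so far (Follow-the-Leader, a no-regret
#     strategy for this tabular loss), then
# (2) plan in the fitted model, and
# (3) collect new data under a mix of the planned policy and exploration.

rng = np.random.default_rng(0)
S, A = 3, 2
# True (unknown to the learner) dynamics and rewards of the toy MDP.
P_true = rng.dirichlet(np.ones(S), size=(S, A))  # P_true[s, a] -> next-state dist
R = rng.random((S, A))

def plan(P, R, gamma=0.9, iters=200):
    """Value iteration in the given model; returns a greedy policy."""
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * P @ V  # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

counts = np.ones((S, A, S))  # Laplace-smoothed transition counts
for it in range(200):
    P_hat = counts / counts.sum(axis=2, keepdims=True)
    pi = plan(P_hat, R)
    # Collect transitions: mix the current policy with uniform
    # exploration over actions, as the guarantee requires.
    for _ in range(20):
        s = rng.integers(S)
        a = pi[s] if rng.random() < 0.5 else rng.integers(A)
        s_next = rng.choice(S, p=P_true[s, a])
        counts[s, a, s_next] += 1  # FTL update: refit on all data

P_hat = counts / counts.sum(axis=2, keepdims=True)
pi_final = plan(P_hat, R)
```

Because every state-action pair is visited often under the mixed distribution, the empirical model converges to the true dynamics, and the planned policy becomes near-optimal in the true MDP; the paper's contribution is showing this style of guarantee degrades gracefully even when no model in the class matches the true system exactly.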
BibTeX
@conference{Ross-2012-7557,
  author    = {Stephane Ross and J. Andrew (Drew) Bagnell},
  title     = {Agnostic System Identification for Model-Based Reinforcement Learning},
  booktitle = {Proceedings of (ICML) International Conference on Machine Learning},
  year      = {2012},
  month     = {June},
  pages     = {1905--1912},
}