Model learning for look-ahead exploration in continuous control
Abstract
We propose an exploration method that incorporates lookahead search over basic learned skills and their dynamics, and use it for reinforcement learning (RL) of manipulation policies. Our skills are multi-goal policies learned in isolation in simpler environments using existing multi-goal RL formulations, analogous to options or macro-actions. Coarse skill dynamics, i.e., the state transitions caused by a (complete) skill execution, are learned and unrolled forward during lookahead search. Policy search benefits from temporal abstraction during exploration, yet itself operates over low-level primitive actions, so the resulting policies do not suffer from the suboptimality and inflexibility caused by coarse skill chaining. We show that the proposed exploration strategy results in effective learning of complex manipulation policies faster than current state-of-the-art RL methods, and converges to better policies than methods that use options or parameterized skills as building blocks of the policy itself, rather than as guides for exploration.
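Illustration
The following is a minimal sketch (not the authors' code) of the core idea described in the abstract: a lookahead search that unrolls a learned coarse skill-dynamics model to score short skill sequences and pick a promising first skill to guide exploration. The names skill_dynamics and value_fn are hypothetical placeholders standing in for the learned skill-transition model and a goal-distance heuristic; the toy dynamics and value function below are hand-written stand-ins, not learned models.

import numpy as np

def lookahead_search(state, skills, skill_dynamics, value_fn, depth=3):
    """Score skill sequences up to `depth` by unrolling the coarse skill
    dynamics, and return the first skill of the best-scoring sequence.
    Illustrative sketch; `skill_dynamics` and `value_fn` are assumed inputs."""
    def best_value(s, d):
        # Best heuristic value reachable from s with at most d more skill executions.
        v = value_fn(s)
        if d == 0:
            return v
        return max([v] + [best_value(skill_dynamics(s, k), d - 1) for k in skills])

    # Evaluate each candidate first skill by the best predicted outcome it can lead to.
    scores = {k: best_value(skill_dynamics(state, k), depth - 1) for k in skills}
    return max(scores, key=scores.get)

# Toy usage: a 2-D point with four translation "skills" and a goal at [3, 2].
skill_offsets = {"left": [-1, 0], "right": [1, 0], "up": [0, 1], "down": [0, -1]}
dynamics = lambda s, k: np.asarray(s, dtype=float) + skill_offsets[k]    # stand-in for the learned model
value = lambda s: -np.linalg.norm(np.asarray(s, dtype=float) - [3, 2])   # distance-to-goal heuristic
print(lookahead_search([0, 0], list(skill_offsets), dynamics, value, depth=3))  # -> "right"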
BibTeX
@conference{Agarwal-2019-126622,
  author = {Arpit Agarwal and Katharina Muelling and Katerina Fragkiadaki},
  title = {Model learning for look-ahead exploration in continuous control},
  booktitle = {Proceedings of 33rd AAAI Conference on Artificial Intelligence (AAAI '19)},
  year = {2019},
  month = {July},
  pages = {3151--3158},
}