Deeply AggreVaTeD: Differentiable Imitation Learning for Sequential Prediction

Wen Sun, Arun Venkatraman, Geoffrey J. Gordon, Byron Boots, and J. Andrew (Drew) Bagnell
Tech. Report, CMU-RI-TR-17-05, Robotics Institute, Carnegie Mellon University, March, 2017

Abstract

Researchers have demonstrated state-of-the-art performance on sequential decision-making problems (e.g., robotics control, sequential prediction) with deep neural network models. In many of these problems, a near-optimal oracle that achieves good performance on the task is available during training. We demonstrate that AggreVaTeD, a policy gradient extension of the Imitation Learning (IL) approach of Ross & Bagnell (2014), can leverage such an oracle to reach better solutions faster and with less training data than a less-informed Reinforcement Learning (RL) technique. Using both feedforward and recurrent neural network predictors, we present stochastic gradient procedures and evaluate them on a sequential prediction task, dependency parsing from raw image data, as well as on several high-dimensional robotics control problems. We also provide a comprehensive theoretical study of IL showing that we can expect up to exponentially lower sample complexity for learning with AggreVaTeD than with RL algorithms, which supports our empirical findings. Our results and theory indicate that the proposed approach can achieve performance superior to the oracle when the demonstrator is sub-optimal.
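The report itself contains no code, but the update the abstract describes, a policy gradient weighted by an oracle's cost-to-go rather than by an environment return, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the network sizes, the synthetic states, and the random `oracle_cost_to_go` stand-in are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of a differentiable AggreVaTe-style ("AggreVaTeD") update.
# All names here (obs_dim, n_actions, the random "oracle") are hypothetical
# placeholders used only to make the update concrete.
import torch
import torch.nn as nn

obs_dim, n_actions = 8, 4
policy = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, n_actions))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def oracle_cost_to_go(states, actions):
    """Stand-in for the oracle's cost-to-go Q^e(s, a); here it is random."""
    return torch.rand(states.shape[0])

# Roll in with the learner's own policy (here: synthetic states for brevity),
# query the oracle's cost-to-go at the visited states, and descend the
# score-function gradient  E[ grad log pi(a|s) * Q^e(s, a) ].
states = torch.randn(64, obs_dim)                  # states visited by the learner
dist = torch.distributions.Categorical(logits=policy(states))
actions = dist.sample()
q_expert = oracle_cost_to_go(states, actions)      # oracle feedback, not an env reward

loss = (dist.log_prob(actions) * q_expert).mean()  # minimize expected oracle cost-to-go
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The difference from a standard policy gradient method is the source of the gradient weights: instead of estimating returns from many environment rollouts, each visited state is scored by the oracle's cost-to-go, which is where the sample-efficiency advantage claimed in the abstract comes from.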

BibTeX

@techreport{Sun-2017-18131,
  author      = {Wen Sun and Arun Venkatraman and Geoffrey J. Gordon and Byron Boots and J. Andrew (Drew) Bagnell},
  title       = {Deeply AggreVaTeD: Differentiable Imitation Learning for Sequential Prediction},
  year        = {2017},
  month       = {March},
  institution = {Carnegie Mellon University},
  address     = {Pittsburgh, PA},
  number      = {CMU-RI-TR-17-05},
  keywords    = {Imitation Learning, Sequential Prediction, Neural Network},
}