
Improving Multi-step Prediction of Learned Time Series Models

Conference Paper, Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI '15), pp. 3024–3030, 2015

Abstract

Most typical statistical and machine learning approaches to time series modeling optimize a single-step prediction error. In multi-step simulation, the learned model is applied iteratively, feeding each output back in as the next input. Any such predictor, however, inevitably introduces errors, and these compounding errors change the input distribution for future prediction steps, breaking the train-test i.i.d. assumption common in supervised learning. We present an approach that reuses training data to make a no-regret learner robust to errors made during multi-step prediction. Our insight is to formulate the problem as imitation learning: the training data serves as a "demonstrator" by providing corrections for the errors made during multi-step prediction. Through this reduction of multi-step time series prediction to imitation learning, we establish a strong theoretical performance guarantee relating the training error to the multi-step prediction error. We present experimental results for our method, DAD (Data as Demonstrator), and show significant improvement over the traditional approach in two notably different domains: dynamic system modeling and video texture prediction.
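To make the data-aggregation idea concrete, here is a minimal sketch of a DAD-style training loop for a linear one-step model. All function and variable names are ours, not the paper's, and the linear least-squares learner stands in for whatever no-regret learner is actually used: the model is rolled forward from the true start state, each predicted state is paired with the ground-truth next state as a "correction," and the model is refit on the aggregated data.

```python
import numpy as np

def fit_linear(X, Y):
    """Least-squares one-step model: Y ~ X @ A."""
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A

def dad_train(traj, n_iters=5):
    """DAD-style training sketch. traj: (T, d) array of ground-truth states."""
    # Standard single-step training pairs (x_t, x_{t+1}).
    X, Y = traj[:-1], traj[1:]
    A = fit_linear(X, Y)
    for _ in range(n_iters):
        # Multi-step rollout: feed each prediction back in as the next input.
        preds = [traj[0]]
        for _ in range(len(traj) - 1):
            preds.append(preds[-1] @ A)
        preds = np.array(preds[:-1])  # predicted states at times 0..T-2
        # Training data as "demonstrator": pair each predicted state with
        # the ground-truth next state, aggregate, and refit.
        X = np.vstack([X, preds])
        Y = np.vstack([Y, traj[1:]])
        A = fit_linear(X, Y)
    return A
```

In each iteration the input distribution seen at training time grows to include the states the model actually visits during multi-step simulation, which is what makes the learner robust to its own compounding errors.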

BibTeX

@conference{Venkatraman-2015-5902,
author = {Arun Venkatraman and Martial Hebert and J. Andrew (Drew) Bagnell},
title = {Improving Multi-step Prediction of Learned Time Series Models},
booktitle = {Proceedings of 29th AAAI Conference on Artificial Intelligence (AAAI '15)},
year = {2015},
month = {January},
pages = {3024 - 3030},
}