
Patch to the Future: Unsupervised Visual Prediction

Conference Paper, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3302-3309, June 2014

Abstract

In this paper we present a conceptually simple but surprisingly powerful method for visual prediction that combines the effectiveness of mid-level visual elements with temporal modeling in a decision-theoretic framework. Our framework can be learned in a completely unsupervised manner from a large collection of videos. More importantly, because our approach builds the prediction model on these mid-level elements, we can predict not only the possible motion in the scene but also changes in visual appearance, that is, how appearances will evolve over time. This yields a visual "hallucination" of probable events on top of the scene. We show that our method accurately predicts and visualizes simple future events, and that it is comparable to supervised methods for event prediction.
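The abstract describes the decision-theoretic temporal model only at a high level. As a rough illustration of that style of prediction, the sketch below runs value iteration over a grid of states and then greedily rolls out a likely trajectory. Everything in it is a hypothetical stand-in, not the authors' code: the per-cell reward, which in the paper would be derived from learned mid-level elements, is replaced here by a toy array, and the function names are invented for this example.

import numpy as np

def value_iteration(reward, gamma=0.95, n_iters=100):
    """Compute a value map over grid cells by value iteration.

    reward: (H, W) array of per-cell rewards (a stand-in for scores
    that would be learned from mid-level patch detections).
    """
    H, W = reward.shape
    V = np.zeros((H, W))
    for _ in range(n_iters):
        # Value of each 4-connected neighbor; borders are padded with
        # -inf so the agent cannot step off the grid.
        padded = np.pad(V, 1, constant_values=-np.inf)
        neighbors = np.stack([
            padded[:-2, 1:-1],  # up
            padded[2:, 1:-1],   # down
            padded[1:-1, :-2],  # left
            padded[1:-1, 2:],   # right
        ])
        V = reward + gamma * neighbors.max(axis=0)
    return V

def greedy_rollout(V, start, n_steps=20):
    """Follow the value map greedily to hallucinate a likely trajectory.

    The agent always moves, so once it reaches the high-value region it
    will bounce between the top cells; fine for a sketch.
    """
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    path = [start]
    r, c = start
    H, W = V.shape
    for _ in range(n_steps):
        candidates = [(r + dr, c + dc) for dr, dc in moves
                      if 0 <= r + dr < H and 0 <= c + dc < W]
        r, c = max(candidates, key=lambda rc: V[rc])
        path.append((r, c))
    return path

# Toy usage: a reward map that attracts the "patch" toward one corner.
rng = np.random.default_rng(0)
reward = rng.normal(0, 0.1, size=(16, 16))
reward[12:, 12:] += 1.0  # stand-in for a high-reward scene region
V = value_iteration(reward)
print(greedy_rollout(V, start=(2, 2), n_steps=10))

In the paper's setting, the "agent" would be a mid-level visual element rather than a point on a grid, and the rollout would also transfer patch appearance along the predicted path to produce the visual hallucination.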

BibTeX

@conference{Walker-2014-7843,
author = {Jacob Walker and Abhinav Gupta and Martial Hebert},
title = {Patch to the Future: Unsupervised Visual Prediction},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2014},
month = {June},
pages = {3302--3309},
}