
Shuffle and Learn: Unsupervised Learning using Temporal Order Verification

Conference Paper, Proceedings of (ECCV) European Conference on Computer Vision, pp. 527-544, October 2016

Abstract

In this paper, we present an approach for learning a visual representation from the raw spatiotemporal signals in videos. Our representation is learned without supervision from semantic labels. We formulate our method as an unsupervised sequential verification task, i.e., we determine whether a sequence of frames from a video is in the correct temporal order. With this simple task and no semantic labels, we learn a powerful visual representation using a Convolutional Neural Network (CNN). The representation contains complementary information to that learned from supervised image datasets like ImageNet. Qualitative results show that our method captures information that is temporally varying, such as human pose. When used as pre-training for action recognition, our method gives significant gains over learning without external data on benchmark datasets like UCF101 and HMDB51. To demonstrate its sensitivity to human pose, we show results for pose estimation on the FLIC and MPII datasets that are competitive with, or better than, approaches using significantly more supervision. Our method can be combined with supervised representations to provide an additional boost in accuracy.
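The sketch below illustrates the temporal order verification task described in the abstract: frame tuples sampled in their original order serve as positives, shuffled tuples as negatives, and a shared-encoder (Siamese-style) CNN is trained to tell them apart. The encoder architecture, layer sizes, and sampling details here are illustrative assumptions for a minimal PyTorch example, not the paper's exact setup (the paper uses an AlexNet-style triplet-Siamese network with motion-based frame selection).

```python
# Minimal sketch of unsupervised temporal order verification (assumed details).
import random
import torch
import torch.nn as nn

class OrderVerificationNet(nn.Module):
    """Shared per-frame encoder; classify concatenated features as in-order vs. shuffled."""
    def __init__(self, feat_dim=128, num_frames=3):
        super().__init__()
        # Small illustrative CNN encoder shared across all frames in the tuple.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Binary classifier over the concatenated frame features.
        self.classifier = nn.Linear(feat_dim * num_frames, 2)

    def forward(self, frames):
        # frames: (batch, num_frames, 3, H, W)
        feats = [self.encoder(frames[:, i]) for i in range(frames.size(1))]
        return self.classifier(torch.cat(feats, dim=1))

def make_tuple(video, positive):
    """Sample a 3-frame tuple from a list of frames; permute it for a negative."""
    idx = sorted(random.sample(range(len(video)), 3))
    if not positive:
        while True:
            shuffled = random.sample(idx, 3)
            # Treat both forward and backward order as "correct" order.
            if shuffled != idx and shuffled != idx[::-1]:
                idx = shuffled
                break
    return torch.stack([video[i] for i in idx])

# Toy usage: a "video" of 16 random frames, one positive and one negative tuple.
video = [torch.randn(3, 64, 64) for _ in range(16)]
batch = torch.stack([make_tuple(video, True), make_tuple(video, False)])
labels = torch.tensor([1, 0])  # 1 = correct order, 0 = shuffled
model = OrderVerificationNet()
loss = nn.CrossEntropyLoss()(model(batch), labels)
loss.backward()
```

After pre-training on this verification objective, the encoder weights would be used to initialize a supervised model (e.g., for action recognition or pose estimation) rather than kept for the verification head itself.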

BibTeX

@conference{Misra-2016-5596,
author = {Ishan Misra and C. Lawrence Zitnick and Martial Hebert},
title = {Shuffle and Learn: Unsupervised Learning using Temporal Order Verification},
booktitle = {Proceedings of (ECCV) European Conference on Computer Vision},
year = {2016},
month = {October},
pages = {527--544},
}