Spatio-temporal Matching for Human Detection in Video
Abstract
Detecting and tracking humans in video are long-standing problems in computer vision. Most successful approaches (e.g., deformable parts models) rely heavily on discriminative models to build appearance detectors for body joints and generative models to constrain possible body configurations (e.g., trees). While these 2D models have been successfully applied to images (and with less success to videos), a major challenge is to generalize them to cope with varying camera views. To achieve view-invariance, these 2D models typically require a large amount of training data across views that is difficult to gather and time-consuming to label. Unlike existing 2D models, this paper formulates the problem of human detection in videos as spatio-temporal matching (STM) between a 3D motion capture model and trajectories in videos. Our algorithm estimates the camera view and selects a subset of tracked trajectories that matches the motion of the 3D model. The STM is efficiently solved with linear programming, and it is robust to tracking mismatches, occlusions and outliers. To the best of our knowledge this is the first paper that solves the correspondence between video and 3D motion capture data for human pose detection. Experiments on the Human3.6M and Berkeley MHAD databases illustrate the benefits of our method over state-of-the-art approaches.
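The abstract describes selecting a subset of video trajectories that matches the 3D model's motion via linear programming. The following sketch illustrates the general idea only, not the paper's actual formulation: an LP relaxation of an assignment problem that matches model trajectories to candidate video trajectories by feature distance, with a fixed-cost "unmatched" slack per model trajectory so occluded or outlier trajectories can be left unassigned. All names, the cost matrix, and the slack cost `lam` are illustrative assumptions.

```python
# Illustrative LP relaxation of trajectory-to-model matching (an assumption,
# not the paper's exact STM formulation). Each of M model trajectories is
# assigned to at most one of N video trajectories, minimizing total feature
# distance; a slack variable with fixed cost lam per model trajectory gives
# robustness to occlusions and outliers.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
M, N, T = 3, 5, 10                      # model trajs, video trajs, frames
model = rng.normal(size=(M, T))
video = np.vstack([model + 0.05 * rng.normal(size=(M, T)),  # true matches
                   rng.normal(size=(N - M, T))])            # outlier trajs
# Pairwise trajectory distances as the matching cost (M x N).
C = np.linalg.norm(model[:, None, :] - video[None, :, :], axis=2)
lam = 2.0                               # cost of leaving a model traj unmatched

# Variables: x[i, j] flattened row-wise, plus one slack per model trajectory.
c = np.concatenate([C.ravel(), np.full(M, lam)])
# Each model trajectory is either matched once or takes its slack.
A_eq = np.zeros((M, M * N + M))
for i in range(M):
    A_eq[i, i * N:(i + 1) * N] = 1.0
    A_eq[i, M * N + i] = 1.0
b_eq = np.ones(M)
# Each video trajectory is used at most once.
A_ub = np.zeros((N, M * N + M))
for j in range(N):
    A_ub[j, j:M * N:N] = 1.0
b_ub = np.ones(N)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
# The constraint matrix has transportation-problem structure, so the LP
# optimum is integral; rounding recovers the binary assignment.
match = res.x[:M * N].reshape(M, N).round().astype(int)
```

Because the first M video trajectories are just noisy copies of the model trajectories, the recovered assignment pairs each model trajectory with its counterpart, while the random outlier trajectories are ignored.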
BibTeX
@conference{Zhou-2014-7920,
author = {Feng Zhou and Fernando De la Torre Frade},
title = {Spatio-temporal Matching for Human Detection in Video},
booktitle = {Proceedings of (ECCV) European Conference on Computer Vision},
year = {2014},
month = {September},
pages = {62 - 77},
}