Non-Rigid Face Tracking Using Short Track-Life Features
Abstract
We define a “generic” non-rigid face tracker as any system that exhibits robustness to changes in illumination, expression and viewpoint during the tracking of facial landmarks in a video sequence. A popular approach to the problem is to detect/track an ensemble of local features over time whilst enforcing that they conform to a global non-rigid shape prior. In general, these approaches employ a strategy that assumes: (i) the feature points being tracked, ignoring occlusion, should roughly correspond across all frames, and (ii) that these feature points should correspond to the landmark points defining the non-rigid face shape model. In this paper, we challenge these two assumptions through the novel application of interest point detectors and descriptors (e.g. SIFT & SURF). We motivate this strategy by demonstrating empirically that salient features on the face for tracking on average only have a “track-life” of a few frames and rarely co-occur at the vertex points of the shape model. Due to the short track-life of these features, we propose that new features should be detected at every frame rather than tracked from previous frames. By employing such a strategy, we demonstrate that our proposed method has natural invariance to large discontinuous changes in motion. We additionally propose the employment of an online feature registration step that is able to rectify error accumulation and provides fast recovery from occlusion during tracking.
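To make the detect-every-frame idea concrete, the following is a minimal sketch (ours, not the authors' implementation) of the core step: rather than tracking points from the previous frame, descriptors detected fresh in the current frame are matched to a set of model descriptors by nearest neighbour in descriptor space. The function name `match_frame_features`, the synthetic unit-vector "descriptors", and the distance threshold are all our own illustrative choices; in the paper's setting the descriptors would come from SIFT or SURF.

```python
import numpy as np

def match_frame_features(model_descriptors, frame_descriptors, max_dist=0.7):
    """Match freshly detected per-frame descriptors to model descriptors
    by nearest-neighbour distance in descriptor space.

    Illustrative stand-in for the detect-every-frame strategy; the
    threshold and signature are our own assumptions, not the paper's.
    Returns a list of (model_index, frame_index) pairs.
    """
    matches = []
    for i, m in enumerate(model_descriptors):
        dists = np.linalg.norm(frame_descriptors - m, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:  # reject matches with no close descriptor
            matches.append((i, j))
    return matches

# Toy demo: each row is a descriptor vector. The "frame" re-detects the same
# three features in a different order, with small noise added.
model_desc = np.eye(3)
frame_desc = np.eye(3)[[2, 0, 1]] + 0.01

matches = match_frame_features(model_desc, frame_desc)
```

Because correspondence is re-established from scratch at each frame, a large discontinuous motion between frames does not break the matcher, which is the invariance property the abstract highlights.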
BibTeX
@conference{Lucey-2010-121073,
  author    = {S. Lucey and J. Jang},
  title     = {Non-Rigid Face Tracking Using Short Track-Life Features},
  booktitle = {Proceedings of International Conference on Digital Image Computing: Techniques and Applications (DICTA '10)},
  year      = {2010},
  month     = {December},
  pages     = {241--248},
}