A framework for modeling the appearance of 3D articulated figures
Abstract
This paper describes a framework for constructing a linear subspace model of image appearance for complex articulated 3D figures such as humans and other animals. A commercial motion capture system provides 3D data that is aligned with images of subjects performing various activities. Portions of a limb's image appearance are seen from multiple views and for multiple subjects. From these partial views, weighted principal component analysis is used to construct a linear subspace representation of the "unwrapped" image appearance of each limb. The linear subspaces provide a generative model of the object appearance that is exploited in a Bayesian particle filtering tracking system. Results of tracking single limbs and walking humans are presented.
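The abstract's key computational step is a weighted principal component analysis that builds a subspace from partial views, where each pixel of an unwrapped limb map may be unobserved or observed with differing confidence. The sketch below shows one common way to compute such a decomposition, via alternating weighted least squares over a data matrix and a per-pixel weight matrix. It is an illustrative assumption of how missing-data PCA can be implemented, not the authors' actual formulation; all function and variable names are hypothetical.

```python
import numpy as np

def weighted_pca(X, W, k, n_iters=50, seed=0):
    """Illustrative weighted PCA by alternating least squares.

    X : (n_samples, n_pixels) data matrix (e.g. unwrapped limb appearance maps)
    W : (n_samples, n_pixels) nonnegative weights; 0 marks unobserved pixels
    k : number of basis vectors to estimate
    Returns mean mu (n_pixels,), basis U (n_pixels, k), coefficients C (n_samples, k).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Weighted mean over observed entries (guard against unobserved pixels).
    mu = (W * X).sum(axis=0) / np.maximum(W.sum(axis=0), 1e-8)
    R = X - mu
    U = rng.standard_normal((d, k)) * 0.01
    C = np.zeros((n, k))
    for _ in range(n_iters):
        # Solve each sample's subspace coefficients given the current basis.
        for i in range(n):
            Wi = W[i][:, None]                        # (d, 1) per-pixel weights
            A = U.T @ (Wi * U) + 1e-6 * np.eye(k)
            b = U.T @ (W[i] * R[i])
            C[i] = np.linalg.solve(A, b)
        # Solve each pixel's basis row given the current coefficients.
        for j in range(d):
            Wj = W[:, j][:, None]                     # (n, 1) per-sample weights
            A = C.T @ (Wj * C) + 1e-6 * np.eye(k)
            b = C.T @ (W[:, j] * R[:, j])
            U[j] = np.linalg.solve(A, b)
    return mu, U, C
```

In this reading, a reconstruction `mu + U @ c` for some coefficient vector `c` plays the role of the generative appearance model that a Bayesian particle filter could evaluate against image observations.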
BibTeX
@conference{Sidenbladh-2000-120964,
  author    = {H. Sidenbladh and F. De la Torre and M. J. Black},
  title     = {A framework for modeling the appearance of 3D articulated figures},
  booktitle = {Proceedings of 4th IEEE International Conference on Automatic Face and Gesture Recognition (FG '00)},
  year      = {2000},
  month     = {March},
  pages     = {368--375},
}