
Constraint Integration for Efficient Multiview Pose Estimation with Self-Occlusions

Abhinav Gupta, Anurag Mittal, and Larry S. Davis
Journal Article, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, No. 3, pp. 493–506, March 2008

Abstract

Automatic initialization and tracking of human pose is an important task in visual surveillance. We present a part-based approach that incorporates a variety of constraints in a unified framework. These constraints include the kinematic constraints between physically connected parts, the occlusion of one part by another, and the high correlation between the appearance of certain parts, such as the arms. The location probability distribution of each part is determined by evaluating appropriate likelihood measures. The graphical (non-tree) structure representing the interdependencies between parts is used to “connect” these part distributions via nonparametric belief propagation. Methods are also developed to perform this optimization efficiently in the large space of pose configurations.
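
To illustrate the kind of particle-based message passing the abstract describes, the following is a minimal sketch, not the authors' implementation: it runs nonparametric (particle-based) belief propagation over a small hypothetical graph of body parts. The part names, the Gaussian kinematic potential, the synthetic appearance likelihood, and the particle counts are all assumptions made for demonstration only.

# Minimal illustrative sketch of particle-based belief propagation over a part graph.
# Not the paper's implementation; all part names, potentials, and likelihoods are
# placeholders. Non-kinematic edges (e.g., occlusion or appearance coupling between
# the two arms) would be added to EDGES in the same way, with their own potentials.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical part graph: edges encode kinematic constraints between connected parts.
PARTS = ["torso", "upper_arm", "lower_arm"]
EDGES = [("torso", "upper_arm"), ("upper_arm", "lower_arm")]
N = 200  # particles (pose hypotheses) per part

def image_likelihood(part, samples):
    """Placeholder appearance likelihood: favors a part-specific 2-D location."""
    centers = {"torso": (0.0, 0.0), "upper_arm": (1.0, 0.0), "lower_arm": (2.0, 0.0)}
    d = samples - np.array(centers[part])
    return np.exp(-0.5 * np.sum(d * d, axis=1))

def kinematic_potential(xs, xt, sigma=0.5):
    """Pairwise compatibility: connected parts should stay close (soft joint constraint)."""
    d = xs[:, None, :] - xt[None, :, :]
    return np.exp(-0.5 * np.sum(d * d, axis=2) / sigma**2)

# Initialize particle sets for every part.
particles = {p: rng.normal(0.0, 2.0, size=(N, 2)) for p in PARTS}
messages = {}  # messages[(s, t)][j]: evidence from part s about particle j of part t

for _ in range(5):  # a few message-passing sweeps
    for s, t in EDGES + [(b, a) for a, b in EDGES]:
        # Product of incoming messages to s from neighbors other than t.
        incoming = np.ones(N)
        for u, v in messages:
            if v == s and u != t:
                incoming = incoming * messages[(u, v)]
        weight_s = image_likelihood(s, particles[s]) * incoming
        # Message from s to t: marginalize s's weighted particles through the pairwise potential.
        messages[(s, t)] = kinematic_potential(particles[s], particles[t]).T @ weight_s

# Belief for each part = local likelihood times the product of incoming messages.
for p in PARTS:
    belief = image_likelihood(p, particles[p])
    for (u, v), m in messages.items():
        if v == p:
            belief = belief * m
    belief = belief / belief.sum()
    print(p, "MAP particle:", particles[p][np.argmax(belief)])

In this sketch the particle sets stay fixed; a fuller nonparametric BP implementation would also resample or re-draw particles from the updated beliefs between sweeps.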

BibTeX

@article{Gupta-2008-113372,
  author  = {Abhinav Gupta and Anurag Mittal and Larry S. Davis},
  title   = {Constraint Integration for Efficient Multiview Pose Estimation with Self-Occlusions},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year    = {2008},
  month   = {March},
  volume  = {30},
  number  = {3},
  pages   = {493--506},
}