Articulated Pose Estimation using Flexible Mixtures of Parts

Yi Yang and Deva Ramanan
Conference Paper, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1385 - 1392, June 2011

Abstract

We describe a method for human pose estimation in static images based on a novel representation of part models. Notably, we do not use articulated limb parts, but rather capture orientation with a mixture of templates for each part. We describe a general, flexible mixture model for capturing contextual co-occurrence relations between parts, augmenting standard spring models that encode spatial relations. We show that such relations can capture notions of local rigidity. When co-occurrence and spatial relations are tree-structured, our model can be efficiently optimized with dynamic programming. We present experimental results on standard benchmarks for pose estimation that indicate our approach is the state-of-the-art system for pose estimation, outperforming past work by 50% while being orders of magnitude faster.
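The abstract describes inference as dynamic programming over a tree of parts, where each part has a small set of mixture "types" scored by an appearance template, a co-occurrence bias with its parent's type, and a quadratic spring cost on its displacement from its parent. The sketch below is a minimal, hypothetical illustration of that leaf-to-root message passing; it is not the authors' released code. The part tree, the number of types, and all score maps, biases, and anchors are made-up placeholders, and the real system computes appearance scores from learned HOG filters and uses generalized distance transforms instead of the brute-force spring maximization shown here.

```python
# Minimal sketch of tree-structured DP inference for a flexible
# mixture-of-parts model.  All model quantities below are hypothetical.
import numpy as np

H, W = 16, 16            # toy grid of candidate part locations
T = 4                    # mixture types per part (e.g., orientations)

# Tree over parts: part 0 is the root; parents[i] is the parent of part i.
parents = [-1, 0, 0, 1, 2]
n_parts = len(parents)
children = [[c for c, p in enumerate(parents) if p == i] for i in range(n_parts)]

rng = np.random.default_rng(0)
# appearance[i, t]: stand-in for the template response w_i^t . phi(I, p_i).
appearance = rng.standard_normal((n_parts, T, H, W))
# bias[i, ti, tc]: co-occurrence score between parent type ti and child type tc.
bias = 0.1 * rng.standard_normal((n_parts, T, T))
# anchor[i, t]: preferred (dy, dx) offset of part i from its parent for type t.
anchor = rng.integers(-3, 4, size=(n_parts, T, 2))
quad = 0.5               # weight of the quadratic spring (deformation) penalty

def spring(child_map, dy0, dx0):
    """For every parent location, max over child locations of the child score
    minus a quadratic deformation cost (brute force; the paper uses a
    generalized distance transform to do this in linear time)."""
    out = np.full((H, W), -np.inf)
    ys, xs = np.mgrid[0:H, 0:W]                    # parent coordinates
    for cy in range(H):
        for cx in range(W):
            dy, dx = cy - ys - dy0, cx - xs - dx0
            cand = child_map[cy, cx] - quad * (dy ** 2 + dx ** 2)
            out = np.maximum(out, cand)
    return out

def upward_pass(i):
    """Leaf-to-root DP: score[t, y, x] for part i, already including the best
    placement and typing of all of i's descendants."""
    score = appearance[i].copy()
    for c in children[i]:
        child_score = upward_pass(c)               # (T, H, W) for child c
        msg = np.full((T, H, W), -np.inf)
        for ti in range(T):                        # parent type
            for tc in range(T):                    # child type
                passed = spring(child_score[tc], *anchor[c, tc]) + bias[c, ti, tc]
                msg[ti] = np.maximum(msg[ti], passed)
        score += msg
    return score

root_score = upward_pass(0)
t_star, y_star, x_star = np.unravel_index(root_score.argmax(), root_score.shape)
print("best root type/location:", t_star, (y_star, x_star),
      "score %.3f" % root_score[t_star, y_star, x_star])
```

Because both the spatial (spring) and co-occurrence relations live on the same tree, a single upward pass like this yields the globally optimal part locations and types, which is what makes the model fast at test time.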

BibTeX

@conference{Yang-2011-121216,
author = {Yi Yang and Deva Ramanan},
title = {Articulated Pose Estimation using Flexible Mixtures of Parts},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2011},
month = {June},
pages = {1385--1392},
}