
Parsing Occluded People

Golnaz Ghiasi, Yi Yang, Deva Ramanan, and Charless C. Fowlkes
Conference Paper, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2401-2408, June 2014

Abstract

Occlusion poses a significant difficulty for object recognition due to the combinatorial diversity of possible occlusion patterns. We take a strongly supervised, non-parametric approach to modeling occlusion by learning deformable models with many local part mixture templates using large quantities of synthetically generated training data. This allows the model to learn the appearance of different occlusion patterns, including figure-ground cues such as the shapes of occluding contours, as well as the co-occurrence statistics of occlusion between neighboring parts. The underlying part-mixture structure also allows the model to capture coherence of object support masks between neighboring parts and make compelling predictions of figure-ground-occluder segmentations. We test the resulting model on human pose estimation under heavy occlusion and find it produces improved localization accuracy.
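
To make the part-mixture idea concrete, the following toy Python sketch (not the authors' implementation; the part count, tree structure, candidate grid, and randomly generated "learned" weights are all hypothetical stand-ins) scores a tiny tree-structured model in which each part selects a location and a mixture type, with one type per part designated as "occluded". Parent-child type co-occurrence terms can reward neighboring parts sharing the occluded state, which is one way the coherence of occlusion patterns between parts can be captured; inference is exact max-sum dynamic programming over the tree.

import random

random.seed(0)

N_PARTS = 4                      # toy kinematic tree: part 0 is the root
PARENT  = [-1, 0, 1, 2]          # a simple chain, for clarity
N_TYPES = 3                      # e.g. two visible appearance types + one "occluded" type
OCCLUDED = N_TYPES - 1
LOCS = [(x, y) for x in range(3) for y in range(3)]   # tiny candidate location grid

# Hypothetical "learned" parameters: random stand-ins for trained weights.
appearance = {(p, t, l): random.uniform(-1, 1)
              for p in range(N_PARTS) for t in range(N_TYPES) for l in range(len(LOCS))}
# Co-occurrence bias between a child's type and its parent's type;
# a positive bias when both are occluded encourages occlusion to spread coherently.
cooccur = {(p, tc, tp): (0.5 if tc == tp == OCCLUDED else random.uniform(-0.5, 0.5))
           for p in range(1, N_PARTS) for tc in range(N_TYPES) for tp in range(N_TYPES)}
anchor = {p: (1, 0) for p in range(1, N_PARTS)}        # preferred offset from parent
DEF_W = 0.3                                            # deformation penalty weight

def deformation(p, lc, lp):
    """Quadratic penalty on the child's displacement from its anchor position."""
    dx = LOCS[lc][0] - LOCS[lp][0] - anchor[p][0]
    dy = LOCS[lc][1] - LOCS[lp][1] - anchor[p][1]
    return -DEF_W * (dx * dx + dy * dy)

def best_config():
    """Max-sum dynamic programming from leaves to root over (location, type) states."""
    children = {p: [c for c in range(N_PARTS) if PARENT[c] == p] for p in range(N_PARTS)}
    msg, argmax = {}, {}
    for c in range(N_PARTS - 1, 0, -1):                # leaves first (valid for the chain)
        msg[c], argmax[c] = {}, {}
        for lp in range(len(LOCS)):
            for tp in range(N_TYPES):
                best, arg = float("-inf"), None
                for lc in range(len(LOCS)):
                    for tc in range(N_TYPES):
                        s = (appearance[(c, tc, lc)]
                             + cooccur[(c, tc, tp)]
                             + deformation(c, lc, lp)
                             + sum(msg[g][(lc, tc)] for g in children[c]))
                        if s > best:
                            best, arg = s, (lc, tc)
                msg[c][(lp, tp)], argmax[c][(lp, tp)] = best, arg
    # Root: choose its own location and type, plus the children's messages.
    best, root_state = float("-inf"), None
    for l0 in range(len(LOCS)):
        for t0 in range(N_TYPES):
            s = appearance[(0, t0, l0)] + sum(msg[c][(l0, t0)] for c in children[0])
            if s > best:
                best, root_state = s, (l0, t0)
    # Backtrack down the chain to recover each part's chosen state.
    states, cur = [root_state], root_state
    for c in range(1, N_PARTS):
        cur = argmax[c][cur]
        states.append(cur)
    return best, states

score, states = best_config()
for p, (l, t) in enumerate(states):
    tag = "occluded" if t == OCCLUDED else f"type {t}"
    print(f"part {p}: loc {LOCS[l]}, {tag}")
print("total score:", round(score, 3))

In this sketch the occluded mixture type plays the same role as any other appearance type at inference time; what distinguishes it is the weights attached to it during training, which is why large quantities of synthetically occluded training data matter for learning both its appearance and its co-occurrence statistics.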

BibTeX

@conference{Ghiasi-2014-121194,
author = {Golnaz Ghiasi and Yi Yang and Deva Ramanan and Charless C. Fowlkes},
title = {Parsing Occluded People},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2014},
month = {June},
pages = {2401--2408},
}