N-Best Maximal Decoders for Part Models - Robotics Institute Carnegie Mellon University

N-Best Maximal Decoders for Part Models

D. Park and D. Ramanan
Conference Paper, Proceedings of (ICCV) International Conference on Computer Vision, pp. 2627–2634, November 2011

Abstract

We describe a method for generating N-best configurations from part-based models, ensuring that they do not overlap according to some user-provided definition of overlap. We extend previous N-best algorithms from the speech community to incorporate non-maximal suppression cues, such that pixel-shifted copies of a single configuration are not returned. We use approximate algorithms that perform nearly identically to their exact counterparts, but are orders of magnitude faster. Our approach outperforms standard methods for generating multiple object configurations in an image. We use our method to generate multiple pose hypotheses for the problem of human pose estimation from video sequences. We present quantitative results demonstrating that our framework significantly improves the accuracy of a state-of-the-art pose estimation algorithm.
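The core idea in the abstract — keep the N highest-scoring configurations while rejecting any candidate that overlaps an already-kept one under a user-provided overlap test — can be illustrated with a minimal sketch. This is not the paper's algorithm (which decodes part models efficiently via dynamic programming); it is a toy greedy selection over an explicit candidate list, with hypothetical names (`n_best_maximal`, `overlaps`), using 1-D positions as stand-in "configurations":

```python
# Simplified illustration (not the authors' decoder): greedily keep the
# N best-scoring candidates, suppressing any candidate that overlaps an
# already-kept one under a user-provided overlap predicate, so that
# pixel-shifted near-duplicates are not returned.

def n_best_maximal(candidates, n, overlaps):
    """candidates: list of (score, config) pairs.
    overlaps(a, b): True if configurations a and b count as duplicates."""
    kept = []
    for score, config in sorted(candidates, key=lambda c: c[0], reverse=True):
        # Keep only candidates that do not overlap anything already kept.
        if all(not overlaps(config, k) for _, k in kept):
            kept.append((score, config))
            if len(kept) == n:
                break
    return kept

# Toy example: a "configuration" is an integer pixel position, and two
# positions overlap if they lie within 2 pixels of each other.
cands = [(0.9, 10), (0.85, 11), (0.7, 30), (0.6, 31), (0.5, 50)]
best = n_best_maximal(cands, 3, lambda a, b: abs(a - b) <= 2)
# best -> [(0.9, 10), (0.7, 30), (0.5, 50)]
```

The pixel-shifted candidates at positions 11 and 31 are suppressed in favor of their higher-scoring neighbors, which is the behavior the non-maximal suppression cues in the paper are designed to enforce at scale.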

BibTeX

@conference{Park-2011-121212,
author = {D. Park and D. Ramanan},
title = {N-Best Maximal Decoders for Part Models},
booktitle = {Proceedings of (ICCV) International Conference on Computer Vision},
year = {2011},
month = {November},
pages = {2627--2634},
}