
Non-Rigid Object Alignment with a Mismatch Template Based on Exhaustive Local Search

Yang Wang, Simon Lucey, and Jeffrey Cohn
Conference Paper, Proceedings of (ICCV) International Conference on Computer Vision, October, 2007

Abstract

Non-rigid object alignment is especially challenging when only a single appearance template is available and the target and template images fail to match. Two sources of discrepancy between target and template are changes in illumination and non-rigid motion. Because most existing methods rely on a holistic representation for the alignment process, they require multiple training images to capture appearance variation. We developed a patch-based method that requires only a single appearance template of the object. Specifically, we fit the patch-based face model to an unseen image using an exhaustive local search and constrain the local warp updates within a global warping space. Our approach is not limited to intensity values or gradients, and therefore offers a natural framework for integrating multiple local features, such as filter responses, to increase robustness to large initialization error, illumination changes, and non-rigid deformations. This approach was evaluated experimentally on more than 100 subjects under multiple illumination conditions and facial expressions. In all the experiments, our patch-based method outperformed the holistic gradient descent method in terms of accuracy and robustness of feature alignment and image registration.
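
To make the alternation described in the abstract concrete, the sketch below shows one possible reading of it in Python: an exhaustive local search around each landmark patch, followed by a projection of the resulting displacements onto a global (PCA) shape basis so that local updates stay within a global warping space. This is not the authors' implementation; the function names, the SSD patch cost (the paper allows richer local features such as filter responses), and the PCA shape model are assumptions made for illustration only.

# Minimal illustrative sketch, not the paper's implementation.
# Assumptions: a single grayscale template with one patch per landmark,
# a PCA shape model (mean_shape, shape_basis), and SSD as the patch cost.
import numpy as np

def extract_patch(image, center, half):
    """Crop a square patch of side 2*half+1 centered at (x, y).
    Near image borders the slice may come back with the wrong shape;
    the caller checks for that and skips such candidates."""
    x, y = int(round(center[0])), int(round(center[1]))
    return image[y - half:y + half + 1, x - half:x + half + 1]

def exhaustive_local_search(target, template_patches, landmarks, half=5, radius=4):
    """For each landmark, scan every displacement in a (2*radius+1)^2 window
    and keep the one with the lowest SSD against the template patch."""
    updates = np.zeros_like(landmarks, dtype=float)
    for i, (x, y) in enumerate(landmarks):
        best_cost, best_dxy = np.inf, (0.0, 0.0)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                patch = extract_patch(target, (x + dx, y + dy), half)
                if patch.shape != template_patches[i].shape:
                    continue  # candidate window fell outside the image
                cost = np.sum((patch - template_patches[i]) ** 2)  # SSD cost
                if cost < best_cost:
                    best_cost, best_dxy = cost, (dx, dy)
        updates[i] = best_dxy
    return updates

def constrain_to_global_warp(landmarks, updates, mean_shape, shape_basis):
    """Project the locally optimal landmark positions onto the global
    (PCA) shape space so the update stays a valid global warp."""
    target_shape = (landmarks + updates).ravel() - mean_shape
    coeffs, *_ = np.linalg.lstsq(shape_basis, target_shape, rcond=None)
    return (mean_shape + shape_basis @ coeffs).reshape(-1, 2)

def align(target, template_patches, init_landmarks, mean_shape, shape_basis, n_iters=10):
    """Alternate exhaustive local search with global shape-space projection."""
    landmarks = init_landmarks.astype(float).copy()
    for _ in range(n_iters):
        updates = exhaustive_local_search(target, template_patches, landmarks)
        landmarks = constrain_to_global_warp(landmarks, updates, mean_shape, shape_basis)
    return landmarks

Because the inner loop compares raw patches rather than fitting a holistic appearance model, swapping the SSD cost for a distance between filter responses is a local change, which is the flexibility the abstract points to.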

BibTeX

@conference{Wang-2007-9840,
author = {Yang Wang and Simon Lucey and Jeffrey Cohn},
title = {Non-Rigid Object Alignment with a Mismatch Template Based on Exhaustive Local Search},
booktitle = {Proceedings of (ICCV) International Conference on Computer Vision},
year = {2007},
month = {October},
keywords = {Non-Rigid Object Alignment, Exhaustive Local Search},
}