Active Conditional Models - Robotics Institute Carnegie Mellon University

Active Conditional Models

Ying Chen and Fernando De la Torre
Conference Paper, Proceedings of IEEE International Conference on Automatic Face & Gesture Recognition (FG '11), pp. 137-142, March 2011

Abstract

Matching images with large geometric and iconic changes (e.g., faces under different poses and facial expressions) is an open research problem in computer vision. There are two fundamental approaches to solving the correspondence problem in images: feature-based matching and model-based matching. Feature-based matching relies on the assumption that features are stable across viewpoints and iconic changes, and it uses unary, pair-wise, or higher-order constraints as a measure of correspondence. On the other hand, model-based approaches such as Active Shape Models (ASMs) align appearance features with respect to a model learned from hand-labeled samples. However, model-based approaches typically suffer from a lack of generalization to untrained situations. This paper proposes Active Conditional Models (ACMs), which combine the benefits of both approaches. An ACM learns the conditional relation (in both shape and appearance) between a reference view of the object and other viewpoints or iconic changes. The ACM generalizes better to untrained situations because it has fewer parameters (and is therefore less prone to overfitting) and directly learns variations w.r.t. a reference image (similar to feature-based methods). Several examples in the context of facial feature matching across pose and expression illustrate the benefits of ACMs.
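The core idea of learning a conditional relation from a reference view to another view can be illustrated with a minimal sketch. The code below is not the authors' ACM formulation; it is a simplified stand-in that fits a ridge-regularized linear map from reference-view landmark shapes to target-view shapes on synthetic paired data (all names and data are hypothetical), mirroring the abstract's point that a single conditional mapping has fewer parameters than independent per-view models.

```python
import numpy as np

# Illustrative sketch only, NOT the ACM method from the paper:
# learn a linear conditional mapping s_tgt ≈ s_ref @ W between
# landmark shapes in a reference view and a second view.
rng = np.random.default_rng(0)

n_train, n_landmarks = 200, 68          # hypothetical training set size / landmark count
dim = 2 * n_landmarks                    # each shape: (x, y) per landmark, flattened

# Synthetic paired shapes: reference view and a (noisy) linearly related target view.
ref_shapes = rng.normal(size=(n_train, dim))
true_map = rng.normal(scale=0.1, size=(dim, dim))
tgt_shapes = ref_shapes @ true_map + rng.normal(scale=0.01, size=(n_train, dim))

# Ridge-regularized least squares for the conditional map W.
lam = 1e-3
A = ref_shapes
W = np.linalg.solve(A.T @ A + lam * np.eye(dim), A.T @ tgt_shapes)

# Mean squared reconstruction error of the learned conditional mapping.
pred = ref_shapes @ W
err = float(np.mean((pred - tgt_shapes) ** 2))
```

Given a new reference-view shape, `shape @ W` then predicts the corresponding shape in the other view; the actual paper conditions on both shape and appearance rather than shape alone.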

BibTeX

@conference{Chen-2011-120924,
author = {Ying Chen and Fernando De la Torre},
title = {Active Conditional Models},
booktitle = {Proceedings of IEEE International Conference on Automatic Face \& Gesture Recognition (FG '11)},
year = {2011},
month = {March},
pages = {137--142},
}