Face View Synthesis Across Large Angles

Workshop Paper, ICCV '05 2nd International Workshop on Analysis and Modeling of Faces and Gestures (AMFG '05), pp. 364 - 376, October, 2005

Abstract

Pose variation, especially large out-of-plane rotation, makes face recognition a difficult problem. In this paper, we propose an algorithm that uses a single input image to accurately synthesize an image of the same person in a different pose. We represent the two poses by stacking their information (pixels and feature locations) into a combined feature space. A test vector then consists of a known part, corresponding to the input image, and a missing part, corresponding to the image to be synthesized. We solve for the missing part by maximizing the test vector's probability. This approach combines the "distance from feature space" (DFFS) and the "distance in feature space" (DIFS): maximizing the test vector's probability amounts to minimizing a weighted sum of these two distances. Our approach requires neither 3D training data nor a 3D model, and does not require correspondence between different poses. The algorithm is computationally efficient, taking only 4-5 seconds to generate a face. Experimental results show that our approach produces more accurate results than the commonly used linear-object-class approach. This technique can help face recognition overcome the pose-variation problem.
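The stack-and-complete idea can be sketched with a toy PCA model. Everything here is illustrative (random data, made-up dimensions, a simple eigenbasis in place of the paper's probabilistic model): two pose vectors are stacked, a linear subspace is fit to the stacked training set, and the missing half of a test vector is recovered by minimizing a weighted sum of a DFFS-style reconstruction error on the known half and a DIFS-style Mahalanobis penalty on the coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, not the paper's): the combined feature space
# stacks the known pose's pixels with the missing pose's pixels.
d_known, d_missing, k = 40, 40, 5
n = 200

# Fake "training faces" with correlated halves, so the learned basis
# actually couples the two poses.
latent = rng.normal(size=(n, k))
A = rng.normal(size=(k, d_known + d_missing))
X = latent @ A + 0.05 * rng.normal(size=(n, d_known + d_missing))

mu = X.mean(axis=0)
_, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
B = Vt[:k].T                      # PCA basis of the combined feature space
lam = (S[:k] ** 2) / (n - 1)      # per-component variances

B_k, B_m = B[:d_known], B[d_known:]
mu_k, mu_m = mu[:d_known], mu[d_known:]

def synthesize(x_known, weight=1.0):
    """Fill in the missing-pose half of a stacked test vector by minimizing
    ||x_known - mu_k - B_k c||^2  (DFFS-style reconstruction error)
    + weight * sum_i c_i^2 / lam_i (DIFS-style Mahalanobis penalty)."""
    r = x_known - mu_k
    M = B_k.T @ B_k + weight * np.diag(1.0 / lam)
    c = np.linalg.solve(M, B_k.T @ r)   # closed-form ridge-like solution
    return mu_m + B_m @ c

# Synthesize the hidden half of a sample drawn from the same toy model.
x = latent[0] @ A
x_hat_missing = synthesize(x[:d_known])
```

With a subspace that captures both poses jointly, completing the known half pins down the coefficients, and those coefficients generate the unseen pose; the `weight` knob trades reconstruction fidelity against staying near the training distribution.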

BibTeX

@inproceedings{Ni-2005-9320,
author = {Jiang Ni and Henry Schneiderman},
title = {Face View Synthesis Across Large Angles},
booktitle = {Proceedings of ICCV '05 2nd International Workshop on Analysis and Modeling of Faces and Gestures (AMFG '05)},
year = {2005},
month = {October},
pages = {364--376},
}