
Face Poser: Interactive Modeling of 3D Facial Expressions Using Model Priors

Manfred Lau, Jinxiang Chai, Ying-Qing Xu, and Heung-Yeung Shum
Conference Paper, Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA '07), pp. 161–170, August 2007

Abstract

In this paper, we present an intuitive interface for interactively posing 3D facial expressions. The user can create and edit facial expressions by drawing freeform strokes or by directly dragging facial points in 2D screen space. Designing such an interface for face modeling and editing is challenging because many unnatural facial expressions might be consistent with the ambiguous user input. The system automatically learns a model prior from a prerecorded facial expression database and uses it to remove the ambiguity. We formulate the problem in a maximum a posteriori (MAP) framework by combining the prior with user-defined constraints. Maximizing the posterior allows us to generate an optimal and natural facial expression that satisfies the user-defined constraints. Our system is interactive, and it is simple and easy to use: a first-time user can learn to use the system and start creating a variety of natural face models within minutes. We evaluate the performance of our approach with cross-validation tests and by comparing it with alternative techniques.
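As a rough sketch of the MAP formulation described in the abstract (the symbols below are our own illustration, not notation taken from the paper), let x denote the 3D facial expression to be estimated and c the user-defined stroke or point constraints. Combining the learned model prior p(x) with the constraint likelihood p(c | x) and maximizing the posterior can be written as

\hat{x} = \arg\max_{x} \; p(x \mid c) = \arg\max_{x} \; p(c \mid x)\, p(x),

which is equivalent to minimizing the negative log-posterior -\ln p(c \mid x) - \ln p(x) over candidate expressions, so the optimal pose both satisfies the user's constraints and remains close to the natural expressions captured in the prerecorded database.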

Notes
Please see the Face Poser page for more information.

BibTeX

@conference{Lau-2007-9794,
author = {Manfred Lau and Jinxiang Chai and Ying-Qing Xu and Heung-Yeung Shum},
title = {Face Poser: Interactive Modeling of 3D Facial Expressions Using Model Priors},
booktitle = {Proceedings of ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA '07)},
year = {2007},
month = {August},
pages = {161--170},
}