
Facial Action Transfer with Personalized Bilinear Regression

Conference Paper, Proceedings of the European Conference on Computer Vision (ECCV), pp. 144-158, October 2012

Abstract

Facial Action Transfer (FAT) has recently attracted much attention in computer vision due to its diverse applications in the movie industry, computer games, and privacy protection. The goal of FAT is to “clone” the facial actions from the videos of one person (source) to another person (target). In this paper, we assume that we have a video of the source person but only one frontal image of the target person. Most successful methods for FAT require a training set with annotated correspondence between expressions of different subjects, sometimes including many images of the target subject. However, labeling expressions is time-consuming and error prone (i.e., it is difficult to capture the same intensity of the expression across people). Moreover, in many applications it is not realistic to have many labeled images of the target. This paper proposes a method to learn a personalized facial model that can produce photo-realistic person-specific facial actions (e.g., synthesize wrinkles for smiling) from only a neutral image of the target person. More importantly, our learning method does not need an explicit correspondence of expressions across subjects. Experiments on the Cohn-Kanade and the RU-FACS databases show the effectiveness of our approach to generate video-realistic images of the target person driven by spontaneous facial actions of the source. Moreover, we illustrate applications of FAT to face de-identification.
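To make the core idea of the title concrete, the sketch below shows generic bilinear regression of the kind the abstract alludes to: an output (e.g., expression-induced landmark offsets) is modeled as a bilinear function of a person code (derived from the target's neutral face) and an action code (derived from the source's motion). This is a minimal NumPy illustration on synthetic data, not the paper's implementation; all dimensions, variable names, and features here are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper):
dp, da, dout = 8, 5, 6        # person code, action code, output (e.g. landmark offsets)
n_people, n_actions = 20, 15  # synthetic training set

# Synthetic ground-truth bilinear map: y_k = p^T W_k a for each output dim k
W_true = rng.normal(size=(dout, dp, da))

P = rng.normal(size=(n_people, dp))   # person features (stand-in for neutral-face codes)
A = rng.normal(size=(n_actions, da))  # action features (stand-in for source motion codes)

# Design matrix of Kronecker products: p^T W_k a = vec(W_k) . kron(p, a)
X = np.stack([np.kron(p, a) for p in P for a in A])   # (n_pairs, dp*da)
Y = np.stack([np.einsum('kij,i,j->k', W_true, p, a)
              for p in P for a in A])                 # (n_pairs, dout)
Y += 0.01 * rng.normal(size=Y.shape)                  # observation noise

# Least-squares fit of the bilinear weights, one column per output dimension
W_flat, *_ = np.linalg.lstsq(X, Y, rcond=None)        # (dp*da, dout)
W_hat = W_flat.T.reshape(dout, dp, da)

# Transfer: apply a source action code to a new target person's code
p_target = rng.normal(size=dp)   # would come from the target's single neutral image
a_source = rng.normal(size=da)   # would come from the source video frame
y_pred = np.einsum('kij,i,j->k', W_hat, p_target, a_source)
print(y_pred.round(3))
```

Note that training only needs (person, action) pairs with observed outputs; it does not require matching the same labeled expression across subjects, which is the property the abstract emphasizes.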

BibTeX

@conference{Huang-2012-120906,
author = {D. Huang and F. De la Torre},
title = {Facial Action Transfer with Personalized Bilinear Regression},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2012},
month = {October},
pages = {144--158},
}