Data-Driven Model for Spontaneous Smiles
Abstract
We present a generative model for spontaneous smiles that preserves their dynamics and can thus be used to generate genuine animations. We use a high-resolution motion capture dataset of spontaneous smiles to capture the accurate temporal information present in spontaneous smiles. The smile model consists of data-driven interpolation functions generated from a Principal Component Analysis model and two blendshapes, neutral and peak. We augment the model for facial deformations with plausible, correlated head motions as observed in the data. The model was validated in two perceptual experiments that compared animations generated from the model, animations generated directly from motion capture data, and animations produced with traditional blendshape-based approaches using ease-in/ease-out interpolation functions. Animations with model interpolation functions were rated as more genuine than animations with ease-in/ease-out interpolation functions for different computer-generated characters. Our results suggest that data-driven interpolation functions accompanied by realistic head motions can be used by animators to generate more genuine smiles than animations with generic ease-in/ease-out interpolation functions.
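The core idea of interpolating between a neutral and a peak blendshape with a weight curve can be sketched as follows. This is a minimal illustration, not the paper's actual model: the function names, the toy vertex data, and the polynomial ease-in/ease-out curve are assumptions, and the data-driven weight curves in the paper are derived from a PCA of motion capture data that is not reproduced here.

```python
import numpy as np

def ease_in_out(t):
    """Generic ease-in/ease-out (smoothstep) weight curve on t in [0, 1],
    the baseline the paper compares against."""
    return 3.0 * t**2 - 2.0 * t**3

def blend(neutral, peak, w):
    """Linear blendshape interpolation: weight w = 0 gives the neutral
    mesh, w = 1 gives the peak-smile mesh."""
    return (1.0 - w) * neutral + w * peak

# Toy stand-ins for the neutral and peak-smile vertex positions
# (real meshes would have thousands of vertices).
neutral = np.zeros((4, 3))
peak = np.ones((4, 3))

# Sample the smile onset at a few time steps; a data-driven model would
# replace ease_in_out with a curve learned from motion capture.
times = np.linspace(0.0, 1.0, 5)
frames = [blend(neutral, peak, ease_in_out(t)) for t in times]
```

The comparison in the paper amounts to swapping the `ease_in_out` weight curve for one learned from spontaneous-smile motion capture while keeping the same two endpoint blendshapes.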
BibTeX
@conference{Trutoiu-2015-119866,
author = {Laura C. Trutoiu and Nancy Pollard and Jeffrey F. Cohn and Jessica K. Hodgins},
title = {Data-Driven Model for Spontaneous Smiles},
booktitle = {Proceedings of the 28th International Conference on Computer Animation and Social Agents (CASA '15)},
year = {2015},
month = {May},
}