Emphatic Visual Speech Synthesis
Abstract
The synthesis of talking heads has been a flourishing research area over the last few years. Since human beings have an uncanny ability to read people's faces, most related applications (e.g., advertising, video-teleconferencing) require highly realistic photometric and behavioral synthesis of faces. This paper proposes a person-specific facial synthesis framework that achieves high realism and includes a novel way to control visual emphasis (i.e., the level of exaggeration of the visible articulatory movements of the vocal tract). There are three main contributions: a geodesic interpolation with visual unit selection, a parameterization of visual emphasis, and the design of minimum-size corpora. Perceptual tests with human subjects confirm the high realism of the results, which achieve perceptual scores similar to those of real samples. Furthermore, a statistical interaction is found between the visual emphasis level and two communication styles.
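As a rough illustration only (not the authors' algorithm), interpolating along a geodesic between two selected visual units can be sketched as spherical linear interpolation of their appearance-parameter vectors on the unit hypersphere; the vector dimensionality, variable names, and placeholder data below are assumptions for demonstration purposes.

```python
import numpy as np

def slerp(p0, p1, t):
    """Spherical linear interpolation between unit-norm appearance
    vectors p0 and p1 at parameter t in [0, 1]. The blend follows the
    geodesic of the unit hypersphere rather than a straight chord."""
    p0 = p0 / np.linalg.norm(p0)
    p1 = p1 / np.linalg.norm(p1)
    omega = np.arccos(np.clip(np.dot(p0, p1), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return p0  # vectors (nearly) coincide; nothing to interpolate
    return (np.sin((1.0 - t) * omega) * p0 + np.sin(t * omega) * p1) / np.sin(omega)

# Hypothetical usage: blend between two selected visual units,
# e.g., appearance parameters of two mouth configurations.
unit_a = np.random.randn(64)  # placeholder appearance vector
unit_b = np.random.randn(64)
frames = [slerp(unit_a, unit_b, t) for t in np.linspace(0.0, 1.0, 10)]
```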
BibTeX
@article{Melenchon-2009-120872,
  author  = {J. Melenchon and F. De la Torre and E. Martinez and J. A. Montero},
  title   = {Emphatic Visual Speech Synthesis},
  journal = {IEEE Transactions on Audio, Speech, and Language Processing},
  year    = {2009},
  month   = {March},
  volume  = {17},
  number  = {3},
  pages   = {459--468},
}