5:00 pm to 12:00 am
Event Location: NSH 3002
Abstract: Human observers are particularly adept at detecting anomalies in realistic computer-generated (CG) facial animations. With increased demand for CG characters in education and entertainment applications, it is important to animate accurate, realistic facial expressions. In this thesis proposal, we develop a framework to explore representations of two key facial expressions: blinks and smiles. We argue that data-driven models of facial deformations (in both space and time) are needed to create realistic animations. We start by recording large collections of high-resolution dynamic expressions through video and motion capture technology. We then build expression-specific models of the temporal and spatial dynamic properties of the data. Finally, we assess whether these models are perceived as more natural than the simplified models found in the literature.
In the first project, we build a generative model of the characteristic dynamics of blinks: a fast closing of the eyelids followed by a slow opening. In the second project, we propose to build models for a wide range of smile expressions that carry different perceptual meanings. In the last project, we investigate how blinks synchronize with the start and end of spontaneous smiles. The timing of blinks relative to smiles may be relevant to creating expressive animations and facilitating communication with avatars. Our work is directly applicable to current methods in animation. For example, we illustrate how our models can be used in the popular framework of blendshape animation to increase realism while keeping the complexity of the system low.
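For readers unfamiliar with the blendshape framework mentioned above, the core idea is that a face mesh is animated as a weighted sum of displacement shapes added to a neutral mesh. The sketch below is a minimal illustration of that linear model only; the function and variable names are illustrative and not taken from the proposal:

```python
# Minimal blendshape sketch: final mesh = neutral mesh + weighted sum of
# per-shape vertex displacements ("deltas"). Names here are illustrative.
import numpy as np

def blend(neutral, deltas, weights):
    """Combine a neutral mesh with weighted blendshape deltas.

    neutral: (V, 3) array of vertex positions
    deltas:  (K, V, 3) array of per-shape vertex offsets
    weights: (K,) array of blend weights, typically in [0, 1]
    """
    # Sum over the K shapes: weights[k] * deltas[k] for each vertex.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: one vertex, two hypothetical shapes.
neutral = np.zeros((1, 3))
deltas = np.array([[[0.0, -1.0, 0.0]],   # e.g. eyelid moves down
                   [[0.5,  0.0, 0.0]]])  # e.g. mouth corner moves out
mesh = blend(neutral, deltas, np.array([1.0, 0.5]))
print(mesh)
```

A data-driven model of expression dynamics, as proposed here, would drive the weight trajectories over time rather than leaving them to hand-keyed interpolation.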
Committee: Jessica K. Hodgins (Chair)
Nancy Pollard
Jeffrey F. Cohn
Carol O’Sullivan, Trinity College Dublin