Detection, Tracking, and Classification of Action Units in Facial Expression

Jenn-Jier James Lien, Takeo Kanade, Jeffrey Cohn, and C. Li
Journal Article, Robotics and Autonomous Systems, Vol. 31, No. 3, pp. 131-146, May 2000

Abstract

Most current work on automated facial expression analysis attempts to recognize a small set of prototypic expressions, such as joy and fear. Such prototypic expressions, however, occur infrequently, and human emotions and intentions are communicated more often by changes in one or two discrete features. To capture the full range of facial expression, detection, tracking, and classification of fine-grained changes in facial features are needed. We developed the first version of a computer vision system that is sensitive to subtle changes in the face. The system includes three modules to extract feature information: dense-flow extraction using a wavelet motion model, facial-feature tracking, and edge and line extraction. The feature information thus extracted is fed to discriminant classifiers or hidden Markov models that classify it into action units of the Facial Action Coding System (FACS), the descriptive system for coding fine-grained changes in facial expression. The system was tested on image sequences from 100 male and female subjects of varied ethnicity. Agreement with manual FACS coding was strong for the results based on dense-flow extraction and facial-feature tracking, and strong to moderate for edge and line extraction.
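To make the classification stage concrete, below is a minimal sketch of how an observation sequence from a tracked facial feature might be scored against per-action-unit hidden Markov models, with the highest-likelihood model selected. This is an illustration only, not the paper's implementation: the three-state (neutral, onset, apex) structure, the quantized motion symbols, the AU names, and every probability value are hypothetical placeholders.

```python
import numpy as np

def log_forward(obs, log_pi, log_A, log_B):
    """Log-domain forward algorithm: returns log P(obs | model).

    obs    : sequence of integer observation symbols
    log_pi : (S,)   log initial-state probabilities
    log_A  : (S, S) log transition probabilities, A[i, j] = P(j | i)
    log_B  : (S, V) log emission probabilities over V symbols
    """
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # alpha_t(j) = B[j, o_t] + logsumexp_i(alpha_{t-1}(i) + A[i, j])
        alpha = log_B[:, o] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

def make_model(A, B):
    """Package a 3-state model (neutral -> onset -> apex) in log space."""
    pi = np.array([1.0, 0.0, 0.0]) + 1e-12  # always start in the neutral state
    return (np.log(pi),
            np.log(np.asarray(A) + 1e-12),
            np.log(np.asarray(B) + 1e-12))

# Two toy AU models over 3 quantized feature-displacement symbols
# (0 = still, 1 = upward motion, 2 = downward motion). Values are made up.
models = {
    "AU1 (inner brow raiser)": make_model(
        A=[[0.6, 0.4, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]],
        B=[[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.6, 0.3, 0.1]]),
    "AU4 (brow lowerer)": make_model(
        A=[[0.6, 0.4, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]],
        B=[[0.8, 0.1, 0.1], [0.1, 0.1, 0.8], [0.6, 0.1, 0.3]]),
}

# Quantized motion of a tracked brow point: still, then sustained upward motion.
sequence = [0, 0, 1, 1, 1, 0]

scores = {au: log_forward(sequence, *m) for au, m in models.items()}
print(max(scores, key=scores.get))  # -> "AU1 (inner brow raiser)"
```

In this scheme each candidate action unit gets its own HMM, and a sequence of quantized feature measurements is labeled with whichever model assigns it the highest likelihood; the same scoring pattern would apply whether the observations come from dense flow, tracked feature points, or edge and line features.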

BibTeX

@article{Lien-2000-14963,
author = {Jenn-Jier James Lien and Takeo Kanade and Jeffrey Cohn and C. Li},
title = {Detection, Tracking, and Classification of Action Units in Facial Expression},
journal = {Robotics and Autonomous Systems},
year = {2000},
month = {May},
volume = {31},
number = {3},
pages = {131--146},
}