Automated Facial Expression Recognition Based on FACS Action Units

Jenn-Jier James Lien, Takeo Kanade, Jeffrey Cohn, and Ching-Chung Li
Conference Paper, Proceedings of the 3rd IEEE International Conference on Automatic Face & Gesture Recognition (FG '98), pp. 390-395, April 1998

Abstract

Automated recognition of facial expression is an important addition to computer vision research because of its relevance to the study of psychological phenomena and the development of human-computer interaction (HCI). We developed a computer vision system that automatically recognizes individual action units or action unit combinations in the upper face using hidden Markov models (HMMs). Our approach to facial expression recognition is based on the Facial Action Coding System (FACS), which separates expressions into upper and lower face action. We use three approaches to extract facial expression information: (1) facial feature point tracking; (2) dense flow tracking with principal component analysis (PCA); and (3) high gradient component detection (i.e., furrow detection). The recognition results of the upper face expressions using feature point tracking, dense flow tracking, and high gradient component detection are 85%, 93%, and 85%, respectively.
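
The paper itself does not include code; the following is a minimal sketch of the dense-flow branch of such a pipeline, under assumptions not taken from the paper: synthetic random "flow" features stand in for real dense optical flow, dimensionality reduction uses scikit-learn's PCA, and one Gaussian HMM per action unit (combination) is trained with the third-party hmmlearn library, with recognition by maximum log-likelihood.

# Sketch (not the authors' implementation) of the dense flow + PCA + HMM idea above.
# Assumptions: synthetic data in place of dense optical flow; hmmlearn and
# scikit-learn provide the HMM and PCA models.
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn import hmm

rng = np.random.default_rng(0)
ACTION_UNITS = ["AU1+2", "AU4", "AU1+4"]   # example upper-face AU combinations
N_FLOW_DIMS = 200                          # flattened dense-flow vector per frame
N_PCA_DIMS = 10                            # reduced dimensionality after PCA

def synthetic_sequences(n_seq, seq_len, offset):
    """Stand-in for per-frame dense-flow feature sequences of one AU class."""
    return [offset + rng.normal(size=(seq_len, N_FLOW_DIMS)) for _ in range(n_seq)]

# Build a toy training set: a few flow sequences per action unit combination.
train = {au: synthetic_sequences(5, 20, offset=i) for i, au in enumerate(ACTION_UNITS)}

# Fit PCA on all training frames, then project every sequence into the subspace.
all_frames = np.vstack([seq for seqs in train.values() for seq in seqs])
pca = PCA(n_components=N_PCA_DIMS).fit(all_frames)

# Train one Gaussian HMM per action unit on its PCA-projected sequences.
models = {}
for au, seqs in train.items():
    projected = [pca.transform(seq) for seq in seqs]
    X = np.vstack(projected)
    lengths = [len(p) for p in projected]
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    models[au] = model

# Recognition: score a new sequence under each HMM and pick the most likely AU.
test_seq = pca.transform(synthetic_sequences(1, 20, offset=1)[0])
scores = {au: m.score(test_seq) for au, m in models.items()}
print("Recognized:", max(scores, key=scores.get))

With real data, the synthetic sequences would be replaced by per-frame dense-flow (or feature-point displacement) vectors extracted from image sequences of the upper face.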

BibTeX

@conference{Lien-1998-14611,
  author    = {Jenn-Jier James Lien and Takeo Kanade and Jeffrey Cohn and Ching-Chung Li},
  title     = {Automated Facial Expression Recognition Based on FACS Action Units},
  booktitle = {Proceedings of 3rd IEEE International Conference on Automatic Face \& Gesture Recognition (FG '98)},
  year      = {1998},
  month     = {April},
  pages     = {390--395},
}