Unsupervised Discovery of Facial Events
Abstract
Automatic facial image analysis has been a long-standing research problem in computer vision. A key component of facial image analysis, one that largely conditions the success of subsequent algorithms (e.g., facial expression recognition), is defining a vocabulary of possible dynamic facial events. To date, that vocabulary has come from the anatomically based Facial Action Coding System (FACS) or from more subjective approaches (i.e., emotion-specified expressions). The aim of this paper is to discover facial events directly from video of naturally occurring facial behavior, without recourse to FACS or other labeling schemes. To discover facial events, we propose a novel temporal clustering algorithm, Aligned Cluster Analysis (ACA), and a multi-subject correspondence algorithm for matching expressions. We use a variety of video sources: posed facial behavior (Cohn-Kanade database), unscripted facial behavior (RU-FACS database), and video of infants. ACA achieved moderate intersystem agreement with manual FACS coding and proved informative as a visualization/summarization tool.
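The abstract does not detail how ACA works, so the sketch below is only a generic illustration of temporal clustering of facial behavior, not the authors' method: it cuts a feature time series into fixed-length windows, computes dynamic-time-warping (DTW) distances between windows, and groups them with k-medoids. The synthetic features, window length, and number of clusters are arbitrary assumptions for demonstration; ACA itself is a more integrated formulation than this fixed-segmentation baseline.

# Illustrative sketch (not the paper's ACA): segment-level temporal clustering
# of a facial-feature time series using DTW distances and k-medoids.
import numpy as np

def dtw(a, b):
    """Dynamic-time-warping distance between two sequences of feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def k_medoids(dist, k, iters=20, seed=0):
    """Plain k-medoids on a precomputed distance matrix; returns a label per item."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(dist), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = np.array([
            np.flatnonzero(labels == c)[
                np.argmin(dist[np.ix_(labels == c, labels == c)].sum(axis=1))
            ] if np.any(labels == c) else medoids[c]
            for c in range(k)
        ])
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return np.argmin(dist[:, medoids], axis=1)

# Toy data: T frames of d-dimensional facial features (e.g., tracked shape
# parameters), cut into fixed-length windows; win and k are arbitrary choices.
T, d, win, k = 300, 10, 30, 3
frames = np.random.rand(T, d)
segments = [frames[s:s + win] for s in range(0, T - win + 1, win)]
D = np.array([[dtw(si, sj) for sj in segments] for si in segments])
print(k_medoids(D, k))  # cluster label per temporal segment

In this toy setup each label would correspond to a recurring "facial event" type; the paper's contribution is to discover such events without the fixed windowing or manual FACS labels assumed here.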
BibTeX
@conference{Zhou-2010-10483,
  author    = {Feng Zhou and Fernando De la Torre Frade and Jeffrey Cohn},
  title     = {Unsupervised Discovery of Facial Events},
  booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
  year      = {2010},
  month     = {June},
  pages     = {2574--2581},
}