Action Unit detection with Segment-based SVMs
Abstract
Automatic facial action unit (AU) detection from video is a long-standing problem in computer vision. Two main approaches have been pursued: (1) static modeling, typically posed as a discriminative classification problem in which each video frame is evaluated independently; and (2) temporal modeling, in which frames are segmented into sequences and typically modeled with a variant of dynamic Bayesian networks. We propose a segment-based approach, kSeg-SVM, that incorporates the benefits of both approaches while avoiding their limitations. kSeg-SVM is a temporal extension of the spatial bag-of-words (BoW) model. It is trained within a structured-output SVM framework that formulates AU detection as the problem of detecting temporal events in a time series of visual features. Each segment is modeled by a variant of the BoW representation with soft assignment of the words based on similarity. Our framework has several benefits for AU detection: (1) both dependencies between features and the length of action units are modeled; (2) all possible segments of the video may be used for training; and (3) no assumptions are required about the underlying structure of the action unit events (e.g., i.i.d.). Our algorithm finds the best k-or-fewer segments that maximize the SVM score. Experimental results suggest that the proposed method outperforms state-of-the-art static methods for AU detection.
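To make the two core ingredients concrete, below is a minimal illustrative sketch (not the authors' implementation) of (a) a soft-assignment BoW histogram over a temporal segment, where each frame's feature contributes to every codebook word in proportion to a Gaussian similarity, and (b) a brute-force search for the single segment whose histogram maximizes a linear SVM score. All names (`soft_bow`, `best_segment`), the Gaussian similarity kernel, and the segment-length bounds are assumptions chosen for illustration; the paper's full method searches for the best k-or-fewer segments under a structured-output objective.

```python
import numpy as np

def soft_bow(frames, codebook, sigma=1.0):
    """Soft-assignment bag-of-words histogram for one temporal segment.

    frames:   (T, d) array of per-frame visual features.
    codebook: (V, d) array of codebook words.
    Each frame is assigned to all words with Gaussian-similarity weights,
    weights are normalized per frame, then summed and renormalized.
    """
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # (T, V) similarities
    w /= w.sum(axis=1, keepdims=True)           # soft assignment per frame
    hist = w.sum(axis=0)
    return hist / hist.sum()

def best_segment(frames, codebook, svm_w, bias, min_len=2, max_len=10):
    """Exhaustively score every segment [s, e) with a linear SVM on its
    soft-BoW histogram; return the highest-scoring segment and its score.
    (Illustrative k=1 case of the best-k-segments search.)"""
    T = len(frames)
    best_seg, best_score = None, -np.inf
    for s in range(T):
        for e in range(s + min_len, min(T, s + max_len) + 1):
            score = svm_w @ soft_bow(frames[s:e], codebook) + bias
            if score > best_score:
                best_seg, best_score = (s, e), score
    return best_seg, best_score
```

With a linear SVM weight vector that rewards mass on an "event" codeword, the search recovers the planted event segment in a synthetic time series; the full method generalizes this to k-or-fewer segments with structured training of the weights.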
BibTeX
@conference{Simon-2010-120945,
author = {Tomas Simon and Minh Hoai Nguyen and Fernando De la Torre and Jeffrey F. Cohn},
title = {Action Unit detection with Segment-based SVMs},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2010},
month = {June},
pages = {2737--2744},
}