Person-Independent Facial Expression Detection using Constrained Local Models - Robotics Institute Carnegie Mellon University

Person-Independent Facial Expression Detection using Constrained Local Models

S. W. Chew, P. Lucey, S. Lucey, J. Saragih, J. F. Cohn, and S. Sridharan
Workshop Paper, FG '11 IEEE Workshop on Facial Expression Recognition and Analysis Challenge, pp. 915-920, March, 2011

Abstract

In automatic facial expression detection, very accurate registration is desired, and this can be achieved via a deformable-model approach that fits a dense mesh of 60-70 points to the face, such as an active appearance model (AAM). However, for applications where manually labeling frames is prohibitive, AAMs do not work well because they do not generalize well to unseen subjects. A coarser approach is therefore taken for person-independent facial expression detection, in which only a few key features (such as the face and eyes) are tracked using a Viola-Jones-type detector. The tracked image is normally post-processed with a linear bank of filters to encode shift and illumination invariance. Recently, it was shown that this preprocessing step is of no benefit once close-to-ideal registration has been obtained. In this paper, we present a system based on the Constrained Local Model (CLM) method, a generic (person-independent) face alignment algorithm that achieves high accuracy. We compare these results against LBP feature extraction on the CK+ and GEMEP-FERA datasets.
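To make the LBP baseline mentioned above concrete, the following is a minimal sketch (not the paper's implementation) of the basic 3x3 local binary pattern operator with NumPy: each pixel's eight neighbors are thresholded against the center and the resulting bits are packed into an 8-bit code, whose normalized histogram serves as the texture descriptor.

```python
import numpy as np

def lbp_8neighbor(img):
    """Basic 3x3 LBP: threshold each pixel's 8 neighbors against the
    center pixel and pack the comparison bits into an 8-bit code.
    Border pixels are skipped for simplicity."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]  # center pixels
    # neighbor offsets, clockwise from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes, usable as a feature vector."""
    codes = lbp_8neighbor(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

In practice the face region is divided into a grid of cells, a histogram is computed per cell, and the histograms are concatenated into the final descriptor; variants such as uniform or rotation-invariant LBP change only the code-to-bin mapping.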

BibTeX

@workshop{Chew-2011-121071,
  author = {S. W. Chew and P. Lucey and S. Lucey and J. Saragih and J. F. Cohn and S. Sridharan},
  title = {Person-Independent Facial Expression Detection using Constrained Local Models},
  booktitle = {Proceedings of FG '11 IEEE Workshop on Facial Expression Recognition and Analysis Challenge},
  year = {2011},
  month = {March},
  pages = {915-920},
}