Deep Learning for Facial Action Unit Detection Under Large Head Poses - Robotics Institute Carnegie Mellon University


Zoltán Tősér, Laszlo A. Jeni, András Lőrincz, and Jeffrey F. Cohn
Workshop Paper, ECCV '16 Workshops: W09 – ChaLearn Looking at People Workshop on Apparent Personality Analysis and First Impressions Challenge, pp. 359–371, October, 2016

Abstract

Facial expression communicates emotion, intention, and physical state, and regulates interpersonal behavior. Automated face analysis (AFA) for the detection, synthesis, and understanding of facial expression is a vital focus of basic research, with applications in behavioral science, mental and physical health and treatment, marketing, and human-robot interaction, among other domains. In previous work, facial action unit (AU) detection degraded seriously when head orientation exceeded 15° to 20°. To achieve reliable AU detection over a wider range of head poses, we used 3D information to augment video data and a deep learning approach to feature selection and AU detection. Source videos were from the BP4D database (n = 41) and the FERA test set of BP4D-extended (n = 20). Both consist of naturally occurring facial expressions in response to a variety of emotion inductions. In the augmented video, pose ranged between −18° and 90° for yaw and between −54° and 54° for pitch. The results obtained for action unit detection exceeded the state of the art, with as much as a 10% increase in F1 measures.
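The F1 measure reported above is the standard frame-level score for binary AU occurrence (an AU is labeled present or absent in each frame). As a minimal illustration, not the authors' evaluation code, the sketch below computes F1 from hypothetical per-frame labels for a single AU:

```python
def f1_score(y_true, y_pred):
    """F1 measure for binary AU occurrence labels (1 = AU present, 0 = absent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical frame-level annotations and detector output for one AU
truth = [1, 1, 0, 0, 1, 0, 1, 1]
pred  = [1, 0, 0, 1, 1, 0, 1, 0]
print(round(f1_score(truth, pred), 3))  # → 0.667
```

Because F1 is the harmonic mean of precision and recall, it is insensitive to the (typically large) number of true-negative frames, which is why it is preferred over accuracy for sparse AU events.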

BibTeX

@workshop{Toser-2016-119666,
author = {Zoltán Tősér and Laszlo A. Jeni and András Lőrincz and Jeffrey F. Cohn},
title = {Deep Learning for Facial Action Unit Detection Under Large Head Poses},
booktitle = {Proceedings of ECCV '16 Workshops: W09 – ChaLearn Looking at People Workshop on Apparent Personality Analysis and First Impressions Challenge},
year = {2016},
month = {October},
pages = {359--371},
}