
FACS3D-Net: 3D convolution based spatiotemporal representation for action unit detection

Le Yang, Itir Onal Ertugrul, Jeffrey F. Cohn, Zakia Hammal, Dongmei Jiang, and Hichem Sahli
Conference Paper, Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction (ACII '19), pp. 538-544, September, 2019

Abstract

Most approaches to automatic facial action unit (AU) detection consider only spatial information and ignore AU dynamics. For humans, dynamics improves AU perception. Is the same true for algorithms? To make use of AU dynamics, recent work in automated AU detection has proposed a sequential spatiotemporal approach: model spatial information using a 2D CNN and then model temporal information using an LSTM (Long Short-Term Memory) network. Inspired by the experience of human FACS coders, we hypothesized that modeling spatial and temporal information simultaneously would yield more powerful AU detection. To achieve this, we propose FACS3D-Net, which simultaneously integrates 3D and 2D CNNs. Evaluation was on the Expanded BP4D+ database of 200 participants. FACS3D-Net outperformed both 2D CNN and 2D CNN-LSTM approaches. Visualizations of learnt representations suggest that FACS3D-Net is consistent with the spatiotemporal dynamics attended to by human FACS coders. To the best of our knowledge, this is the first work to apply 3D CNNs to the problem of AU detection.
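To make the two-stream idea concrete, below is a minimal PyTorch sketch of fusing a 3D CNN branch (spatiotemporal features from a short clip) with a 2D CNN branch (spatial features from the frame being labeled) for multi-label AU detection. This is an illustrative assumption, not the authors' FACS3D-Net architecture; layer sizes, input resolution, clip length, and the number of AUs are placeholders.

```python
# Illustrative sketch only: fuse a 3D CNN (spatiotemporal) branch with a
# 2D CNN (spatial) branch for multi-label AU detection. All hyperparameters
# are assumptions, not the published FACS3D-Net configuration.
import torch
import torch.nn as nn


class TwoStreamAUNet(nn.Module):
    def __init__(self, num_aus=12):
        super().__init__()
        # 3D branch: convolves over (time, height, width) to capture AU dynamics.
        self.branch3d = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        # 2D branch: convolves over the target frame only (static appearance).
        self.branch2d = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Concatenated features -> per-AU logits (multi-label classification).
        self.classifier = nn.Linear(32 + 32, num_aus)

    def forward(self, clip, frame):
        # clip:  (batch, channels, time, height, width)
        # frame: (batch, channels, height, width) -- the frame to be labeled
        f3d = self.branch3d(clip).flatten(1)
        f2d = self.branch2d(frame).flatten(1)
        return self.classifier(torch.cat([f3d, f2d], dim=1))


# Example usage with an 8-frame, 64x64 clip (shapes chosen arbitrarily).
model = TwoStreamAUNet()
clip = torch.randn(2, 3, 8, 64, 64)
frame = clip[:, :, -1]            # label the last frame of the clip
probs = torch.sigmoid(model(clip, frame))  # (2, 12) per-AU probabilities
```

The key design point reflected here is that spatial and temporal cues are learned jointly and fused before classification, rather than being modeled sequentially as in a 2D CNN followed by an LSTM.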

BibTeX

@conference{Yang-2019-120337,
author = {Le Yang and Itir Onal Ertugrul and Jeffrey F. Cohn and Zakia Hammal and Dongmei Jiang and Hichem Sahli},
title = {FACS3D-Net: 3D convolution based spatiotemporal representation for action unit detection},
booktitle = {Proceedings of 8th International Conference on Affective Computing and Intelligent Interaction (ACII '19)},
year = {2019},
month = {September},
pages = {538--544},
}