Automatic AU detection in infants using convolutional neural network
Abstract
Relative to adults, action unit (AU) detection in infants presents unique challenges: jaw contour is less distinct, facial texture is reduced, and rapid and unusual facial movements are common. To detect facial action units in the spontaneous behavior of infants, we propose a multi-label Convolutional Neural Network (CNN). Eighty-six infants were recorded during tasks intended to elicit enjoyment and frustration. Using an extension of FACS for infants (Baby FACS), over 230,000 frames were manually coded for ground truth. To control for chance agreement, inter-observer agreement between Baby-FACS coders was quantified using free-marginal kappa. Kappa coefficients ranged from 0.79 to 0.93, indicating high agreement. The multi-label CNN achieved comparable agreement with manual coding, with kappa ranging from 0.69 to 0.93. Importantly, the CNN-based AU detection revealed the same pattern of findings with respect to infant expressiveness between tasks. While further research is needed, these findings suggest that automatic AU detection in infants is a viable alternative to manual coding of infant facial expression.
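As a concrete illustration (not taken from the paper), free-marginal kappa (Brennan-Prediger) corrects observed agreement by a fixed chance level of 1/k, where k is the number of coding categories, rather than estimating chance from each rater's marginal distribution as Cohen's kappa does. A minimal sketch, assuming binary present/absent AU codes per frame; the function name is hypothetical:

```python
def free_marginal_kappa(labels_a, labels_b, n_categories=2):
    """Free-marginal (Brennan-Prediger) kappa between two coders.

    Chance agreement is fixed at 1/k (k = n_categories), unlike
    Cohen's kappa, which estimates it from the coders' marginals.
    """
    if len(labels_a) != len(labels_b) or not labels_a:
        raise ValueError("coders must rate the same non-empty set of frames")
    # Proportion of frames on which the two coders agree exactly.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)
    chance = 1.0 / n_categories
    return (observed - chance) / (1.0 - chance)
```

For binary AU coding (k = 2) this reduces to 2 * Po - 1, so 90% raw agreement corresponds to a kappa of 0.80, in the range the abstract reports for the human coders.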
BibTeX
@conference{Hammal-2017-120230,
  author    = {Zakia Hammal and Wen-Sheng Chu and Jeffrey F. Cohn and Carrie L. Heike and Matthew L. Speltz},
  title     = {Automatic AU detection in infants using convolutional neural network},
  booktitle = {Proceedings of the 7th International Conference on Affective Computing and Intelligent Interaction (ACII '17)},
  year      = {2017},
  month     = {October},
  pages     = {216--221},
}