
Cross-domain AU Detection: Domains, Learning Approaches, and Measures

Itir Onal Ertugrul, Jeffrey F. Cohn, Laszlo A. Jeni, Zheng Zhang, Lijun Yin, and Qiang Ji
Conference Paper, Proceedings of the 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG '19), May 2019

Abstract

Facial action unit (AU) detectors have performed well when trained and tested within the same domain. Do AU detectors transfer to new domains in which they have not been trained? To answer this question, we review literature on cross-domain transfer and conduct experiments to address limitations of prior research. We evaluate both deep and shallow approaches to AU detection (CNN and SVM, respectively) in two large, well-annotated, publicly available databases, Expanded BP4D+ and GFT. The databases differ in observational scenarios, participant characteristics, range of head pose, video resolution, and AU base rates. For both approaches and databases, performance decreased with change in domain, often to below the threshold needed for behavioral research. Decreases were not uniform, however. They were more pronounced for GFT than for Expanded BP4D+ and for shallow relative to deep learning. These findings suggest that more varied domains and deep learning approaches may be better suited for promoting generalizability. Until further improvement is realized, caution is warranted when applying AU classifiers from one domain to another.
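For readers who want a concrete picture of the evaluation protocol the abstract describes, the sketch below contrasts within-domain and cross-domain testing with a shallow (linear SVM) AU detector. The feature representation, data splits, classifier settings, and variable names are placeholder assumptions for illustration only; they are not the authors' pipeline, and the random arrays merely stand in for per-frame AU features and labels from Expanded BP4D+ and GFT.

    # Sketch of within- vs. cross-domain AU detection evaluation (assumed setup).
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)

    # Placeholder features/labels standing in for one AU (present/absent)
    # from the two databases; real features would come from a face pipeline.
    X_bp4d, y_bp4d = rng.normal(size=(2000, 128)), rng.integers(0, 2, 2000)
    X_gft,  y_gft  = rng.normal(size=(2000, 128)), rng.integers(0, 2, 2000)

    # Train a shallow detector on (a split of) one domain.
    split = 1500
    clf = LinearSVC(C=1.0)
    clf.fit(X_bp4d[:split], y_bp4d[:split])

    # Within-domain: test on held-out frames from the training database.
    f1_within = f1_score(y_bp4d[split:], clf.predict(X_bp4d[split:]))

    # Cross-domain: apply the same classifier to the other database
    # with no adaptation, as in the paper's cross-domain condition.
    f1_cross = f1_score(y_gft, clf.predict(X_gft))

    print(f"within-domain F1: {f1_within:.3f}, cross-domain F1: {f1_cross:.3f}")

The deep (CNN) condition follows the same train-on-one-domain, test-on-the-other logic; only the classifier and input representation change.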

BibTeX

@conference{Ertugrul-2019-119659,
author = {Itir Onal Ertugrul and Jeffrey F. Cohn and Laszlo A. Jeni and Zheng Zhang and Lijun Yin and Qiang Ji},
title = {Cross-domain AU Detection: Domains, Learning Approaches, and Measures},
booktitle = {Proceedings of 14th IEEE International Conference on Automatic Face \& Gesture Recognition (FG '19)},
year = {2019},
month = {May},
}