Unmasking the Devil in the Details: What Works for Deep Facial Action Coding?
Abstract
The performance of automated facial expression coding has improved steadily, as evidenced by results of the latest Facial Expression Recognition and Analysis (FERA 2017) Challenge. Advances in deep learning techniques have been key to this success, yet the contribution of critical design choices remains largely unknown. Using the FERA 2017 database, we systematically evaluated design choices in pre-training, feature alignment, model size selection, and optimizer details. Our findings range from the counter-intuitive (e.g., generic pre-training outperformed face-specific models) to best practices in tuning optimizers. Informed by these findings, we developed an architecture that exceeded the state of the art on FERA 2017, achieving a 3.5% increase in F1 score for occurrence detection and a 5.8% increase in ICC for intensity estimation.
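As a point of reference for the two metrics named above, the sketch below computes a per-AU F1 score for occurrence detection and a two-rater intraclass correlation for intensity estimation. It is a minimal illustration only: the helper name icc_3_1, the choice of the ICC(3,1) consistency formulation, and the toy label arrays are assumptions, not the paper's evaluation code.

import numpy as np
from sklearn.metrics import f1_score

def icc_3_1(labels, preds):
    """Two-rater ICC(3,1): consistency between coder labels and model predictions."""
    data = np.stack([np.asarray(labels, float), np.asarray(preds, float)], axis=1)
    n, k = data.shape                                  # n frames, k = 2 "raters"
    grand_mean = data.mean()
    ss_total = ((data - grand_mean) ** 2).sum()
    ss_targets = k * ((data.mean(axis=1) - grand_mean) ** 2).sum()
    ss_raters = n * ((data.mean(axis=0) - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_targets - ss_raters       # residual sum of squares
    bms = ss_targets / (n - 1)                         # between-frame mean square
    ems = ss_error / ((n - 1) * (k - 1))               # residual mean square
    return (bms - ems) / (bms + (k - 1) * ems)

# Hypothetical per-AU arrays: binary occurrence and 0-5 ordinal intensity.
gt_occ, pred_occ = [0, 1, 1, 0, 1, 0], [0, 1, 0, 0, 1, 1]
gt_int, pred_int = [0, 2, 4, 1, 5, 0], [0, 3, 4, 1, 4, 1]

print("F1 (occurrence):", f1_score(gt_occ, pred_occ))
print("ICC (intensity):", icc_3_1(gt_int, pred_int))

In FERA-style benchmarks these scores are typically computed per action unit and then averaged across AUs; the percentage gains quoted in the abstract refer to those averages.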
BibTeX
@conference{Niinuma-2019-119656,
author = {Koichiro Niinuma and Laszlo A. Jeni and Itir Onal Ertugrul and Jeffrey F. Cohn},
title = {Unmasking the Devil in the Details: What Works for Deep Facial Action Coding?},
booktitle = {Proceedings of British Machine Vision Conference (BMVC '19)},
year = {2019},
month = {September},
}