Selective Transfer Machine for Personalized Facial Expression Analysis
Abstract
Automatic facial action unit (AU) and expression detection from videos is a long-standing problem. It is challenging in part because classifiers must generalize to previously unseen subjects whose behavior and facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) differ markedly from those of the training subjects. While some progress has been achieved through better choices of features and classifiers, the challenge posed by individual differences among people remains. Person-specific classifiers would be a possible solution, but sufficient training data for them is typically unavailable. This paper addresses the problem of how to personalize a generic classifier without additional labels from the test subject. We propose a transductive learning method, which we refer to as a Selective Transfer Machine (STM), that personalizes a generic classifier by attenuating person-specific mismatches. STM achieves this by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. We compared STM to both generic classifiers and cross-domain learning methods on four benchmarks: CK+ [44], GEMEP-FERA [67], RU-FACS [4], and GFT [57]. STM outperformed generic classifiers on all four.
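The core idea of re-weighting training samples toward the unlabeled test subject can be illustrated with a small sketch. This is not the authors' STM formulation (which alternates between a weighted SVM and a distribution-matching step); it is a simplified stand-in that weights each training sample by its mean RBF similarity to the test set, then fits a per-sample-weighted logistic regression. All names and the synthetic data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two training "subjects" with shifted feature
# distributions. Subject A resembles the test subject; subject B does not.
n = 100
Xa = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
Xb = rng.normal(loc=4.0, scale=1.0, size=(n, 2))
X_train = np.vstack([Xa, Xb])
# Toy binary labels (e.g., AU present/absent), defined per subject.
y_train = np.concatenate([(Xa.sum(1) > 0).astype(float),
                          (Xb.sum(1) > 8).astype(float)])

# Unlabeled test data from a subject resembling subject A.
X_test = rng.normal(loc=0.0, scale=1.0, size=(50, 2))

def rbf_similarity_weights(X_tr, X_te, gamma=0.5):
    """Weight each training sample by its mean RBF similarity to the
    test set -- a crude stand-in for kernel mean matching."""
    d2 = ((X_tr[:, None, :] - X_te[None, :, :]) ** 2).sum(-1)
    w = np.exp(-gamma * d2).mean(axis=1)
    return w / w.mean()  # normalize to mean 1

def weighted_logreg(X, y, w, lr=0.1, iters=500):
    """Per-sample-weighted logistic regression fit by gradient descent."""
    Xb_ = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    theta = np.zeros(Xb_.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb_ @ theta))
        grad = Xb_.T @ (w * (p - y)) / len(y)
        theta -= lr * grad
    return theta

w = rbf_similarity_weights(X_train, X_test)
theta = weighted_logreg(X_train, y_train, w)

# Samples from the matching subject should dominate the reweighting.
print(w[:n].mean() > w[n:].mean())  # → True
```

In this toy setting the mismatched subject's samples receive near-zero weight, so the personalized classifier is effectively trained on data from the subject most like the test subject, which is the intuition behind STM's selective transfer.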
BibTeX
@article{Chu-2017-5492,
author = {Wen-Sheng Chu and Fernando De la Torre Frade and Jeffrey Cohn},
title = {Selective Transfer Machine for Personalized Facial Expression Analysis},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
year = {2017},
month = {March},
volume = {39},
number = {3},
pages = {529--545},
}