Automatically detecting pain in video through facial action units

P. Lucey, J. F. Cohn, I. Matthews, S. Lucey, J. Howlett, and K. M. Prkachin
Journal Article, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), Vol. 41, No. 3, pp. 664-674, June 2011

Abstract

In a clinical setting, pain is reported either through patient self-report or via an observer. Such measures are problematic because they are subjective and provide no specific timing information. Coding pain as a series of facial action units (AUs) avoids these issues, as it yields an objective measure of pain on a frame-by-frame basis. Using video data from patients with shoulder injuries, in this paper we describe an active appearance model (AAM)-based system that can automatically detect the frames in a video in which a patient is in pain. This pain data set highlights the many challenges associated with spontaneous emotion detection, particularly the expression and head movement caused by the patient's reaction to pain. In this paper, we show that the AAM can handle these movements and achieves significant improvements in both AU and pain detection performance compared to current state-of-the-art approaches, which use similarity-normalized appearance features only.
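The frame-by-frame AU coding the abstract describes is commonly summarized with the Prkachin-Solomon pain intensity (PSPI) score, which combines individual AU intensities into a single per-frame pain measure. A minimal sketch of that scoring rule (the example intensity values are illustrative, not taken from the paper):

```python
def pspi(au4, au6, au7, au9, au10, au43):
    """Prkachin-Solomon pain intensity from FACS AU intensities.

    AU4 (brow lowerer), AU6 (cheek raiser), AU7 (lid tightener),
    AU9 (nose wrinkler), and AU10 (upper lip raiser) are coded on a
    0-5 intensity scale; AU43 (eye closure) is binary (0 or 1).
    A higher score indicates stronger facial evidence of pain.
    """
    return au4 + max(au6, au7) + max(au9, au10) + au43

# Illustrative frame: brow lowering (AU4=2), cheek raise (AU6=3),
# lid tightening (AU7=1), eyes closed (AU43=1)
score = pspi(au4=2, au6=3, au7=1, au9=0, au10=0, au43=1)
print(score)  # 6
```

Applying this per frame turns a sequence of AU detections into a pain-intensity time series, which is what enables the frame-level pain detection discussed above.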

BibTeX

@article{Lucey-2011-120988,
author = {P. Lucey and J. F. Cohn and I. Matthews and S. Lucey and J. Howlett and K. M. Prkachin},
title = {Automatically detecting pain in video through facial action units},
journal = {IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)},
year = {2011},
month = {June},
volume = {41},
number = {3},
pages = {664--674},
}