Automatically detecting pain using facial actions

P. Lucey, J. Cohn, S. Lucey, K. Prkachin, and S. Sridharan
Workshop Paper, ACII '09 Workshops, September, 2009

Abstract

Pain is generally measured by patient self-report, normally via verbal communication. However, if the patient is a child or has a limited ability to communicate (e.g., patients who are mute, mentally impaired, or on assisted breathing), self-report may not be a viable measure. In addition, self-report typically captures only the maximum pain level experienced during a sequence, so a frame-by-frame measure is not currently obtainable. Using image data from patients with rotator-cuff injuries, we describe an AAM-based automatic system that can detect pain on a frame-by-frame basis. We do this in two ways: directly (straight from the facial features) and indirectly (through the fusion of individual AU detectors). Our results show that the latter method performs best, as the most discriminant features from each AU detector (i.e., shape or appearance) are used.
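
To make the two pipelines concrete, the following is a minimal Python sketch of how direct and indirect (fusion-based) pain detection might be structured. All names, the choice of linear SVMs, and the fusion rule are illustrative assumptions for exposition; the paper's actual features, AU set, classifiers, and fusion method may differ.

import numpy as np
from sklearn.svm import SVC

# Direct approach: classify pain straight from AAM-derived features.
def train_direct(features, pain_labels):
    # features: (n_frames, n_dims) AAM shape/appearance vectors
    # pain_labels: (n_frames,) binary pain / no-pain per frame
    clf = SVC(kernel="linear", probability=True)
    clf.fit(features, pain_labels)
    return clf

# Indirect approach: train one detector per AU, then fuse their scores.
AU_LIST = [4, 6, 7, 9, 10, 43]  # assumed pain-related AUs, for illustration

def train_indirect(features_by_au, au_labels, pain_labels):
    # features_by_au[au]: whichever feature set (shape or appearance)
    # is most discriminant for that AU; au_labels[au]: per-frame AU labels
    au_detectors, scores = {}, []
    for au in AU_LIST:
        det = SVC(kernel="linear", probability=True)
        det.fit(features_by_au[au], au_labels[au])
        au_detectors[au] = det
        scores.append(det.predict_proba(features_by_au[au])[:, 1])
    # Stack per-AU scores and fuse them into a single frame-level pain score.
    fusion_input = np.column_stack(scores)  # (n_frames, n_AUs)
    fusion = SVC(kernel="linear", probability=True)
    fusion.fit(fusion_input, pain_labels)
    return au_detectors, fusion

The key design point the abstract highlights is that the indirect route lets each AU detector use its own best feature representation before fusion, rather than forcing one shared representation for the whole pain decision.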

BibTeX

@workshop{Lucey-2009-121077,
  author    = {P. Lucey and J. Cohn and S. Lucey and K. Prkachin and S. Sridharan},
  title     = {Automatically detecting pain using facial actions},
  booktitle = {Proceedings of ACII '09 Workshops},
  year      = {2009},
  month     = {September},
}