Multimodal Detection of Depression in The Context of Clinical Interview

Hamdi Dibeklioglu, Zakia Hammal, Ying Yang, and Jeffrey F. Cohn
Conference Paper, Proceedings of the 17th International Conference on Multimodal Interaction (ICMI '15), pp. 307-310, November, 2015

Abstract

Current methods for depression assessment depend almost entirely on clinical interview or self-report ratings. Such measures lack systematic and efficient ways of incorporating behavioral observations that are strong indicators of psychological disorder. We compared a clinical interview of depression severity with automatic measurement in 48 participants undergoing treatment for depression. Interviews were obtained at 7-week intervals on up to four occasions. Following standard cut-offs, participants at each session were classified as remitted, intermediate, or depressed. Logistic regression classifiers using leave-one-out validation were compared for facial movement dynamics, head movement dynamics, and vocal prosody individually and in combination. Accuracy (remitted versus depressed) for facial movement dynamics was higher than that for head movement dynamics, and each was substantially higher than that for vocal prosody. Accuracy for all three modalities together reached 88.93%, exceeding that for any single modality or pair of modalities. These findings suggest that automatic detection of depression from behavioral indicators is feasible and that multimodal measures afford the most powerful detection.
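The evaluation protocol named in the abstract — a logistic regression classifier scored with leave-one-out cross-validation, run per modality and on the fused feature set — can be sketched as follows. This is a minimal illustration using synthetic stand-in features, not the paper's actual facial, head, or vocal measurements; the feature dimensions and noise levels are arbitrary assumptions.

```python
# Hypothetical sketch of the protocol described in the abstract: logistic
# regression with leave-one-out cross-validation, comparing each modality
# alone against all modalities fused. Features are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n = 48  # number of participants, as in the study
y = rng.integers(0, 2, size=n)  # remitted (0) vs. depressed (1)

# Synthetic per-modality features (dimensions are assumptions):
face = y[:, None] + rng.normal(0, 0.8, (n, 4))   # facial movement dynamics
head = y[:, None] + rng.normal(0, 1.2, (n, 3))   # head movement dynamics
voice = y[:, None] + rng.normal(0, 2.0, (n, 2))  # vocal prosody

def loo_accuracy(X, y):
    """Mean leave-one-out accuracy of a logistic regression classifier."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

for name, X in [("face", face), ("head", head), ("voice", voice),
                ("fused", np.hstack([face, head, voice]))]:
    print(f"{name}: {loo_accuracy(X, y):.2f}")
```

With leave-one-out validation each of the 48 participants is held out once, so every fold's accuracy is 0 or 1 and the mean over folds gives the reported accuracy; fusion is modeled here as simple feature concatenation, which is one common baseline for combining modalities.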

BibTeX

@conference{Dibeklioglu-2015-120254,
  author    = {Hamdi Dibeklioglu and Zakia Hammal and Ying Yang and Jeffrey F. Cohn},
  title     = {Multimodal Detection of Depression in The Context of Clinical Interview},
  booktitle = {Proceedings of the 17th International Conference on Multimodal Interaction (ICMI '15)},
  year      = {2015},
  month     = {November},
  pages     = {307--310},
}