Dynamic Multimodal Measurement of Depression Severity Using Deep Autoencoding
Abstract
Depression is one of the most common psychiatric disorders worldwide, with over 350 million people affected. Current methods to screen for and assess depression depend almost entirely on clinical interviews and self-report scales. While useful, such measures lack objective, systematic, and efficient ways of incorporating behavioral observations that are strong indicators of the presence and severity of depression. Using the dynamics of facial movement, head movement, and vocal prosody, we trained classifiers to detect three levels of depression severity. Participants were a community sample diagnosed with major depressive disorder. They were recorded in clinical interviews (Hamilton Rating Scale for Depression, HRSD) at seven-week intervals over a period of 21 weeks. At each interview, they were scored by the HRSD as moderately to severely depressed, mildly depressed, or remitted. Logistic regression classifiers using leave-one-participant-out validation were compared for facial movement, head movement, and vocal prosody individually and in combination. Accuracy of depression severity measurement from facial movement dynamics was higher than that from head movement dynamics, and each was substantially higher than that from vocal prosody. Accuracy using all three modalities combined only marginally exceeded that of face and head combined. These findings suggest that automatic detection of depression severity from behavioral indicators in patients is feasible and that multimodal measures afford the most powerful detection.
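For readers who want a concrete picture of the evaluation protocol, the following is a minimal sketch, not the authors' implementation, of three-class logistic regression scored with leave-one-participant-out cross-validation (scikit-learn in Python). The feature dimensions, label coding, and participant counts are hypothetical placeholders standing in for the per-interview behavioral features described in the abstract.

# Minimal sketch of the evaluation protocol (not the authors' code):
# logistic regression over three severity levels, validated by
# leave-one-participant-out cross-validation. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-interview feature vectors for one modality
# (e.g., facial movement dynamics); one row per interview.
n_interviews, n_features = 120, 32
X = rng.normal(size=(n_interviews, n_features))

# Severity labels: 0 = remitted, 1 = mild, 2 = moderate/severe.
y = rng.integers(0, 3, size=n_interviews)

# Participant IDs; repeated interviews share an ID, so LeaveOneGroupOut
# holds out all of one participant's interviews per fold.
groups = rng.integers(0, 30, size=n_interviews)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Leave-one-participant-out accuracy: {scores.mean():.3f}")

Grouping folds by participant rather than by interview is the key design choice here: it prevents the classifier from being tested on a person it has already seen during training.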
BibTeX
@article{Dibeklioğlu-2018-120233,
  author  = {H. Dibeklioğlu and Z. Hammal and J. F. Cohn},
  title   = {Dynamic Multimodal Measurement of Depression Severity Using Deep Autoencoding},
  journal = {IEEE Journal of Biomedical and Health Informatics},
  year    = {2018},
  month   = {March},
  volume  = {22},
  number  = {2},
  pages   = {525--536},
}