Detecting Depression Severity by Interpretable Representations of Motion Dynamics - Robotics Institute Carnegie Mellon University

A. Kacem, Z. Hammal, M. Daoudi, and J. Cohn
Conference Paper, Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG '18), pp. 739–745, May 2018

Abstract

Recent breakthroughs in deep learning using automated measurement of face and head motion have made possible the first objective measurement of depression severity. While powerful, deep learning approaches lack interpretability. We developed an interpretable method of automatically measuring depression severity that uses barycentric coordinates of facial landmarks and a Lie-algebra-based rotation matrix of 3D head motion. From these representations, kinematic features are extracted, preprocessed, and encoded using Gaussian Mixture Models (GMM) and Fisher vector encoding. A multi-class SVM is used to classify the encoded facial and head movement dynamics into three levels of depression severity. The proposed approach was evaluated in adults with a history of chronic depression. The method approached the classification accuracy of state-of-the-art deep learning while enabling clinically and theoretically relevant findings: the velocity and acceleration of facial movement strongly mapped onto depression severity, consistent with clinical observations and theory.
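The pipeline sketched in the abstract (kinematic features from landmark trajectories, a GMM fit over frame-level features, and Fisher vector encoding into a fixed-length descriptor for an SVM) can be illustrated with a minimal numpy-only sketch. All shapes, parameter values, and the toy data below are illustrative assumptions, not the authors' implementation; the GMM here is a deliberately tiny diagonal-covariance EM fit, and only the mean-gradient part of the Fisher vector is computed.

```python
import numpy as np

rng = np.random.default_rng(0)

def kinematic_features(landmarks, fps=30.0):
    """Frame-wise velocity and acceleration of landmark coordinates.

    landmarks: (T, D) array of per-frame coordinates (e.g. barycentric
    coordinates of facial landmarks). Shapes and fps are illustrative.
    """
    vel = np.gradient(landmarks, 1.0 / fps, axis=0)
    acc = np.gradient(vel, 1.0 / fps, axis=0)
    return np.hstack([vel, acc])  # (T, 2D)

def _responsibilities(X, w, mu, var):
    """Posterior component probabilities under a diagonal-covariance GMM."""
    d = X.shape[1]
    log_p = -0.5 * (((X[:, None, :] - mu) ** 2 / var).sum(-1)
                    + np.log(var).sum(-1) + d * np.log(2 * np.pi))
    log_p += np.log(w)
    log_p -= log_p.max(axis=1, keepdims=True)  # numerical stability
    r = np.exp(log_p)
    return r / r.sum(axis=1, keepdims=True)

def fit_diag_gmm(X, k=3, iters=25):
    """Tiny diagonal-covariance GMM fit via EM (illustrative, not robust)."""
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]          # init means from data
    var = np.tile(X.var(axis=0) + 1e-6, (k, 1))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        r = _responsibilities(X, w, mu, var)          # E-step
        nk = r.sum(axis=0) + 1e-10                    # M-step
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ (X ** 2)) / nk[:, None] - mu ** 2 + 1e-6
    return w, mu, var

def fisher_vector(X, w, mu, var):
    """Fisher-vector encoding w.r.t. the GMM means (simplified)."""
    r = _responsibilities(X, w, mu, var)
    # gradient of the mean log-likelihood w.r.t. each component mean
    fv = (r[:, :, None] * (X[:, None, :] - mu) / np.sqrt(var)).sum(0)
    fv /= X.shape[0] * np.sqrt(w)[:, None]
    return fv.ravel()  # fixed-length descriptor: k * feature_dim values

# Toy clip: 90 frames of 5 landmark coordinates (random walk, for demo only).
clip = np.cumsum(rng.normal(size=(90, 5)), axis=0)
feats = kinematic_features(clip)          # (90, 10): velocity + acceleration
w, mu, var = fit_diag_gmm(feats, k=3)
fv = fisher_vector(feats, w, mu, var)
print(fv.shape)  # → (30,) — one vector per clip, fed to a multi-class SVM
```

In the paper's setup, one such fixed-length vector per video clip is what the multi-class SVM classifies into the three depression-severity levels; here a nearest-centroid or SVM step is omitted for brevity.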

BibTeX

@conference{Kacem-2018-120226,
author = {A. Kacem and Z. Hammal and M. Daoudi and J. Cohn},
title = {Detecting Depression Severity by Interpretable Representations of Motion Dynamics},
booktitle = {Proceedings of the 13th IEEE International Conference on Automatic Face \& Gesture Recognition (FG '18)},
year = {2018},
month = {May},
pages = {739--745},
}