
Generalized Canonical Time Warping

F. Zhou and F. De la Torre
Journal Article, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 38, No. 2, pp. 279–294, February 2016

Abstract

Temporal alignment of human motion has been of recent interest due to its applications in animation, tele-rehabilitation and activity recognition. This paper presents generalized canonical time warping (GCTW), an extension of dynamic time warping (DTW) and canonical correlation analysis (CCA) for temporally aligning multi-modal sequences from multiple subjects performing similar activities. GCTW extends previous work on DTW and CCA in several ways: (1) it combines CCA with DTW to align multi-modal data (e.g., video and motion capture data); (2) it extends DTW by using a linear combination of monotonic functions to represent the warping path, providing a more flexible temporal warp; unlike exact DTW, which has quadratic complexity, we propose a linear-time algorithm to minimize GCTW; (3) GCTW allows simultaneous alignment of multiple sequences. Experimental results on aligning multi-modal data, facial expressions, motion capture data and video illustrate the benefits of GCTW. The code is available at http://humansensing.cs.cmu.edu/ctw.
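For intuition, the Python sketch below illustrates the second idea: a warping path formed as a non-negative combination of monotonically increasing basis functions, so the composite path is itself monotone. The particular bases (identity, square root, square), the hand-set weights, and the rescaling are illustrative assumptions and not the paper's formulation; the authors' released code at the URL above is the reference implementation.

import numpy as np

def warping_path(weights, n_frames, n_target):
    """Build a monotone warping path as a non-negative combination of
    monotonically increasing basis functions (illustrative bases:
    identity, square root, square), rescaled to index the target sequence."""
    t = np.linspace(0.0, 1.0, n_frames)            # normalized time of the source sequence
    bases = np.stack([t, np.sqrt(t), t ** 2])      # each basis is monotone on [0, 1]
    w = np.maximum(np.asarray(weights, float), 0)  # non-negative weights keep the sum monotone
    path = w @ bases
    path -= path.min()
    if path.max() > 0:
        path /= path.max()                         # normalize onto [0, 1]
    return path * (n_target - 1)                   # fractional frame indices into the target

# Example: warp a 100-frame sequence onto a 150-frame one,
# weighting the identity basis most heavily.
idx = warping_path([1.0, 0.3, 0.1], n_frames=100, n_target=150)

In GCTW the basis weights are optimized jointly with the alignment rather than fixed by hand; they are hard-coded here purely for illustration.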

BibTeX

@article{Zhou-2016-120707,
author = {F. Zhou and F. De la Torre},
title = {Generalized Canonical Time Warping},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
year = {2016},
month = {February},
volume = {38},
number = {2},
pages = {279--294},
}