
Structure from Recurrent Motion: From Rigidity to Recurrency

Xiu Li, Hongdong Li, Hanbyul Joo, Yebin Liu, and Yaser Sheikh
Conference Paper, Proceedings of Computer Vision and Pattern Recognition (CVPR), pp. 3032–3040, June 2018

Abstract

This paper proposes a new method for Non-Rigid Structure-from-Motion (NRSfM) from a long monocular video sequence observing a non-rigid object performing a recurrent, possibly repetitive, dynamic action. Departing from the traditional idea of using a linear low-order or low-rank shape model for the task of NRSfM, our method exploits the property of shape recurrency (i.e., many deforming shapes tend to repeat themselves over time). We show that recurrency is in fact a generalized form of rigidity. Based on this, we reduce NRSfM problems to rigid ones provided that a certain recurrency condition is satisfied. Given such a reduction, standard rigid-SfM techniques are directly applicable (without any change) to the reconstruction of non-rigid dynamic shapes. To turn this idea into a practical approach, the paper develops efficient algorithms for automatic recurrency detection, as well as for camera-view clustering via a rigidity check. Experiments on both simulated sequences and real data demonstrate the effectiveness of the method. Since this paper offers a novel perspective on rethinking structure-from-motion, we hope it will inspire other new problems in the field.
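The rigidity check at the heart of the abstract (deciding whether two frames observe the same, recurred, shape) can be illustrated with standard two-view geometry. The sketch below is not the authors' implementation; it is a minimal stand-in that fits a fundamental matrix with the normalized eight-point algorithm and uses the mean symmetric epipolar distance as a rigidity score, then greedily clusters frames whose score falls under an assumed threshold. All function names and the `tol` value are illustrative assumptions.

```python
import numpy as np

def normalize_points(pts):
    # Hartley normalization: zero mean, average distance sqrt(2).
    mean = pts.mean(axis=0)
    d = np.sqrt(((pts - mean) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * mean[0]],
                  [0.0, s, -s * mean[1]],
                  [0.0, 0.0, 1.0]])
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
    return ph, T

def fundamental_8pt(p1, p2):
    # Normalized eight-point algorithm: solve x2^T F x1 = 0 in
    # least squares, then enforce rank 2 and denormalize.
    x1, T1 = normalize_points(p1)
    x2, T2 = normalize_points(p2)
    A = np.stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))], axis=1)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # rank-2 constraint
    return T2.T @ F @ T1

def epipolar_residual(p1, p2):
    # Rigidity score: mean symmetric epipolar distance under the
    # best-fit F. Near zero iff the two frames can be explained by
    # rigid two-view geometry (i.e., the shape recurred).
    F = fundamental_8pt(p1, p2)
    h1 = np.hstack([p1, np.ones((len(p1), 1))])
    h2 = np.hstack([p2, np.ones((len(p2), 1))])
    Fx1 = h1 @ F.T     # epipolar lines in image 2
    Ftx2 = h2 @ F      # epipolar lines in image 1
    num = np.abs(np.sum(h2 * Fx1, axis=1))
    d = (num / np.sqrt(Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2)
         + num / np.sqrt(Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2))
    return d.mean()

def cluster_by_rigidity(frames, tol=1e-2):
    # Greedy camera-view clustering: a frame joins the first cluster
    # whose representative it passes the rigidity check with; each
    # resulting cluster can then be fed to a standard rigid-SfM
    # pipeline. The threshold is an assumption, not from the paper.
    clusters = []
    for f in frames:
        for c in clusters:
            if epipolar_residual(c[0], f) < tol:
                c.append(f)
                break
        else:
            clusters.append([f])
    return clusters
```

Each `frames[i]` is an (N, 2) array of tracked 2D keypoints for one frame (N ≥ 8). A real system would use a robust estimator (e.g., RANSAC) rather than a plain least-squares fit, but the structure of the reduction is the same: rigidity-compatible frame pairs behave like two views of a single rigid scene.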

BibTeX

@conference{Li-2018-122179,
author = {Xiu Li and Hongdong Li and Hanbyul Joo and Yebin Liu and Yaser Sheikh},
title = {Structure from Recurrent Motion: From Rigidity to Recurrency},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2018},
month = {June},
pages = {3032--3040},
}