Dense non-rigid motion capture from monocular video - Robotics Institute Carnegie Mellon University

VASC Seminar

Ravi Garv
Wednesday, June 19
3:00 pm to 4:00 pm

Event Location: NSH 1507
Bio: Ravi Garv is a final-year PhD student at the School of Electronic Engineering and Computer Science, Queen Mary University of London, where he works under the supervision of Dr. Lourdes Agapito, holder of an ERC Starting Grant. His work focuses on dense reconstruction of non-rigid surfaces and dynamic scenes.

Mr. Garv obtained his B.Tech. and M.Tech. degrees in information and communication technology from the Indian Institute of Information Technology and Management, Gwalior. Before starting his PhD, he worked as a research intern in the PERCEPTION lab at INRIA Alpes and at LITIS, INSA Rouen.

His research interests include variational methods, structure from motion, segmentation, and video registration.

Abstract: Accurate recovery of the dense 3D shape of deformable and articulated objects from monocular video sequences is a challenging computer vision problem, with applications ranging from virtual reality, animation, and motion re-targeting to image-guided surgery.

Rigid scene capture is now a mature field: there exist algorithms that reconstruct indoor scenes from a single camera in real time, and multi-view geometry has evolved to support city-scale reconstructions with reasonable accuracy. However, the rigidity assumption is too restrictive, and interesting real-world scenes are often dynamic.

In this seminar I will present a method to densely reconstruct highly deforming smooth surfaces using only a single video as input, without any prior models or shape templates. I will focus on the well-explored low-rank prior for deformable shapes and propose its convex relaxation, yielding the first variational energy-minimisation approach to non-rigid reconstruction.
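To give a flavour of the convex relaxation involved: in low-rank models the hard rank constraint on the shape matrix is commonly relaxed to its nuclear norm, whose proximal operator is singular value soft-thresholding. The NumPy sketch below illustrates that operator on a synthetic low-rank shape matrix; the sizes, threshold, and noise level are arbitrary illustrations, not details from the talk.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm tau * ||X||_* (shrinks each singular value by tau)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return U @ (s[:, None] * Vt)

# Synthetic example: per-frame shapes that truly lie in a rank-2 subspace.
rng = np.random.default_rng(0)
B = rng.standard_normal((2, 50))      # 2 basis shapes, 50 points each
C = rng.standard_normal((30, 2))      # mixing coefficients for 30 frames
S = C @ B                             # rank-2 shape matrix (30 x 50)
noisy = S + 0.1 * rng.standard_normal(S.shape)
denoised = svt(noisy, tau=1.0)        # suppresses small noise singular values
```

In a full variational formulation this thresholding step would appear inside an iterative solver that alternates it with data-fidelity updates.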

I will argue for the importance of long-range 2D trajectories for several vision problems and explain how subspace constraints can exploit the redundancy present in the motion of real scenes for dense video registration.
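To make the subspace idea concrete, a common device in the trajectory-space literature is to represent each point's trajectory over F frames as a linear combination of a few low-frequency DCT basis trajectories, so that many trajectories share a small basis. The sketch below is illustrative only; the basis size and dimensions are assumptions, not values from the talk.

```python
import numpy as np

def dct_basis(F, K):
    """First K orthonormal DCT-II basis vectors of length F, as columns."""
    n = np.arange(F)
    k = np.arange(K)
    B = np.sqrt(2.0 / F) * np.cos(np.pi * (n[:, None] + 0.5) * k[None, :] / F)
    B[:, 0] /= np.sqrt(2.0)
    return B

F, P, K = 60, 40, 5                   # frames, points, basis size
B = dct_basis(F, K)
rng = np.random.default_rng(1)
coeffs_true = rng.standard_normal((K, P))
traj = B @ coeffs_true                # F x P trajectories lying in the subspace
coeffs = B.T @ traj                   # recover coefficients by projection
recon = B @ coeffs                    # reconstruct trajectories from K numbers each
```

The redundancy claim is visible here: 60-frame trajectories are summarised by 5 coefficients each, which is what makes dense registration over long sequences tractable.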

I will also advocate the use of GPU-portable, scalable energy-minimisation algorithms as a step towards practical dense non-rigid motion capture from a single video in the presence of occlusions and illumination changes.

Finally, I will describe our multiple-model fitting framework for piecewise-rigid scene modelling and show its application to dense multi-rigid reconstruction.