Deep NRSfM++: Towards Unsupervised 2D-3D Lifting in the Wild

Conference Paper, Proceedings of International Conference on 3D Vision (3DV '20), pp. 12-22, November 2020

Abstract

The recovery of 3D shape and pose from 2D landmarks stemming from a large ensemble of images can be viewed as a non-rigid structure from motion (NRSfM) problem. Classical NRSfM approaches, however, are problematic as they rely on heuristic priors on the 3D structure (e.g. low rank) that do not scale well to large datasets. Learning-based methods are showing the potential to reconstruct a much broader set of 3D structures than classical methods, dramatically expanding the reach of NRSfM to atemporal unsupervised 2D-3D lifting. Hitherto, these learning approaches have not been able to effectively model perspective cameras or handle missing/occluded points, limiting their applicability to in-the-wild datasets. In this paper, we present a generalized strategy for improving learning-based NRSfM methods [32] to tackle the above issues. Our approach, Deep NRSfM++, achieves state-of-the-art performance across numerous large-scale benchmarks, outperforming both classical and learning-based 2D-3D lifting methods.
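
As background for the two issues the abstract highlights, below is a minimal, illustrative NumPy sketch (not the paper's implementation) contrasting the orthographic camera model assumed by most prior NRSfM pipelines with a perspective model, and showing how a visibility mask can represent missing/occluded landmarks. The function names, focal-length value, and toy data are all hypothetical.

import numpy as np

def project_orthographic(S):
    # Orthographic model: drop the depth coordinate, keep (x, y).
    return S[:, :2]

def project_perspective(S, f=1.0):
    # Perspective model: divide by depth, then scale by focal length f.
    # (f = 1.0 is an arbitrary illustrative value.)
    return f * S[:, :2] / S[:, 2:3]

# Toy shape: an (N, 3) array of 3D landmarks in camera coordinates,
# with one landmark marked as missing/occluded via a boolean mask.
S = np.array([[ 0.1,  0.2, 2.0],
              [-0.3,  0.1, 2.5],
              [ 0.2, -0.4, 3.0]])
mask = np.array([True, True, False])   # third landmark is unobserved

W_persp = project_perspective(S)       # 2D observations under perspective
W_visible = W_persp[mask]              # only visible landmarks are observed

Under orthographic projection the depth of each point never enters the 2D observations, which is why methods built on that model degrade on in-the-wild footage where perspective effects and occlusions are common.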

BibTeX

@conference{Wang-2020-126712,
  author    = {Chaoyang Wang and Chen-Hsuan Lin and Simon Lucey},
  title     = {Deep NRSfM++: Towards Unsupervised 2D-3D Lifting in the Wild},
  booktitle = {Proceedings of International Conference on 3D Vision (3DV '20)},
  year      = {2020},
  month     = {November},
  pages     = {12--22},
}