Shape Prior Meets Geometry in Single and Multi-view Shape Reconstruction
Abstract
Reconstruction of the 3D shape and pose of an object from a single image or from multi-view frames has long been a problem of interest in computer vision. Hitherto, these problems have been addressed with classical methods: silhouette or keypoint matching for the single-image case, and structure from motion through projected point displacement and bundle adjustment for the multi-view case. With emerging deep learning techniques, efforts have been made to regress directly from the image or sequence to 3D shape and pose labels, learning a shape prior in a data-driven fashion. There is, however, a significant gap between these two strategies. Traditional methods tackle the problems from a purely geometric perspective, taking no account of object shape priors. Modern deep methods often discard geometric constraints altogether, rendering the results unreliable. In this thesis we make an effort to bring these two seemingly disparate strategies together. For shape-from-single-image, we define the new task of pose-aware shape reconstruction, and we advocate that cheaper 2D annotations of object silhouettes in natural images can be exploited as weak constraints. For multi-view reconstruction, we introduce a learned shape prior, in the form of a deep shape generator, into Photometric Bundle Adjustment (PBA), and propose to accommodate the full 3D shape generated by the prior within the optimization-based inference framework, demonstrating impressive results.
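To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch, not the thesis's actual implementation: a silhouette reprojection loss that turns a 2D mask annotation into a weak constraint on a predicted 3D shape, and a photometric residual in which the surface comes from a deep shape generator G(z), so PBA optimizes a low-dimensional shape code and camera poses instead of unconstrained 3D points. All names (project, sample, silhouette_loss, photometric_residuals, generator) are hypothetical placeholders.

import torch
import torch.nn.functional as F

def project(points, K, R, t):
    """Pinhole projection of N x 3 world points to N x 2 pixel coordinates."""
    cam = points @ R.T + t              # world frame -> camera frame
    uv = cam @ K.T                      # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]       # perspective divide

def sample(img, uv):
    """Bilinearly sample a (C, H, W) image at N x 2 pixel locations."""
    C, H, W = img.shape
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,    # x mapped to [-1, 1]
                        uv[:, 1] / (H - 1) * 2 - 1],   # y mapped to [-1, 1]
                       dim=-1)
    out = F.grid_sample(img[None], grid[None, :, None, :],
                        align_corners=True)            # shape (1, C, N, 1)
    return out[0, :, :, 0].T                           # shape (N, C)

def silhouette_loss(points, K, R, t, dist_map):
    """Weak 2D supervision for single-image reconstruction: projected
    surface points should land inside the annotated silhouette.
    `dist_map` (H x W) is a distance transform of the mask, zero inside
    the object and growing with distance outside, so the loss is
    differentiable w.r.t. both shape and pose."""
    uv = project(points, K, R, t)
    return sample(dist_map[None], uv).mean()

def photometric_residuals(z, generator, frames, poses, K, ref=0):
    """PBA with a learned shape prior: the surface is generator(z), so
    the photometric error is minimized over the shape code z and the
    camera poses rather than over free 3D points."""
    points = generator(z)                              # N x 3 surface samples
    R0, t0 = poses[ref]
    ref_colors = sample(frames[ref], project(points, K, R0, t0))
    res = []
    for i, (R, t) in enumerate(poses):
        if i == ref:
            continue
        # color consistency of each surface point across views
        res.append(sample(frames[i], project(points, K, R, t)) - ref_colors)
    return torch.cat(res)

Both terms are differentiable, so under these assumptions the shape code, pose, and loss can be optimized jointly with any gradient-based solver.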
BibTeX
@mastersthesis{Zhu-2017-27062,
author = {Rui Zhu},
title = {Shape Prior Meets Geometry in Single and Multi-view Shape Reconstruction},
year = {2017},
month = {August},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-17-54},
keywords = {Shape Reconstruction, Shape Prior, Reprojection, Photometric Bundle Adjustment},
}