SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images
Abstract
Dense 3D object reconstruction from a single image has recently witnessed remarkable advances, but supervising neural networks with ground-truth 3D shapes is impractical due to the laborious process of creating paired image-shape datasets. Recent efforts have turned to learning 3D reconstruction without 3D supervision from RGB images with annotated 2D silhouettes, dramatically reducing the cost and effort of annotation. These techniques, however, remain impractical as they still require multi-view annotations of the same object instance during training. As a result, most experimental efforts to date have been limited to synthetic datasets. In this paper, we address this issue and propose SDF-SRN, an approach that requires only a single view of objects at training time, offering greater utility for real-world scenarios. SDF-SRN learns implicit 3D shape representations to handle arbitrary shape topologies that may exist in the datasets. To this end, we derive a novel differentiable rendering formulation for learning signed distance functions (SDF) from 2D silhouettes. Our method outperforms the state of the art under challenging single-view supervision settings on both synthetic and real-world datasets.
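For intuition only, below is a minimal PyTorch-style sketch of how an SDF network might be supervised from 2D silhouettes: points are sampled along each camera ray, the minimum signed distance along the ray is mapped to a soft occupancy probability, and that probability is compared against the pixel's binary silhouette label. This is an illustrative stand-in, not the paper's actual differentiable rendering formulation; the names (SDFNetwork, silhouette_loss) and hyperparameters (near, far, n_samples, alpha) are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SDFNetwork(nn.Module):
    """Small MLP mapping a 3D point to a signed distance value."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, points):                   # points: (..., 3)
        return self.mlp(points).squeeze(-1)      # signed distances: (...)

def silhouette_loss(sdf_net, ray_o, ray_d, labels,
                    near=0.5, far=2.5, n_samples=64, alpha=50.0):
    """Soft silhouette supervision for an SDF network (illustrative).

    Samples points along each camera ray, takes the minimum SDF value per
    ray, and converts it to an occupancy probability that is compared with
    the binary silhouette label of the corresponding pixel.
    """
    depths = torch.linspace(near, far, n_samples, device=ray_o.device)
    # (R, S, 3): points sampled along every ray
    points = ray_o[:, None, :] + depths[None, :, None] * ray_d[:, None, :]
    sdf = sdf_net(points)                        # (R, S)
    min_sdf = sdf.min(dim=1).values              # (R,) closest approach per ray
    occupancy = torch.sigmoid(-alpha * min_sdf)  # rays hitting the shape -> prob near 1
    return F.binary_cross_entropy(occupancy, labels)

# Toy usage: 1024 random rays with binary silhouette labels.
net = SDFNetwork()
ray_o = torch.zeros(1024, 3)                     # camera at the origin
ray_d = F.normalize(torch.randn(1024, 3), dim=-1)
labels = torch.randint(0, 2, (1024,)).float()
loss = silhouette_loss(net, ray_o, ray_d, labels)
loss.backward()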
BibTeX
@conference{Lin-2020-126713,
author = {Chen-Hsuan Lin and Chaoyang Wang and Simon Lucey},
title = {SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images},
booktitle = {Proceedings of Neural Information Processing Systems (NeurIPS)},
year = {2020},
month = {December},
}