Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction

C. Lin, C. Kong, and S. Lucey
Conference Paper, Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI '18), February 2018

Abstract

Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones. However, these methods are computationally wasteful when predicting 3D shapes, where information is rich only on the surfaces. In this paper, we propose a novel 3D generative modeling framework that efficiently generates object shapes in the form of dense point clouds. We use 2D convolutional operations to predict the 3D structure from multiple viewpoints and jointly apply geometric reasoning with 2D projection optimization. We introduce the pseudo-renderer, a differentiable module that approximates the true rendering operation, to synthesize novel depth maps for optimization. Experimental results for single-image 3D object reconstruction tasks show that we outperform state-of-the-art methods in terms of shape similarity and prediction density.
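
To illustrate the pseudo-rendering idea described in the abstract, the sketch below projects a point cloud into a target viewpoint and forms a depth map, resolving pixel collisions by keeping the closest point on an upsampled grid before pooling back to the target resolution. This is only an assumed, simplified NumPy sketch, not the authors' released implementation; the intrinsic matrix K, the image resolution, and the upsampling factor are hypothetical parameters, and the scatter-minimum step stands in for the differentiable approximation used in the paper.

import numpy as np

def pseudo_render_depth(points_cam, K, height, width, upsample=5):
    # points_cam: (N, 3) points in the target camera frame.
    # K: (3, 3) camera intrinsics (hypothetical values for illustration).
    # Returns an (height, width) depth map; empty pixels hold +inf.

    # Project onto an upsampled image grid so nearby points land in
    # distinct cells, reducing collisions between projected points.
    H, W = height * upsample, width * upsample
    K_up = K.copy()
    K_up[:2] *= upsample

    z = points_cam[:, 2]
    valid = z > 1e-6                      # keep points in front of the camera
    p, z = points_cam[valid], z[valid]

    uv = (K_up @ p.T).T                   # homogeneous pixel coordinates
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z = u[inside], v[inside], z[inside]

    # Depth buffer on the upsampled grid: keep the minimum depth per cell.
    depth_up = np.full((H, W), np.inf)
    np.minimum.at(depth_up, (v, u), z)

    # Min-pool back to the target resolution (equivalently, max-pool on
    # inverse depth) to obtain the synthesized depth map.
    return depth_up.reshape(height, upsample, width, upsample).min(axis=(1, 3))

In a training loop, a depth map synthesized this way for a novel viewpoint could be compared against the corresponding ground-truth depth to drive the 2D projection loss, which is the role the pseudo-renderer plays in the proposed framework.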

BibTeX

@conference{Lin-2018-121029,
author = {C. Lin and C. Kong and S. Lucey},
title = {Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction},
booktitle = {Proceedings of 32nd AAAI Conference on Artificial Intelligence (AAAI '18)},
year = {2018},
month = {February},
}