Accelerated Generative Models for 3D Point Cloud Data
Abstract
Finding meaningful, structured representations of 3D point cloud data (PCD) has become a core task for spatial perception applications. In this paper we introduce a method for constructing compact generative representations of PCD at multiple levels of detail. As opposed to deterministic structures such as voxel grids or octrees, we propose probabilistic subdivisions of the data through local mixture modeling, and show how these subdivisions can provide a maximum likelihood segmentation of the data. The final representation is hierarchical, compact, parametric, and statistically derived, facilitating run-time occupancy calculations through stochastic sampling. Unlike traditional deterministic spatial subdivision methods, our technique enables dynamic creation of voxel grids according to the application's needs. In contrast to other generative models for PCD, we explicitly enforce sparsity among points and mixtures, a technique which we call expectation sparsification. This leads to a highly parallel hierarchical Expectation Maximization (EM) algorithm well-suited to the GPU and real-time execution. We explore the trade-offs between model fidelity and model size at various levels of detail, with our tests showing favorable performance compared to octree- and NDT-based methods.
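The pipeline the abstract describes can be sketched at a small scale: fit a Gaussian mixture to the points with EM whose responsibilities are sparsified, then answer occupancy queries by sampling the fitted mixture. The sketch below is a minimal, single-level NumPy illustration, not the paper's parallel GPU implementation; the function names fit_sparse_gmm and voxel_occupancy, the hard top-1 sparsification, and all parameter values are illustrative assumptions rather than the authors' settings.

import numpy as np

def fit_sparse_gmm(points, k=8, iters=20, rng=None):
    # Fit a k-component Gaussian mixture to an (n, 3) point cloud with EM.
    rng = rng or np.random.default_rng(0)
    n, d = points.shape
    means = points[rng.choice(n, k, replace=False)]
    covs = np.stack([np.eye(d) * 0.05] * k)
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: per-point log responsibility under each component.
        log_r = np.empty((n, k))
        for j in range(k):
            diff = points - means[j]
            maha = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(covs[j]), diff)
            log_det = np.linalg.slogdet(covs[j])[1]
            log_r[:, j] = np.log(weights[j]) - 0.5 * (maha + log_det + d * np.log(2 * np.pi))
        # Sparsified E-step (simplified stand-in for expectation sparsification):
        # keep only each point's best-matching component, zeroing the rest.
        r = np.zeros((n, k))
        r[np.arange(n), log_r.argmax(axis=1)] = 1.0
        # M-step: re-estimate mixture weights, means, and covariances.
        nk = r.sum(axis=0) + 1e-9
        weights = nk / nk.sum()
        means = (r.T @ points) / nk[:, None]
        for j in range(k):
            diff = points - means[j]
            covs[j] = (r[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return weights, means, covs

def voxel_occupancy(weights, means, covs, lo, hi, n_samples=20000, rng=None):
    # Monte Carlo estimate of the mixture's probability mass inside a voxel [lo, hi].
    rng = rng or np.random.default_rng(1)
    counts = np.bincount(rng.choice(len(weights), size=n_samples, p=weights),
                         minlength=len(weights))
    samples = np.concatenate([rng.multivariate_normal(means[j], covs[j], size=c)
                              for j, c in enumerate(counts) if c > 0])
    return np.all((samples >= lo) & (samples <= hi), axis=1).mean()

if __name__ == "__main__":
    pts = np.random.default_rng(2).normal(size=(2000, 3)) * 0.3
    w, mu, sig = fit_sparse_gmm(pts)
    print(voxel_occupancy(w, mu, sig, lo=np.array([-0.2] * 3), hi=np.array([0.2] * 3)))

A hierarchical version would recursively re-run this fit within each component's support, and the paper's occupancy queries over arbitrary voxel grids correspond to evaluating voxel_occupancy over whichever grid the application requests at run time.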
BibTeX
@conference{Eckart-2016-120736,
  author    = {B. Eckart and K. Kim and A. Troccoli and A. Kelly and J. Kautz},
  title     = {Accelerated Generative Models for 3D Point Cloud Data},
  booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
  year      = {2016},
  month     = {June},
  pages     = {5497--5505},
}