
PhD Thesis Proposal

Benjamin Eckart, Carnegie Mellon University
Thursday, January 14
3:00 pm to 12:00 am
Compact Generative Models of Point Cloud Data for 3D Perception

Event Location: GHC 8102

Abstract: One of the most fundamental tasks for any robotics application is the ability to adequately assimilate and respond to incoming sensor data. The goal of this thesis is to explore how statistical models for point cloud data can facilitate, accelerate, and unify many common tasks in the area of 3D perception. Our proposed work presents a unifying architecture for tasks such as geometric segmentation, registration, 3D visualization, and real-time mapping.

In the case of 3D range sensing, modern-day sensors generate massive quantities of point cloud data that strain available computational resources. Additionally, in complex systems, common perceptual processes often have completely separate data processing pipelines that deal with low-level processing in ad hoc ways. Our view is that low-level 3D point processing should be unified under a common architectural paradigm. To accomplish this, tractable data structures and models need to be established that can facilitate higher-order perceptual operations by taking care of the processing elements common to each. Furthermore, these models should be deployable on low-power embedded systems while retaining real-time performance.

We have established a family of compact generative models for point cloud data based on hierarchical Gaussian Mixture Models. Using recursive, data-parallel variants of the Expectation Maximization algorithm, we have been able to construct high-fidelity statistical and hierarchical models, demonstrating state-of-the-art performance for geometric modeling, rigid registration, dynamic occupancy map creation, and 3D visualization. We have successfully deployed these algorithms on both low-power FPGAs and embedded GPUs.
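For readers unfamiliar with the underlying machinery, the sketch below fits a small Gaussian mixture to a 3D point cloud with a plain EM loop. It is a simplified, CPU-only stand-in for the recursive, data-parallel and hierarchical variants described above, written in Python with NumPy; all function and variable names are illustrative and not taken from the thesis.

import numpy as np

def fit_gmm_em(points, k=8, iters=50, seed=0):
    """Fit a k-component Gaussian mixture to an (N, 3) point cloud with EM.

    A simplified, CPU-only stand-in for the recursive, data-parallel EM
    variants described in the abstract; all names here are illustrative.
    """
    rng = np.random.default_rng(seed)
    n, d = points.shape
    # Initialize means from randomly chosen points, identity covariances,
    # and uniform mixing weights.
    means = points[rng.choice(n, size=k, replace=False)]
    covs = np.tile(np.eye(d), (k, 1, 1))
    weights = np.full(k, 1.0 / k)

    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = np.empty((n, k))
        for j in range(k):
            diff = points - means[j]
            inv = np.linalg.inv(covs[j])
            mahal = np.einsum('ni,ij,nj->n', diff, inv, diff)
            norm = np.sqrt(((2.0 * np.pi) ** d) * np.linalg.det(covs[j]))
            resp[:, j] = weights[j] * np.exp(-0.5 * mahal) / norm
        resp /= np.maximum(resp.sum(axis=1, keepdims=True), 1e-300)

        # M-step: re-estimate weights, means, and covariances.
        nk = resp.sum(axis=0) + 1e-12
        weights = nk / n
        means = (resp.T @ points) / nk[:, None]
        for j in range(k):
            diff = points - means[j]
            covs[j] = (resp[:, j, None] * diff).T @ diff / nk[j]
            covs[j] += 1e-6 * np.eye(d)  # regularize for numerical stability

    return weights, means, covs

if __name__ == "__main__":
    # Compress 10,000 synthetic points into an 8-component mixture.
    pts = np.random.default_rng(1).normal(size=(10000, 3))
    w, mu, sigma = fit_gmm_em(pts, k=8)
    print(w.shape, mu.shape, sigma.shape)  # (8,) (8, 3) (8, 3, 3)

A hierarchical variant would recursively re-run such a fit on the points assigned to each component, producing a tree of progressively finer mixtures, while the data-parallel versions mentioned above would parallelize the per-point E-step and per-component M-step on the GPU.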

We propose to continue this work by integrating our registration and hierarchical modeling techniques together into one system, thereby extending its applicability as a complete mapping subsystem for a continuously moving mobile robot using LIDAR. Specifically, we need to augment our rigid registration method for continuous range data acquisition, develop ways to sequentially fuse together previously established models, and encode known sensor intrinsics directly into the construction and registration of the models. The resulting system will serve as a computationally efficient and novel “black-box” approach for low-level 3D perception where basic processing elements are handled by a single family of data-parallel generative models. To demonstrate our results, we will provide a comprehensive evaluation of the system’s performance both on established datasets and on a real robot equipped with an embedded GPU and LIDAR.
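To make the registration piece more concrete, the outline below aligns an incoming scan to a fixed Gaussian mixture by alternating soft assignment of points to mixture components with a rigid update computed by a standard Kabsch/SVD step. This is a generic GMM-based registration sketch under simplifying assumptions (equal per-point weights, no covariance annealing, no sensor intrinsics), not the specific formulation proposed in the abstract.

import numpy as np

def register_to_gmm(points, means, covs, weights, iters=30):
    """Rigidly align an (N, 3) scan to a fixed Gaussian mixture model.

    Generic EM-style outline: soft-assign points to mixture components,
    then solve a rigid (rotation + translation) update with a Kabsch/SVD
    step. Illustrative only; not the specific formulation of the thesis.
    """
    n, d = points.shape
    k = means.shape[0]
    R, t = np.eye(d), np.zeros(d)

    for _ in range(iters):
        x = points @ R.T + t  # current estimate of the scan in the model frame

        # E-step: responsibility of each mixture component for each point.
        resp = np.empty((n, k))
        for j in range(k):
            diff = x - means[j]
            inv = np.linalg.inv(covs[j])
            mahal = np.einsum('ni,ij,nj->n', diff, inv, diff)
            norm = np.sqrt(((2.0 * np.pi) ** d) * np.linalg.det(covs[j]))
            resp[:, j] = weights[j] * np.exp(-0.5 * mahal) / norm
        resp /= np.maximum(resp.sum(axis=1, keepdims=True), 1e-300)

        # M-step: each point's virtual target is its responsibility-weighted
        # combination of component means; solve the rigid transform mapping
        # the original points onto these targets (Kabsch algorithm).
        targets = resp @ means
        src_c = points - points.mean(axis=0)
        dst_c = targets - targets.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = (U @ S @ Vt).T
        t = targets.mean(axis=0) - points.mean(axis=0) @ R.T

    return R, t

A full mapping pipeline of the kind proposed above would additionally weight points by their total responsibility, account for sensor noise and motion during acquisition, and fuse newly registered scans back into the mixture; none of that is attempted in this sketch.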

Committee: Alonzo Kelly, Chair

Martial Hebert

Srinivasa Narasimhan

Jan Kautz, NVIDIA