
Stacked Local Predictors for Large-Scale Point Cloud Classification

B. Eckart and A. Kelly
Workshop Paper, CVPR '13 SUNw: Scene Understanding Workshop, June, 2013

Abstract

Many modern 3D range sensors, such as the Velodyne or Kinect, generate on the order of one million data points per second. For real-time semantic scene understanding, care must be taken to design algorithms that not only run fast enough, but also make the best use of the large amounts of data available. In this paper, we propose a novel point cloud classification scheme that 1) can be trained in a Map-Reduce framework and 2) allows real-time inference on commodity hardware, making it suitable for training on large amounts of labeled point cloud data and for fast classification of new data at run-time. The algorithm works by segmenting feature space using random projections (Locality Sensitive Hashing) and training a local classifier in each resulting region. A separate contextual classifier is then run on the predictions of neighbors in Euclidean space as a meta-learning procedure (stacking). The end result is a fast algorithm that outperforms the current state of the art on a million-point benchmark dataset.
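
The pipeline described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the local and contextual classifiers are stood in by scikit-learn logistic regressions, the hashing is plain sign-of-random-projection LSH, and all function names, the number of hyperplanes, and the choice of k neighbors are hypothetical. Class labels are assumed to be integers 0..n_classes-1.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)

    def lsh_codes(feats, planes):
        # Sign of each random projection gives one bit; pack the bits into an
        # integer bucket id per point (the feature-space cell it falls into).
        bits = (feats @ planes.T > 0).astype(np.int64)
        return bits @ (1 << np.arange(planes.shape[0], dtype=np.int64))

    def train_local_classifiers(feats, labels, planes):
        # One independent classifier per hash bucket. Each bucket can be fit on
        # a separate worker, which is what makes the scheme Map-Reduce friendly.
        codes = lsh_codes(feats, planes)
        models = {}
        for c in np.unique(codes):
            idx = codes == c
            if len(np.unique(labels[idx])) > 1:   # need at least two classes
                models[c] = LogisticRegression(max_iter=200).fit(feats[idx], labels[idx])
        return models

    def local_predict(feats, planes, models, n_classes):
        # Per-point class probabilities from the bucket's local classifier;
        # buckets with no trained model fall back to a uniform distribution.
        codes = lsh_codes(feats, planes)
        probs = np.full((len(feats), n_classes), 1.0 / n_classes)
        for c, model in models.items():
            rows = np.flatnonzero(codes == c)
            probs[np.ix_(rows, model.classes_)] = model.predict_proba(feats[rows])
        return probs

    def stack_contextual(xyz, labels, base_probs, k=8):
        # Stacking step: augment each point's base prediction with the averaged
        # predictions of its k nearest neighbors in Euclidean (xyz) space, then
        # train a second-level "contextual" classifier on the stacked features.
        nn = NearestNeighbors(n_neighbors=k).fit(xyz)
        _, neighbors = nn.kneighbors(xyz)
        context = base_probs[neighbors].mean(axis=1)
        stacked = np.hstack([base_probs, context])
        return LogisticRegression(max_iter=200).fit(stacked, labels)

    # Example use (shapes and parameters are illustrative):
    # planes = rng.standard_normal((8, n_dims))
    # models = train_local_classifiers(train_feats, train_labels, planes)
    # base   = local_predict(train_feats, planes, models, n_classes)
    # meta   = stack_contextual(train_xyz, train_labels, base, k=8)

In this sketch, training the per-bucket classifiers is embarrassingly parallel (the "map" step), while the stacking step only requires each point's base predictions plus those of its spatial neighbors, which keeps run-time inference cheap.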

BibTeX

@workshop{Eckart-2013-120746,
author = {B. Eckart and A. Kelly},
title = {Stacked Local Predictors for Large-Scale Point Cloud Classification},
booktitle = {Proceedings of CVPR '13 SUNw: Scene Understanding Workshop},
year = {2013},
month = {June},
}