
Agile Depth Sensing using Triangulation Light Curtains

J. Bartels, J. Wang, W. Whittaker, and S. G. Narasimhan
Conference Paper, Proceedings of the International Conference on Computer Vision (ICCV), pp. 7899–7907, October 2019

Abstract

Depth sensors like LIDARs and the Kinect use a fixed depth acquisition strategy that is independent of the scene of interest. Due to the low spatial and temporal resolution of these sensors, this strategy can undersample parts of the scene that are important (small or fast-moving objects) or oversample areas that are not informative for the task at hand (e.g., a fixed planar wall). In this paper, we present an approach and system to dynamically and adaptively sample the depths of a scene using the principle of triangulation light curtains. The approach directly detects the presence or absence of objects along specified 3D lines. These 3D lines can be sampled sparsely, non-uniformly, or densely only at specified regions. The depth sampling can be varied in real time, enabling quick object discovery or detailed exploration of areas of interest. These results are achieved with a novel prototype light curtain system based on a 2D rolling-shutter camera, which offers higher light efficiency, a longer working range, and faster adaptation than previous work, making it broadly useful for autonomous navigation and exploration.
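
The principle behind triangulation light curtains is that each 3D line to be probed is the intersection of a camera pixel plane (here, a rolling-shutter row's plane of sight) with the laser light sheet that is active while that row is exposed. The sketch below is not the authors' implementation; it only illustrates the basic 2D (top-down) triangulation geometry of planning a curtain. The function name plan_curtain and all numeric parameters (baseline, focal length, principal point, curtain spacing) are illustrative assumptions.

import math

def plan_curtain(points_xz, baseline_m, focal_px, center_px):
    """Sketch of 2D (top-down) triangulation-light-curtain planning.

    For each desired curtain point (x, z) in metres, given in the camera
    frame with the optical axis along +z, compute:
      * the camera pixel coordinate whose line of sight passes through it,
      * the laser steering angle that makes the light sheet cross that
        line of sight at the same point.
    The real device probes a 3D line (camera-pixel plane intersected with
    the laser sheet); this sketch keeps only the planar geometry.
    """
    plan = []
    for x, z in points_xz:
        if z <= 0:
            raise ValueError("curtain points must lie in front of the camera (z > 0)")
        # Pinhole projection of (x, z) onto the sensor row/column axis.
        pixel = focal_px * (x / z) + center_px
        # Laser sits a baseline away along x; steer its sheet through the
        # same point. Angle is measured from the laser's forward (+z)
        # direction, in degrees.
        laser_angle = math.degrees(math.atan2(x - baseline_m, z))
        plan.append((pixel, laser_angle))
    return plan

if __name__ == "__main__":
    # Example: a planar "virtual wall" curtain 5 m ahead, sampled every
    # 0.5 m, analogous to sweeping a curtain for quick object discovery.
    curtain = [(i * 0.5, 5.0) for i in range(-4, 5)]
    for px, ang in plan_curtain(curtain, baseline_m=0.2, focal_px=900.0, center_px=640.0):
        print(f"pixel {px:7.1f}  laser angle {ang:6.2f} deg")

Running the example prints one (pixel, laser angle) pair per curtain point; in an actual system these pairs would be turned into a synchronized schedule of rolling-shutter exposures and galvo commands, and the curtain shape could be changed frame to frame for adaptive sampling.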

BibTeX

@conference{Bartels-2019-120297,
  author    = {J. Bartels and J. Wang and W. Whittaker and S. G. Narasimhan},
  title     = {Agile Depth Sensing using Triangulation Light Curtains},
  booktitle = {Proceedings of the International Conference on Computer Vision (ICCV)},
  year      = {2019},
  month     = {October},
  pages     = {7899--7907},
}