
Dual Structured Light 3D Using a 1D Sensor

J. Wang, A. Sankaranarayanan, M. Gupta, and S. G. Narasimhan
Conference Paper, Proceedings of the European Conference on Computer Vision (ECCV), pp. 383–398, October 2016

Abstract

Structured light-based 3D reconstruction methods often illuminate a scene using patterns with 1D translational symmetry, such as stripes, Gray codes, or sinusoidal phase-shifting patterns. These patterns are decoded using images captured by a traditional 2D sensor. In this work, we present a novel structured light approach that uses a 1D sensor with simple optics and no moving parts to reconstruct scenes with the same acquisition speed as a traditional 2D sensor. While traditional methods compute correspondences between columns of the projector and 2D camera pixels, our ‘dual’ approach computes correspondences between columns of the 1D camera and 2D projector pixels. The use of a 1D sensor provides significant advantages in many applications that operate in the short-wave infrared range (0.9–2.5 microns) or require dynamic vision sensors (DVS), where a 2D sensor is prohibitively expensive and difficult to manufacture. We analyze the proposed design, explore hardware alternatives, and discuss the performance in the presence of ambient light and global illumination.
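
The geometric consequence of the correspondence swap described above can be illustrated with a small triangulation sketch. The Python snippet below is a minimal, hypothetical illustration and not the authors' implementation: it assumes the 2D projector is a calibrated pinhole whose pixel back-projects to a 3D ray, and that each column of the 1D sensor has been calibrated to a plane in space, so a scene point is recovered as the ray-plane intersection. All function names, intrinsics, and numeric values are illustrative assumptions.

# Minimal ray-plane triangulation sketch for a projector-pixel to 1D-camera-column
# correspondence. Hypothetical example, not the authors' code. Assumptions:
# the projector is a pinhole with intrinsics K_proj and center at the origin;
# each 1D-sensor column has a calibrated plane n . X = d in the same frame.

import numpy as np

def pixel_to_ray(K, u, v):
    """Back-project pixel (u, v) to a unit ray direction in the device frame."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)

def triangulate_ray_plane(origin, direction, plane_n, plane_d):
    """Intersect the ray origin + t*direction with the plane n . X = d."""
    denom = plane_n @ direction
    if abs(denom) < 1e-9:          # ray (nearly) parallel to the plane
        return None
    t = (plane_d - plane_n @ origin) / denom
    return origin + t * direction if t > 0 else None

# Hypothetical calibration, purely for illustration.
K_proj = np.array([[1400.0, 0.0, 512.0],
                   [0.0, 1400.0, 384.0],
                   [0.0, 0.0, 1.0]])
proj_center = np.zeros(3)

# Suppose decoding tells us projector pixel (700, 200) corresponds to
# 1D-sensor column 310, whose calibrated plane is (n, d) below.
column_plane_n = np.array([0.9801, 0.0, -0.1987])   # unit normal of that column's plane
column_plane_d = 0.05                               # plane offset (meters)

ray_dir = pixel_to_ray(K_proj, 700, 200)
point_3d = triangulate_ray_plane(proj_center, ray_dir, column_plane_n, column_plane_d)
print(point_3d)

In the traditional arrangement the same intersection is computed with the roles reversed: the 2D camera pixel supplies the ray and the decoded projector column supplies the plane.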

BibTeX

@conference{Wang-2016-120311,
author = {J. Wang and A. Sankaranarayanan and M. Gupta and S. G. Narasimhan},
title = {Dual Structured Light 3D Using a 1D Sensor},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2016},
month = {October},
pages = {383--398},
}