Light Sheet Depth Imaging - Robotics Institute Carnegie Mellon University

PhD Thesis Defense

Joseph Bartels
Robotics Institute, Carnegie Mellon University
Wednesday, April 24
1:30 pm to 2:30 pm
NSH 3305
Light Sheet Depth Imaging

Abstract:
Once confined to industrial manufacturing facilities and research labs, robots are increasingly entering everyday life. As specialized robots are developed for tasks such as autonomous driving, package delivery, and aerial videography, there is a growing need for affordable depth sensing technology.

Robots use sensors like scanning LIDAR, depth cameras, and passive stereo cameras to navigate the world. Scanning LIDAR is prevalent and deemed necessary because it offers long-range sensing and is robust to bright ambient light, but it is expensive and captures sparse measurements because it point-samples the scene. Consumer depth cameras, on the other hand, are inexpensive and produce dense, high-rate depth measurements, but because they flood the entire scene with light at once, they are susceptible to global light transport effects such as interreflections and scattering, and they fail in bright ambient light. Passive stereo cameras offer high-resolution sensing, but they require substantial processing and, since they emit no light onto the scene, fail in regions of low texture.

The goal of this thesis is to develop active illumination depth cameras and sensing methodologies that provide the robustness of scanning LIDAR with the speed, sampling density, and economy of consumer depth cameras. Rather than sampling the entire scene at once like consumer depth cameras, or point by point like LIDAR, the key approach pairs sheets of projected light with line imaging to rapidly sample the scene along a single line at a time.
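
To make the geometry concrete, the sketch below shows how a single sheet of light yields depth along one line: every pixel that sees the sheet defines a viewing ray, and intersecting that ray with the known sheet plane recovers a 3D point. This is a minimal illustration rather than the thesis implementation; the camera intrinsics, emitter position, and sweep angle are assumed values.

    # Minimal light sheet triangulation sketch (illustrative values only).
    import numpy as np

    K = np.array([[800.0,   0.0, 320.0],   # assumed pinhole intrinsics (pixels)
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def pixel_ray(u, v):
        """Unit viewing ray through pixel (u, v); camera at the origin."""
        d = np.linalg.solve(K, np.array([u, v, 1.0]))
        return d / np.linalg.norm(d)

    def triangulate(u, v, plane_n, plane_d):
        """Intersect the pixel's ray with the light sheet plane n.x + d = 0."""
        ray = pixel_ray(u, v)
        denom = plane_n @ ray
        if abs(denom) < 1e-9:        # ray parallel to the sheet: no intersection
            return None
        t = -plane_d / denom
        return t * ray if t > 0 else None   # point must lie in front of the camera

    # Example: a vertical sheet emitted 0.2 m to the camera's right and
    # swept about the vertical axis; here it is tilted 2 degrees.
    theta = np.deg2rad(2.0)
    n = np.array([np.cos(theta), 0.0, -np.sin(theta)])   # sheet plane normal
    d = -n @ np.array([0.2, 0.0, 0.0])                   # plane passes through emitter
    print(triangulate(400.0, 240.0, n, d))               # one 3D point on the lit line

Sweeping the sheet and repeating the intersection for each lit pixel samples the scene one line at a time, which is what allows the camera to concentrate its exposure, and the projector its light, on a single scene line.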

This approach has yielded four contributions. The first is a light sheet depth imaging device that applies the concept of epipolar imaging to continuous-wave time-of-flight cameras; the resulting device senses up to 15 m in bright sunlight and is robust to global illumination and camera motion. The second is a second-generation camera that extends the sensing range to 50 m. The third uses the projected sheets of light and imaging to triangulate and sense along a 3D line; by sweeping this line through the volume with galvomirrors, a programmable light curtain is formed that detects objects along its surface at five frames per second. Finally, a custom prototype enables rapid imaging of programmable light curtains at 60 frames per second by using the rolling shutter of the camera, rather than a galvomirror, to steer the imaging plane. The speed and selectivity of this device enable agile depth sensing, in which scenes are adaptively sampled based on detected regions of interest. This samples the scene only where necessary, rather than over the entire volume as traditional depth sensors do, and reduces unnecessary post-processing.
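
As a rough illustration of the light curtain geometry described above, the sketch below computes a control schedule for a user-specified curtain: for each point on the curtain profile, the camera viewing angle and the projector (galvo) angle that must coincide on that point. It is a simplified 2D sketch with assumed values for the baseline and profile, not the thesis implementation.

    # Scheduling a programmable triangulation light curtain (2D sketch,
    # illustrative values only).  The camera sits at the origin and the
    # projector at x = BASELINE, both looking down the +z axis.
    import numpy as np

    BASELINE = 0.3   # assumed camera-to-projector separation (meters)

    def schedule(profile_xz):
        """Camera and galvo angles (radians from forward) per curtain point."""
        plan = []
        for x, z in profile_xz:
            cam_angle = np.arctan2(x, z)               # where the camera must look
            proj_angle = np.arctan2(x - BASELINE, z)   # where the sheet must aim
            plan.append((cam_angle, proj_angle))
        return plan

    # Example: a flat safety curtain 2 m ahead, spanning +/- 1 m.
    profile = [(x, 2.0) for x in np.linspace(-1.0, 1.0, 5)]
    for (cam, proj), (x, z) in zip(schedule(profile), profile):
        print(f"point ({x:+.2f}, {z:.2f}) m -> camera {np.degrees(cam):+6.2f} deg, "
              f"galvo {np.degrees(proj):+6.2f} deg")

Because the camera's line of sight and the projected sheet meet only on the curtain surface, a lit pixel directly signals an object crossing the curtain; no depth map needs to be computed or post-processed.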

The research developed in this thesis contributes methods and hardware for high-resolution depth imaging that works in challenging conditions, provides computationally inexpensive methods for agile depth sensing, and offers economics that could enable next-generation, wide-scale applications in mobile robotics, human-robot interaction, and industrial manufacturing.


Thesis Committee Members:
William Red Whittaker, Co-Chair
Srinivasa Narasimhan, Co-Chair
Simon Lucey
Matthew Johnson-Roberson, University of Michigan