Diffraction Line Imaging
Abstract
We present a novel computational imaging principle that combines diffractive optics with line (1D) sensing. When light passes through a diffraction grating, it disperses as a function of wavelength. We exploit this principle to recover 2D and even 3D positions from only line images. We derive a detailed image formation model and a learning-based algorithm for 2D position estimation. We show several extensions of our system that improve the accuracy of 2D positioning and expand the effective field of view. We demonstrate our approach in two applications: (a) fast passive imaging of sparse light sources, such as street lamps and vehicle headlights at night, as well as LED-based motion capture, and (b) structured light 3D scanning with line illumination and line sensing. Line imaging has several advantages over 2D sensors: high frame rate, high dynamic range, high fill factor with additional on-chip computation, low cost beyond the visible spectrum, and high energy efficiency when used with line illumination. Thus, our system achieves high-speed, high-accuracy 2D positioning of light sources and 3D scanning of scenes.
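To make the sensing principle concrete, the sketch below simulates a heavily simplified version of the idea: a single broadband point source at 2D position (u, v) is dispersed along one axis by a grating, and a 1D line sensor reads back the pixel index and the wavelength that lands on the line, from which (u, v) can be inverted. This is a minimal toy model written in Python/NumPy for illustration only, assuming an idealized linear dispersion and a noiseless hyperspectral line sensor; the constants C, LAMBDA_REF, and the sensor geometry are invented for the example and do not come from the paper, nor does the code reproduce the paper's image formation model or learning-based estimator.

import numpy as np

# Illustrative constants (assumptions, not values from the paper).
C = 2.0              # dispersion: pixels of shift per nm of wavelength offset
LAMBDA_REF = 550.0   # reference wavelength (nm) that is left undispersed
LAMBDAS = np.arange(400.0, 701.0, 1.0)  # broadband source spectrum (nm)
SENSOR_PIXELS = 512                     # length of the 1D line sensor

def render_line_image(u, v):
    """Simulate the hyperspectral line image of one broadband point source."""
    # The grating shifts the source to v + C * (lambda - LAMBDA_REF); the line
    # sensor lies along v = 0 and only records wavelengths whose dispersed
    # streak crosses that line.
    line = np.zeros((SENSOR_PIXELS, LAMBDAS.size))
    dispersed_v = v + C * (LAMBDAS - LAMBDA_REF)
    hits = np.abs(dispersed_v) <= C / 2.0
    line[int(round(u)), hits] = 1.0
    return line

def recover_position(line):
    """Invert the toy model: pixel index gives u, intersecting wavelength gives v."""
    pixel, lam_idx = np.unravel_index(np.argmax(line), line.shape)
    lam_star = LAMBDAS[lam_idx]
    v = -C * (lam_star - LAMBDA_REF)  # undo the dispersion at the line v = 0
    return float(pixel), v

if __name__ == "__main__":
    true_u, true_v = 137.0, -42.0
    est_u, est_v = recover_position(render_line_image(true_u, true_v))
    print(f"true ({true_u}, {true_v})  recovered ({est_u}, {est_v})")

In this toy model the pixel axis of the line sensor directly encodes one coordinate and the intersecting wavelength encodes the other, which is the intuition behind recovering 2D positions from a single line image.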
We thank A. Sankaranarayanan and V. Saragadam for help with building the hardware prototype and S. Panev and F. Moreno for neural network-related advice. This work was supported in part by NSF Grants IIS-1900821 and CCF-1730147 and DARPA REVEAL Contract HR0011-16-C-0025.
BibTeX
@conference{Sheinin-2020-123957,
  author = {Mark Sheinin and Dinesh N. Reddy and Matthew O'Toole and Srinivasa Narasimhan},
  title = {Diffraction Line Imaging},
  booktitle = {Proceedings of (ECCV) European Conference on Computer Vision},
  year = {2020},
  month = {August},
  pages = {1 - 16},
}