Non-line-of-sight Imaging with Partial Occluders and Surface Normals
Abstract
Imaging objects obscured by occluders is a significant challenge for many applications. A camera that could “see around corners” could help improve navigation and mapping capabilities of autonomous vehicles or make search and rescue missions more effective. Time-resolved single-photon imaging systems have recently been demonstrated to record optical information of a scene that can lead to an estimation of the shape and reflectance of objects hidden from the line of sight of a camera. However, existing non-line-of-sight (NLOS) reconstruction algorithms have been constrained in the types of light transport effects they model for the hidden scene parts. We introduce a factored NLOS light transport representation that accounts for partial occlusions and surface normals. Based on this model, we develop a factorization approach for inverse time-resolved light transport and demonstrate high-fidelity NLOS reconstructions for challenging scenes both in simulation and with an experimental NLOS imaging system.
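To make the idea of a factored inverse light transport problem concrete, the sketch below sets up a toy version in NumPy: transient measurements are modeled as a linear operator applied to the elementwise product of a hidden-scene albedo and a visibility factor, and the two factors are recovered by alternating least-squares with simple projections. The matrix A, the variable names (rho, v, tau), and the alternating scheme are illustrative assumptions for this sketch only, not the reconstruction algorithm described in the paper.

```python
import numpy as np

# Toy factored forward model: tau = A @ (rho * v), where A stands in for the
# time-resolved light transport operator, rho is per-voxel albedo, and v is a
# per-voxel visibility factor in [0, 1] modeling partial occlusion.
rng = np.random.default_rng(0)
n_meas, n_vox = 200, 50
A = rng.random((n_meas, n_vox))
rho_true = rng.random(n_vox)
v_true = (rng.random(n_vox) > 0.3).astype(float)   # partial-occlusion mask
tau = A @ (rho_true * v_true)                       # simulated transient data

# Alternating nonnegative least-squares: fix one factor, solve for the other,
# then project onto the feasible sets (rho >= 0, 0 <= v <= 1).
rho = np.ones(n_vox)
v = np.ones(n_vox)
for _ in range(50):
    # Update albedo with visibility fixed: tau ~= (A * v) @ rho
    rho, *_ = np.linalg.lstsq(A * v, tau, rcond=None)
    rho = np.clip(rho, 0, None)
    # Update visibility with albedo fixed: tau ~= (A * rho) @ v
    v, *_ = np.linalg.lstsq(A * rho, tau, rcond=None)
    v = np.clip(v, 0, 1)

print("relative residual:",
      np.linalg.norm(A @ (rho * v) - tau) / np.linalg.norm(tau))
```

This only illustrates the structure of a bilinear inverse problem; the paper additionally models surface normals and uses measurements from a real single-photon imaging system.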
The authors thank James Harris for fruitful discussions. D.B.L. is supported by a Stanford Graduate Fellowship in Science and Engineering. G.W. is supported by a Terman Faculty Fellowship and a Sloan Fellowship. Additional funding was generously provided by the National Science Foundation (CAREER Award IIS 1553333), the DARPA REVEAL program, the ARO (Grant W911NF-19-1-0120), the Center for Automotive Research at Stanford (CARS), and by the KAUST Office of Sponsored Research through the Visual Computing Center CCF grant.
BibTeX
@article{Heide-2019-119937,
  author  = {Felix Heide and Matthew O'Toole and Kai Zang and David B. Lindell and Steven Diamond and Gordon Wetzstein},
  title   = {Non-line-of-sight Imaging with Partial Occluders and Surface Normals},
  journal = {ACM Transactions on Graphics (TOG)},
  year    = {2019},
  month   = {May},
  volume  = {38},
  number  = {3},
}