Carnegie Mellon University
1:00 pm to 2:00 pm
Abstract:
Traditionally, computer vision systems and algorithms, such as stereo vision and shape from shading, have been developed to mimic human vision. As a consequence, many of these systems operate under constraints that we take for granted in human vision. An example of such a constraint is that the scene of interest must be directly visible. This is a reasonable assumption, given that humans visually sense their surroundings by looking at them directly. We refer to this as “line-of-sight (LOS) imaging”. Now imagine having access to imaging systems that can see things that are not directly visible to them, such as seeing what is on the other side of a corridor without actually being there. We call this type of sensing “non-line-of-sight (NLOS) imaging”. Because of the limitations of the human eye as a camera, NLOS imaging is a type of sensing well outside the capabilities of the human visual system.
In this talk, I will first give an overview of NLOS imaging. Next, I will describe how we capture transient measurements, which provide not just intensity information but also temporal information that is useful for high-resolution reconstructions. I will then present our proposed framework for NLOS shape reconstruction, based on a theory we call Fermat paths, named after Fermat’s principle, a general physical property that forms the foundation of geometric optics. Based on this theory, I will present an algorithm, called Fermat Flow, for reconstructing the shape of NLOS objects.
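To give a flavor of the role transients play in this framework, the key observation of the Fermat path theory is that Fermat pathlengths show up as discontinuities of the transient, so they can be located without modeling intensities. The following is a minimal toy sketch of my own (not the actual Fermat Flow implementation), assuming a synthetic 1D transient, a 4 ps bin width, and a single hidden scatterer; all signal parameters are made up for illustration.

```python
import numpy as np

# Toy sketch: locate a Fermat pathlength as a discontinuity of a
# simulated transient. This is an illustrative example only, not the
# authors' Fermat Flow code; every numeric value below is assumed.

c = 3e8                      # speed of light [m/s]
dt = 4e-12                   # assumed temporal bin width: 4 ps
t = np.arange(2048) * dt     # time axis of the transient histogram

# Synthetic transient: smooth radiometric falloff that switches on
# sharply at the (hypothetical) Fermat arrival time tau0.
tau0 = 1.5e-9                # assumed arrival time of the Fermat path
smooth = np.exp(-t / 3e-9)
transient = np.where(t >= tau0, smooth, 0.0)
transient += 0.01 * np.random.default_rng(0).standard_normal(t.size)

# Detect the discontinuity as the largest jump between adjacent bins.
jump = np.abs(np.diff(transient))
idx = int(np.argmax(jump)) + 1
estimated_pathlength = c * t[idx]

print(f"estimated Fermat pathlength ≈ {estimated_pathlength:.3f} m "
      f"(ground truth {c * tau0:.3f} m)")
```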
Our method is purely geometric, in contrast with previous NLOS imaging techniques that perform reconstruction by solving inverse radiometric problems. This provides several advantages: first, it enables reconstructing very fine details of NLOS objects; second, it allows handling NLOS objects made of a wide range of materials, including completely specular and translucent objects, which are considered hard to reconstruct even when they are directly visible. We performed experiments with two time-of-flight imaging systems, one based on single-photon avalanche diodes (SPADs) with a temporal resolution of a few picoseconds, and another based on optical coherence tomography with a temporal resolution of a few femtoseconds. We used measurements from these systems to reconstruct hidden table-top objects with an accuracy of a few millimeters, and a US quarter with an accuracy of a few micrometers. Our method made it possible, for the first time, to recover the shapes of arbitrary NLOS objects at the same accuracy as if the objects were directly visible to the camera.
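As a rough sanity check on why those temporal resolutions translate into those spatial accuracies, a time resolution of dt limits pathlength resolution to roughly c * dt. The specific 4 ps and 10 fs values below are placeholder assumptions standing in for “a few picoseconds” and “a few femtoseconds”; the systems named are only labels for this back-of-the-envelope calculation.

```python
# Back-of-the-envelope check (illustrative numbers, not measured values):
# a temporal resolution of dt limits pathlength resolution to about c * dt,
# which is why picosecond-scale SPAD timing gives millimeter-scale
# reconstructions and femtosecond-scale gating gives micrometer-scale ones.

c = 3e8  # speed of light [m/s]

for name, dt in [("SPAD, ~4 ps", 4e-12), ("interferometric gating, ~10 fs", 10e-15)]:
    print(f"{name}: c * dt ≈ {c * dt * 1e6:.1f} micrometers")
# prints ~1200 micrometers (about a millimeter) and ~3 micrometers
```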
Committee:
Srinivasa Narasimhan, Co-advisor
Ioannis Gkioulekas, Co-advisor
Martial Hebert
Chao Liu