PhD Thesis Defense

Mohit Gupta, Carnegie Mellon University
Friday, October 22
10:00 am to 12:00 pm
Scene Recovery and Rendering Techniques Under Global Light Transport

Event Location: NSH 1305

Abstract: Light interacts with the world around us in complex ways. These interactions can broadly be classified as direct illumination, when a scene point is illuminated directly by the light source, and indirect illumination, when a scene point receives light that has been reflected, refracted, or scattered off other scene elements. Several computer vision techniques make the unrealistic assumption that scenes receive only direct illumination. In many real-world settings, such as indoor scenes, underground caves, underwater and foggy environments, and objects made of translucent materials like human tissue, fruits, and flowers, the indirect illumination is significant, often exceeding the direct illumination. In these scenarios, vision techniques that do not account for indirect illumination produce strong, systematic errors in the recovered scene properties.
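
In the shorthand common in the light-transport literature (a general model, stated here for context rather than taken from the abstract), the radiance L observed at a scene point splits into the two components above:

    L = L_d + L_g

where L_d is the direct component (light arriving straight from the source) and L_g is the global, or indirect, component (interreflections, subsurface scattering, and volumetric scattering). Techniques that assume L ≈ L_d misattribute all of L_g, which is the source of the systematic errors described above.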


This assumption has been necessary because computational models of indirect illumination (also called global illumination, or global light transport) grow rapidly in complexity, even for relatively simple scenes. The goal of this thesis is to build simple, tractable models of global light transport that can be used for a variety of scene recovery and rendering applications. The thesis pursues three main research directions. The first is recovering scene geometry and appearance despite the presence of global light transport. We show that the effects of global light transport can be removed for two classes of shape-recovery techniques: structured-light triangulation and shape from projector defocus. We demonstrate our approaches on scenes with complex shapes and optically challenging materials. We then investigate recovering scene appearance under common poor-visibility conditions, such as murky water, bad weather, dust, and smoke. Computer vision systems deployed in such conditions suffer from scattering and attenuation of light. We show that by controlling the incident illumination, the loss of image contrast due to scattering can be reduced. Our framework can improve visibility in a variety of outdoor applications, such as designing headlights for terrestrial and underwater vehicles.
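
For context, here is a minimal sketch of conventional structured-light triangulation, the kind of pipeline this first direction makes robust to global light transport. This is illustrative Python, not code from the thesis, and the geometry and numbers are hypothetical:

    import numpy as np

    def triangulate(cam_ray, plane_point, plane_normal):
        """Intersect a camera viewing ray (through the origin) with the
        projector light plane decoded from a stripe pattern.

        Conventional decoding assumes each pixel receives only direct
        illumination; interreflections and subsurface scattering corrupt
        the decoded stripe index, which is the failure mode addressed here.
        """
        t = (plane_normal @ plane_point) / (plane_normal @ cam_ray)
        return t * cam_ray  # recovered 3D point along the viewing ray

    # Hypothetical pixel ray and a decoded light plane at x = 0.1 m.
    ray = np.array([0.05, 0.0, 1.0])
    ray /= np.linalg.norm(ray)
    p = triangulate(ray, np.array([0.1, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
    print(p)  # approximately [0.1, 0.0, 2.0]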


Global light transport is not always a nuisance. In numerous scenarios, measuring global light transport can actually provide useful information about the scene. The second research direction is recovering material and scene properties by measuring global light transport. We present a simple device and technique for robustly measuring the volumetric scattering properties of a broad class of participating media. We have constructed a dataset of these scattering properties, which the computer graphics community can use immediately to render realistic images. Next, we model the effects of defocused illumination on the measurement of global light transport in general scenes. Modeling defocus is important because projectors, which have limited depth of field, are increasingly used as programmable illumination in vision applications. With our techniques, we can separate the direct and global components of light transport for scenes whose depth ranges are significantly greater than the depth of field of projectors (less than 0.3 m).
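
The direct/global separation mentioned here builds on high-frequency illumination. Below is a minimal sketch of the classic max/min estimator (Nayar et al., SIGGRAPH 2006), assuming images captured under shifted patterns in which half the projector pixels are lit; the image stack itself is hypothetical:

    import numpy as np

    def separate_direct_global(images):
        """Per-pixel direct/global split from images captured under
        shifted high-frequency patterns with half the source pixels on.

        Under such patterns, max_i(I_i) ~ L_d + 0.5 * L_g and
        min_i(I_i) ~ 0.5 * L_g, so L_d = max - min and L_g = 2 * min.
        Projector defocus blurs the patterns and biases these estimates,
        which is what the defocus model described above accounts for.
        """
        stack = np.stack(images)                 # (N, H, W) image stack
        i_max, i_min = stack.max(0), stack.min(0)
        return i_max - i_min, 2.0 * i_min        # direct, global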


The third direction in this thesis is fast rendering of dynamic and non-homogeneous volumetric media, such as fog, smoke, and dust. Rendering such media requires simulating the fluid properties (density and velocity fields) and rendering volumetric scattering effects. Unfortunately, fluid simulation and volumetric rendering have traditionally been treated as two disparate problems in computer graphics, making it hard to leverage advances in the two fields together. In particular, reduced-space methods have been developed separately for both fields; these exploit the observation that the associated fields (density, velocity, and intensity) can be faithfully represented with a relatively small number of parameters. We develop a unified reduced-space framework for both fluid simulation and volumetric rendering. Since both fluid simulation and volumetric rendering are carried out in a reduced space, our technique achieves computational speed-ups of one to three orders of magnitude over traditional spatial-domain methods. We demonstrate complex visual effects resulting from volumetric scattering in dynamic and non-homogeneous media, including fluid simulation effects such as particles inserted in turbulent wind fields.
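
To make the reduced-space idea concrete, here is a toy proper-orthogonal-decomposition (POD/SVD) sketch in NumPy, with synthetic stand-in data; it is not the thesis framework, only an illustration of representing fields with a few basis coefficients:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 64)
    # Synthetic "density frames": random mixtures of a few smooth spatial
    # modes, standing in for frames of a fluid simulation on a 1D grid.
    modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(4)])
    snapshots = rng.random((50, 4)) @ modes             # 50 frames x 64 samples

    # POD basis via the SVD: this is the low-dimensional "reduced space".
    mean = snapshots.mean(axis=0)
    _, _, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    basis = vt[:4]                                      # 4 modes instead of 64 values

    coeffs = (snapshots - mean) @ basis.T               # each frame -> 4 numbers
    recon = coeffs @ basis + mean
    err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
    print(f"relative reconstruction error: {err:.1e}")  # ~1e-15 on this toy data

Because simulation time-stepping and scattering computations can operate on the small coefficient vectors rather than the full grids, costs scale with the number of modes, which is where speed-ups like those reported above become possible.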

Committee: Srinivasa Narasimhan, Chair

Takeo Kanade

Martial Hebert

Shree K. Nayar, Columbia University