Modeling and Controlling Light Transport for Scene Recovery and Rendering - Robotics Institute Carnegie Mellon University

PhD Thesis Proposal

Mohit Gupta, Carnegie Mellon University
Tuesday, September 15
3:00 pm to 12:00 am

Event Location: Newell Simon Hall 1305

Abstract: Global illumination effects, such as inter-reflections, sub-surface scattering and volumetric scattering, form an integral part of our daily visual experience; it is almost impossible to find a real-world scene without any global illumination. Unfortunately, due to its complex nature, global illumination is hard to capture in tractable computational models. As a result, most active vision techniques assume that a scene point is illuminated only directly by the illumination source, and consequently introduce a systematic bias in the recovered shape of most real-world scenes. Similarly, computer vision systems deployed in common poor-visibility conditions, such as murky water, bad weather, dust and smoke, suffer from the scattering and attenuation of light. In graphics, too, most computer-generated imagery today, in video games, movies and scientific simulations, depicts scenes on clear days or nights; to achieve realism, it is important to simulate the visual effects resulting from global illumination in digital entertainment and scientific simulations. The goal of this thesis is to derive simple models of global illumination for a variety of scene recovery and rendering applications.


This thesis has four main contributions. First, we show that by actively controlling the illumination, we can recover scene properties even in the presence of global illumination. We have studied the interrelationship between global illumination and the depth cue of illumination defocus; expressing both effects as low-pass filters enables accurate depth recovery without explicitly measuring light transport. Second, we have investigated how to illuminate a scene so as to minimize backscatter and maximize the useful signal in scattering media. Our framework can be used to improve visibility in a variety of outdoor applications, such as designing headlights for terrestrial and underwater vehicles. Third, we have presented a simple device and technique for robustly estimating the optical properties of a broad class of participating media, and have constructed a database of the scattering properties of a variety of media, which the computer graphics community can use immediately to render realistic images. Fourth, we have developed a unified reduced-space framework for both fluid simulation and volumetric rendering, which enables fast rendering of complex visual effects involving dynamic and non-homogeneous media such as snow, smoke, dust and fog.
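The idea of treating illumination defocus as a depth-dependent low-pass filter can be sketched in a few lines. The following is a minimal illustrative example, not the proposal's actual algorithm: it models defocus of a projected stripe pattern as a Gaussian blur whose width stands in for depth, and recovers that width by matching blurred renderings of the pattern against the observation. All function names and the choice of a Gaussian kernel are assumptions made for this sketch.

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    """Blur a 1-D signal with a Gaussian kernel (the defocus low-pass model)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

def estimate_sigma(observed, pattern, sigmas):
    """Pick the blur level whose blurred pattern best matches the observation."""
    errors = [np.sum((gaussian_blur_1d(pattern, s) - observed) ** 2)
              for s in sigmas]
    return sigmas[int(np.argmin(errors))]

# A binary stripe pattern, as projected onto the scene.
pattern = (np.arange(256) // 8 % 2).astype(float)

# Simulate observing a scene point at some defocus level (stand-in for depth).
true_sigma = 2.5
observed = gaussian_blur_1d(pattern, true_sigma)

# Recover the defocus level by brute-force matching over candidate blurs.
candidates = np.linspace(0.5, 5.0, 10)
est = estimate_sigma(observed, pattern, candidates)
```

In a real system the recovered blur width would then be mapped to depth through a calibrated projector defocus model; here the sketch stops at estimating the blur itself.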


Proposed work: We plan to study translucency for material recognition and rendering. We seek to understand human perception of translucency and to build machine vision algorithms for material classification and translucency measurement. We want to devise techniques for separating inter-reflections from sub-surface scattering for better scene and material understanding. Finally, we have observed that placing an obstacle in front of an area light source creates a shadow effect similar to illumination defocus from a projector; using the sun as the area light source, we plan to port our depth recovery techniques to outdoor settings.

Committee: Srinivasa G. Narasimhan, Chair

Takeo Kanade

Martial Hebert

Shree K. Nayar, Columbia University