
Modeling Illumination for Scene Recovery Through the Motion, Occlusion and Strobing of Light-Sources

PhD Thesis, Tech. Report CMU-RI-TR-09-37, Robotics Institute, Carnegie Mellon University, August 2009

Abstract

Recent display applications for entertainment and business have made available new types of illumination using LEDs, DMDs and LCDs, which are bright, energy efficient and cheap. Some of these devices are programmable and allow spatio-temporal control of the emitted light rays. With the advent of such digital light-sources, illumination is becoming a flexible, configurable medium. This has impacted computer vision and spurred techniques that control illumination for analysis of indoor areas, industrial environments, stage/studio sets, underwater scenes, underground locations and outdoor scenes at night. In such methods, the light-source's programmability is often exploited to create easily detectable features, such as bright stripes or binary patterns.

In this thesis, we extract illumination-based features for three new scenarios by exploiting the motion, occlusion and strobing of light-sources. First, we move a light-source in a smooth and random path. For a static scene, this creates a continuous set of intensities at each pixel. We exploit this continuity to detect brightness maxima and minima at scene points and show how these extrema features relate to geometric cues, such as surface normals and depths. For our second approach, we occlude a light-source using moving opaque masks. Each pixel's brightness minima, due to the mask shadow, corresponds to a set of blocked incident light rays. These shadow features can be used to render the scene from the light-source's point-of-view. The third technique exploits the flickering emitted by a strobing source as a temporal feature that is easily detected, even for scenes with fast moving objects. This enables active vision for dynamic scenes, which we demonstrate using DLP illumination.

Our methods work with a variety of indoor and outdoor materials, real-world textures and glossy/metallic objects. We are not restricted to distant point sources and show results with outdoor illumination, sources with intensity fall-off and area/line sources such as indoor fixtures. None of our approaches require a complex or calibrated setup. Some even allow the light-source to be hand-waved in an unstructured manner. In addition, all of our techniques are easy to implement, requiring only a few lines of code. The work in this thesis demonstrates, for the first time, results such as iso-normal clustering of indoor and outdoor scenes, reciprocal views from general, non-programmable sources and very high-speed marker-less motion-capture and photography.

Finally, almost all natural and artificial light-sources in our world either undergo motion (or are mobile), get occluded or exhibit strobing (at some frequency). Therefore, our algorithms have relevance for real-world illumination, and any applications or extensions of them will have significant impact.
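To make the first idea concrete, here is a minimal sketch (not the thesis implementation) of the extrema-feature cue: given a stack of images of a static scene captured while a light-source moves along a smooth path, record the frame at which each pixel reaches its brightness maximum and minimum. Pixels with matching extrema times saw the light from similar directions, which is the cue behind iso-normal clustering. The synthetic image stack and the grouping-by-extrema-times step below are illustrative assumptions, not the method described in the thesis.

```python
# Sketch of per-pixel brightness-extrema features under a moving light-source.
# The image stack here is synthetic; in practice it would be a captured sequence.
import numpy as np

T, H, W = 100, 64, 64                     # toy values: frame count and image size
rng = np.random.default_rng(0)

# Stand-in for a captured sequence: smooth per-pixel brightness curves.
phase = rng.uniform(0, 2 * np.pi, (H, W))                 # hypothetical per-pixel phase
t = np.arange(T).reshape(T, 1, 1)
frames = 0.5 + 0.5 * np.cos(2 * np.pi * t / T + phase)    # (T, H, W) image stack

# Extrema features: frame index of each pixel's brightness maximum / minimum.
t_max = frames.argmax(axis=0)
t_min = frames.argmin(axis=0)

# One simple (assumed) grouping: use the (t_max, t_min) pair as a cluster key,
# so pixels with identical extrema times fall into the same iso-normal group.
labels = t_max * T + t_min
print("distinct extrema clusters:", np.unique(labels).size)
```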

BibTeX

@phdthesis{Koppal-2009-10303,
author = {Sanjeev Jagannatha Koppal},
title = {Modeling Illumination for Scene Recovery Through the Motion, Occlusion and Strobing of Light-Sources},
year = {2009},
month = {August},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-09-37},
}