Robotics Institute, Carnegie Mellon University

(De)-Focusing on Global Illumination for Active Scene Recovery

M. Gupta, Y. Tian, S. G. Narasimhan, and L. Zhang
Conference Paper, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2969–2976, June 2009

Abstract

Most active scene recovery techniques assume that a scene point is illuminated only directly by the illumination source. Consequently, global illumination effects due to inter-reflections, sub-surface scattering, and volumetric scattering introduce strong biases in the recovered scene shape. Our goal is to recover scene properties in the presence of global illumination. To this end, we study the interplay between global illumination and the depth cue of illumination defocus. By expressing both of these effects as low-pass filters, we derive an approximate invariant that can be used to separate them without explicitly modeling the light transport. This is directly useful in any scenario where limited depth-of-field devices (such as projectors) are used to illuminate scenes with global light transport and significant depth variations. We show two applications: (a) accurate depth recovery in the presence of global illumination, and (b) factoring out the effects of defocus for correct direct-global separation in large-depth scenes. We demonstrate our approach using scenes with complex shapes, reflectances, textures, and translucencies.
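The abstract's key observation is that illumination defocus acts as a low-pass filter on the projected pattern. A minimal NumPy sketch (a toy illustration, not the authors' algorithm) of the property that makes this modeling useful: blurring a high-frequency illumination pattern with a normalized defocus kernel attenuates its contrast while leaving the total light (the DC component) unchanged:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian; normalization means the blur preserves total light."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

# High-frequency binary stripe pattern, as a projector might cast onto the scene.
n = 512
pattern = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * 32 * np.arange(n) / n))

# Simulate defocus as convolution with the Gaussian kernel.
# (Tile the periodic pattern and keep the middle copy to avoid boundary effects.)
blurred = np.convolve(np.tile(pattern, 3), gaussian_kernel(4.0, 16),
                      mode="same")[n:2 * n]

contrast = lambda s: s.max() - s.min()
print("contrast:", contrast(pattern), "->", contrast(blurred))  # attenuated
print("mean:    ", pattern.mean(), "->", blurred.mean())        # preserved
```

The same qualitative behavior holds for global illumination (inter-reflections and scattering also smooth high spatial frequencies), which is what lets the paper treat both effects within one low-pass framework; the specific invariant used to separate them is derived in the paper itself.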

BibTeX

@conference{Gupta-2009-120335,
author = {M. Gupta and Y. Tian and S. G. Narasimhan and L. Zhang},
title = {(De)-Focusing on Global Illumination for Active Scene Recovery},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2009},
month = {June},
pages = {2969--2976},
}