Webcam Clip Art: Appearance and Illuminant Transfer from Time-lapse Sequences

VASC Seminar

Jean-Francois Lalonde, Ph.D. Student, Robotics Institute
Thursday, December 3
2:00 pm to 3:00 pm

Event Location: NSH 1305
Bio: Jean-Francois Lalonde received his B.E. in Computer Engineering with honors from Laval University, Canada, in 2004. He then came to Carnegie Mellon University, where he received his M.S. in Robotics under Martial Hebert in 2006. Since then, he has been pursuing a Ph.D. in Robotics under Alexei A. Efros and Srinivasa G. Narasimhan, and has recently been awarded a Microsoft Research Fellowship. His research interests are in computer vision and computer graphics, focusing on image understanding and synthesis by leveraging large amounts of data.

Abstract: Webcams placed all over the world observe and record the visual appearance of a variety of outdoor scenes over long periods of time. The recorded time-lapse image sequences cover a wide range of illumination and weather conditions, a vast untapped resource for creating visual realism. In this work, we propose to use a large repository of webcams as a “clip art” library from which users may transfer scene appearance (objects, scene backdrops, outdoor illumination) into their own time-lapse sequences or even single photographs. The goal is to combine recent ideas from data-driven appearance transfer techniques with a general, theoretically grounded, physically-based illumination model. To accomplish this, the paper presents three main research contributions: 1) a new, high-quality outdoor webcam database that has been calibrated radiometrically and geometrically; 2) a novel approach for matching illuminations across different scenes based on estimating the properties of natural illuminants (sun, sky, weather, and clouds), the camera geometry, and illumination-dependent scene features; 3) a new algorithm for generating physically plausible high dynamic range environment maps for each frame in a webcam sequence.