Robotics Institute, Carnegie Mellon University

Compensating for Motion During Direct-Global Separation

Supreeth Achar, Stephen T. Nuske, and Srinivasa G. Narasimhan
Conference Paper, Proceedings of the International Conference on Computer Vision (ICCV), pp. 1481-1488, December 2013

Abstract

Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about materials in a scene. Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source, and camera to remain stationary during image acquisition. In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to be performed on video sequences of dynamic scenes captured by moving projector-camera systems. Key to our method is the ability to register the frames of a video sequence to each other in the presence of time-varying, high-frequency active illumination patterns. We compare our motion-compensated method to alternatives such as single-shot separation and frame interleaving, as well as to ground truth. We present results on challenging video sequences that include various types of motions and deformations in scenes that contain complex materials like fabric, skin, leaves, and wax.
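The multi-image separation that the abstract refers to is, for static scenes, the classic per-pixel max/min scheme of Nayar et al. (SIGGRAPH 2006): under shifted high-frequency patterns in which roughly half the projector pixels are lit, a scene point's maximum observed radiance is L_d + L_g/2 and its minimum is L_g/2. The sketch below implements only that static-scene baseline, not the paper's motion-compensated method; the function name and the 50%-lit-pattern assumption are illustrative choices, not taken from the paper.

```python
import numpy as np

def separate_direct_global(frames):
    """Per-pixel max/min direct-global separation (Nayar et al. 2006).

    frames: (N, H, W) array of a *static* scene imaged under shifted
    high-frequency illumination patterns with ~50% of pixels lit, such
    that every scene point is fully lit in at least one frame and fully
    unlit in at least one other frame.
    """
    frames = np.asarray(frames, dtype=np.float64)
    l_max = frames.max(axis=0)   # pixel fully lit: L_max = L_d + L_g / 2
    l_min = frames.min(axis=0)   # pixel unlit: L_min = L_g / 2
    direct = l_max - l_min       # L_d = L_max - L_min
    global_ = 2.0 * l_min        # L_g = 2 * L_min
    return direct, global_
```

With a pair of complementary checkerboard patterns, two frames already suffice for this decomposition at full resolution; the paper's contribution is making such multi-frame schemes work when the scene or the projector-camera rig moves between frames.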

BibTeX

@conference{Achar-2013-7811,
  author    = {Supreeth Achar and Stephen T. Nuske and Srinivasa G. Narasimhan},
  title     = {Compensating for Motion During Direct-Global Separation},
  booktitle = {Proceedings of the International Conference on Computer Vision (ICCV)},
  year      = {2013},
  month     = {December},
  pages     = {1481--1488},
}