Dynamic Seethroughs: Synthesizing Hidden Views of Moving Objects

Conference Paper, Proceedings of the 8th IEEE International Symposium on Mixed and Augmented Reality (ISMAR '09), pp. 111–114, October 2009

Abstract

This paper presents a method to create an illusion of seeing moving objects through occluding surfaces in a video. This illusion is achieved by transferring information from a camera viewing the occluded area. Typical view interpolation approaches for 3D scenes require some form of correspondence across views. For occluded areas, establishing direct correspondence is impossible, as the information is missing in one of the views. Instead, we use a 2D projective invariant to capture information about occluded objects (which may be moving). Since invariants are quantities that do not change across views, a visually compelling rendering of hidden areas is achieved without the need for explicit correspondences. A piecewise-planar model of the scene allows the entire rendering process to take place without any 3D reconstruction, while still producing visual parallax. Because of the simplicity and robustness of the 2D invariant, we are able to transfer both static backgrounds and moving objects in real time. A complete working system has been implemented that runs live at 5 Hz. Applications for this technology include the ability to see around corners at tight intersections for automobile safety, concurrent visualization of a surveillance camera network, and monitoring systems for patients, the elderly, and children.
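To make the planar-transfer idea concrete, the sketch below shows one way the occluded region of a main camera could be filled in from a second camera that sees behind the occluder, using a plane-induced homography estimated with OpenCV. This is only an illustration under assumed inputs (corresponding points on a shared reference plane and a precomputed occluder mask, both hypothetical placeholders); the paper itself relies on a 2D projective invariant and a piecewise-planar model rather than this particular pipeline.

```python
# Minimal sketch of planar see-through transfer (not the authors' implementation).
# Assumptions: a "side" camera observes the area hidden from the "main" camera,
# and both cameras see at least four corresponding points on a shared plane
# (e.g., a wall or the ground behind the occluder). The point arrays and the
# occluder mask are hypothetical inputs supplied by the caller.

import numpy as np
import cv2


def transfer_hidden_region(side_frame, main_frame, pts_side, pts_main, occluder_mask):
    """Warp the side camera's view of the shared plane into the main view and
    composite it only where the occluder hides the scene.

    pts_side, pts_main : (N, 2) arrays of corresponding points on the plane, N >= 4.
    occluder_mask      : uint8 mask in the main view (255 where the occluder is).
    """
    # Estimate the plane-induced homography mapping the side view into the main view.
    H, _ = cv2.findHomography(pts_side.astype(np.float32),
                              pts_main.astype(np.float32),
                              cv2.RANSAC, 3.0)

    # Warp the side frame into the main camera's image coordinates.
    h, w = main_frame.shape[:2]
    warped = cv2.warpPerspective(side_frame, H, (w, h))

    # "See through" the occluder: blend the warped hidden view into the
    # occluded region of the main frame, leaving the rest untouched.
    alpha = 0.6  # transparency of the synthesized see-through overlay
    mask = occluder_mask > 0
    out = main_frame.copy()
    out[mask] = (alpha * warped[mask] + (1.0 - alpha) * main_frame[mask]).astype(np.uint8)
    return out
```

Note the key difference from the paper: this sketch needs correspondences on the visible portion of the shared plane, whereas the invariant-based transfer described in the abstract avoids explicit correspondences for the occluded content itself.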

BibTeX

@conference{Barnum-2009-10353,
author = {Peter Barnum and Yaser Ajmal Sheikh and Ankur Datta and Takeo Kanade},
title = {Dynamic Seethroughs: Synthesizing Hidden Views of Moving Objects},
booktitle = {Proceedings of the 8th IEEE International Symposium on Mixed and Augmented Reality (ISMAR '09)},
year = {2009},
month = {October},
pages = {111--114},
}