Dynamic Seethroughs: Synthesizing Hidden Views of Moving Objects
Robotics Institute, Carnegie Mellon University
Project Head: Takeo Kanade

This project creates the illusion of seeing moving objects through occluding surfaces in a video by transferring information from a second camera that views the occluded area. Typical view-interpolation approaches for 3D scenes require some form of correspondence across views, but for occluded areas direct correspondence cannot be established, because the information is missing in one of the views. Instead, we use a 2D projective invariant to capture information about occluded objects, which may be moving. Since invariants are quantities that do not change across views, a visually compelling rendering of hidden areas is achieved without explicit correspondences.

A piecewise planar model of the scene allows the entire rendering process to take place without any 3D reconstruction, while still producing visual parallax. Because the 2D invariant is simple and robust, we can transfer both static backgrounds and moving objects in real time; a complete working system has been implemented that runs live at 5 Hz.

Applications for this technology include looking around corners at tight intersections for automobile safety, concurrent visualization of a surveillance-camera network, and monitoring systems for patients, the elderly, and children.
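To make the "piecewise planar, no 3D reconstruction" idea concrete, the sketch below shows the standard plane-induced transfer step: if four points on a scene plane (say, the corners of a wall) are marked in both camera views, a 3x3 homography maps any other point on that plane from one view into the other. This is a minimal illustration of planar view transfer in general, not the project's actual invariant formulation or implementation; the function names and the direct-linear-transform estimator are our own choices here.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (both 4x2 arrays)
    using the direct linear transform (DLT). Four exact point pairs
    determine H up to scale."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the DLT system A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

def transfer(H, pt):
    """Map a 2D point through H in homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

In a seethrough setting, `src` would be landmarks on the occluding surface as seen by the camera behind it and `dst` the same landmarks in the occluded (viewer's) camera; every pixel of a hidden object lying near that plane can then be transferred with `transfer` alone, with no depth estimation.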