Spatio-Temporal View Interpolation

Sundar Vedula, Simon Baker, and Takeo Kanade
Tech. Report CMU-RI-TR-01-35, Robotics Institute, Carnegie Mellon University, September 2001

Abstract

We propose an algorithm for creating novel views of a non-rigidly varying dynamic event by combining images captured from different positions and at different times. The algorithm operates by combining images captured across space and time to compute voxel models of the scene shape at each time instant, and the dense 3D scene flow between the voxel models (the non-rigid motion of every point in the scene). To interpolate in time, the voxel models are "flowed" using the appropriate scene flow and a smooth surface is fit to the result. The novel image is then computed by ray-casting to the surface at the intermediate time, following the scene flow to the neighboring time instants, projecting into the input images at those times, and finally blending the results. We use the algorithm to create re-timed slow-motion fly-by movies of real-world events.
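The temporal half of this process can be pictured as flowing each surface point along its scene flow to the intermediate time and blending the colors observed at the two neighboring time instants. The sketch below is a rough illustration of that idea in Python/NumPy; the function names (`flow_point`, `shade_point`), the linear flow, and the linear color blend are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def flow_point(x_t, flow_t, alpha):
    """Move a 3D surface point from time t toward time t+1.

    x_t    : (3,) position of the point at time t
    flow_t : (3,) scene-flow vector of that point from t to t+1
    alpha  : fraction in [0, 1] of the way from t to t+1
    """
    return np.asarray(x_t, dtype=float) + alpha * np.asarray(flow_t, dtype=float)

def shade_point(color_t, color_t1, alpha):
    """Blend the colors observed for the point at times t and t+1.

    color_t, color_t1 : (3,) RGB colors obtained by projecting the flowed
                        point into the input images at the two time instants
    """
    c0 = np.asarray(color_t, dtype=float)
    c1 = np.asarray(color_t1, dtype=float)
    return (1.0 - alpha) * c0 + alpha * c1

if __name__ == "__main__":
    # Example: a point moving by (0.2, 0, 0.1) between frames, rendered 40% of the way.
    x_mid = flow_point([1.0, 0.5, 2.0], [0.2, 0.0, 0.1], alpha=0.4)
    c_mid = shade_point([200, 120, 90], [210, 118, 88], alpha=0.4)
    print(x_mid, c_mid)
```

In the actual algorithm the blended color would come from the input cameras at the two time instants (chosen by visibility), but the linear flow-then-blend structure shown here conveys the basic interpolation step.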

BibTeX

@techreport{Vedula-2001-8311,
author = {Sundar Vedula and Simon Baker and Takeo Kanade},
title = {Spatio-Temporal View Interpolation},
year = {2001},
month = {September},
institution = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-01-35},
keywords = {Image Based Rendering, View Synthesis, Scene Flow, 3D Modeling},
}