Appearance-Based Virtual-View Generation for Fly Through in a Real Dynamic Scene

Shigeyuki Baba, Hideo Saito, Sundar Vedula, Kong Man Cheung, and Takeo Kanade
Conference Paper, Proceedings of Joint EUROGRAPHICS and IEEE TCVG Symposium on Visualization, pp. 179-188, May 2000

Abstract

We present an appearance-based virtual-view generation method that allows viewers to fly through a real dynamic scene. The scene is captured by multiple synchronized cameras. Arbitrary views are generated by interpolating the two original camera-view images nearest the given viewpoint. The quality of the generated synthetic view is determined by the precision, consistency, and density of correspondences between the two images. Most previous interpolation-based work extracts the correspondences from these two images alone. However, not only is it difficult to do so reliably (the task requires a good stereo algorithm), but the two images by themselves sometimes do not carry enough information, due to problems such as occlusion. Instead, we take advantage of the fact that we have many views, from which we can extract much more reliable and comprehensive 3D geometry of the scene as a 3D model. The dense and precise correspondences between the two images, to be used for interpolation, are then derived from this constructed 3D model. Our method of 3D modeling from multiple images uses the Multiple Baseline Stereo method and the Shape from Silhouette method.
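To make the interpolation step concrete, the sketch below shows one simple way to render an in-between view from two images and a set of dense correspondences. It is a minimal illustration, not the authors' implementation: the language (Python with NumPy), the function name interpolate_views, and the correspondence arrays corr_a / corr_b are all assumptions introduced here. In the paper, the correspondences would come from the reconstructed 3D model rather than from two-view stereo.

    import numpy as np

    def interpolate_views(img_a, img_b, corr_a, corr_b, t):
        """Render a rough virtual view between two camera images.

        img_a, img_b : (H, W, 3) arrays, the two nearest camera views.
        corr_a, corr_b : (N, 2) integer arrays of matched (x, y) pixel
            positions of the same scene points in each image (hypothetical
            format; the paper derives correspondences from a 3D model).
        t : blend parameter in [0, 1]; t = 0 reproduces img_a.
        """
        h, w = img_a.shape[:2]
        out = np.zeros_like(img_a, dtype=np.float64)
        hits = np.zeros((h, w), dtype=np.float64)
        for (xa, ya), (xb, yb) in zip(corr_a, corr_b):
            # Linearly interpolate each matched pixel position ...
            x = int(round((1 - t) * xa + t * xb))
            y = int(round((1 - t) * ya + t * yb))
            if 0 <= x < w and 0 <= y < h:
                # ... and cross-dissolve the two source colours there.
                out[y, x] += (1 - t) * img_a[ya, xa] + t * img_b[yb, xb]
                hits[y, x] += 1.0
        filled = hits > 0
        out[filled] /= hits[filled][:, None]  # average overlapping splats
        return out.astype(img_a.dtype)

A real renderer would also have to fill holes left by occlusion and resolve depth ordering when several correspondences land on the same output pixel; this sketch simply averages them.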

BibTeX

@conference{Baba-2000-8028,
  author    = {Shigeyuki Baba and Hideo Saito and Sundar Vedula and Kong Man Cheung and Takeo Kanade},
  title     = {Appearance-Based Virtual-View Generation for Fly Through in a Real Dynamic Scene},
  booktitle = {Proceedings of Joint EUROGRAPHICS and IEEE TCVG Symposium on Visualization},
  year      = {2000},
  month     = {May},
  pages     = {179--188},
}