Appearance-Based Virtual View Generation of Temporally-Varying Events from Multi-Camera Images in the 3D Room

Hideo Saito, Shigeyuki Baba, Makoto Kimura, Sundar Vedula, and Takeo Kanade
Tech. Report CMU-CS-99-127, Computer Science Department, Carnegie Mellon University, April 1999

Abstract

In this paper, we present an "appearance-based" virtual view generation method for temporally-varying events captured by the multiple cameras of the "3D Room", developed by our group. With this method, we can generate images from any virtual viewpoint between two selected real views. The virtual appearance view generation method is based on simple interpolation between the two selected views. The correspondences between the views are automatically generated from the multiple images by use of a volumetric model shape reconstruction framework. Since the correspondences are obtained from the recovered volumetric model, even occluded regions in the views can be correctly interpolated in the virtual view images. Virtual view image sequences are presented to demonstrate the performance of virtual view image generation in the 3D Room.
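The interpolation step described above can be sketched in code. The following is a minimal, simplified illustration (not the paper's exact algorithm): given two grayscale views and a dense correspondence field mapping pixels of view 0 to view 1, each matched pixel pair is splatted at a position linearly interpolated by a parameter `s` between the two views, and the two colors are cross-dissolved with the same weight. The function name and the flow representation are illustrative assumptions.

```python
import numpy as np

def interpolate_view(img0, img1, flow01, s):
    """Synthesize a virtual view at parameter s in [0, 1] between two
    real views. flow01[y, x] = (dy, dx) gives the correspondence from
    a pixel in img0 to its match in img1. A simplified sketch of view
    interpolation, not the paper's exact method."""
    h, w = img0.shape
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            dy, dx = flow01[y, x]
            # matching pixel in view 1 (clamped to image bounds)
            y1 = min(h - 1, max(0, int(round(y + dy))))
            x1 = min(w - 1, max(0, int(round(x + dx))))
            # splat at the position interpolated toward the virtual view
            vy = int(round(y + s * dy))
            vx = int(round(x + s * dx))
            if 0 <= vy < h and 0 <= vx < w:
                # cross-dissolve the two matched intensities
                out[vy, vx] += (1 - s) * img0[y, x] + s * img1[y1, x1]
                weight[vy, vx] += 1.0
    nz = weight > 0
    out[nz] /= weight[nz]  # average where multiple pixels splat
    return out
```

With `s = 0` this reproduces the first view and with `s = 1` the second; intermediate values give the in-between appearance. The paper's contribution is that `flow01` comes from a recovered volumetric model rather than 2D image matching, so occluded regions are also handled correctly.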

BibTeX

@techreport{Saito-1999-14890,
author = {Hideo Saito and Shigeyuki Baba and Makoto Kimura and Sundar Vedula and Takeo Kanade},
title = {Appearance-Based Virtual View Generation of Temporally-Varying Events from Multi-Camera Images in the 3D Room},
year = {1999},
month = {April},
institution = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-CS-99-127},
keywords = {Virtualized reality, 3D scene modeling, Multi-camera images, Image based rendering, View interpolation},
}