Modeling, Combining, and Rendering Dynamic Real-World Events from Image Sequences

Sundar Vedula, Peter Rander, Hideo Saito, and Takeo Kanade
Conference Paper, Proceedings of 4th International Conference on Virtual Systems & Multimedia: Future Fusion: Application Realities for the Virtual Age (VSMM '98), pp. 326 - 332, November, 1998

Abstract

Virtualized Reality creates time-varying three-dimensional models and virtual images from image sequences. In this paper, we present two recent enhancements to Virtualized Reality. First, we present Model Enhanced Stereo (MES), a method that uses widely separated images to iteratively improve the quality of each local stereo output in a multi-camera system. Second, we show, using an example, how Virtualized Reality models of two different events are integrated with each other and with a synthetic virtual model. In addition, we develop a new calibration method that allows simultaneous calibration of a large number of cameras without visibility problems. The resulting method spans the full pipeline, from capturing real image sequences and integrating two or more events with a static or time-varying virtual model, to generating virtual image sequences.

BibTeX

@conference{Vedula-1998-14798,
author = {Sundar Vedula and Peter Rander and Hideo Saito and Takeo Kanade},
title = {Modeling, Combining, and Rendering Dynamic Real-World Events from Image Sequences},
booktitle = {Proceedings of 4th International Conference on Virtual Systems \& Multimedia: Future Fusion: Application Realities for the Virtual Age (VSMM '98)},
year = {1998},
month = {November},
pages = {326--332},
}