Virtual Space Teleconferencing Using a Sea of Cameras - Robotics Institute, Carnegie Mellon University

Virtual Space Teleconferencing Using a Sea of Cameras

H. Fuchs, G. Bishop, K. Arthur, L. McMillan, R. Bajcsy, S. Lee, H. Farid, and Takeo Kanade
Conference Paper, Proceedings of 1st International Conference on Medical Robotics and Computer Assisted Surgery (MRCAS '94), pp. 161-167, June, 1994

Abstract

A new approach to telepresence is presented in which a multitude of stationary cameras are used to acquire both photometric and depth information. A virtual environment is constructed by displaying the acquired data from the remote site in accordance with the head position and orientation of a local participant. Shown are preliminary results of a depth image of a human subject calculated from 11 closely spaced video camera positions. A user wearing a head-mounted display walks around this 3D data that has been inserted into a 3D model of a simple room. Future systems based on this approach may exhibit more natural and intuitive interaction among participants than current 2D teleconferencing systems.
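The core idea in the abstract, acquiring per-pixel depth plus color at a remote site and re-rendering it from the local participant's tracked head pose, can be illustrated with a minimal sketch. This is not the paper's implementation; it is a generic back-projection of a depth image into 3D points followed by a rigid transform into a hypothetical viewer (head) frame, with made-up pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) and identity head pose for the toy example:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-frame 3D points.

    Uses the standard pinhole model: x = (u - cx) * z / fx, etc.
    Returns an (H*W, 3) array of points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def view_transform(points, R, t):
    """Express world-frame points in the viewer's head frame: p' = R^T (p - t).

    R's columns are the head axes in world coordinates; t is the head position.
    """
    return (points - t) @ R

# Toy example (hypothetical values): a 4x4 depth image of a flat surface
# 1 m away, viewed by a head at the origin with identity orientation.
depth = np.ones((4, 4))
pts = backproject(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
pts_view = view_transform(pts, R=np.eye(3), t=np.zeros(3))
print(pts_view.shape)  # one 3D point per depth pixel
```

In a system like the one described, this per-frame re-rendering step is what lets the head-tracked user walk around the reconstructed remote participant; the hard part the paper addresses is obtaining the depth data itself from many closely spaced cameras.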

Notes
TR94-033

BibTeX

@conference{Fuchs-1994-13716,
  author    = {H. Fuchs and G. Bishop and K. Arthur and L. McMillan and R. Bajcsy and S. Lee and H. Farid and Takeo Kanade},
  title     = {Virtual Space Teleconferencing Using a Sea of Cameras},
  booktitle = {Proceedings of 1st International Conference on Medical Robotics and Computer Assisted Surgery (MRCAS '94)},
  year      = {1994},
  month     = {June},
  pages     = {161--167},
}