Virtual Space Teleconferencing Using a Sea of Cameras
Conference Paper, Proceedings of 1st International Conference on Medical Robotics and Computer Assisted Surgery (MRCAS '94), pp. 161-167, June 1994
Abstract
A new approach to telepresence is presented in which a multitude of stationary cameras is used to acquire both photometric and depth information. A virtual environment is constructed by displaying the acquired data from the remote site in accordance with the head position and orientation of a local participant. Preliminary results are shown of a depth image of a human subject computed from 11 closely spaced video camera positions. A user wearing a head-mounted display can walk around this 3D data, which has been inserted into a 3D model of a simple room. Future systems based on this approach may exhibit more natural and intuitive interaction among participants than current 2D teleconferencing systems.
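The paper itself contains no code; the sketch below is only a minimal illustration of the rendering idea the abstract describes: a depth image acquired at the remote site is back-projected to 3D points and then re-expressed in the local participant's tracked viewing frame. The pinhole intrinsics (fx, fy, cx, cy), the function names, and the head-pose matrix are all assumed for illustration and are not taken from the paper.

# Minimal sketch (assumed pinhole camera model and head-tracker pose; not the paper's implementation).
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) into an N x 3 point cloud in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[depth.reshape(-1) > 0]          # drop pixels with no depth estimate

def to_viewer_frame(points_world, head_pose_world):
    """Transform world-space points into the frame of the tracked head.

    head_pose_world: 4x4 rigid transform of the head in world coordinates.
    """
    world_to_head = np.linalg.inv(head_pose_world)
    homo = np.hstack([points_world, np.ones((points_world.shape[0], 1))])
    return (world_to_head @ homo.T).T[:, :3]

if __name__ == "__main__":
    # Example: a synthetic 4x4 depth image viewed from an identity head pose.
    depth = np.full((4, 4), 2.0)                # every pixel 2 m away
    cloud = backproject_depth(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
    in_view = to_viewer_frame(cloud, np.eye(4))
    print(in_view.shape)                        # (16, 3)

In an actual system of the kind described, such per-camera point sets would be fused across the many camera positions and rendered continuously as the head tracker updates the viewing pose.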
Notes
TR94-033
BibTeX
@conference{Fuchs-1994-13716,
  author = {H. Fuchs and G. Bishop and K. Arthur and L. McMillan and R. Bajcsy and S. Lee and H. Farid and T. Kanade},
  title = {Virtual Space Teleconferencing Using a Sea of Cameras},
  booktitle = {Proceedings of 1st International Conference on Medical Robotics and Computer Assisted Surgery (MRCAS '94)},
  year = {1994},
  month = {June},
  pages = {161--167},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.