Flexi-modal and Multi-Machine User Interfaces
Conference Paper, Proceedings of the 4th IEEE International Conference on Multimodal Interfaces (ICMI '02), pp. 343-348, October 2002
Abstract
We describe a system that facilitates collaboration using multiple modalities, including speech, handwriting, gestures, gaze tracking, direct manipulation, large projected touch-sensitive displays, laser pointer tracking, regular monitors with a mouse and keyboard, and wirelessly-networked handhelds. Our system allows multiple, geographically dispersed participants to simultaneously and flexibly mix different modalities, using the right interface at the right time on one or more machines. This paper discusses each of the modalities provided, how they were integrated in the system architecture, and how the user interface enables one or more people to flexibly use one or more devices.
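The paper itself details the system architecture; as a rough illustration of the "flexi-modal" idea described in the abstract, the following is a minimal sketch (in Python, and not taken from the paper) of how inputs from heterogeneous modalities might be normalized into a single event stream so that any device or participant can drive a shared workspace. All names here (ModalityEvent, Dispatcher, and so on) are hypothetical, introduced only for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable
import time

@dataclass
class ModalityEvent:
    """A device-independent input event (hypothetical abstraction)."""
    modality: str   # e.g. "speech", "laser", "handheld", "mouse"
    user: str       # which participant produced the event
    action: str     # normalized action, e.g. "select", "annotate"
    payload: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

class Dispatcher:
    """Routes normalized events from any modality to shared handlers."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[ModalityEvent], None]]] = {}

    def subscribe(self, action: str, handler: Callable[[ModalityEvent], None]) -> None:
        self._handlers.setdefault(action, []).append(handler)

    def publish(self, event: ModalityEvent) -> None:
        # Every handler sees the same normalized event, regardless of
        # which modality or machine produced it.
        for handler in self._handlers.get(event.action, []):
            handler(event)

# Usage: a laser-pointer selection and a spoken command from different
# participants converge on the same handler.
dispatcher = Dispatcher()
dispatcher.subscribe("select", lambda e: print(f"{e.user} selected via {e.modality}"))
dispatcher.publish(ModalityEvent("laser", "alice", "select", {"x": 120, "y": 45}))
dispatcher.publish(ModalityEvent("speech", "bob", "select", {"utterance": "select the map"}))
```

This sketch only conveys the general design pattern (normalizing many input channels into one command stream); the paper's actual integration mechanism may differ.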
BibTeX
@conference{Myers-2002-8573,
author = {Brad A. Myers and Robert Malkin and Alex Waibel and Ben Bostwick and Robert C. Miller and Jie Yang and Matthias Denecke and Edgar Seemann and Jie Zhu and Choon Hong Peck and Dave Kong and Jeffrey Nichols and Bill Scherlis},
title = {Flexi-modal and Multi-Machine User Interfaces},
booktitle = {Proceedings of the 4th IEEE International Conference on Multimodal Interfaces (ICMI '02)},
year = {2002},
month = {October},
pages = {343--348},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.