
Multimodal Interfaces

Alex Waibel, M. T. Vo, P. Duchnowski, and S. Manke
Journal Article, Artificial Intelligence Review, Vol. 10, No. 3, pp. 299–319, August 1996

Abstract

In this paper, we present an overview of research in our laboratories on multimodal human-computer interfaces. The goal of such interfaces is to free human-computer interaction from the limitations and acceptance barriers imposed by rigid operating commands and by the keyboard as the only or main I/O device. Instead, we aim to involve all available human communication modalities. These modalities include speech, gesture and pointing, eye gaze, lip motion and facial expression, handwriting, face recognition, face tracking, and sound localization.

BibTeX

@article{Waibel-1996-16169,
author = {Alex Waibel and M. T. Vo and P. Duchnowski and S. Manke},
title = {Multimodal Interfaces},
journal = {Artificial Intelligence Review},
year = {1996},
month = {August},
volume = {10},
number = {3},
pages = {299--319},
}