Gesture Interface: Modeling and Learning - Robotics Institute Carnegie Mellon University

Jie Yang, Yangsheng Xu, and C. S. Chen
Conference Paper, Proceedings of (ICRA) International Conference on Robotics and Automation, Vol. 2, pp. 1747-1752, May 1994

Abstract

This paper presents a method for developing a gesture-based system using a multidimensional hidden Markov model (HMM). Instead of using geometric features, gestures are converted into sequential symbols. HMMs are employed to represent the gestures, and their parameters are learned from the training data. Based on the "most likely performance" criterion, gestures can be recognized by evaluating the trained HMMs. We have developed a prototype to demonstrate the feasibility of the proposed method. The system achieved 99.78% accuracy on an isolated recognition task with nine gestures. Encouraging results were also obtained from continuous gesture recognition experiments. The proposed method is applicable to any gesture represented by a multidimensional signal, and will be a valuable tool in telerobotics and human-computer interfacing.
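The recognition scheme the abstract describes (symbol sequences scored against one trained HMM per gesture, with the highest-likelihood model winning) can be sketched with the standard forward algorithm. This is a minimal illustration, not the paper's implementation: the two toy gesture models, their parameters, and the symbol alphabet are invented for the example, whereas in the paper the parameters are learned from training data.

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Log-likelihood log P(obs | model) for a discrete HMM via the
    forward algorithm, with per-step scaling to avoid underflow.
    pi: initial state probs (N,), A: transitions (N, N),
    B: emission probs (N, M), obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]          # initialization
    c = alpha.sum()
    log_p = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # induction step
        c = alpha.sum()
        log_p += np.log(c)
        alpha /= c
    return log_p

# Two hypothetical 2-state, 3-symbol gesture models; the numbers are
# illustrative only (the paper learns them from training data).
models = {
    "wave": (np.array([0.9, 0.1]),
             np.array([[0.8, 0.2], [0.3, 0.7]]),
             np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])),
    "circle": (np.array([0.5, 0.5]),
               np.array([[0.5, 0.5], [0.5, 0.5]]),
               np.array([[0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])),
}

def recognize(obs):
    """'Most likely performance' criterion: pick the gesture whose
    trained HMM assigns the observation sequence the highest likelihood."""
    return max(models, key=lambda g: forward_log_likelihood(*models[g], obs))

print(recognize([0, 0, 2, 2]))  # prints "wave"
```

Isolated recognition reduces to one forward pass per gesture model; continuous recognition additionally requires segmenting the symbol stream, which this sketch does not attempt.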

BibTeX

@conference{Yang-1994-13687,
author = {Jie Yang and Yangsheng Xu and C. S. Chen},
title = {Gesture Interface: Modeling and Learning},
booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
year = {1994},
month = {May},
volume = {2},
pages = {1747--1752},
}