Multi-Agent Gesture Interpretation for Robotic Cable Harnessing

Richard Voyles and Pradeep Khosla
Conference Paper, Proceedings of IEEE Conference on Systems, Man, and Cybernetics, Vol. 2, pp. 1113–1118, October 1995

Abstract

Gesture-Based Programming is our paradigm for easing the burden of programming robots. It extends the human-demonstration approach with encapsulated expertise that guides subtask segmentation and robust real-time execution. A variety of human gestures must be recognized to provide a useful and intuitive interface for the human demonstrator. While the full gesture-based programming environment has not yet been realized, this paper describes a multi-modal gesture recognition system that embodies many of the elements necessary to achieve true gesture-based programming. The system begins by recognizing the gestures of a human demonstrating a trajectory. The execution agents then attempt to repeat the trajectory while observing corrective gestures from the teacher. Similar multi-agent networks are used for both training and execution.
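The abstract's train-then-correct loop can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's actual agent network: a recognizer agent maps raw observations to symbolic gesture labels using assumed thresholds, and an executor agent replays the demonstrated trajectory while applying per-step corrections. All class names, thresholds, and parameters here are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class RecognizerAgent:
    """Maps a raw scalar observation to a symbolic gesture label.

    The thresholds below are assumptions for illustration; the paper's
    system uses trained multi-modal recognition, not fixed rules.
    """

    def recognize(self, observation: float) -> str:
        if observation > 0.5:
            return "shift_right"
        if observation < -0.5:
            return "shift_left"
        return "none"


@dataclass
class ExecutorAgent:
    """Replays a demonstrated trajectory, nudged by corrective gestures."""

    trajectory: list
    step_size: float = 0.1  # assumed magnitude of each correction

    def execute(self, recognizer: RecognizerAgent, corrections: list) -> list:
        executed = []
        for point, obs in zip(self.trajectory, corrections):
            gesture = recognizer.recognize(obs)
            if gesture == "shift_right":
                point += self.step_size
            elif gesture == "shift_left":
                point -= self.step_size
            executed.append(point)
        return executed


# A demonstrated 1-D trajectory and one corrective observation per step:
# no correction at step 1, a rightward gesture at step 2, leftward at step 3.
taught = [0.0, 1.0, 2.0]
observations = [0.0, 0.9, -0.9]
result = ExecutorAgent(taught).execute(RecognizerAgent(), observations)
print(result)
```

The split into a recognizer agent and an executor agent mirrors the abstract's point that similar multi-agent networks serve both training and execution: the same recognizer can label gestures during demonstration or during corrected replay.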

BibTeX

@conference{Voyles-1995-14006,
  author    = {Richard Voyles and Pradeep Khosla},
  title     = {Multi-Agent Gesture Interpretation for Robotic Cable Harnessing},
  booktitle = {Proceedings of IEEE Conference on Systems, Man, and Cybernetics},
  year      = {1995},
  month     = {October},
  volume    = {2},
  pages     = {1113--1118},
}