Multi-Agent Perception for Human/Robot Interaction: A Framework for Intuitive Trajectory Modification
Tech. Report CMU-RI-TR-94-33, Robotics Institute, Carnegie Mellon University, September 1994
Abstract
An application of distributed perception to a novel human/computer interface is presented. A multi-agent network has been applied to the task of modifying a robotic trajectory based on very sparse physical inputs from the user. The user conveys intentions by intuitively nudging the end effector, which is instrumented with a wrist force/torque sensor. In response, each agent interprets these sparse inputs with the aid of a local, fuzzified, heuristic model of a particular parameter or trajectory shape. The agents then independently determine the confidence of their respective findings, and distributed arbitration resolves the interpretation through voting.
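The report's implementation is not reproduced on this page, so the following is only a minimal sketch of the scheme the abstract describes, assuming a triangular fuzzy membership function as each agent's local heuristic model and a simple highest-confidence vote for arbitration. The names (Agent, triangular_membership, arbitrate) and the numeric force values are hypothetical illustrations, not the authors' implementation.

# Hypothetical sketch: each agent interprets the same sparse nudge against its
# own fuzzified model, reports a confidence, and a vote selects the winner.
from dataclasses import dataclass

def triangular_membership(x: float, center: float, width: float) -> float:
    """Assumed fuzzy model: membership falls off linearly around `center`."""
    return max(0.0, 1.0 - abs(x - center) / width)

@dataclass
class Agent:
    name: str       # trajectory parameter or shape this agent models
    center: float   # nominal nudge force the agent expects (N), illustrative
    width: float    # tolerance of the fuzzy model (N), illustrative

    def interpret(self, nudge_force: float) -> tuple[str, float]:
        # Confidence is the degree of membership of the observed nudge
        # in this agent's local, fuzzified, heuristic model.
        confidence = triangular_membership(nudge_force, self.center, self.width)
        return self.name, confidence

def arbitrate(votes: list[tuple[str, float]]) -> str:
    """Arbitration by voting: the most confident interpretation wins."""
    return max(votes, key=lambda v: v[1])[0]

if __name__ == "__main__":
    agents = [
        Agent("increase speed", center=5.0, width=4.0),
        Agent("shift path laterally", center=10.0, width=4.0),
        Agent("raise end effector", center=15.0, width=4.0),
    ]
    nudge = 9.0  # force magnitude (N) sensed by the wrist force/torque sensor
    votes = [agent.interpret(nudge) for agent in agents]
    print(arbitrate(votes))  # -> "shift path laterally"

In this sketch the "distributed" aspect is reduced to a list comprehension; in a true multi-agent network each interpretation and confidence estimate would be computed independently before the vote is taken.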
BibTeX
@techreport{Voyles-1994-13765,
author = {Richard Voyles and Pradeep Khosla},
title = {Multi-Agent Perception for Human/Robot Interaction: A Framework for Intuitive Trajectory Modification},
year = {1994},
month = {September},
institution = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-94-33},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.