Toward Gesture-Based Programming: Agent-Based Haptic Skill Acquisition and Interpretation
Abstract
Programming by human demonstration is a new paradigm for the development of robotic applications that focuses on the needs of task experts rather than programming experts. The traditional text-based programming paradigm demands that the user be an expert in a particular programming language, and further demands that the user be able to translate the task into this foreign language. This level of programming expertise generally precludes the user from having detailed task expertise, because his/her time is devoted to the practice of programming, not the practice of the task. The goal of programming by demonstration is to eliminate both the programming language expertise and, more importantly, the expertise required to translate the task into the language.

Gesture-Based Programming is a new form of programming by human demonstration that views the demonstration as a series of inexact "gestures" that convey the "intention" of the task strategy, not the details of the strategy itself. This is analogous to the type of "programming" that occurs between a human teacher and student, and is more intuitive for both. However, it requires a "shared ontology" between teacher and student -- in the form of a common skill database -- to abstract the observed gestures to meaningful intentions that can be mapped onto previous experiences and previously acquired skills.

This thesis investigates several key components required for a Gesture-Based Programming environment that revolve around a common, though seemingly unrelated, theme: sensor calibration. A novel approach to multi-axis sensor calibration based on shape and motion decomposition was developed as a companion to the development of novel, fluid-based, wearable fingertip sensors for observing contact gestures during demonstration. "Shape from Motion Calibration" does not require explicit references for each and every measurement: for force sensors, unknown, randomly applied loads result in an accurate calibration matrix. The intrinsic "shape" of the input/output mapping is extracted from the random "motion" of the applied load through the sensing space. This ability to extract intrinsic structure led to a convenient eigenspace learning mechanism that provides three necessary pieces of the task interpretation and abstraction process: sensorimotor primitive acquisition (populating the skill database), primitive identification (relating gestures to skills in the database), and primitive transformation ("skill morphing"). The thesis demonstrates the technique by learning, identifying, and morphing simple manipulative primitives on a PUMA robot, and by interpreting the gestures of a human demonstrator in order to program a robot to perform the same task.
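To make the decomposition concrete, the following is a minimal Python sketch of the shape-from-motion idea for a three-axis force sensor. The data are synthetic, the dimensions and noise level are arbitrary, and the magnitude constraint recovers the applied loads only up to an orthogonal ambiguity (direction requires a small number of reference loads); this is an illustration of the factorization under those assumptions, not the thesis's implementation.

```python
# A minimal sketch of shape-from-motion calibration on synthetic data.
# All sizes, names, and the noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_gauges, n_axes, n_samples = 8, 3, 500
g = 9.81 * 0.5               # known load magnitude (e.g., a 0.5 kg mass)

# Ground-truth sensitivity matrix (unknown to the calibration routine):
# raw readings r = A @ f for an applied load f.
A_true = rng.normal(size=(n_gauges, n_axes))

# Random constant-magnitude loads: a known mass reoriented randomly,
# so every column of F has norm g but unknown direction.
F = rng.normal(size=(n_axes, n_samples))
F *= g / np.linalg.norm(F, axis=0)

R = A_true @ F + 1e-3 * rng.normal(size=(n_gauges, n_samples))  # noisy readings

# "Shape" and "motion" via a rank-3 SVD: R ~= S_hat @ M_hat, where S_hat
# spans the sensor's intrinsic response subspace and M_hat encodes the
# load trajectory -- both correct only up to an invertible 3x3 matrix G.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
S_hat = U[:, :n_axes] * s[:n_axes]
M_hat = Vt[:n_axes]

# Resolve the non-rotational part of G from the magnitude constraint:
# each true load f_i = G^{-1} m_i must satisfy f_i^T f_i = g^2, i.e.
# m_i^T Q m_i = g^2 with Q = G^{-T} G^{-1} symmetric positive definite.
# That constraint is linear in the 6 unique entries of Q.
def quad_row(m):
    x, y, z = m
    return [x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z]

Phi = np.array([quad_row(M_hat[:, i]) for i in range(n_samples)])
q = np.linalg.lstsq(Phi, np.full(n_samples, g**2), rcond=None)[0]
Q = np.array([[q[0], q[3], q[4]],
              [q[3], q[1], q[5]],
              [q[4], q[5], q[2]]])

# Factor Q = L^T L via its eigendecomposition; L recovers the loads'
# magnitudes (their directions remain fixed only up to a rotation).
w, V = np.linalg.eigh(Q)
L = np.diag(np.sqrt(w)) @ V.T
F_rec = L @ M_hat

print("recovered load magnitudes:", np.linalg.norm(F_rec, axis=0)[:5])
print("target magnitude:", g)
```

The point of the sketch is that no per-sample force reference is needed: the rank-3 SVD supplies the sensor's intrinsic "shape," and the known constant magnitude of the random loads pins down the remaining scale.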
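The eigenspace learning mechanism can be sketched in the same spirit. The toy example below, again with synthetic data and with illustrative primitive names ("guarded_move", "press") standing in for entries in the skill database, learns one eigenspace per demonstrated primitive and identifies a new observation by its reconstruction residual against each; it is a hedged stand-in for the acquisition and identification steps, not the agent-based architecture itself.

```python
# A sketch of eigenspace-based primitive acquisition and identification.
# Data and primitive names are synthetic, illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)

def noisy(signal, n=20, sigma=0.05):
    """n noisy copies of a prototype sensor trace."""
    return [signal + sigma * rng.normal(size=signal.size) for _ in range(n)]

# Demonstration windows, one list per (toy) primitive.
demo_windows = {
    "guarded_move": noisy(np.sin(2 * np.pi * t)),
    "press":        noisy(t ** 2),
}

def learn_eigenspace(examples, k=3):
    """PCA model of one primitive: mean trace plus top-k principal directions."""
    X = np.asarray(examples)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def residual(model, window):
    """Distance from an observed window to the primitive's eigenspace."""
    mu, B = model
    d = window - mu
    return np.linalg.norm(d - B.T @ (B @ d))

# Acquisition: populate the (toy) skill database.
models = {name: learn_eigenspace(ex) for name, ex in demo_windows.items()}

# Identification: label a fresh observation by the closest eigenspace.
observed = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)
label = min(models, key=lambda name: residual(models[name], observed))
print(label)   # expected: "guarded_move"
```

Primitive transformation ("skill morphing") could then be framed as movement within or between such eigenspaces, though that step is beyond this sketch.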
BibTeX
@phdthesis{Voyles-1997-14454,
  author  = {Richard Voyles},
  title   = {Toward Gesture-Based Programming: Agent-Based Haptic Skill Acquisition and Interpretation},
  year    = {1997},
  month   = {August},
  school  = {Carnegie Mellon University},
  address = {Pittsburgh, PA},
  number  = {CMU-RI-TR-97-36},
}