Toward Automatic Robot Instruction from Perception – Temporal Segmentation of Tasks from Human Hand Motion

Robotics Institute, Carnegie Mellon University

Sing Bing Kang and Katsushi Ikeuchi
Journal Article, IEEE Transactions on Robotics and Automation, Vol. 11, No. 5, pp. 670–681, October 1995

Abstract

This paper describes work on the temporal segmentation of grasping task sequences based on human hand motion. The segmentation process results in the identification of motion breakpoints separating the different constituent phases of the grasping task. A grasping task is composed of three basic phases: pregrasp phase, static grasp phase, and manipulation phase. We show that by analyzing the fingertip polygon area (which is an indication of the hand preshape) and the speed of hand movement (which is an indication of the hand transportation), we can divide a task into meaningful action segments such as approach object (which corresponds to the pregrasp phase), grasp object, manipulate object, place object, and depart (a special case of the pregrasp phase which signals the termination of the task). We introduce a measure called the volume sweep rate, which is the product of the fingertip polygon area and the hand speed. The profile of this measure is also used in the determination of the task breakpoints.
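The volume sweep rate described above is simply the per-frame product of the fingertip polygon area and the hand's speed. As a minimal sketch (not the authors' implementation), assuming fingertip positions are given as a `(5, 3)` array per frame and hand position as a 3-vector per frame, the two profiles and their product could be computed as:

```python
import numpy as np

def fingertip_polygon_area(tips):
    """Shoelace area of the polygon formed by the five fingertip
    positions, using their x-y projection (tips: (5, 3) array).
    Projection plane choice is an illustrative assumption."""
    x, y = tips[:, 0], tips[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def volume_sweep_rate(tip_traj, hand_traj, dt):
    """tip_traj: (T, 5, 3) fingertip positions over T frames;
    hand_traj: (T, 3) hand positions; dt: frame interval in seconds.
    Returns the area profile (hand preshape), speed profile (hand
    transportation), and their product (the volume sweep rate)."""
    areas = np.array([fingertip_polygon_area(f) for f in tip_traj])
    velocity = np.gradient(hand_traj, dt, axis=0)   # finite-difference velocity
    speed = np.linalg.norm(velocity, axis=1)
    return areas, speed, areas * speed
```

Breakpoints separating the pregrasp, grasp, and manipulation phases would then be sought at characteristic extrema of these profiles; the exact breakpoint rules are detailed in the paper, not in this sketch.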

BibTeX

@article{Kang-1995-14014,
  author  = {Sing Bing Kang and Katsushi Ikeuchi},
  title   = {Toward Automatic Robot Instruction from Perception -- Temporal Segmentation of Tasks from Human Hand Motion},
  journal = {IEEE Transactions on Robotics and Automation},
  year    = {1995},
  month   = {October},
  volume  = {11},
  number  = {5},
  pages   = {670--681},
}