Modelling Human Assembly Actions from Observation

George Paul, Yunde Jiar, Mark D. Wheeler, and Katsushi Ikeuchi
Conference Paper, Proceedings of IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI '96), pp. 357-364, December 1996

Abstract

This paper describes a system that models an assembly task performed by a human. The actions are recorded in real time using a stereo system. The assembled objects and the fingers of the hand are tracked through the image sequence. We use the spatial relations between the fingers and the objects to temporally segment the task into approach, pre-manipulate, manipulate, and depart phases. We broadly interpret the actions in each segment as grasps, pushes, fine motions, etc. We then analyze the contact relations between objects during the manipulate phase to reconstruct the fine-motion path of the manipulated object. The fine motion in configuration space is a series of connected path segments lying on the features (c-surfaces) of the configuration-space obstacle. We project the observed configurations onto these c-surfaces and reconstruct the path segments; the connected path segments form the fine-motion path. We demonstrate the system on the peg-in-hole task.
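The pipeline outlined above, temporal segmentation from finger-object relations followed by projection of observed configurations onto c-surfaces, can be illustrated in Python. This is a minimal sketch under stated assumptions, not the paper's implementation: the phase rules, the distance threshold, and the planar c-surface stand-in (real c-surfaces are generally curved) are all assumptions, and every name below is hypothetical.

import numpy as np

# Illustrative phase labels; the paper segments the task into
# approach, pre-manipulate, manipulate, and depart phases.
APPROACH, PRE_MANIPULATE, MANIPULATE, DEPART = (
    "approach", "pre-manipulate", "manipulate", "depart")

def segment_phases(finger_object_dist, object_contact, grasp_thresh=5.0):
    """Label each frame with a phase from the finger-object distance
    and an object-environment contact flag. The threshold and rules
    are placeholders, not the paper's actual segmentation criteria."""
    phases, manipulated = [], False
    for dist, contact in zip(finger_object_dist, object_contact):
        if dist > grasp_thresh:
            phases.append(DEPART if manipulated else APPROACH)
        elif contact:
            manipulated = True
            phases.append(MANIPULATE)
        else:
            phases.append(PRE_MANIPULATE)
    return phases

def project_onto_c_surface(q, normal, offset):
    """Orthogonally project configuration q onto the hyperplane
    {x : normal . x = offset}, a linear stand-in for one c-surface
    of the configuration-space obstacle."""
    n = np.asarray(normal, dtype=float)
    n_hat = n / np.linalg.norm(n)
    return q - (q @ n_hat - offset / np.linalg.norm(n)) * n_hat

# Snap noisy observed configurations from a manipulate segment onto
# one c-surface to recover a path segment lying on that surface.
observed = np.array([[0.10, 1.02, 0.30],
                     [0.20, 0.98, 0.40],
                     [0.30, 1.01, 0.50]])
path_segment = np.array(
    [project_onto_c_surface(q, normal=[0.0, 1.0, 0.0], offset=1.0)
     for q in observed])

Chaining such projected segments, one per active c-surface, in temporal order would yield the connected fine-motion path described above.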

BibTeX

@conference{Paul-1996-14262,
  author    = {George Paul and Yunde Jiar and Mark D. Wheeler and Katsushi Ikeuchi},
  title     = {Modelling Human Assembly Actions from Observation},
  booktitle = {Proceedings of IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI '96)},
  year      = {1996},
  month     = {December},
  pages     = {357--364},
}