PoseIt: A Visual-Tactile Dataset of Holding Poses for Grasp Stability Analysis - Robotics Institute Carnegie Mellon University

PhD Speaking Qualifier

Helen Jiang
PhD Student, Robotics Institute,
Carnegie Mellon University
Tuesday, May 4
10:00 am to 11:00 am

Abstract:
When humans grasp objects in the real world, we often move our arm to hold the object in a different pose where we can use it. In contrast, typical lab settings study the stability of a grasp only immediately after lifting, without any subsequent re-positioning of the arm. However, an object’s stability can vary widely with its holding pose, as the gravitational torque and gripper contact forces may change completely. To facilitate the study of how holding poses affect grasp stability, we present PoseIt, a novel multi-modal dataset collected over a full cycle of grasping an object, re-positioning the arm to one of the sampled poses, and shaking the object. Using data from PoseIt, we formulate and tackle the task of predicting whether a grasped object remains stable in a particular held pose. We train an LSTM classifier that achieves 85% accuracy on the proposed task, and our experiments show that the classifier also generalizes to unseen objects and poses. Finally, we compare different tactile sensors for the stability-prediction task, finding that the classifier performs better when trained on GelSight data than on data from the WSG-DSA pressure-array sensor.
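To illustrate the kind of model the abstract describes, the following is a minimal sketch (not the authors' actual code) of an LSTM binary classifier over per-timestep feature vectors; the class name, feature dimension, and pooling choice are all illustrative assumptions, with visual-tactile features assumed to be pre-fused into one vector per timestep.

```python
# Hypothetical sketch of an LSTM grasp-stability classifier.
# Dimensions and names are assumptions, not from the PoseIt paper.
import torch
import torch.nn as nn

class GraspStabilityLSTM(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Consumes a sequence of fused visual-tactile feature vectors.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # stable-vs-unstable logit

    def forward(self, x):
        # x: (batch, time, feat_dim)
        _, (h_n, _) = self.lstm(x)
        # Use the final hidden state to summarize the whole sequence.
        return self.head(h_n[-1])

model = GraspStabilityLSTM()
seq = torch.randn(4, 20, 128)      # 4 sequences, 20 timesteps each
prob = torch.sigmoid(model(seq))   # predicted stability probability
print(prob.shape)                  # torch.Size([4, 1])
```

Such a classifier would be trained with a binary cross-entropy loss on stable/unstable labels obtained from the shaking phase of the data-collection cycle.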

Committee:
Wenzhen Yuan (advisor)
Abhinav Gupta (advisor)
David Held
Thomas Weng