Learning 6-DOF Grasping Interaction via Deep 3D Geometry-aware Representations

Xinchen Yan, Jasmine Hsu, Mohi Khansari, Yunfei Bai, Arkanath Pathak, Abhinav Gupta, James Davidson, and Honglak Lee
Conference Paper, Proceedings of (ICRA) International Conference on Robotics and Automation, pp. 3766-3773, May 2018

Abstract

This paper focuses on the problem of learning 6-DOF grasping with a parallel jaw gripper in simulation. Compared to existing approaches that are specialized to particular grasping directions (i.e., top-down grasping or side grasping), a 6-DOF grasping model allows the robot to learn a richer set of grasping interactions under fewer physical constraints, potentially enhancing grasping robustness and robot dexterity. However, learning 6-DOF grasping is challenging due to the high-dimensional state space, the difficulty of collecting large-scale data, and the many variations in an object's visual appearance (i.e., geometry, material, texture, and illumination). We propose the notion of a geometry-aware representation for grasping, based on the assumption that knowledge of 3D geometry is at the heart of interaction. Our key idea is to constrain and regularize the learning of grasping interactions through 3D geometry prediction.
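
The core idea described above, constraining and regularizing grasp learning through 3D geometry prediction, can be pictured as a multi-task objective: a shared encoder feeds both a grasp-outcome head and a 3D occupancy (geometry) head, and the geometry loss regularizes the shared representation. The following Python/PyTorch snippet is a minimal illustrative sketch, not the authors' implementation; all module names, input shapes, and the loss weight beta are assumptions made for the example.

    # Hypothetical sketch (not the paper's model): grasp-outcome prediction
    # regularized by an auxiliary 3D geometry (voxel occupancy) loss.
    import torch
    import torch.nn as nn

    class GeometryAwareGraspNet(nn.Module):
        def __init__(self, pose_dim=7, feat_dim=256, voxel_res=32):
            super().__init__()
            # Shared encoder over an RGBD observation (illustrative CNN).
            self.encoder = nn.Sequential(
                nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(64 * 16, feat_dim), nn.ReLU(),
            )
            # Geometry head: predicts a coarse occupancy grid from shared features.
            self.geometry_head = nn.Linear(feat_dim, voxel_res ** 3)
            # Outcome head: scores a 6-DOF grasp pose (e.g., position + quaternion).
            self.outcome_head = nn.Sequential(
                nn.Linear(feat_dim + pose_dim, 128), nn.ReLU(),
                nn.Linear(128, 1),
            )

        def forward(self, rgbd, grasp_pose):
            feat = self.encoder(rgbd)
            occupancy_logits = self.geometry_head(feat)
            outcome_logit = self.outcome_head(torch.cat([feat, grasp_pose], dim=1))
            return outcome_logit, occupancy_logits

    def multitask_loss(outcome_logit, success_label, occupancy_logits, voxel_label, beta=0.5):
        # Grasp-outcome loss plus a geometry term that regularizes the shared encoder.
        bce = nn.functional.binary_cross_entropy_with_logits
        return (bce(outcome_logit.squeeze(1), success_label)
                + beta * bce(occupancy_logits, voxel_label))

    # Usage with dummy data:
    net = GeometryAwareGraspNet()
    rgbd = torch.randn(2, 4, 64, 64)
    pose = torch.randn(2, 7)
    outcome, occ = net(rgbd, pose)
    loss = multitask_loss(outcome, torch.ones(2), occ, torch.rand(2, 32 ** 3).round())
    loss.backward()

In this reading, the geometry head's role during training is to force the shared features to encode object shape, which the outcome head can then exploit; whether this matches the paper's exact architecture should be checked against the full text.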

BibTeX

@conference{Yan-2018-113286,
author = {Xinchen Yan and Jasmine Hsu and Mohi Khansari and Yunfei Bai and Arkanath Pathak and Abhinav Gupta and James Davidson and Honglak Lee},
title = {Learning 6-DOF Grasping Interaction via Deep 3D Geometry-aware Representations},
booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
year = {2018},
month = {May},
pages = {3766--3773},
}