Learning to Manipulate Unknown Objects in Clutter by Reinforcement
Abstract
We present a fully autonomous robotic system for grasping objects in dense clutter. The objects are unknown and have arbitrary shapes, so we cannot rely on prior models. Instead, the robot learns online, from scratch, to manipulate the objects by trial and error. Grasping objects in clutter is significantly harder than grasping isolated objects, because the robot needs to push and move objects around in order to create sufficient space for the fingers. These pre-grasping actions do not have immediate utility, and may result in unnecessary delays. The utility of a pre-grasping action can be measured only by looking at the complete chain of consecutive actions and effects. This is a sequential decision-making problem that can be cast in the reinforcement learning framework. We solve this problem by learning the stochastic transitions between the observed states, using nonparametric density estimation. The learned transition function is used only for recalculating the values of the executed actions in the observed states, under different policies. Values of new state-action pairs are obtained by regressing the values of the executed actions. The state of the system at a given time is a depth (3D) image of the scene. We use spectral clustering to detect the different objects in the image. The performance of our system is assessed on a robot with real-world objects.
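To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of nonparametric transition-density estimation from observed `(state, action, next_state)` samples, and of using the learned model to re-evaluate action values over the observed next states. The Gaussian kernel, the bandwidth, and the Nadaraya-Watson-style weighting are illustrative assumptions; the paper's actual estimator and state representation (depth images) are not reproduced here.

```python
# Hypothetical sketch: kernel-based estimation of stochastic transitions
# from logged (state, action, next_state) samples. States are modeled here
# as plain tuples of floats for illustration only.
import math


def gaussian_kernel(x, y, bandwidth=0.5):
    """Gaussian similarity between two state vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * bandwidth ** 2))


def transition_probability(s, a, s_next, samples, bandwidth=0.5):
    """Nonparametric estimate of P(s_next | s, a).

    `samples` is a list of (state, action, next_state) tuples gathered
    by trial and error. Samples with a different action are ignored.
    """
    num, den = 0.0, 0.0
    for si, ai, sni in samples:
        if ai != a:
            continue
        w = gaussian_kernel(s, si, bandwidth)
        num += w * gaussian_kernel(s_next, sni, bandwidth)
        den += w
    return num / den if den > 0 else 0.0


def expected_next_value(s, a, samples, value_fn, bandwidth=0.5):
    """E[V(s')] under the learned model, evaluated only over observed
    next states (the model is used to re-weight executed transitions,
    not to imagine unseen ones)."""
    num, den = 0.0, 0.0
    for si, ai, sni in samples:
        if ai != a:
            continue
        w = gaussian_kernel(s, si, bandwidth)
        num += w * value_fn(sni)
        den += w
    return num / den if den > 0 else 0.0
```

Restricting the expectation to observed next states mirrors the abstract's point that the learned transition function is used only to recalculate values of executed actions under different policies, rather than to simulate arbitrary new transitions.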
BibTeX
@conference{Boularias-2015-5904,
  author    = {Abdeslam Boularias and J. Andrew (Drew) Bagnell and Anthony (Tony) Stentz},
  title     = {Learning to Manipulate Unknown Objects in Clutter by Reinforcement},
  booktitle = {Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI '15)},
  year      = {2015},
  month     = {January},
  pages     = {1336--1342},
  publisher = {AAAI},
}