Pre- and Post-Contact Policy Decomposition for Planar Contact Manipulation Under Uncertainty
Abstract
We consider the problem of using real-time feedback from contact sensors to create closed-loop pushing actions. To do so, we formulate the problem as a partially observable Markov decision process (POMDP) with a transition model based on a physics simulator and a reward function that drives the robot towards a successful grasp. We demonstrate that it is intractable to solve the full POMDP with traditional techniques and introduce a novel decomposition of the policy into pre- and post-contact stages to reduce the computational complexity.
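As a rough illustration of this formulation (not the paper's implementation), the Python sketch below shows how such a POMDP might be structured. The names simulate_push, in_grasp_set, and sense_contact are hypothetical placeholders for the physics-simulator transition model, the grasp-success predicate, and the contact-sensor observation model.

class ContactManipulationPOMDP:
    """Hypothetical sketch of the POMDP described above: states are object
    poses, actions are hand motions, and observations come from contact
    sensors. All supplied functions are illustrative stand-ins."""

    def __init__(self, simulate_push, in_grasp_set, sense_contact):
        self.simulate_push = simulate_push    # physics-based transition model
        self.in_grasp_set = in_grasp_set      # predicate: pose admits a grasp
        self.sense_contact = sense_contact    # contact-sensor observation model

    def transition(self, pose, action):
        # Sample the next object pose from the (stochastic) simulator.
        return self.simulate_push(pose, action)

    def observation(self, pose, action):
        # Contact readings from the tactile sensors at the new pose.
        return self.sense_contact(pose, action)

    def reward(self, pose, action):
        # Reward drives the hand toward poses from which a grasp succeeds,
        # with a small per-step cost to encourage fast completion.
        return 1.0 if self.in_grasp_set(pose) else -0.01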
Our method uses an offline point-based solver on a variable-resolution discretization of the state space to solve for a post-contact policy as a pre-computation step. Then, at runtime, we use an A∗ search to compute a pre-contact trajectory. We prove that the value of the resulting policy is within a bound of the optimal policy's value and give intuition about when the policy performs well. Additionally, we show that the policy produced by our algorithm achieves a successful grasp more quickly and with higher probability than a baseline QMDP policy on two different objects in simulation. Finally, we validate our simulation results on a real robot using commercially available tactile sensors.
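To make the two-stage structure concrete, here is a minimal hypothetical sketch of how the decomposed policy might be executed: the pre-contact trajectory from the A∗ search is followed open-loop until the sensors report contact, at which point the offline-computed post-contact policy takes over. The names astar_precontact_trajectory, post_contact_policy, and step are placeholders, not the paper's API.

def execute_decomposed_policy(start_belief, astar_precontact_trajectory,
                              post_contact_policy, step, max_steps=200):
    """Run the pre-contact A* trajectory until contact is sensed, then
    switch to the pre-computed post-contact POMDP policy. All function
    arguments are illustrative stand-ins for the components described
    in the abstract."""
    # The trajectory is planned with A* at runtime; post_contact_policy
    # was solved offline by a point-based solver on a variable-resolution
    # discretization of the state space.
    trajectory = astar_precontact_trajectory(start_belief)
    belief = start_belief

    for t in range(max_steps):
        if t < len(trajectory) and not belief.contact_observed:
            action = trajectory[t]                # open-loop pre-contact stage
        else:
            action = post_contact_policy(belief)  # closed-loop post-contact stage

        belief, done = step(belief, action)       # execute and update the belief
        if done:                                  # grasp achieved
            return True
    return False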
BibTeX
@article{Koval-2016-103458,
author = {Michael C. Koval and Nancy Pollard and Siddhartha S. Srinivasa},
title = {Pre- and Post-Contact Policy Decomposition for Planar Contact Manipulation Under Uncertainty},
journal = {International Journal of Robotics Research: Special Issue on RSS '14},
year = {2016},
month = {January},
volume = {35},
number = {1},
pages = {244--264},
keywords = {Manipulation, Contact},
}