Pre- and post-contact policy decomposition for planar contact manipulation under uncertainty
Abstract
We consider the problem of using real-time feedback from contact sensors to create closed-loop pushing actions. To do so, we formulate the problem as a partially observable Markov decision process (POMDP) with a transition model based on a physics simulator and a reward function that drives the robot towards a successful grasp. We demonstrate that it is intractable to solve the full POMDP with traditional techniques and introduce a novel decomposition of the policy into pre- and post-contact stages to reduce the computational complexity. Our method uses an offline point-based solver on a variable-resolution discretization of the state space to solve for a post-contact policy as a pre-computation step. Then, at runtime, we use an A* search to compute a pre-contact trajectory. We prove that the value of the resulting policy is within a bound of the value of the optimal policy and give intuition about when it performs well. Additionally, we show that the policy produced by our algorithm achieves a successful grasp more quickly and with higher probability than a baseline QMDP policy on two different objects in simulation. Finally, we validate our simulation results on a real robot using commercially available tactile sensors.
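The two-stage structure described in the abstract (an offline solve for the post-contact policy, followed by an online A* search that delivers the hand to contact) can be sketched on a toy problem. Everything in the sketch below is a hypothetical stand-in, not the paper's actual models: the 1-D grid, dynamics, rewards, and contact set are invented for illustration, and the paper's point-based POMDP solver over a variable-resolution belief discretization is replaced by plain value iteration over a discretized state space.

# A minimal sketch of the pre-/post-contact decomposition on a 1-D toy
# problem. All quantities here (grid size, dynamics, rewards, contact
# cells) are hypothetical placeholders for the paper's physics-based models.
import heapq
import numpy as np

N = 51                 # cells in a toy hand-relative state space
GOAL = N // 2          # cell corresponding to a successful grasp
GAMMA = 0.99
ACTIONS = (-1, 0, 1)   # push left, hold, push right

def solve_post_contact(n_iters=200):
    """Offline: value iteration for the post-contact policy
    (a stand-in for the paper's point-based POMDP solver)."""
    V = np.zeros(N)
    for _ in range(n_iters):
        Q = np.empty((N, len(ACTIONS)))
        for ai, a in enumerate(ACTIONS):
            nxt = np.clip(np.arange(N) + a, 0, N - 1)
            reward = (nxt == GOAL).astype(float)   # reward only at the grasp
            Q[:, ai] = reward + GAMMA * V[nxt]
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)

def plan_pre_contact(start, contact_cells, V):
    """Online: A* from the start cell to a contact state, with unit move
    costs and an admissible distance-to-contact heuristic."""
    h = lambda s: min(abs(s - c) for c in contact_cells)
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        f, g, s, path = heapq.heappop(frontier)
        if s in contact_cells:
            return path, V[s]        # hand off to the post-contact policy
        if s in seen:
            continue
        seen.add(s)
        for a in (-1, 1):
            s2 = min(max(s + a, 0), N - 1)
            if s2 not in seen:
                heapq.heappush(frontier, (g + 1 + h(s2), g + 1, s2, path + [s2]))
    return None, 0.0

V, post_policy = solve_post_contact()
contact = {10, 40}                   # hypothetical contact-manifold cells
path, handoff_value = plan_pre_contact(0, contact, V)
print(f"pre-contact steps: {len(path) - 1}, value at contact: {handoff_value:.3f}")

For simplicity this search only minimizes distance to the nearest contact cell; the paper's runtime search additionally accounts for the value the post-contact policy attains from the resulting contact state, which is what ties the two stages together and underlies the bound on suboptimality.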
BibTeX
@article{Koval-2015-6009,
  author  = {Michael Koval and Nancy Pollard and Siddhartha Srinivasa},
  title   = {Pre- and post-contact policy decomposition for planar contact manipulation under uncertainty},
  journal = {International Journal of Robotics Research},
  year    = {2015},
  month   = {August},
}