
Pre- and Post-Contact Policy Decomposition for Planar Contact Manipulation Under Uncertainty

Conference Paper, Proceedings of Robotics: Science and Systems (RSS '14), July 2014

Abstract

We consider the problem of using real-time feedback from contact sensors to create closed-loop pushing actions. To do so, we formulate the problem as a partially observable Markov decision process (POMDP) with a transition model based on a physics simulator and a reward function that drives the robot towards a successful grasp. We demonstrate that it is intractable to solve the full POMDP with traditional techniques and introduce a novel decomposition of the policy into pre- and post-contact stages to reduce the computational complexity. Our method uses an offline point-based solver on a variable-resolution discretization of the state space to solve for a post-contact policy as a pre-computation step. Then, at runtime, we use an A∗ search to compute a pre-contact trajectory. We prove that the value of the resulting policy is within a bound of the value of the optimal policy and give intuition about when it performs well. Additionally, we show the policy produced by our algorithm achieves a successful grasp more quickly and with higher probability than a baseline policy.
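
The two-stage structure described above lends itself to a small illustration: precompute a value for each state where contact can occur, then search for a pre-contact trajectory that reaches a favorable contact state. The Python sketch below is not the authors' implementation; the 5x5 grid, the POST_CONTACT_VALUE table, and the value-minus-cost objective are illustrative assumptions standing in for the physics-based POMDP model, the offline point-based solver, and the paper's reward function.

import heapq

# Offline stage (assumed precomputed): value of switching to the
# post-contact policy at each discrete contact cell. In the paper this
# comes from a point-based POMDP solver on a variable-resolution
# discretization; here it is a hand-made stand-in.
POST_CONTACT_VALUE = {
    (4, 0): 0.9,
    (4, 1): 0.7,
    (4, 2): 0.4,
}

GRID = 5  # small planar grid standing in for the pre-contact state space


def neighbors(cell):
    """4-connected moves with unit cost (illustrative hand dynamics)."""
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID and 0 <= ny < GRID:
            yield (nx, ny)


def astar(start, goal):
    """Plain A* with an admissible Manhattan heuristic; returns (cost, path)."""
    def h(c):
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]
    closed = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        if cell in closed:
            continue
        closed.add(cell)
        for nxt in neighbors(cell):
            if nxt not in closed:
                heapq.heappush(
                    frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return float("inf"), None


def plan_pre_contact(start):
    """Runtime stage: choose a pre-contact trajectory that reaches the contact
    cell with the best trade-off between travel cost and the precomputed
    post-contact value (an illustrative objective, not the paper's reward)."""
    best_score, best_path = float("-inf"), None
    for cell, value in POST_CONTACT_VALUE.items():
        cost, path = astar(start, cell)
        score = value - 0.1 * cost
        if path is not None and score > best_score:
            best_score, best_path = score, path
    return best_path


if __name__ == "__main__":
    # The hand starts away from the object; the planner picks the contact
    # cell the (hypothetical) post-contact policy handles best, discounted
    # by how far it must travel to get there.
    print("pre-contact trajectory:", plan_pre_contact(start=(0, 0)))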

BibTeX

@conference{Koval-2014-7897,
author = {Michael Koval and Nancy Pollard and Siddhartha Srinivasa},
title = {Pre- and Post-Contact Policy Decomposition for Planar Contact Manipulation Under Uncertainty},
booktitle = {Proceedings of Robotics: Science and Systems (RSS '14)},
year = {2014},
month = {July},
}