Reinforcement Planning: RL for Optimal Planners

Tech. Report CMU-RI-TR-10-14, Robotics Institute, Carnegie Mellon University, April 2010

Abstract

Search-based planners such as A* and Dijkstra's algorithm are proven methods for guiding today's robotic systems. Although such planners are typically based upon a coarse approximation of reality, they are nonetheless valuable due to their ability to reason about the future, and to generalize to previously unseen scenarios. However, encoding the desired behavior of a system into the underlying cost function used by the planner can be a tedious and error-prone task. We introduce Reinforcement Planning, which extends gradient-based reinforcement learning algorithms to automatically learn useful cost functions for optimal planners. Reinforcement Planning offers several advantages over other approaches that combine learning with planners: it is not limited by the expertise of a human demonstrator, and it explicitly recognizes that the domain of the planner is a simplified model of the world. We demonstrate the effectiveness of our method in learning to solve a noisy physical simulation of the well-known "marble maze" toy.
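To make the central idea concrete, the sketch below illustrates the key computational primitive behind learning cost functions through an optimal planner: because the planner's value V(theta) is a minimum over paths of a cost that is linear in the weights theta, a subgradient of V with respect to theta is simply the sum of feature vectors along the optimal path, which a gradient-based learner can then use for updates. This is a minimal illustration, not the report's implementation; the grid world, feature dimensions, and function names are assumptions chosen for the example, and the report's actual experiments use a noisy marble-maze simulation rather than this toy grid.

import heapq
import numpy as np

def dijkstra(features, theta, start, goal):
    # Plan a minimum-cost path on a 4-connected grid where the cost of
    # entering cell (r, c) is theta . features[r, c]. Returns the path
    # as a list of cells from start to goal.
    rows, cols = features.shape[:2]
    cost = features @ theta                # per-cell traversal cost
    dist = np.full((rows, cols), np.inf)
    parent = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist[u]:
            continue
        r, c = u
        for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= v[0] < rows and 0 <= v[1] < cols:
                nd = d + cost[v]
                if nd < dist[v]:
                    dist[v] = nd
                    parent[v] = u
                    heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(parent[path[-1]])
    return path[::-1]

def plan_cost_and_subgradient(features, theta, start, goal):
    # V(theta) is the cost of the optimal path. Since V is a minimum of
    # functions linear in theta, the feature counts accumulated along the
    # optimal path form a subgradient of V at theta.
    path = dijkstra(features, theta, start, goal)
    phi = sum(features[cell] for cell in path[1:])   # start cell incurs no entry cost
    return float(phi @ theta), phi

# Toy problem: random nonnegative features on a 10x10 grid.
rng = np.random.default_rng(0)
features = rng.uniform(0.1, 1.0, size=(10, 10, 3))
theta = np.array([1.0, 0.5, 2.0])
V, grad = plan_cost_and_subgradient(features, theta, (0, 0), (9, 9))

# Finite-difference check of the subgradient in a random direction.
d = rng.normal(size=3)
eps = 1e-5
Vp, _ = plan_cost_and_subgradient(features, theta + eps * d, (0, 0), (9, 9))
print("analytic: %.6f  numeric: %.6f" % (grad @ d, (Vp - V) / eps))

In a reinforcement learning loop, this subgradient is what lets reward information received during execution flow back through the planner to the cost weights, rather than requiring a human to hand-tune the cost function or provide demonstrations.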

BibTeX

@techreport{Zucker-2010-10420,
author = {Matthew Zucker and J. Andrew (Drew) Bagnell},
title = {Reinforcement Planning: RL for Optimal Planners},
year = {2010},
month = {April},
institution = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-10-14},
}