Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives

Murtaza Dalal, Deepak Pathak, and Ruslan Salakhutdinov
Conference Paper, Proceedings of Neural Information Processing Systems (NeurIPS), December 2021

Abstract

Despite the potential of reinforcement learning (RL) for building general-purpose
robotic systems, training RL agents to solve robotics tasks remains challenging
due to the difficulty of exploration in purely continuous action spaces. Addressing
this problem is an active area of research, with most of the focus on improving
RL methods via better optimization or more efficient exploration. An alternative
but important component to improve is the interface between the RL algorithm
and the robot. In this work, we manually specify a library of robot action primitives
(RAPS), parameterized with arguments that are learned by an RL policy. These
parameterized primitives are expressive and simple to implement; they enable
efficient exploration and can be transferred across robots, tasks, and environments.
We perform a thorough empirical study across challenging tasks in three distinct
domains with image input and a sparse terminal reward. We find that our simple
change to the action interface substantially improves both learning efficiency
and task performance irrespective of the underlying RL algorithm, significantly
outperforming prior methods that learn skills from offline expert data.
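
To make the action interface concrete, the sketch below shows one way a policy's
flat continuous output could be decoded into a primitive choice plus its
arguments. This is a minimal illustration only, not the paper's released
implementation: the primitive names, argument dimensions, and the
argmax-over-scores encoding are assumptions for demonstration.

import numpy as np

# Assumed primitive library: (name, number of continuous arguments).
PRIMITIVES = [
    ("move_delta", 3),   # end-effector displacement (dx, dy, dz)
    ("lift", 1),         # vertical lift distance
    ("grasp", 1),        # gripper closing amount
    ("release", 0),      # open gripper; takes no arguments
]

NUM_PRIMITIVES = len(PRIMITIVES)
MAX_ARGS = max(n for _, n in PRIMITIVES)
ACTION_DIM = NUM_PRIMITIVES + MAX_ARGS  # flat vector the policy outputs

def decode_action(flat_action):
    """Split a flat policy output into a primitive choice and its arguments.

    The first NUM_PRIMITIVES entries score each primitive, and the argmax
    selects which one to run. The remaining MAX_ARGS entries form a shared
    argument block, of which only the first n_args entries are consumed by
    the selected primitive; the rest are ignored.
    """
    scores = flat_action[:NUM_PRIMITIVES]
    args = flat_action[NUM_PRIMITIVES:]
    idx = int(np.argmax(scores))
    name, n_args = PRIMITIVES[idx]
    return name, args[:n_args]

# Usage: one random "policy output" maps to one primitive call per RL step.
rng = np.random.default_rng(0)
name, args = decode_action(rng.uniform(-1.0, 1.0, size=ACTION_DIM))
print(name, args)

Under an encoding like this, the policy still emits a single continuous vector
per step, which is consistent with the abstract's claim that the change to the
action interface works irrespective of the underlying RL algorithm.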

BibTeX

@conference{Dalal-2021-130154,
author = {Murtaza Dalal and Deepak Pathak and Ruslan Salakhutdinov},
title = {Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives},
booktitle = {Proceedings of Neural Information Processing Systems (NeurIPS)},
year = {2021},
month = {December},
keywords = {reinforcement learning, robotic manipulation, motion primitives, hierarchical RL},
}