Reinforcement Learning with Spatial Reasoning for Dexterous Robotic Manipulation
Abstract
Robotic manipulation in unstructured environments requires adaptability and the ability to handle a wide variety of objects and tasks. This thesis presents novel approaches for learning robotic manipulation skills using reinforcement learning (RL) with spatially-grounded action spaces, addressing the challenges of high-dimensional, continuous action spaces and reducing the need for extensive training data.
Our first contribution, HACMan (Hybrid Actor-Critic Maps for Manipulation), introduces a hybrid actor-critic model that grounds discrete and continuous actions on 3D object point clouds, enabling complex non-prehensile interactions based on the spatial features of the object. Our second contribution, HACMan++ (Spatially-Grounded Motion Primitives for Manipulation), extends this framework to more general manipulation tasks. It introduces a diverse set of parameterized motion primitives, allowing the robot to perform a wide range of tasks by chaining these primitives together.
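To make the idea of a spatially-grounded hybrid action space concrete, the following is a minimal conceptual sketch (not the thesis implementation): for every point in the object point cloud, a critic head scores selecting that point as the contact location (the discrete part of the action), while an actor head proposes a continuous motion parameter (e.g. a push direction) at that point. The function name, the linear "heads", and the toy weights are all hypothetical stand-ins for learned networks.

```python
import numpy as np

def per_point_actor_critic(points, w_critic, w_actor):
    """Hypothetical sketch of a spatially-grounded hybrid action space.

    points:   (N, 3) object point cloud
    w_critic: (3,)   stand-in for a learned critic head
    w_actor:  (3, 3) stand-in for a learned actor head
    """
    # Critic map: one Q-value per point, scoring it as a contact location (N,)
    q_map = points @ w_critic
    # Actor map: one continuous motion vector per point, squashed to [-1, 1] (N, 3)
    motions = np.tanh(points @ w_actor)
    # Discrete part of the action: pick the highest-scoring contact point
    best = int(np.argmax(q_map))
    # Return the chosen point index and its continuous motion parameters
    return best, motions[best]

# Toy point cloud (100 points in 3D) and random placeholder "network" weights
rng = np.random.default_rng(0)
points = rng.normal(size=(100, 3))
idx, motion = per_point_actor_critic(points,
                                     rng.normal(size=3),
                                     rng.normal(size=(3, 3)))
```

The key design choice this illustrates is that the action space is tied to the object's geometry: the policy outputs a map over the observed points rather than a free-floating end-effector command, which is what allows generalization across object shapes.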
Through extensive experiments in simulation and on real robot platforms, we demonstrate the effectiveness of our proposed approaches in learning complex, long-horizon manipulation tasks with strong generalization to novel objects and environments. The thesis contributes to the state-of-the-art of robotic manipulation by providing novel RL approaches that leverage spatially-grounded action spaces and motion primitives, opening up new possibilities for more intelligent and capable robotic systems.
BibTeX
@mastersthesis{Jiang-2024-141971,
author = {Bowen Jiang},
title = {Reinforcement Learning with Spatial Reasoning for Dexterous Robotic Manipulation},
year = {2024},
month = {July},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-24-47},
keywords = {Reinforcement Learning, Manipulation, Robotics},
}