Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks

Murtaza Dalal, Tarun Chiruvolu, Devendra Chaplot, and Ruslan Salakhutdinov
Conference Paper, Proceedings of the Conference on Robot Learning (CoRL), May 2024

Abstract

Large Language Models (LLMs) have been shown to be capable of performing high-level planning for long-horizon robotics tasks, yet existing methods require access to a pre-defined skill library (e.g., picking, placing, pulling, pushing, navigating). However, LLM planning does not address how to design or learn those behaviors, which remains challenging, particularly in long-horizon settings. Furthermore, for many tasks of interest, the robot needs to be able to adjust its behavior in a fine-grained manner, requiring the agent to be capable of modifying low-level control actions. Can we instead use the internet-scale knowledge from LLMs for high-level policies, guiding reinforcement learning (RL) policies to efficiently solve robotic control tasks online without requiring a pre-determined set of skills? In this paper, we propose Plan-Seq-Learn (PSL): a modular approach that uses motion planning to bridge the gap between abstract language and learned low-level control for solving long-horizon robotics tasks from scratch. We demonstrate that PSL achieves state-of-the-art results on over 25 challenging robotics tasks with up to 10 stages. PSL solves long-horizon tasks from raw visual input spanning four benchmarks at success rates of over 85%, outperforming language-based, classical, and end-to-end approaches.
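
The abstract describes a modular loop: an LLM proposes a high-level stage plan, a motion planner sequences the robot to each region of interest, and an RL policy performs the fine-grained local control. Below is a minimal Python sketch of such a loop written only from that description; every name in it (Stage, llm_high_level_plan, motion_plan_to, run_psl_episode, and the env interface) is a hypothetical placeholder, not the authors' actual API.

# Illustrative sketch of a Plan-Seq-Learn-style control loop.
# All interfaces are assumptions made for this example.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Stage:
    """One step of the LLM's high-level plan (e.g. 'grasp the red block')."""
    description: str                      # natural-language subgoal from the LLM
    target_region: Tuple[float, float, float]  # region of interest for the motion planner


def llm_high_level_plan(task_prompt: str) -> List[Stage]:
    """Placeholder: query an LLM to decompose the task into stages.
    A real system would parse the LLM output into structured subgoals."""
    raise NotImplementedError


def motion_plan_to(region: Tuple[float, float, float]) -> None:
    """Placeholder: collision-free motion planning that brings the
    end-effector near `region` (e.g. with a sampling-based planner)."""
    raise NotImplementedError


def run_psl_episode(
    task_prompt: str,
    rl_policy: Callable[[object], object],  # learned low-level visuomotor policy
    env,                                    # hypothetical environment with observe()/step()
    max_local_steps: int = 100,
) -> bool:
    """Plan with the LLM, sequence with motion planning, and let the RL
    policy handle contact-rich local control at each stage."""
    for stage in llm_high_level_plan(task_prompt):       # "Plan"
        motion_plan_to(stage.target_region)              # "Seq": reach the region of interest
        for _ in range(max_local_steps):                 # "Learn": local RL control
            obs = env.observe()
            action = rl_policy(obs)
            env.step(action)
            if env.stage_complete(stage):                # hypothetical stage-termination check
                break
    return env.task_success()                            # hypothetical success check

The sketch is only meant to make the division of labor concrete: the LLM never outputs low-level actions, the motion planner only handles free-space transit, and the RL policy acts once the robot is near each region of interest.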

BibTeX

@conference{Dalal-2024-142758,
author = {Murtaza Dalal and Tarun Chiruvolu and Devendra Chaplot and Ruslan Salakhutdinov},
title = {Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks},
booktitle = {Proceedings of (CoRL) Conference on Robot Learning},
year = {2024},
month = {May},
}