Imitating Task and Motion Planning with Visuomotor Transformers
Abstract
Imitation learning is a powerful tool for training robot manipulation policies, allowing them to learn from expert demonstrations without manual programming or trial-and-error. However, common methods of data collection, such as human supervision, scale poorly because they are time-consuming and labor-intensive. In contrast, Task and Motion Planning (TAMP) can autonomously generate large-scale datasets of diverse demonstrations. In this work, we show that combining large-scale datasets generated by TAMP supervisors with flexible Transformer models to fit them is a powerful paradigm for robot manipulation. To that end, we present a novel imitation learning system called OPTIMUS that trains large-scale visuomotor Transformer policies by imitating a TAMP agent. OPTIMUS introduces a pipeline for generating TAMP data that is specifically curated for imitation learning and can be used to train performant Transformer-based policies. In this paper, we present a thorough study of the design decisions required to imitate TAMP and demonstrate that OPTIMUS can solve a wide variety of challenging vision-based manipulation tasks with over 70 different objects, ranging from long-horizon pick-and-place tasks to shelf and articulated-object manipulation, achieving success rates of 70 to 80%.
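The abstract's core recipe is to collect demonstrations autonomously with a TAMP supervisor and then fit a visuomotor Transformer policy by behavior cloning. The minimal PyTorch sketch below illustrates that recipe only; the VisuomotorTransformerPolicy class, all dimensions, and the synthetic stand-in for TAMP-generated data are illustrative assumptions, not details from the paper.

# A minimal sketch of behavior-cloning a TAMP supervisor with a Transformer
# policy. Names and dimensions are hypothetical, not the authors' code.
import torch
import torch.nn as nn

class VisuomotorTransformerPolicy(nn.Module):
    def __init__(self, obs_dim=512, act_dim=7, ctx_len=10, d_model=256):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)                 # per-step observation embedding
        self.pos = nn.Parameter(torch.zeros(ctx_len, d_model))  # learned positional embeddings
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, act_dim)                  # action prediction head

    def forward(self, obs_seq):                                  # obs_seq: (B, T, obs_dim)
        h = self.embed(obs_seq) + self.pos[: obs_seq.size(1)]
        h = self.encoder(h)
        return self.head(h[:, -1])                               # predict action at the last step

# Synthetic stand-in for (observation-history, expert-action) pairs that a
# TAMP supervisor would generate autonomously while solving tasks.
B, T, OBS, ACT = 32, 10, 512, 7
obs = torch.randn(B, T, OBS)
expert_act = torch.randn(B, ACT)

policy = VisuomotorTransformerPolicy(OBS, ACT, T)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss = nn.functional.mse_loss(policy(obs), expert_act)           # behavior cloning loss
opt.zero_grad()
loss.backward()
opt.step()
print(f"behavior cloning loss: {loss.item():.4f}")

In a full system, the per-step observation vector would come from a vision encoder over camera images plus proprioception, and the dataset would hold state-action pairs logged as the TAMP planner solves procedurally generated tasks.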
BibTeX
@conference{Dalal-2023-142756,
  author    = {Murtaza Dalal and Ajay Mandlekar and Caelan Garrett and Ankur Handa and Ruslan Salakhutdinov and Dieter Fox},
  title     = {Imitating Task and Motion Planning with Visuomotor Transformers},
  booktitle = {Proceedings of the Conference on Robot Learning (CoRL)},
  year      = {2023},
  month     = {November},
  keywords  = {Imitation Learning, Task and Motion Planning, Transformers},
}