Object-centric Forward Modeling for Model Predictive Control
Abstract
We present an approach to learning an object-centric forward model, and show that it allows us to plan sequences of actions to achieve distant desired goals. We propose to model a scene as a collection of objects, each with an explicit spatial location and an implicit visual feature, and learn to model the effects of actions using random interaction data. Our model captures robot-object and object-object interactions, leading to more sample-efficient and accurate predictions. We show that this learned model can be leveraged to search for action sequences that lead to desired goal configurations, and that, in conjunction with a learned correction module, this allows for robust closed-loop execution. We present experiments both in simulation and the real world, and show that our approach improves over alternate implicit or pixel-space forward models. Please see our project page for result videos.
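To make the setup concrete, below is a minimal Python/NumPy sketch of planning with an object-centric forward model. Every name here is illustrative: the state representation (per-object location plus feature vector), the toy predict_next dynamics, and the random-shooting planner are stand-ins for the learned forward model and sampling-based search described above, not the paper's actual implementation.

import numpy as np

# Hypothetical object-centric state: each object has an explicit 2D location
# and an implicit visual feature vector (structure chosen for illustration).
def make_state(num_objects=3, feat_dim=16, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    return [{"loc": rng.uniform(-1, 1, size=2),
             "feat": rng.normal(size=feat_dim)} for _ in range(num_objects)]

# Placeholder forward model: predicts the next object-centric state from the
# current state and a robot action (here, a push location and push vector).
# A model trained on random interaction data would replace this toy stub.
def predict_next(state, action):
    push_loc, push_vec = action[:2], action[2:]
    next_state = []
    for obj in state:
        # Toy dynamics: objects near the push location get displaced more.
        weight = np.exp(-np.linalg.norm(obj["loc"] - push_loc))
        next_state.append({"loc": obj["loc"] + weight * push_vec,
                           "feat": obj["feat"]})
    return next_state

# Cost of a predicted state against a goal configuration: summed distance
# between predicted and desired object locations.
def goal_cost(state, goal_locs):
    return sum(np.linalg.norm(o["loc"] - g) for o, g in zip(state, goal_locs))

# Random-shooting MPC: sample candidate action sequences, roll each out with
# the forward model, and return the first action of the lowest-cost sequence.
def plan(state, goal_locs, horizon=5, num_samples=256, rng=None):
    rng = np.random.default_rng(1) if rng is None else rng
    best_cost, best_seq = np.inf, None
    for _ in range(num_samples):
        seq = rng.uniform(-0.5, 0.5, size=(horizon, 4))  # (push_loc, push_vec)
        s = state
        for a in seq:
            s = predict_next(s, a)
        c = goal_cost(s, goal_locs)
        if c < best_cost:
            best_cost, best_seq = c, seq
    return best_seq[0]

if __name__ == "__main__":
    state = make_state()
    goal = [o["loc"] + np.array([0.3, 0.0]) for o in state]  # shift objects right
    print("first planned action:", plan(state, goal))

In closed-loop use, only the first planned action would be executed, the new state re-estimated (the role of the correction module in the abstract), and planning repeated.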
BibTeX
@conference{Ye-2020-126804,
  author    = {Yufei Ye and Dhiraj Gandhi and Abhinav Gupta and Shubham Tulsiani},
  title     = {Object-centric Forward Modeling for Model Predictive Control},
  booktitle = {Proceedings of (CoRL) Conference on Robot Learning},
  year      = {2020},
  month     = {May},
  pages     = {100--109},
}