Learning Reactive and Predictive Differentiable Controllers for Switching Linear Dynamical Models
Abstract
Humans leverage the dynamics of the environment and their own bodies to accomplish challenging tasks such as grasping an object while walking past it or pushing off a wall to turn a corner. Such tasks often involve switching dynamics as the robot makes and breaks contact. Learning these dynamics is a challenging problem and prone to model inaccuracies, especially near contact regions. In this work, we present a framework for learning composite dynamical behaviors from expert demonstrations. We learn a switching linear dynamical model, with contacts encoded in the switching conditions, as a close approximation of our system dynamics. We then use discrete-time LQR as the differentiable policy class for data-efficient learning of a control strategy that operates over multiple dynamical modes and accounts for discontinuities due to contact. In addition to predicting interactions with the environment, our policy effectively reacts to inaccurate predictions such as unanticipated contacts. Through simulation and real-world experiments, we demonstrate generalization of learned behaviors to different scenarios and robustness to model inaccuracies during execution.
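To make the two ingredients named above concrete, the sketch below (not the authors' code) shows a toy switching linear dynamical model with a state-dependent guard and a finite-horizon discrete-time LQR policy whose Riccati recursion is differentiable, so cost weights can be fit to demonstrations with gradient descent. All matrices, the guard threshold, and the parameter names are illustrative placeholders, not values from the paper.

```python
import jax
import jax.numpy as jnp

# Two linear modes, e.g. "free space" and "in contact", switched by a guard on x[0].
A = [jnp.array([[1.0, 0.1], [0.0, 1.0]]),      # mode 0: free-space dynamics
     jnp.array([[1.0, 0.1], [-0.5, 0.5]])]     # mode 1: contact dynamics (illustrative)
B = [jnp.array([[0.0], [0.1]]),
     jnp.array([[0.0], [0.05]])]
GUARD = 1.0                                     # switch to mode 1 once x[0] >= GUARD
H = 20                                          # planning horizon

def mode(x):
    """Switching condition: which linear mode is active at state x."""
    return jnp.where(x[0] >= GUARD, 1, 0)

def lqr_gains(A_m, B_m, Q, R, Qf):
    """Backward Riccati recursion for one mode; differentiable w.r.t. Q, R, Qf."""
    P = Qf
    gains = []
    for _ in range(H):
        K = jnp.linalg.solve(R + B_m.T @ P @ B_m, B_m.T @ P @ A_m)
        P = Q + A_m.T @ P @ (A_m - B_m @ K)
        gains.append(K)
    return jnp.stack(gains[::-1])               # gains[t] is the feedback at step t

def rollout(params, x0, x_goal):
    """Roll out the switching model under per-mode LQR feedback."""
    Q = jnp.diag(jnp.exp(params["log_q"]))      # positive-definite cost weights
    R = jnp.diag(jnp.exp(params["log_r"]))
    Ks = [lqr_gains(A[m], B[m], Q, R, 10.0 * Q) for m in (0, 1)]
    x, xs = x0, []
    for t in range(H):
        m = mode(x)                             # react to the currently active mode
        K = jnp.where(m == 1, Ks[1][t], Ks[0][t])
        u = -K @ (x - x_goal)
        A_t = jnp.where(m == 1, A[1], A[0])
        B_t = jnp.where(m == 1, B[1], B[0])
        x = A_t @ x + B_t @ u
        xs.append(x)
    return jnp.stack(xs)

def imitation_loss(params, x0, expert_traj):
    """Squared tracking error against an expert demonstration."""
    xs = rollout(params, x0, x_goal=expert_traj[-1])
    return jnp.mean((xs - expert_traj) ** 2)

# Gradients flow through the Riccati recursion and the rollout, so the cost
# weights can be fit to demonstrations with any first-order optimizer.
params = {"log_q": jnp.zeros(2), "log_r": jnp.zeros(1)}
x0 = jnp.array([0.0, 0.0])
expert = jnp.tile(jnp.array([1.5, 0.0]), (H, 1))    # toy stand-in for a demonstration
loss, grads = jax.value_and_grad(imitation_loss)(params, x0, expert)
```

In this toy version the mode is re-evaluated at every step of the rollout, which is what makes the policy reactive to an unanticipated mode switch; the full method in the paper additionally uses the learned switching conditions to predict contact ahead of time.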
BibTeX
@conference{Saxena-2021-127435,
  author    = {Saumya Saxena and Alex LaGrassa and Oliver Kroemer},
  title     = {Learning Reactive and Predictive Differentiable Controllers for Switching Linear Dynamical Models},
  booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
  year      = {2021},
  month     = {May},
}