Policy Blending and Recombination for Multimodal Contact-Rich Tasks

Tetsuya Narita and Oliver Kroemer
Journal Article, IEEE Robotics and Automation Letters, Vol. 6, No. 2, pp. 2721-2728, April 2021

Abstract

Multimodal information such as tactile, proximity, and force sensing is essential for performing stable contact-rich manipulation. However, coupling multimodal information with motion control remains a challenging topic. Rather than learning a monolithic skill policy that takes in all feedback signals at all times, skills should be divided into phases, with each phase's policy learning to use only the sensor signals applicable to that phase. This makes learning the primitive policies for each phase easier and allows the primitive policies to be more easily reused across different skills. However, stopping and abruptly switching between primitive policies results in longer execution times and less robust behaviours. We therefore propose a blending approach that seamlessly combines the primitive policies into a reliable combined control policy. We evaluate both time-based and state-based blending approaches. The resulting approach was successfully evaluated in simulation and on a real robot, equipped with an augmented finger vision sensor, on three tasks: opening a cap, turning a dial, and flipping a breaker. The evaluations show that the blended policies with multimodal feedback can be easily learned and reliably executed.
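To illustrate the core idea, the following is a minimal sketch (not the authors' implementation) of blending two primitive-policy actions with a smooth weight. The sigmoid-shaped weight functions, the `sharpness` parameter, and the scalar contact signal are all illustrative assumptions; the paper's actual blending functions and sensor inputs may differ.

```python
import numpy as np

def sigmoid(x):
    # Standard logistic function, used here to get a smooth 0-to-1 weight.
    return 1.0 / (1.0 + np.exp(-x))

def blend_actions(a1, a2, alpha):
    # Convex combination of two primitive-policy actions; alpha in [0, 1]
    # shifts control smoothly from the first primitive to the second.
    return (1.0 - alpha) * a1 + alpha * a2

def time_based_weight(t, t_switch, sharpness=10.0):
    # Time-based blending: the weight rises smoothly around t_switch
    # instead of switching abruptly between primitives.
    return sigmoid(sharpness * (t - t_switch))

def state_based_weight(signal, threshold, sharpness=50.0):
    # State-based blending: the weight is driven by a sensed quantity
    # (e.g. a contact-force reading) crossing a threshold.
    return sigmoid(sharpness * (signal - threshold))
```

A controller would evaluate both primitive policies at each step and execute `blend_actions(pi1(s), pi2(s), alpha)`, where `alpha` comes from either the time-based or the state-based weight; the sharpness controls how gradual the hand-off is.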

Notes
Best Manipulation Paper finalist

BibTeX

@article{Narita-2021-128850,
author = {Tetsuya Narita and Oliver Kroemer},
title = {Policy Blending and Recombination for Multimodal Contact-Rich Tasks},
journal = {IEEE Robotics and Automation Letters},
year = {2021},
month = {April},
volume = {6},
number = {2},
pages = {2721--2728},
}