
Visual Identification of Articulated Object Parts

Vicky Zeng, Tabitha Edith Lee, Jacky Liang, and Oliver Kroemer
Conference Paper, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), July 2021

Abstract

As autonomous robots interact with and navigate real-world environments such as homes, it is useful for them to reliably identify and manipulate articulated objects, such as doors and cabinets. Many prior works on object articulation identification require manipulation of the object, either by the robot or by a human. While recent works have addressed predicting articulation types from visual observations alone, they often assume prior knowledge of category-level kinematic motion models or a sequence of observations in which the articulated parts move according to their kinematic constraints. In this work, we propose training a neural network through large-scale domain randomization to identify the articulation type of object parts from a single image observation. Training data is generated via photorealistic rendering in simulation. Our proposed model predicts motion residual flows of object parts, and these residuals are used to determine the articulation type and parameters. We train the network on six object categories with 149 objects and 100K rendered images, achieving an accuracy of 82.5%. Experiments show that our method generalizes to novel object categories in simulation and can be applied to real-world images without fine-tuning.
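The abstract describes the pipeline only at a high level, so the following is a minimal sketch of the final step: turning a part's predicted motion residual flow into an articulation type. It assumes 3D points and flow vectors are available for a segmented part and compares two simple motion-model fits (prismatic vs. revolute); the function name, inputs, and fitting procedure are illustrative assumptions, not the paper's actual network or procedure.

import numpy as np

def classify_articulation(points, flow, eps=1e-8):
    """Sketch only: infer an articulation type from predicted residual flow.

    Fits a prismatic (pure translation) and a revolute (rigid rotation) motion
    model to the per-point flow and picks whichever explains it better. This is
    an assumed post-processing step, not the authors' exact method.

    points: (N, 3) 3D points on the object part
    flow:   (N, 3) predicted motion residual vectors for those points
    """
    displaced = points + flow

    # Prismatic hypothesis: every point translates by the same vector.
    t = flow.mean(axis=0)
    prismatic_err = np.linalg.norm(flow - t, axis=1).mean()

    # Revolute hypothesis: rigid rotation (plus translation), fit via the
    # Kabsch algorithm on the centered point clouds.
    src = points - points.mean(axis=0)
    dst = displaced - displaced.mean(axis=0)
    U, _, Vt = np.linalg.svd(src.T @ dst)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    pred = (R @ src.T).T + displaced.mean(axis=0)
    revolute_err = np.linalg.norm(displaced - pred, axis=1).mean()

    # Rotation angle from the fitted rotation matrix; a near-zero angle means
    # the "rotation" degenerates to a translation.
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

    if revolute_err + eps < prismatic_err and angle > 1e-3:
        return "revolute", {"rotation": R, "angle": angle}
    return "prismatic", {"translation": t}

In practice the paper's network predicts the residual flows from a single RGB image; a model-selection step like the one sketched above is just one way such flows could be reduced to an articulation type and its parameters.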

BibTeX

@conference{Zeng-2021-128856,
  author    = {Vicky Zeng and Tabitha Edith Lee and Jacky Liang and Oliver Kroemer},
  title     = {Visual Identification of Articulated Object Parts},
  booktitle = {Proceedings of (IROS) IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year      = {2021},
  month     = {July},
}