Robotics Institute, Carnegie Mellon University

Vision-Based Self-Assembly for Modular Multirotor Structures

Yehonathan Litman, Neeraj Gandhi, Linh Thi Xuan Phan, and David Saldaña
Journal Article, IEEE Robotics and Automation Letters, Vol. 6, No. 2, pp. 2202-2208, April 2021

Abstract

Modular aerial robots can adapt their shape to suit a wide range of tasks, but developing efficient self-reconfiguration algorithms remains a challenge. Self-reconfiguration algorithms in the literature rely on high-accuracy global positioning systems, which are not realistic for real-world applications. In this letter, we study self-reconfiguration algorithms that combine low-accuracy global positioning (e.g., GPS) with on-board relative positioning (e.g., visual sensing) for precise docking actions. We present three algorithms: 1) parallelized self-assembly sequencing that minimizes the number of serial "docking steps"; 2) parallelized self-assembly sequencing that minimizes the total distance traveled by modules; and 3) parallelized self-reconfiguration that breaks an initial structure down as little as possible before assembling a new structure. The algorithms take into account the constraints of the local sensors and use heuristics to provide a computationally efficient solution to the combinatorial problem. Our evaluation in 2-D and 3-D simulations shows that the algorithms scale with the number of modules and with structure shape.
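The paper's heuristics are not reproduced on this page, but the core idea behind minimizing serial docking steps can be illustrated with a toy sketch: group the cells of a target structure into "waves" in which every module is adjacent to the already-assembled structure, so all modules in a wave dock in parallel. The 2-D grid model, 4-connectivity, and the function name below are illustrative assumptions, not the paper's formulation:

```python
def parallel_docking_waves(cells, seed):
    """Toy sketch (not the paper's algorithm): partition target cells
    into docking waves. Every cell in a wave is 4-adjacent to the
    already-docked structure, so the whole wave docks in parallel;
    the number of waves is the number of serial docking steps."""
    cells = set(cells)
    docked = {seed}
    frontier = {seed}
    waves = []
    while docked != cells:
        # Undocked target cells 4-adjacent to the newest docked cells.
        candidates = {
            (x + dx, y + dy)
            for (x, y) in frontier
            for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1))
        }
        nxt = (candidates & cells) - docked
        if not nxt:
            break  # target is disconnected from the seed
        waves.append(sorted(nxt))
        docked |= nxt
        frontier = nxt
    return waves
```

Under this model, a 2x2 square grown from one corner assembles in two waves, while a 1x4 line needs three serial steps; exploiting that kind of gap is what a step-minimizing sequencer does.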

BibTeX

@article{Litman-2021-126872,
author = {Yehonathan Litman and Neeraj Gandhi and Linh Thi Xuan Phan and David Saldaña},
title = {Vision-Based Self-Assembly for Modular Multirotor Structures},
journal = {IEEE Robotics and Automation Letters},
year = {2021},
month = {April},
volume = {6},
number = {2},
pages = {2202--2208},
}