FusionVLAD: A Multi-View Deep Fusion Networks for Viewpoint-Free 3D Place Recognition

Peng Yin, Lingyun Xu, Ji Zhang, and Howie Choset
Journal Article, IEEE Robotics and Automation Letters, Vol. 6, No. 2, pp. 2304–2310, April 2021

Abstract

Real-time 3D place recognition is a crucial technology for recovering from localization failures in applications such as autonomous driving, last-mile delivery, and service robotics. However, it is challenging for 3D place retrieval methods to be simultaneously accurate, efficient, and robust to viewpoint differences. In this letter, we propose FusionVLAD, a fusion-based network that encodes a multi-view representation of sparse 3D point clouds into viewpoint-free global descriptors. The system consists of two parallel branches: a spherical-view branch for orientation-invariant feature extraction, and a top-down-view branch for translation-insensitive feature extraction. Furthermore, we design a parallel fusion module to enhance the region-wise feature connections between the two branches. Experiments on two public datasets and two generated datasets show that our method outperforms the state of the art in both place recognition accuracy and inference time. Moreover, FusionVLAD requires only limited computational resources, which makes it well suited to long-term place recognition on low-cost robots.
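
The sketch below illustrates the two-branch idea described in the abstract: each view projection is encoded by its own CNN, the feature maps are fused region-wise, and a NetVLAD-style pooling produces a single global descriptor. This is a minimal PyTorch sketch, not the paper's implementation; all layer sizes, the projection resolutions, the 1x1-conv fusion, and the module names (BranchEncoder, FusionEncoder) are illustrative assumptions.

# Minimal sketch of a two-branch fusion encoder in PyTorch.
# Layer sizes, resolutions, and the fusion scheme are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchEncoder(nn.Module):
    """Small CNN mapping a 2D projection of a point cloud to a feature map."""
    def __init__(self, in_ch=1, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class NetVLAD(nn.Module):
    """Standard NetVLAD pooling: soft-assign local features to K clusters."""
    def __init__(self, feat_dim=64, num_clusters=16):
        super().__init__()
        self.assign = nn.Conv2d(feat_dim, num_clusters, 1)
        self.centers = nn.Parameter(torch.randn(num_clusters, feat_dim))

    def forward(self, x):                                   # x: (B, C, H, W)
        B, C, H, W = x.shape
        soft = F.softmax(self.assign(x), dim=1)             # (B, K, H, W)
        x = x.view(B, C, -1)                                # (B, C, HW)
        soft = soft.view(B, -1, H * W)                      # (B, K, HW)
        # Accumulate assignment-weighted residuals to each cluster center.
        vlad = soft @ x.transpose(1, 2)                     # (B, K, C)
        vlad = vlad - soft.sum(-1, keepdim=True) * self.centers
        vlad = F.normalize(vlad, dim=2).flatten(1)          # intra-normalize
        return F.normalize(vlad, dim=1)                     # (B, K*C) descriptor

class FusionEncoder(nn.Module):
    """Parallel spherical-view and top-down-view branches, fused region-wise."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.sph_branch = BranchEncoder(feat_dim=feat_dim)  # orientation-invariant cues
        self.top_branch = BranchEncoder(feat_dim=feat_dim)  # translation-insensitive cues
        self.fuse = nn.Conv2d(2 * feat_dim, feat_dim, 1)    # region-wise 1x1 fusion
        self.vlad = NetVLAD(feat_dim=feat_dim)

    def forward(self, sph_img, top_img):
        f_sph = self.sph_branch(sph_img)
        f_top = self.top_branch(top_img)
        # Assumes both projections are rendered at the same resolution.
        fused = F.relu(self.fuse(torch.cat([f_sph, f_top], dim=1)))
        return self.vlad(fused)

if __name__ == "__main__":
    model = FusionEncoder()
    sph = torch.randn(2, 1, 64, 64)  # spherical-view range image (hypothetical size)
    top = torch.randn(2, 1, 64, 64)  # top-down occupancy image (hypothetical size)
    print(model(sph, top).shape)     # torch.Size([2, 1024])

Under these assumptions, matching a query descriptor against a database reduces to nearest-neighbor search over the L2-normalized (B, K*C) vectors, which is what makes the global-descriptor formulation efficient at retrieval time.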

BibTeX

@article{Yin-2021-126950,
author = {Peng Yin and Lingyun Xu and Ji Zhang and Howie Choset},
title = {FusionVLAD: A Multi-View Deep Fusion Networks for Viewpoint-Free 3D Place Recognition},
journal = {IEEE Robotics and Automation Letters},
year = {2021},
month = {April},
volume = {6},
number = {2},
pages = {2304--2310},
}