Fast Sequence-matching Enhanced Viewpoint-invariant 3D Place Recognition
Abstract
Recognizing the same place despite large viewpoint differences is a fundamental capability of humans and animals; endowing robots with a comparably robust place recognition ability, however, remains an unsolved problem, because extracting local descriptors that stay invariant across viewpoints of the same place is difficult. This paper seeks to provide robots with a human-like place recognition ability through a new 3D feature learning method. We propose a novel lightweight 3D place recognition method with fast sequence matching, capable of recognizing places from a previous trajectory regardless of viewpoint and transient observation differences. Specifically, we extract a viewpoint-invariant place feature from 2D spherical projections by leveraging the orientation-equivariance property of spherical harmonics. To improve sequence-matching efficiency, we design a coarse-to-fine fast sequence-matching mechanism that balances matching efficiency and accuracy. Despite its apparent simplicity, our approach outperforms related state-of-the-art methods: on both public and self-gathered datasets with orientation/translation differences or noisy observations, it achieves above 95% average recall for the best match while requiring only 18% of the inference time of PointNet-based place recognition methods.
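The viewpoint invariance the abstract refers to rests on a classical property of spherical harmonics: rotating a function on the sphere mixes its harmonic coefficients only within each degree, so the per-degree energy (the "power spectrum") is unchanged. The following minimal sketch, which is illustrative and not the authors' implementation, demonstrates this for a rotation about the z-axis (realized as an azimuth shift); the band limit `L` and the test signal are arbitrary choices for the demo.

```python
# Illustrative sketch: the per-degree power spectrum of spherical-harmonic
# coefficients is invariant under rotation. Demonstrated here for a
# z-axis rotation, applied as a shift of the azimuth angle.
import numpy as np

# Compatibility shim: newer SciPy replaces sph_harm with sph_harm_y.
try:
    from scipy.special import sph_harm_y
    def Ylm(m, l, az, pl):  # sph_harm_y takes (degree, order, polar, azimuth)
        return sph_harm_y(l, m, pl, az)
except ImportError:
    from scipy.special import sph_harm
    def Ylm(m, l, az, pl):  # sph_harm takes (order, degree, azimuth, polar)
        return sph_harm(m, l, az, pl)

L = 4                                    # band limit of the test signal
n_phi = 2 * L + 2                        # azimuth samples (> 2L: exact quadrature)
nodes, gl_w = np.polynomial.legendre.leggauss(L + 1)
polar = np.arccos(nodes)                 # Gauss-Legendre nodes in cos(polar angle)
azim = 2.0 * np.pi * np.arange(n_phi) / n_phi
AZ, PL = np.meshgrid(azim, polar)        # grids of shape (L+1, n_phi)

# Random band-limited signal: f = sum_{l,m} c_{lm} Y_l^m
rng = np.random.default_rng(0)
true_c = {(l, m): rng.normal() + 1j * rng.normal()
          for l in range(L + 1) for m in range(-l, l + 1)}

def evaluate(az, pl):
    """Evaluate the band-limited test signal on an (azimuth, polar) grid."""
    f = np.zeros(az.shape, dtype=complex)
    for (l, m), c in true_c.items():
        f += c * Ylm(m, l, az, pl)
    return f

def sh_power(f_vals):
    """Per-degree energy sum_m |c_{lm}|^2, via exact quadrature on the grid."""
    p = np.zeros(L + 1)
    for l in range(L + 1):
        for m in range(-l, l + 1):
            Y = Ylm(m, l, AZ, PL)
            c = np.sum(f_vals * np.conj(Y) * gl_w[:, None]) * (2 * np.pi / n_phi)
            p[l] += abs(c) ** 2
    return p

p_orig = sh_power(evaluate(AZ, PL))
p_rot = sh_power(evaluate(AZ - 0.7, PL))  # signal rotated by 0.7 rad about z
assert np.allclose(p_orig, p_rot)         # power spectrum is unchanged
```

Because each coefficient only acquires a phase under this rotation, the per-degree energies match to machine precision; a descriptor built from them can therefore be compared across viewpoints without aligning the scans first.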
BibTeX
@article{Yin-2021-127900,
  author   = {Peng Yin and Fuying Wang and Anton Egorov and Jiafan Hou and Zhenzhong Jia and Jianda Han},
  title    = {Fast Sequence-matching Enhanced Viewpoint-invariant 3D Place Recognition},
  journal  = {IEEE Transactions on Industrial Electronics},
  year     = {2021},
  month    = {February},
  keywords = {LiDAR SLAM, 3D Place Recognition, Viewpoint Invariant},
}