AirDet: Few-Shot Detection without Fine-tuning for Autonomous Exploration
Abstract
Few-shot object detection has attracted increasing attention and progressed rapidly in recent years. However, existing methods require an exhaustive offline fine-tuning stage, which is time-consuming and significantly hinders their use in online applications such as autonomous exploration by low-power robots. We find that their major limitation is that the small but valuable amount of information in the few support images is not fully exploited. To solve this problem, we propose a new architecture, AirDet, and surprisingly find that, by learning class-agnostic relations with the support images in all modules, i.e., a cross-scale object proposal network, a shots aggregation module, and a localization network, AirDet without fine-tuning achieves results comparable to or even better than many fine-tuned methods, with up to 30-40% improvement. We also present solid results from onboard tests on real-world exploration data from the DARPA Subterranean Challenge, which strongly validate the feasibility of AirDet in robotics. To the best of our knowledge, AirDet is the first feasible few-shot detection method for autonomous exploration by low-power robots. The code and pre-trained models are released at https://github.com/Jaraxxus-Me/AirDet.
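For intuition, the sketch below shows one way a class-agnostic relation between support and query features could be computed in PyTorch: the support shots are pooled into a class prototype, which is fused with the query feature map via simple element-wise relation cues. The module name, channel width, and fusion scheme here are illustrative assumptions, not the released AirDet implementation; see the repository above for the actual modules.

```python
import torch
import torch.nn as nn


class ClassAgnosticRelation(nn.Module):
    """Toy sketch: relate query features to pooled support features.

    Hypothetical layer names, dimensions, and fusion scheme; for
    illustration only, not AirDet's actual architecture.
    """

    def __init__(self, channels: int = 256):
        super().__init__()
        # 1x1 conv fuses [query, query*proto, query-proto] into one map.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, query: torch.Tensor, support: torch.Tensor) -> torch.Tensor:
        # query:   (B, C, H, W) feature map of the query image
        # support: (N, C, h, w) feature maps of N support shots
        # Average the shots, then spatially pool to a prototype (1, C, 1, 1).
        proto = support.mean(dim=0, keepdim=True).mean(dim=(2, 3), keepdim=True)
        proto = proto.expand_as(query)  # broadcast over spatial positions
        # Concatenate simple relation cues; downstream heads (proposal,
        # shots aggregation, localization) could consume this fused map.
        rel = torch.cat([query, query * proto, query - proto], dim=1)
        return self.fuse(rel)


if __name__ == "__main__":
    m = ClassAgnosticRelation(channels=256)
    q = torch.randn(1, 256, 32, 32)   # query backbone features
    s = torch.randn(3, 256, 16, 16)   # 3-shot support features
    print(m(q, s).shape)              # torch.Size([1, 256, 32, 32])
```

Because the relation is computed against a pooled prototype rather than class-specific weights, a module like this needs no per-class fine-tuning at deployment time, which is the property the abstract emphasizes.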
BibTeX
@conference{Li-2022-133196,
  author    = {Bowen Li and Chen Wang and Pranay Reddy and Seungchan Kim and Sebastian Scherer},
  title     = {AirDet: Few-Shot Detection without Fine-tuning for Autonomous Exploration},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  month     = {October},
  publisher = {Springer},
  keywords  = {Few-shot object detection, Online, Robot exploration},
}