Unsupervised Online Learning for Robotic Interestingness with Visual Memory
Abstract
Autonomous robots frequently need to detect “interesting” scenes to decide on further exploration, or to decide which data to share for cooperation. These scenarios often require fast deployment with little or no training data. Prior work considers “interestingness” based on data from the same distribution. Instead, we propose a method that automatically adapts online to the environment to report interesting scenes quickly. To address this problem, we develop a novel translation-invariant visual memory and design a three-stage architecture for long-term, short-term, and online learning, which enables the system to learn human-like experience, environmental knowledge, and online adaptation, respectively. With this system, we achieve an average of 20% higher accuracy than state-of-the-art unsupervised methods in a subterranean tunnel environment. We also show performance comparable to supervised methods in robot exploration scenarios, demonstrating the efficacy of our approach. We expect the presented method to play an important role in robotic interestingness recognition for exploration tasks.
BibTeX
@article{Wang-2021-130093,
author = {Chen Wang and Yuheng Qiu and Wenshan Wang and Yafei Hu and Seungchan Kim and Sebastian Scherer},
title = {Unsupervised Online Learning for Robotic Interestingness with Visual Memory},
journal = {IEEE Transactions on Robotics},
year = {2021},
month = {November},
keywords = {Unsupervised Learning, Online Learning, Visual Memory, Robotic Interestingness},
}