Reducing adaptation latency for multi-concept visual perception in outdoor environments
Abstract
Multi-concept visual classification is emerging as a common environment perception technique, with applications in autonomous mobile robot navigation. Supervised visual classifiers are typically trained on large sets of images, hand-annotated by humans who outline region boundaries and then assign labels. This annotation is time consuming, and unfortunately, a change in environment requires new or additional labeling to adapt visual perception. The time it takes for a human to label new data is what we call adaptation latency. High adaptation latency is not simply undesirable but may be infeasible for scenarios with limited labeling time and resources. In this paper, we introduce a labeling framework to the environment perception domain that significantly reduces adaptation latency using unsupervised learning, in exchange for a small amount of label noise. Using two real-world datasets, we demonstrate the speed of our labeling framework and its ability to collect environment labels that train high-performing multi-concept classifiers. Finally, we demonstrate the relevance of this label collection process for visual perception as it applies to navigation in outdoor environments.
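To make the cluster-then-label idea concrete, here is a minimal sketch of how unsupervised learning can cut annotation time: images are grouped by feature similarity, and a human assigns one concept label per cluster instead of per image, accepting some label noise from impure clusters. This is an illustration only, not the authors' exact framework; the function names `cluster_then_label` and `ask_human`, the use of k-means, and the choice of five exemplars per cluster are all assumptions for this sketch.

```python
# Minimal sketch of cluster-then-label annotation (assumed setup, not the
# paper's actual pipeline). Each image is represented by a feature vector;
# a human labels each cluster once, so annotation cost scales with the
# number of clusters rather than the number of images.
import numpy as np
from sklearn.cluster import KMeans

def cluster_then_label(features, n_clusters, ask_human):
    """features: (n_images, d) array of image descriptors.
    ask_human: hypothetical callback that views a few exemplar images
    and returns a single concept label for the whole cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    labels = np.empty(len(features), dtype=object)
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # Show the human the images closest to the cluster center.
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        exemplars = members[np.argsort(dists)[:5]]
        concept = ask_human(exemplars)   # one human query per cluster
        labels[members] = concept        # propagated label; may be noisy
    return labels
```

Under this scheme, the human answers `n_clusters` queries rather than labeling every image, which is the source of the latency reduction; the label noise traded for that speed comes from clusters that mix more than one concept.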
BibTeX
@conference{Wigness-2016-101672,
author = {Maggie Wigness and John G. Rogers and Luis Ernesto Navarro-Serment and Arne Suppe and Bruce A. Draper},
title = {Reducing adaptation latency for multi-concept visual perception in outdoor environments},
booktitle = {Proceedings of (IROS) IEEE/RSJ International Conference on Intelligent Robots and Systems},
year = {2016},
month = {October},
pages = {2784--2791},
keywords = {unsupervised learning, image classification, mobile robots, path planning, robot vision},
}