Robot Navigation from Human Demonstration: Learning Control Behaviors
Abstract
When working alongside human collaborators in dynamic environments such as disaster recovery, an unmanned ground vehicle (UGV) may require fast field adaptation to perform its duties or learn novel tasks. In disaster recovery situations, personnel and equipment are constrained, so training must be accomplished with minimal human supervision. In this paper, we introduce a novel framework that combines learned visual perception with inverse optimal control trained from a small number of human supervisory examples. This approach is used to learn to mimic navigation behavior and is demonstrated through extensive evaluation in a real-world environment. Finally, we demonstrate the ability to learn an additional behavior from minimal human demonstration in the field.
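The core idea, learning a terrain traversal cost function from a demonstrated path via inverse optimal control and then planning with that learned cost, can be illustrated with a minimal sketch. The toy grid, the three terrain classes, the subgradient update, and the Dijkstra planner below are illustrative assumptions chosen for exposition; they are not the learned visual perception features or the IOC formulation used in the paper.

import heapq
import numpy as np

def dijkstra(cost, start, goal):
    # Minimum-cost 4-connected path through a per-cell cost grid.
    H, W = cost.shape
    dist = np.full((H, W), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

def path_features(path, features):
    # Sum the per-cell terrain feature vectors along a path.
    return sum(features[r, c] for r, c in path)

# Toy 5x5 semantic map: each cell is one of (0) road, (1) grass, (2) mud.
H, W, K = 5, 5, 3
labels = np.zeros((H, W), dtype=int)   # mostly road
labels[:, 2] = 2                       # a vertical strip of mud
labels[0, 2] = 1                       # one grass cell breaking the strip
features = np.eye(K)[labels]           # (H, W, K) one-hot terrain features

# Human demonstration: detour up through the grass cell instead of
# driving straight through the mud.
demo = [(2, 0), (1, 0), (0, 0), (0, 1), (0, 2),
        (0, 3), (0, 4), (1, 4), (2, 4)]
start, goal = demo[0], demo[-1]

# Inverse optimal control via a simple subgradient update: adjust the
# per-class cost weights so the demonstrated path becomes at least as
# cheap as the planner's current best path.
w = np.ones(K)
for _ in range(50):
    cost = np.maximum(features @ w, 1e-3)   # keep cell costs positive
    planned = dijkstra(cost, start, goal)
    grad = path_features(demo, features) - path_features(planned, features)
    w = np.maximum(w - 0.1 * grad, 1e-3)

# After learning, the planner should reproduce the demonstrated behavior.
cost = np.maximum(features @ w, 1e-3)
learned_path = dijkstra(cost, start, goal)
print("demonstration crosses mud:", any(labels[r, c] == 2 for r, c in demo))
print("learned planner crosses mud:", any(labels[r, c] == 2 for r, c in learned_path))

After a few updates the learned weights penalize the mud cells enough that the planner takes the same detour as the demonstration, which is the behavior-mimicking effect the abstract describes, here reduced to a toy setting.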
BibTeX
@conference{Wigness-2018-122416,
  author    = {Maggie Wigness and John G. Rogers and Luis E. Navarro-Serment},
  title     = {Robot Navigation from Human Demonstration: Learning Control Behaviors},
  booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
  year      = {2018},
  month     = {May},
  pages     = {1150--1157},
}