Learning robot motion control from demonstration and human advice
Abstract
As robots become more commonplace within society, the need for tools that enable non-robotics-experts to develop control algorithms, or policies, will increase. Learning from Demonstration (LfD) offers one promising approach, in which the robot learns a policy from teacher task executions. In this work we present an algorithm that incorporates human teacher feedback to enable policy improvement from learner experience within an LfD framework. We present two implementations of this algorithm, which differ in the type of feedback the teacher provides. In the first implementation, called Binary Critiquing (BC), the teacher provides a binary indication that highlights poorly performing portions of the learner execution. In the second implementation, called Advice-Operator Policy Improvement (A-OPI), the teacher provides a correction on poorly performing portions of the learner execution. Most notably, these corrections are continuous-valued and appropriate for low-level motion control action spaces. The algorithms are applied to two validation domains, one simulated and one on a Segway RMP platform. For both, policy performance is found to improve with teacher feedback. Specifically, with BC, learner execution success and efficiency come to exceed teacher performance. With A-OPI, task success and accuracy are shown to be similar or superior to those of the typical LfD approach of correcting behavior through additional teacher demonstrations.
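As a rough illustration of the two feedback modes described in the abstract, the Python sketch below shows how a binary critique might down-weight the data underlying a flagged portion of a learner execution, and how an advice operator might apply a continuous-valued correction to the actions executed over that portion, producing new data for the demonstration set. The function names, weighting scheme, and specific operator are illustrative assumptions, not the authors' implementation.

import numpy as np

def apply_binary_critique(weights, flagged, penalty=0.5):
    # BC-style feedback (assumed weighting scheme): down-weight the data
    # points underlying the poorly performing portion of an execution.
    weights = weights.copy()
    weights[flagged] *= penalty
    return weights

def apply_advice_operator(actions, segment, operator):
    # A-OPI-style feedback: apply a continuous-valued correction to the
    # actions executed over the flagged segment; the corrected data can
    # then be added to the demonstration set and the policy rederived.
    corrected = actions.copy()
    corrected[segment] = operator(corrected[segment])
    return corrected

if __name__ == "__main__":
    # Toy low-level motion control actions: [translational, rotational] speeds.
    actions = np.array([[0.5, 0.0], [0.5, 0.1], [0.5, 0.3], [0.5, 0.3]])
    segment = slice(2, 4)                            # teacher flags the last two steps
    halve_turn = lambda a: a * np.array([1.0, 0.5])  # hypothetical advice: "turn less sharply"
    print(apply_advice_operator(actions, segment, halve_turn))
    print(apply_binary_critique(np.ones(len(actions)), segment))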
BibTeX
@conference{Argall-2009-17076,
  author    = {Brenna Argall and Brett Browning and Manuela Veloso},
  title     = {Learning robot motion control from demonstration and human advice},
  booktitle = {Proceedings of AAAI '09 Spring Symposium on Agents that Learn from Human Teachers},
  year      = {2009},
  month     = {March},
  pages     = {8--15},
  keywords  = {learning robot motion control, learning from demonstration, teacher feedback},
}