Learning Primitive Skills for Mobile Robots
Abstract
Achieving effective task performance on real mobile robots is a significant challenge for hand-coded algorithms, both because of the engineering effort involved and the manually tuned parameters each skill requires. Learning algorithms have the potential to ease this burden by using a single set of training parameters to learn different skills, but whether such learning is feasible on real robots remains an open research question. We focus on one kind of mobile robot system, the robot soccer "small-size" domain, in which tactical and high-level team strategies build upon individual robot ball-based skills. In this paper, we present our work using a Deep Reinforcement Learning algorithm to learn three real-robot primitive skills with continuous action spaces: go-to-ball, turn-and-shoot, and shoot-goalie, each with a clear success metric of reaching a destination or scoring a goal. We introduce the state and action representations, as well as the reward function and network architecture. We describe training and testing in a simulator with high physical and hardware fidelity, and then evaluate the trained policies on real robots. Our results show that the learned skills achieve a higher overall success rate, at the cost of taking 0.29 seconds longer on average across the three skills. Finally, we show that policies trained in simulation perform well on real robots when transferred directly.
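To make the skill setup concrete, the following is a minimal, hypothetical sketch (not the paper's code) of the go-to-ball skill as a reinforcement-learning task: the state is the ball's offset in the robot's frame, the continuous action is a 2-D velocity command, and the reward combines negative distance with a success bonus when the robot reaches the ball. The environment dynamics, reward shape, tolerance, and the simple proportional-control baseline policy are all illustrative assumptions.

```python
import math

class GoToBallEnv:
    """Toy 2-D go-to-ball task; all dynamics and constants are assumed."""

    def __init__(self, ball=(2.0, 1.0), dt=0.05, tol=0.05):
        self.ball = ball
        self.dt = dt    # control period in seconds (assumed value)
        self.tol = tol  # success radius in meters (assumed value)
        self.reset()

    def reset(self):
        self.pos = [0.0, 0.0]
        return self._state()

    def _state(self):
        # State: ball offset relative to the robot (heading ignored here).
        return (self.ball[0] - self.pos[0], self.ball[1] - self.pos[1])

    def step(self, action):
        # Continuous action: (v_x, v_y) velocity command, integrated over dt.
        vx, vy = action
        self.pos[0] += vx * self.dt
        self.pos[1] += vy * self.dt
        dx, dy = self._state()
        dist = math.hypot(dx, dy)
        done = dist < self.tol  # clear success metric: reached the ball
        reward = -dist + (10.0 if done else 0.0)
        return self._state(), reward, done

def proportional_policy(state, gain=1.5, v_max=1.0):
    # Hand-coded stand-in for a learned policy: drive toward the ball,
    # with the commanded speed clipped to v_max.
    dx, dy = state
    dist = math.hypot(dx, dy)
    scale = min(v_max, gain * dist) / dist if dist > 1e-9 else 0.0
    return (dx * scale, dy * scale)

env = GoToBallEnv()
state, done, steps = env.reset(), False, 0
while not done and steps < 2000:
    state, reward, done = env.step(proportional_policy(state))
    steps += 1
```

A learned continuous-action policy would replace `proportional_policy`, trained against exactly this kind of state, action, and reward interface; the success metric (reaching the ball within tolerance) is what the paper's skills are evaluated on.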
BibTeX
@conference{Zhu-2019-122705,
  author    = {Yifeng Zhu and Devin Schwab and Manuela Veloso},
  title     = {Learning Primitive Skills for Mobile Robots},
  booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
  year      = {2019},
  month     = {May},
  pages     = {7597--7603},
}