Zero shot transfer learning for robot soccer
Abstract
We present a method for zero-shot transfer of multi-agent policies as the number of teammates, the number of opponents, and the environment size vary. We apply our approach to RoboCup-inspired test domains, where policies must adapt to changing numbers of robots due to in-game breakages. We introduce the concept of encoding not only the state as an image, but also the action space as a multi-channel image, which keeps the state and action representations a fixed size as the team size changes. We also introduce Fully Convolutional Q-Networks (FCQNs), which represent Q-functions over this space using Fully Convolutional Networks. We present results for zero-shot transfer of these policies across team sizes and field sizes, showing that performance remains consistent as both change.
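The key property behind the transfer described above is that a fully convolutional network has no fully connected head, so the same learned weights produce a per-cell Q-map whose spatial size simply tracks the input field size. A minimal NumPy sketch of this shape-preservation property follows; the channel layout, kernel sizes, and the name `fcqn` are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np


def conv2d_same(x, kernels):
    """Naive 2D convolution with 'same' zero padding.

    x: (C_in, H, W) input; kernels: (C_out, C_in, k, k) with odd k.
    Returns (C_out, H, W), preserving the spatial dimensions.
    """
    c_out, _, k, _ = kernels.shape
    pad = k // 2
    _, h, w = x.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((c_out, h, w))
    for o in range(c_out):
        for i in range(h):
            for j in range(w):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * kernels[o])
    return out


def fcqn(state, params):
    """Toy fully convolutional Q-network: image state in, Q-map out.

    state: (C_in, H, W) multi-channel image (e.g. teammates, opponents,
    ball occupancy channels -- an assumed encoding for illustration).
    params: [hidden_kernels, output_kernels]. Because every layer is a
    convolution, the Q-map's spatial size matches the field size.
    """
    hidden = np.maximum(conv2d_same(state, params[0]), 0.0)  # ReLU
    return conv2d_same(hidden, params[1])  # per-cell action values


# Same weights apply unchanged to two different field sizes:
rng = np.random.default_rng(0)
params = [rng.standard_normal((4, 3, 3, 3)) * 0.1,
          rng.standard_normal((1, 4, 3, 3)) * 0.1]
q_small = fcqn(rng.standard_normal((3, 6, 9)), params)   # 6x9 field
q_large = fcqn(rng.standard_normal((3, 9, 13)), params)  # 9x13 field
```

A greedy policy in this toy setup would pick the arg-max cell of the Q-map, so both the observation and the action space scale with the field without retraining.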
BibTeX
@conference{Schwab-2018-122717,
  author = {Devin Schwab and Yifeng Zhu and Manuela Veloso},
  title = {Zero shot transfer learning for robot soccer},
  booktitle = {Proceedings of the International Conference on Autonomous Agents and MultiAgent Systems (AAMAS '18)},
  year = {2018},
  month = {July},
  pages = {2070--2072},
}