Learning strategies in table tennis using inverse reinforcement learning

Katharina Mülling, Abdeslam Boularias, Betty Mohler, Bernhard Schölkopf, and Jan Peters
Journal Article, Biological Cybernetics, Vol. 108, No. 5, pp. 603–619, October 2014

Abstract

Learning a complex task such as table tennis is a challenging problem for both robots and humans. Even after acquiring the necessary motor skills, a strategy is needed to choose where and how to return the ball to the opponent's court in order to win the game. The data-driven identification of basic strategies in interactive tasks, such as table tennis, is a largely unexplored problem. In this paper, we suggest a computational model for representing and inferring strategies, based on a Markov decision problem, where the reward function models the goal of the task as well as the strategic information. We show how this reward function can be discovered from demonstrations of table tennis matches using model-free inverse reinforcement learning. The resulting framework allows us to identify the basic elements on which the selection of striking movements is based. We tested our approach on data collected from players with different playing styles and under different playing conditions. The estimated reward function was able to capture expert-specific strategic information that sufficed to distinguish the expert among players with different skill levels as well as different playing styles.
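For illustration only, the following is a minimal sketch of how a reward of the kind described above might be recovered with a model-free, sample-based inverse reinforcement learning update (in the spirit of relative entropy IRL). It assumes the reward is a linear combination of trajectory features, r(s, a) = theta . phi(s, a); the feature dimensions, the synthetic data, and the function name relative_entropy_irl are illustrative assumptions, not taken from the paper.

import numpy as np

# Illustrative setup: rows of mu_samples are feature expectations of sampled
# trajectories; mu_expert holds the expert's demonstrated feature expectations
# (e.g., bounce location on the table, ball speed, opponent position).
rng = np.random.default_rng(0)
n_features = 5
mu_expert = rng.normal(loc=0.5, scale=0.1, size=n_features)
mu_samples = rng.normal(loc=0.0, scale=0.3, size=(200, n_features))

def relative_entropy_irl(mu_expert, mu_samples, lr=0.1, n_iters=500, l2=1e-3):
    """Gradient ascent on a relative-entropy-style IRL objective:
    maximize theta . mu_expert - log sum_j exp(theta . mu_j) - l2 * ||theta||^2.
    Returns reward weights theta for a linear reward r(s, a) = theta . phi(s, a)."""
    theta = np.zeros(mu_expert.shape[0])
    for _ in range(n_iters):
        scores = mu_samples @ theta
        scores -= scores.max()            # numerical stability for the softmax
        weights = np.exp(scores)
        weights /= weights.sum()          # importance weights over sampled trajectories
        grad = mu_expert - weights @ mu_samples - 2 * l2 * theta
        theta += lr * grad
    return theta

theta = relative_entropy_irl(mu_expert, mu_samples)
print("Recovered reward weights:", np.round(theta, 3))

In the paper, the learned reward is used, among other things, to score strategies and to distinguish the expert from players of different skill levels and playing styles; the sketch above stops at recovering the weights.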

BibTeX

@article{Mulling-2014-7857,
author = {Katharina M{\"u}lling and Abdeslam Boularias and Betty Mohler and Bernhard Sch{\"o}lkopf and Jan Peters},
title = {Learning strategies in table tennis using inverse reinforcement learning},
journal = {Biological Cybernetics},
year = {2014},
month = {October},
volume = {108},
number = {5},
pages = {603--619},
keywords = {Computational models of decision processes, Table tennis, Inverse reinforcement learning},
}