Game-Theoretic Modeling of Human Adaptation in Human-Robot Collaboration
Abstract
In human-robot teams, humans often start with an inaccurate model of the robot's capabilities. As they interact with the robot, they infer its capabilities and partially adapt to it, i.e., they may change their actions based on the observed outcomes and the robot's actions, without replicating the robot's policy. We present a game-theoretic model of human partial adaptation to the robot, in which the human responds to the robot's actions by maximizing a reward function that changes stochastically over time, capturing the evolution of their expectations of the robot's capabilities. The robot can then use this model to decide optimally between taking actions that reveal its capabilities to the human and taking the best action given the information that the human currently has. We prove that under certain observability assumptions, the optimal policy can be computed efficiently. We demonstrate through a human subject experiment that the proposed model significantly improves human-robot team performance, compared with policies that assume complete adaptation of the human to the robot.
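The trade-off described in the abstract, between actions that reveal the robot's capabilities and actions that are best given the human's current expectations, can be illustrated with a toy rollout. This is a hedged sketch, not the paper's algorithm: the reward function, the belief update, and all numeric values here are hypothetical.

```python
# Toy illustration of the reveal/exploit trade-off (hypothetical model,
# not the paper's algorithm). The human holds a scalar "belief" about the
# robot's capability; team reward is higher when belief matches reality.

def team_reward(belief, capability):
    # Hypothetical reward: the closer the human's belief is to the true
    # capability, the better the human's complementary action, so the
    # higher the team reward.
    return 1.0 - abs(belief - capability)

def rollout(policy, horizon=5, capability=0.9, belief=0.2, reveal_cost=0.3):
    """Run the policy for `horizon` steps and accumulate team reward."""
    total = 0.0
    for t in range(horizon):
        if policy(t, belief) == "reveal":
            # Demonstrating costs reward now but makes the capability
            # fully observable, so the belief jumps to the truth.
            total += team_reward(belief, capability) - reveal_cost
            belief = capability
        else:  # "exploit": best action under the human's current belief
            total += team_reward(belief, capability)
    return total

def greedy(t, belief):
    return "exploit"  # never pays the reveal cost

def reveal_first(t, belief):
    return "reveal" if t == 0 else "exploit"

# Revealing early sacrifices reward at t=0 but raises every later step,
# so over a long enough horizon it dominates the greedy policy.
print(rollout(greedy), rollout(reveal_first))
```

The same structure underlies the paper's claim: a policy that models the human's evolving expectations can justify short-term losses (demonstrations) by the long-term gain in coordination.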
BibTeX
@conference{Nikolaidis-2017-5456,
author = {Stefanos Nikolaidis and Swaprava Nath and Ariel Procaccia and Siddhartha Srinivasa},
title = {Game-Theoretic Modeling of Human Adaptation in Human-Robot Collaboration},
booktitle = {Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI '17)},
year = {2017},
month = {March},
pages = {323--331},
}