Shared Autonomy via Hindsight Optimization
Abstract
In shared autonomy, user input and robot autonomy are combined to control a robot to achieve a goal. Often, the robot does not know a priori which goal the user wants to achieve, and must both predict the user's intended goal, and assist in achieving that goal. We formulate the problem of shared autonomy as a Partially Observable Markov Decision Process with uncertainty over the user's goal. We utilize maximum entropy inverse optimal control to estimate a distribution over the user's goal based on the history of inputs. Ideally, the robot assists the user by solving for an action which minimizes the expected cost-to-go for the (unknown) goal. As solving the POMDP to select the optimal action is intractable, we use hindsight optimization to approximate the solution. In a user study, we compare our method to a standard predict-then-blend approach. We find that our method enables users to accomplish tasks more quickly while utilizing less input. However, when asked to rate each system, users were mixed in their assessment, citing a tradeoff between maintaining control authority and accomplishing tasks quickly.
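As a rough illustration of the approach summarized above, the Python sketch below combines the two pieces the abstract describes: a MaxEnt-IOC-style posterior over goals (likelihood of the observed inputs taken proportional to the exponentiated negative cost of those inputs toward each goal) and hindsight-optimization (QMDP-style) action selection that averages a cost-to-go over that posterior. All names, including the cost_to_go oracle and the simplified likelihood, are hypothetical placeholders under stated assumptions, not the authors' implementation.

import math

def goal_posterior(input_costs, prior):
    """Simplified MaxEnt-IOC-style goal inference.

    input_costs[i]: cost of the observed user inputs toward goal i.
    prior[i]: prior probability of goal i.
    Returns a normalized posterior over goals.
    """
    unnorm = [p * math.exp(-c) for c, p in zip(input_costs, prior)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def select_assistance_action(actions, goals, goal_probs, cost_to_go):
    """Hindsight-optimization (QMDP-style) action selection.

    For each candidate action, average the cost-to-go over the goal
    distribution, as if the true goal were revealed after this step,
    and pick the action with minimum expected cost.
    """
    best_action, best_cost = None, float("inf")
    for a in actions:
        expected = sum(p * cost_to_go(a, g) for g, p in zip(goals, goal_probs))
        if expected < best_cost:
            best_action, best_cost = a, expected
    return best_action

# Toy usage with two goals and a hypothetical quadratic cost-to-go.
goals = [(1.0, 0.0), (0.0, 1.0)]
probs = goal_posterior(input_costs=[0.5, 2.0], prior=[0.5, 0.5])
actions = [(0.1, 0.0), (0.0, 0.1), (0.07, 0.03)]
cost = lambda a, g: sum((ai - gi) ** 2 for ai, gi in zip(a, g))
print(select_assistance_action(actions, goals, probs, cost))

This selects the action that makes progress weighted toward the more likely goal, rather than committing to a single predicted goal as a predict-then-blend scheme would.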
BibTeX
@conference{Javdani-2015-5991,
  author = {Shervin Javdani and Siddhartha Srinivasa and J. Andrew (Drew) Bagnell},
  title = {Shared Autonomy via Hindsight Optimization},
  booktitle = {Proceedings of Robotics: Science and Systems (RSS '15)},
  year = {2015},
  month = {July},
}