Predicting user intent through eye gaze for shared autonomy

Henny Admoni and Siddhartha S. Srinivasa
Conference Paper, Proceedings of the AAAI '16 Fall Symposium on Shared Autonomy in Research and Practice, pp. 298-303, November 2016

Abstract

Shared autonomy combines user control of a robot with intelligent autonomous robot behavior to help people perform tasks more quickly and with less effort. Current shared autonomy frameworks primarily take direct user input, for example through a joystick, that explicitly commands the robot's actions. However, indirect input, such as eye gaze, can be a useful source of information for revealing user intentions and future actions. For example, when people perform manipulation tasks, their gaze centers on the objects of interest before the corresponding movements even begin. The implicit information contained in eye gaze can be used to improve the goal prediction of a shared autonomy system, improving its overall assistive capability. In this paper, we describe how eye gaze behavior can be incorporated into shared autonomy. Building on previous work that represents user goals as latent states in a POMDP, we describe how gaze behavior can be used as observations to update the POMDP's probability distributions over goal states, solving for the optimal action using hindsight optimization. We detail a pilot implementation that uses a head-mounted eye tracker to collect eye gaze data.
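To make the belief update concrete, the sketch below shows one way a gaze observation could shift a probability distribution over candidate goals via Bayes' rule. The Gaussian observation model, the `sigma` parameter, and the `update_goal_belief` function are illustrative assumptions, not the paper's exact formulation; the full system embeds such updates in a POMDP solved with hindsight optimization.

import numpy as np

def update_goal_belief(belief, gaze_point, goal_positions, sigma=0.1):
    # Assumed observation model: a gaze fixation is more likely the
    # closer it lands to a goal, modeled as a Gaussian around each goal.
    # p(gaze | goal_i) ~ exp(-||gaze - goal_i||^2 / (2 sigma^2))
    dists = np.linalg.norm(goal_positions - gaze_point, axis=1)
    likelihood = np.exp(-dists**2 / (2 * sigma**2))
    posterior = belief * likelihood      # Bayes rule (unnormalized)
    return posterior / posterior.sum()   # normalize to a distribution

# Example: three candidate goals on a table, uniform prior
goals = np.array([[0.2, 0.0], [0.5, 0.1], [0.8, -0.1]])
belief = np.ones(len(goals)) / len(goals)

# A fixation near the second object shifts probability mass toward it
belief = update_goal_belief(belief, np.array([0.48, 0.12]), goals)
print(belief)

Repeated fixations would sharpen the distribution further, which is why gaze arriving before the corresponding movement makes it valuable for early goal prediction.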

BibTeX

@conference{Admoni-2016-113253,
  author    = {Henny Admoni and Siddhartha S. Srinivasa},
  title     = {Predicting user intent through eye gaze for shared autonomy},
  booktitle = {Proceedings of the AAAI '16 Fall Symposium on Shared Autonomy in Research and Practice},
  year      = {2016},
  month     = {November},
  pages     = {298--303},
}