
PhD Thesis Defense

Reuben Aronson
Robotics Institute, Carnegie Mellon University
Thursday, June 9
12:00 pm to 2:00 pm
GHC 4405
Control Input and Natural Gaze for Goal Prediction in Shared Control

Abstract:
Teleoperated systems are widely used in deployed robots today, for tasks such as space exploration, disaster recovery, and assisted manipulation. However, teleoperated systems are difficult to control, especially when performing high-dimensional, contact-rich tasks like manipulation. One approach to easing teleoperated manipulation is shared control: this strategy combines the user’s direct control input with an autonomous plan to achieve the user’s goal, thereby speeding up tasks and reducing user effort. To do so, the system needs a prediction of the user’s goal.

One common approach derives this goal prediction from the user’s control input itself, since that input is already available to the system. Prior work using this prediction source in baseline tasks validates the usefulness of shared control. In this thesis, we show that the effectiveness of control input for goal prediction is a consequence of how noisily optimal users provide control input. When the user’s control input is restricted, however, the assistance may be suboptimal.

To improve on this performance, we turn to another source of goal information: natural gaze. People’s natural, unconstrained eye gaze behavior reveals information about their immediate goals and their future tasks. The accuracy and timing of these predictions differ from those provided by the control input pipeline, making natural gaze a promising additional source. To use natural gaze effectively for goal prediction and to combine it with control input, we analyze the behavior of each signal and evaluate them in the context of the full assistive system.

In this thesis, we show that control input and eye gaze complement each other for goal prediction during shared control. Control input gives local information about the user’s goal, making it particularly effective in simple tasks when people can act optimally but limiting its performance in more complex tasks. On the other hand, eye gaze provides global information about task intentions early, but it does not do so as reliably.

We first formalize evaluation criteria for goal prediction sources and examine how goal prediction using control input, the current state of the art, affects the assistance. We show that the autonomous system does not always need to know the user’s specific goal to make progress in the task. One key advantage of control input as a prediction source is that when the user’s control is noisily optimal, the parts of the task where the autonomous system requires goal information coincide with those where the user’s control input is likely to provide that information. However, when user input is restricted so that people cannot act optimally, the user’s control input is no longer as informative about the goal; this restriction occurs, for example, when using low-degree-of-freedom input devices. While the goal information from control input is still reliable when it is available, it may not come early enough in the task, so alternative goal prediction sources may help.
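
As a concrete illustration of this argument, the sketch below implements Bayesian goal inference from control input under a Boltzmann-rational ("noisily optimal") user model, the standard formulation in prior shared-autonomy work. Everything here is an assumed toy version for illustration, including the directional cost and the parameter values; it is not the thesis’s actual estimator.

```python
import numpy as np

def goal_posterior(x, u, goals, cost, beta=5.0, prior=None):
    # Bayesian goal inference from one control input, assuming a
    # Boltzmann-rational user: P(u | x, g) is proportional to
    # exp(-beta * cost(x, u, g)). Hypothetical sketch, not the
    # thesis's actual estimator.
    n = len(goals)
    prior = np.full(n, 1.0 / n) if prior is None else np.asarray(prior)
    log_post = np.log(prior) - beta * np.array([cost(x, u, g) for g in goals])
    log_post -= log_post.max()  # numerical stability before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

def directional_cost(x, u, g):
    # Toy cost: how far the input u deviates from pointing at goal g.
    to_goal = (g - x) / (np.linalg.norm(g - x) + 1e-9)
    u_dir = u / (np.linalg.norm(u) + 1e-9)
    return 1.0 - float(to_goal @ u_dir)  # 0 when u points straight at g

# Two candidate goals; the user pushes mostly toward the first.
x = np.array([0.0, 0.0])
goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
u = np.array([0.9, 0.1])
print(goal_posterior(x, u, goals, directional_cost))  # approx. [0.99, 0.01]
```

Note how this posterior stays near uniform whenever the candidate goals lie in roughly the same direction from the current state; as observed above, those are exactly the parts of the task where the assistance can make progress without knowing the specific goal.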

Next, we analyze natural eye gaze as a source of global information to supplement the goal prediction given by control input. We collect a data set of natural gaze during a teleoperated manipulation task and show that while people do look at their goals, they look at the robot end-effector more often, and they sometimes complete tasks without ever looking at the goals. From this analysis, we develop a contextual representation of gaze behavior and use it to predict the user’s goal; this signal can give predictions earlier than those available from control input, but the variability of people’s gaze behavior limits its reliability when used on its own.
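
One simple way to turn fixations into a goal prediction, sketched below under assumptions of our own (the thesis builds a richer contextual representation), is to accumulate recency-weighted dwell evidence near each candidate goal while discarding fixations that track the end-effector. All names, thresholds, and weights here are hypothetical.

```python
import numpy as np

def gaze_goal_scores(fixations, goals, ee_pos, radius=0.05, decay=0.9):
    # Recency-weighted dwell evidence per candidate goal. Fixations that
    # track the robot end-effector (the dominant pattern in the collected
    # data) are discarded rather than counted as goal evidence.
    scores = np.zeros(len(goals))
    for t, p in fixations:
        scores *= decay  # older fixations count less
        if np.linalg.norm(p - ee_pos(t)) < radius:
            continue  # user is monitoring the end-effector, not a goal
        for i, g in enumerate(goals):
            if np.linalg.norm(p - g) < radius:
                scores[i] += 1.0
    total = scores.sum()
    if total == 0.0:
        return np.full(len(goals), 1.0 / len(goals))  # no evidence yet
    return scores / total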

Finally, we integrate both signals into a system for online assisted manipulation, and we evaluate our model of each signal’s usefulness in a task that restricts the user’s input and requires multidimensional assistance. When using control input for goal prediction, the system reliably provides some assistance, but it cannot do so in all dimensions. When we incorporate gaze-based goal prediction, the earlier goal prediction from gaze enables the assistance to act in all dimensions and increases user task performance. However, assistance using gaze alone performs worse than either of the other conditions, so gaze benefits from the reliability of another goal prediction source like control input.
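
The abstract does not specify how the two predictions are combined, so the sketch below assumes a simple log-linear (product-of-experts) fusion of per-goal distributions, with gaze down-weighted to reflect its lower reliability. The weights are placeholders, not tuned values.

```python
import numpy as np

def fuse_goal_predictions(p_control, p_gaze, w_control=1.0, w_gaze=0.5):
    # Log-linear (product-of-experts) fusion of two goal distributions.
    # Down-weighting gaze (w_gaze < w_control) reflects its lower
    # reliability; these weights are placeholders, not tuned values.
    log_p = (w_control * np.log(np.asarray(p_control) + 1e-12)
             + w_gaze * np.log(np.asarray(p_gaze) + 1e-12))
    log_p -= log_p.max()  # numerical stability
    p = np.exp(log_p)
    return p / p.sum()

# Early in the task: control input is still ambiguous, gaze already
# favors the first goal, so the fused prediction can commit sooner.
print(fuse_goal_predictions([0.5, 0.5], [0.8, 0.2]))  # approx. [0.67, 0.33]
```

Under this kind of fusion, an early, confident gaze prediction can break ties while control input is still ambiguous, and control input dominates later as it becomes informative.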

Developing a model for how different goal prediction sources contribute to assistance quality during shared control enables this assistance strategy to work in more complex situations, such as those with restricted user input or multipart assistance. The work in this thesis can help ground future explorations of input modalities for goal prediction. With this greater understanding of how shared control provides effective assistance, this work helps bring shared control closer to real-world applications.

Thesis Committee Members:
Henny Admoni, Chair
Artur Dubrawski
Nancy Pollard
Brenna Argall, Northwestern University
