Augmented reality dialog interface for multimodal teleoperation
Abstract
We designed an augmented reality dialog interface that enables the control of multimodal behaviors in telepresence robot applications. When paired with a telepresence robot, this interface allows a single operator to accurately control and coordinate the robot's verbal and nonverbal behaviors. Depending on the complexity of the desired interaction, however, some applications might benefit from having multiple operators control different interaction modalities. As such, our interface can be used by either a single operator or a pair of operators. In the paired-operator setup, one operator controls verbal behaviors while the other controls nonverbal behaviors. We conducted a within-subjects user study to assess the usefulness and validity of our interface in both the single- and paired-operator setups. When operators faced hard tasks, coordination between verbal and nonverbal behavior improved in the single-operator condition. Although single operators were slower to produce verbal responses, verbal error rates did not differ between conditions. Finally, single operators who controlled both the verbal and nonverbal behaviors of the robot reported significantly higher presence measures, including mental immersion, sensory engagement, ability to view and understand the dialog partner, and degree of emotion.
BibTeX
@conference{Pereira-2017-122476,
  author    = {André Pereira and Elizabeth J. Carter and Iolanda Leite and John Mars and Jill Fain Lehman},
  title     = {Augmented reality dialog interface for multimodal teleoperation},
  booktitle = {Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN '17)},
  year      = {2017},
  month     = {August},
  pages     = {764--771},
}