Mobile service robot state revealing through expressive lights: Formalism, design, and evaluation
Abstract
We consider mobile service robots that carry out tasks with, for, and around humans in their environments. Speech combined with on-screen display is a common mechanism for autonomous robots to communicate with humans, but such communication modalities may fail for mobile robots due to spatio-temporal limitations. To enable a better human understanding of the robot given its mobility and autonomous task performance, we introduce the use of lights to reveal the dynamic robot state. We contribute expressive lights as a primary modality for the robot to communicate useful robot state information to humans. Such lights are persistent, non-invasive, and visible at a distance, unlike other existing modalities. Current programmable light arrays provide a very large animation space, which we address by introducing a finite set of parametrized signal shapes while still maintaining the needed animation design flexibility. We present a formalism for light animation control and an architecture to map the representation of robot state to the parametrized light animation space. The mapping generalizes to multiple light strips and even to other expression modalities. We demonstrate our approach on CoBot, a mobile multi-floor service robot, and evaluate its validity through several user studies. Our results show that carefully designed expressive lights on a mobile robot help humans better understand robot states and actions, and can have a desirable impact on collaborative human–robot behavior.
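To make the state-to-animation mapping mentioned above concrete, the following is a minimal sketch of what such a mapping could look like in code. The signal shapes, robot state labels, and parameter values here are hypothetical placeholders chosen for illustration; they are not the parametrization or mapping defined in the paper.

from dataclasses import dataclass
from enum import Enum, auto


class SignalShape(Enum):
    """Hypothetical finite set of parametrized signal shapes."""
    STEADY = auto()
    BLINK = auto()
    FADE = auto()       # periodic fade in and out
    PROGRESS = auto()   # light up a fraction of the strip


@dataclass
class LightAnimation:
    """One point in the parametrized light animation space."""
    shape: SignalShape
    color: tuple            # (R, G, B), each 0-255
    period_s: float = 1.0   # period of the periodic shapes
    fraction: float = 1.0   # lit fraction, used by PROGRESS


# Illustrative mapping from robot state labels to animations
# (assumed states and parameter values, for illustration only).
STATE_TO_ANIMATION = {
    "waiting_for_human_input": LightAnimation(SignalShape.BLINK, (255, 255, 0), period_s=0.8),
    "blocked_by_obstacle":     LightAnimation(SignalShape.FADE, (255, 0, 0), period_s=1.5),
    "task_progress":           LightAnimation(SignalShape.PROGRESS, (0, 0, 255), fraction=0.4),
}


def animation_for_state(state: str) -> LightAnimation:
    """Resolve the robot's current state label to a light animation."""
    return STATE_TO_ANIMATION.get(state, LightAnimation(SignalShape.STEADY, (255, 255, 255)))


if __name__ == "__main__":
    print(animation_for_state("blocked_by_obstacle"))

Because the animation space is factored into a small set of shapes plus continuous parameters, a mapping of this kind can be reused for additional light strips, or for other expression modalities, by swapping the output type while keeping the state side unchanged.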
BibTeX
@article{Baraka-2018-119192,
  author   = {Kim Baraka and Manuela Veloso},
  title    = {Mobile service robot state revealing through expressive lights: Formalism, design, and evaluation},
  journal  = {International Journal of Social Robotics},
  year     = {2018},
  month    = {January},
  volume   = {10},
  number   = {1},
  pages    = {65--92},
  keywords = {Human-robot interaction; Expressive lights; Mobile robots; Transparency; Explainability},
}