Abstract:
In the field of human-robot interaction (HRI), the integration of robots into social settings, such as healthcare and education, is gaining traction. Robots that provide individualized support to improve human performance and subjective experience will generally be more successful in these domains. Robots should personalize their interactions, be aware of the contextual nuances surrounding their behavior, and effectively understand and generate nonverbal cues, since humans’ perceptions and responses are heavily influenced by nonverbal behavior. They should also consider factors such as personality traits, the physical environment, and emotional states to provide tailored, context-aware assistance and support during interactions. This thesis explores personalized, context-aware multimodal robot feedback, focusing on affective nonverbal behavior.
We first consider the problem of estimating context, specifically modeling key aspects of the human state. We predict engagement-related events in an educational activity before the end of that activity, which could allow the robot to provide feedback early enough to improve the human’s experience. We then explore generating nonverbal affective robot behavior by correlating a simulated robot’s movements with its displayed emotion. Through a user study, we show that pairing the robot’s conveyed emotion with a matching affective movement has a positive impact on the human’s performance in a sorting game. Next, we design a physical robot exercise coach as a platform on which we can estimate context (exercise performance, fatigue level, etc.). With a user study, we examine how different robot feedback styles change human perception of the robot and human performance. This provides a basis on which to begin tailoring feedback styles to the individual. Finally, we develop a personalized, context-aware robot that uses a contextual bandit approach to dynamically adapt the robot’s feedback style to optimize the human’s performance, learning over time which style to use and when. This brings together all the work presented in this thesis and aims to create a holistic framework for generating personalized, context-aware multimodal feedback that positively impacts the interaction with the human.
Thesis Committee Members:
Reid Simmons, Chair
Henny Admoni
Jean Oh
Sonia Chernova, Georgia Institute of Technology