Tracking aggregate vs. individual gaze behaviors during a robot-led tour simplifies overall engagement estimates

Conference Paper, Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI '12), pp. 175-176, March 2012

Abstract

As an early behavioral study of what non-verbal features a robot tour guide could use to analyze a crowd, personalize an interaction, and maintain high levels of engagement, we analyze participant gaze statistics in response to a robot tour guide's deictic gestures. There were thirty-seven participants overall, split into nine groups of three to five people each. In the groups with the lowest engagement levels, the aggregate gaze response to the robot's pointing gesture involved the fewest total glance shifts, the least time spent looking at the indicated object, and no intra-participant gaze. Our diverse participants had overlapping engagement ratings within their group, and we found that a robot that tracks group rather than individual analytics could capture less noisy and often stronger trends relating gaze features to self-reported engagement scores. Thus we have found indications that aggregate group analysis captures more salient and accurate assessments of overall human-robot interactions, even with lower-resolution features.

BibTeX

@conference{Knight-2012-122294,
author = {Heather Knight and Reid Simmons},
title = {Tracking aggregate vs. individual gaze behaviors during a robot-led tour simplifies overall engagement estimates},
booktitle = {Proceedings of 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI '12)},
year = {2012},
month = {March},
pages = {175--176},
}