Human Perception of Robot Failure and Explanation During a Pick-and-Place Task
Abstract: In recent years, researchers have extensively used non-verbal gestures, such as head and arm movements, to express a robot’s intentions and capabilities to humans. Inspired by past research, we investigated how different explanation modalities aid human understanding and perception when robots communicate failures and provide explanations during block pick-and-place tasks. Through an in-person experiment, we studied four modes of explanation: Head, Head & Arm, Head & Image Projection, and Head & Speech. These were used to explain four types of failures: Out Of Reach, Object Size, Grasp Failure, and Perception Failure. We found that speech explanations were preferred over non-verbal cues in terms of expectedness and perceived similarity to humans. Additionally, projection was comparably effective as an explanation modality to the other non-verbal modalities. The findings also suggested that in-person and online studies can produce consistent results. The talk will also include a preview of recent work extending the findings of this study.
Committee:
Prof. Aaron Steinfeld (advisor) (RI)
Prof. Henny Admoni (RI)
Prof. Nikolas Martelaro (HCII)
Roshni Kaushik (RI PhD Student)