Spatial references and perspective in natural language instructions for collaborative manipulation
Abstract
As humans and robots collaborate on spatial tasks, they must communicate clearly about the objects they are referencing. Communication is clearer when language is unambiguous, which implies the use of spatial references and explicit perspectives. In this work, we contribute two studies to understand how people instruct a partner to identify and pick up objects on a table. We investigate spatial features and perspectives in human spatial references and compare word usage when instructing robots versus instructing other humans. We then focus our analysis on the clarity of instructions with respect to perspective taking and spatial references. We find that only about 42% of instructions contain perspective-independent spatial references. There is a strong correlation between participants’ accuracy in executing instructions and the perspectives in which the instructions are given, as well as between accuracy and the number of spatial relations required by the instructions. We conclude that sentence complexity (in terms of spatial relations and perspective taking) impacts understanding, and we provide suggestions for the automatic generation of spatial references.
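As a rough illustration of the kind of analysis the abstract describes, the sketch below shows one way to test for a correlation between the number of spatial relations in an instruction and whether the instruction was executed correctly. The data values and variable names are hypothetical; this is not the authors' actual analysis pipeline, only a minimal example using a rank correlation.

    # Hypothetical illustration (not the paper's analysis): correlate instruction
    # complexity (number of spatial relations) with execution accuracy.
    from scipy.stats import spearmanr

    # Each entry: (number of spatial relations in the instruction,
    #              1 if the participant picked the correct object, else 0).
    # These values are made up for demonstration purposes only.
    trials = [(1, 1), (1, 1), (2, 1), (2, 0), (3, 0), (3, 1), (4, 0), (4, 0)]

    num_relations = [n for n, _ in trials]
    accuracy = [correct for _, correct in trials]

    # Spearman rank correlation between complexity and accuracy.
    rho, p_value = spearmanr(num_relations, accuracy)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")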
BibTeX
@conference{Li-2016-5581,
  author    = {Shen Li and Rosario Scalise and Henny Admoni and Siddhartha Srinivasa and Stephanie Rosenthal},
  title     = {Spatial references and perspective in natural language instructions for collaborative manipulation},
  booktitle = {Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN '16)},
  year      = {2016},
  month     = {August},
  publisher = {IEEE},
  pages     = {44--51},
}