Data-driven model of nonverbal behavior for socially assistive human-robot interactions
Conference Paper, Proceedings of the 16th International Conference on Multimodal Interaction (ICMI '14), pp. 196-199, November 2014
Abstract
Socially assistive robotics (SAR) aims to develop robots that help people through interactions that are inherently social, such as tutoring and coaching. For these interactions to be effective, socially assistive robots must be able to recognize and use nonverbal social cues like eye gaze and gesture. In this paper, we present a preliminary model for nonverbal robot behavior in a tutoring application. Using empirical data from teachers and students in human-human tutoring interactions, the model can be both predictive (recognizing the context of new nonverbal behaviors) and generative (creating new robot nonverbal behaviors based on a desired context) using the same underlying data representation.
BibTeX
@conference{Admoni-2014-113239,
author = {Henny Admoni and Brian Scassellati},
title = {Data-driven model of nonverbal behavior for socially assistive human-robot interactions},
booktitle = {Proceedings of the 16th International Conference on Multimodal Interaction (ICMI '14)},
year = {2014},
month = {November},
pages = {196--199},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.