Demo: Toward a data-driven generative behavior model for human-robot interaction
Abstract
Socially assistive robots are designed to help people through interactions that are inherently social, such as tutoring, coaching, and therapy. Because they operate in social environments, these robots must be programmed to recognize, process, and communicate the social cues used by people. For example, non-verbal behaviors like eye gaze and gesture carry significant communicative information in social interactions. However, identifying the correct non-verbal behavior to perform in a given context is a non-trivial problem for social robotics. One approach to designing robot behaviors is data-driven, that is, reliant on actual observations of human behavior rather than pre-coded heuristics. This approach involves collecting data from natural human-human interactions and then training a model on that data. From this model, we can begin to generate non-verbal robot behaviors for known contexts, as well as identify the context given observations of new non-verbal behaviors. In this talk, I outline my current research on designing data-driven generative behavior models for tutoring tasks. I also touch on the challenges of real-world robotics and how those challenges overlap with those faced by mobile augmented reality systems.
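To make the two directions of use concrete, here is a minimal sketch in Python of one possible count-based generative model; the contexts, behavior labels, and data below are hypothetical placeholders, not the model or corpus from the talk. It estimates P(behavior | context) from annotated human-human interaction pairs, samples a behavior for a known context, and applies Bayes' rule to infer the likely context behind a newly observed behavior.

import random
from collections import Counter, defaultdict

# Hypothetical annotations from human-human tutoring interactions:
# (context, observed non-verbal behavior) pairs.
observations = [
    ("explaining", "gaze_at_task"),
    ("explaining", "point_gesture"),
    ("explaining", "gaze_at_task"),
    ("questioning", "gaze_at_partner"),
    ("questioning", "gaze_at_partner"),
    ("questioning", "gaze_at_task"),
]

# Learn P(behavior | context) from co-occurrence counts.
counts = defaultdict(Counter)
for context, behavior in observations:
    counts[context][behavior] += 1

def generate_behavior(context):
    """Sample a non-verbal behavior for a known context from P(behavior | context)."""
    behaviors = counts[context]
    return random.choices(list(behaviors), weights=list(behaviors.values()))[0]

def infer_context(behavior):
    """Return the most likely context for an observed behavior via Bayes' rule,
    with P(context) estimated from how often each context appears in the data."""
    posterior = {}
    for context, behaviors in counts.items():
        prior = sum(behaviors.values()) / len(observations)
        likelihood = behaviors[behavior] / sum(behaviors.values())
        posterior[context] = prior * likelihood
    return max(posterior, key=posterior.get)

print(generate_behavior("explaining"))   # e.g. "gaze_at_task" or "point_gesture"
print(infer_context("gaze_at_partner"))  # "questioning"

In practice a richer model, for instance one defined over timed sequences of gaze and gesture, would replace this simple count table, but the generate/infer duality described in the abstract works the same way.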
BibTeX
@inproceedings{Admoni-2014-113258,
  author    = {Henny Admoni},
  title     = {Demo: Toward a data-driven generative behavior model for human-robot interaction},
  booktitle = {Proceedings of the MARS '14 Workshop on Mobile Augmented Reality and Robotic Technology-Based Systems},
  year      = {2014},
  month     = {June},
  pages     = {19--20},
}