Annotation of utterances for conversational nonverbal behaviors
Abstract
Nonverbal behaviors play an important role in communication for both humans and social robots. However, adding contextually appropriate animations by hand is time-consuming and does not scale well. Previous researchers have developed automated systems that insert animations based on utterance text, yet these systems lack a human's understanding of social context and remain imperfect. This work proposes a middle ground in which untrained human workers label semantic information, which is then fed to an automatic system that produces appropriate gestures. To test this approach, untrained workers from Mechanical Turk labeled semantic information, specifically emotion and emphasis, for each utterance, and these labels were used to automatically add animations. Videos of a robot performing the animated dialogue were rated by a second set of participants. Results showed that untrained workers can provide reasonable labels of semantic information and that emotional expressions derived from those labels were rated more highly than control videos. Further study is needed to determine the effects of the emphasis labels.
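The abstract does not specify the labeling schema or the gesture-insertion rules, but the pipeline it describes (crowd-sourced emotion and emphasis labels feeding an automatic animator) can be sketched in a few lines. The following Python sketch is illustrative only: the UtteranceLabels schema, the EMOTION_ANIMATIONS table, and the markup tags are all hypothetical stand-ins, not the authors' system.

from dataclasses import dataclass

# Hypothetical label schema: each utterance gets one emotion tag plus
# the indices of words that workers marked for emphasis.
@dataclass
class UtteranceLabels:
    text: str
    emotion: str                # e.g. "happy", "sad", "neutral"
    emphasized_words: set[int]  # word indices to stress

# Assumed emotion-to-animation lookup; the paper's actual mapping
# is not given in the abstract.
EMOTION_ANIMATIONS = {
    "happy": "<anim name='smile'/>",
    "sad": "<anim name='head_down'/>",
    "neutral": "",
}

def annotate(labels: UtteranceLabels) -> str:
    """Insert animation markup derived from crowd-sourced labels."""
    words = labels.text.split()
    out = []
    for i, word in enumerate(words):
        if i in labels.emphasized_words:
            # Attach a beat gesture to each emphasized word.
            out.append(f"<beat>{word}</beat>")
        else:
            out.append(word)
    prefix = EMOTION_ANIMATIONS.get(labels.emotion, "")
    return (prefix + " " if prefix else "") + " ".join(out)

if __name__ == "__main__":
    labels = UtteranceLabels("I am so glad you came", "happy", {2, 3})
    print(annotate(labels))
    # <anim name='smile'/> I am <beat>so</beat> <beat>glad</beat> you came

A marked-up utterance like this would then be handed to the robot's animation player, which is the division of labor the abstract proposes: humans supply social judgment, the system supplies the gestures.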
BibTeX
@conference{Funkhouser-2016-122280,
  author    = {Allison Funkhouser and Reid Simmons},
  title     = {Annotation of utterances for conversational nonverbal behaviors},
  booktitle = {Proceedings of the 8th International Conference on Social Robotics (ICSR '16)},
  year      = {2016},
  month     = {November},
}