HARMONIC: A Multimodal Data Set of Assistive Human-robot Collaboration

Benjamin A. Newman, Reuben M. Aronson, Siddhartha S. Srinivasa, Kris Kitani, and Henny Admoni
Journal Article, International Journal of Robotics Research, December 2021

Abstract

We present the Human And Robot Multimodal Observations of Natural Interactive Collaboration (HARMONIC) dataset, a large multimodal dataset of human interactions with a robotic arm in a shared-autonomy setting designed to imitate assistive eating. The dataset provides human, robot, and environmental views of 24 people engaged in an assistive eating task with a 6-degree-of-freedom (6-DOF) robot arm. From each participant, we recorded video of both eyes, egocentric video from a head-mounted camera, joystick commands, electromyography from the forearm used to operate the joystick, third-person stereo video, and the joint positions of the 6-DOF robot arm. Also included are several features derived directly from these recordings, such as eye gaze projected onto the egocentric video, body pose, hand pose, and facial keypoints. These data streams were collected specifically because they have been shown to be closely related to human mental states and intention. The dataset may be of interest to researchers studying intention prediction, human mental state modeling, and shared autonomy. Data streams are provided in a variety of formats, such as video and human-readable CSV and YAML files.
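As a quick illustration of how such streams might be loaded, here is a minimal sketch in Python. The directory layout, file names, and column names below are assumptions for illustration only, not the dataset's documented schema; consult the dataset documentation for the actual organization.

# Minimal sketch of reading HARMONIC-style data streams.
# NOTE: paths, file names, and field names here are illustrative
# assumptions; check the dataset documentation for the real schema.
import pandas as pd   # pip install pandas
import yaml           # pip install pyyaml

participant_dir = "harmonic/participant_01"  # hypothetical layout

# Robot joint positions, assumed to be a CSV with one row per timestamp.
joints = pd.read_csv(f"{participant_dir}/robot_joint_positions.csv")
print(joints.head())

# Session metadata, assumed to be stored as a YAML mapping.
with open(f"{participant_dir}/metadata.yaml") as f:
    metadata = yaml.safe_load(f)
print(metadata)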

BibTeX

@article{Newman-2021-113224,
author = {Benjamin A. Newman and Reuben M. Aronson and Siddhartha S. Srinivasa and Kris Kitani and Henny Admoni},
title = {HARMONIC: A Multimodal Data Set of Assistive Human-robot Collaboration},
journal = {International Journal of Robotics Research},
year = {2021},
month = {December},
}