Activity Recognition in Restaurants to Address Underlying Needs: A Case Study
Abstract
Enabling robots to identify when humans need assistance is key to providing help that is both proactive and efficient. This challenge is particularly difficult for humans eating a meal in a restaurant, a context which is dense with interlaced social elements such as conversation in addition to functional tasks such as eating. We investigated the challenge of identifying human dining activities from single-viewpoint footage by collecting and annotating the individual activities of five two-person meals. From this process, we found that addressing the question of identifying meal phases and overall neediness requires identifying an underlying group state for the table as a whole. We report on the individual activities and group states, as well as the interdependencies between these factors that can be leveraged to both provide and measure effective robotic restaurant service. In addition to the insights revealed by this dataset, we describe preliminary attempts to create an automated classification system for these activities.
BibTeX
@conference{Taylor-2022-133559,
author = {Ada V. Taylor and Michael Huang and Roman Kaufman and Henny Admoni},
title = {Activity Recognition in Restaurants to Address Underlying Needs: A Case Study},
booktitle = {Proceedings of the 31st IEEE International Conference on Robot \& Human Interactive Communication},
year = {2022},
month = {September},
keywords = {activity recognition, group activity recognition, robotics},
}