Vision and Improved Learned-Trajectory Replay for Assistive-Feeding and Food-Plating Robots
Abstract
Food manipulation is a compelling frontier for robotics research because of its direct application to real-world problems and the challenges involved in robustly manipulating deformable food items. In this work, we focus on the challenges of robot food manipulation for assistive feeding and meal preparation: teaching robots visual perception of the objects to be manipulated, creating error-recovery and feedback systems, and improving the replay of manipulation trajectories learned through kinesthetic teaching. The work includes complete implementations of food-manipulation robots for feeding and food plating on four robot platforms: a SoftBank Robotics Pepper, a Kinova MICO, a Niryo One, and a UR5.
BibTeX
@mastersthesis{Rhodes-2019-117062,
  author   = {Travers Rhodes},
  title    = {Vision and Improved Learned-Trajectory Replay for Assistive-Feeding and Food-Plating Robots},
  year     = {2019},
  month    = {August},
  school   = {Carnegie Mellon University},
  address  = {Pittsburgh, PA},
  number   = {CMU-RI-TR-19-55},
  keywords = {Assistive Robotics, Manipulation, Food},
}