Visibility optimization in manipulation tasks for a wheelchair-mounted robot arm
Abstract
Assistive robot arms provide people with upper motor disabilities a way to independently complete daily activities [1]. Research into shared autonomy investigates how to augment user teleoperation of a robot arm with intelligent robotic assistance to make assistive robots easier to use [2]. However, during assistance, shared autonomy systems may move the robot into positions that occlude important parts of the scene from the user. We address this problem by generating robot arm trajectories that are optimized to be minimally occlusive of the important parts of the scene (Fig. 1). We extend past work on human-aware motion planning [3, 4] to include visibility. This is especially relevant for our target application of assisting wheelchair-bound users with upper limb mobility impairments, who have significant constraints on their viewpoint [5–7]. We face two challenges. The first is the tradeoff between accomplishing the task and minimizing occlusions: manipulating objects to complete the task will inevitably occlude parts of the scene. For example, in Fig. 1(a) the user wants to grab the pitcher. A naïve plan (Fig. 1(b)) would undesirably occlude the pitcher from view. Instead, we want to generate paths that minimize occlusion (Fig. 1(c)) [8–11]. To do so, we need a precise mathematical definition of what it means to occlude in this scenario, which is our second challenge.
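The abstract does not give the occlusion measure itself. As an illustrative sketch only (the symbols below are ours, not the paper's), one could score an arm configuration q by the fraction of sample points S on the target object whose line of sight from the user's eye point p_eye is blocked by the robot's swept volume R(q), and add that occlusion cost, weighted by a factor lambda, to the task cost of a candidate trajectory xi:

\[
O(q) \;=\; \frac{1}{|S|}\,\bigl|\{\, s \in S \;:\; \overline{p_{\mathrm{eye}}\, s} \cap R(q) \neq \emptyset \,\}\bigr|,
\qquad
C(\xi) \;=\; C_{\mathrm{task}}(\xi) \;+\; \lambda \int_{0}^{1} O(\xi(t))\, dt .
\]

Under this kind of formulation, lambda would control the tradeoff described above: a small lambda favors task completion, while a large lambda favors keeping the target visible to the user.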
BibTeX
@workshop{Holladay-2016-113254,
  author = {Rachel M. Holladay and Laura Herlant and Henny Admoni and Siddhartha S. Srinivasa},
  title = {Visibility optimization in manipulation tasks for a wheelchair-mounted robot arm},
  booktitle = {Proceedings of RO-MAN '16 2nd Workshop on Human-Oriented Approaches for Assistive and Rehabilitation Robotics (HUMORARR '16)},
  year = {2016},
  month = {August},
}