Abstract:
I will begin by describing our work on visually servoing a manipulator and localizing objects using a robot-mounted suite of vision and vision-based tactile sensors, including our results, the algorithms used, and lessons learned. We show that by collocating tactile and global (e.g., an RGB(D) camera) sensors, our setup can perform better than using either type of sensor in isolation. I will then describe some limitations of this work, which led us to our current research, where we explore how manipulation tasks can be aided by controlling the illumination of a robot's workspace and the positions of cameras within it. This approach can capture high-quality information about object geometry (normals, surface information, and depth discontinuities), which can be used to select areas to grasp, pick up thin objects, reason about deforming objects, and localize objects in order to learn how they interact with the workspace (e.g., how they deform when force is applied).
Committee:
Prof. Christopher G. Atkeson (Chair)
Prof. Wenzhen Yuan
Prof. Oliver Kroemer
Leonid Keselman