Vision-Based Control of a Handheld Micromanipulator for Robot-Assisted Retinal Surgery - Robotics Institute Carnegie Mellon University

PhD Thesis Proposal

Brian C. Becker, Carnegie Mellon University
Monday, April 25
3:00 pm to 12:00 am
Vision-Based Control of a Handheld Micromanipulator for Robot-Assisted Retinal Surgery

Event Location: GHC 8102

Abstract: Surgeons increasingly need to perform complex operations on extremely small anatomy. Many promising new surgeries are effective but difficult or impossible to perform because humans lack the extraordinary control required at sub-millimeter scales. Using micromanipulators, surgeons gain better positioning accuracy and additional dexterity as the instrument smooths tremor and scales hand motions. While these aids are advantageous, they do not actively consider the goals or intentions of the operator and thus cannot provide context-specific behaviors, such as motion scaling around anatomical targets, prevention of unwanted contact with pre-defined tissue areas, and other helpful task-dependent actions.
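
To make the idea of tremor smoothing and motion scaling concrete, the following is a minimal sketch assuming a simple exponential filter and a fixed anchor point; the function names, gains, and noise levels are hypothetical and are not taken from Micron's actual control code.

```python
# Illustrative sketch only (not the Micron implementation): one simple way a
# handheld micromanipulator could suppress tremor and scale hand motion.
# All function names, gains, and noise levels here are hypothetical.
import numpy as np

def low_pass(prev_filtered, raw, alpha=0.1):
    """Exponential smoothing to attenuate high-frequency tremor."""
    return alpha * raw + (1.0 - alpha) * prev_filtered

def scale_about_anchor(position, anchor, scale=0.3):
    """Scale hand motion about an anchor point, e.g. an anatomical target."""
    return anchor + scale * (position - anchor)

# Example: a noisy stream of handle positions (meters) sampled at 1 kHz.
rng = np.random.default_rng(0)
anchor = np.zeros(3)
filtered = anchor.copy()
for t in np.linspace(0.0, 1.0, 1000):
    hand = np.array([np.sin(2 * np.pi * t), 0.0, 0.0]) * 1e-3   # 1 mm sweep
    hand += rng.normal(scale=50e-6, size=3)                      # ~50 um tremor
    filtered = low_pass(filtered, hand)
    tip_command = scale_about_anchor(filtered, anchor)           # sent to actuators
```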


This thesis explores the fusion of visual information with micromanipulator control and builds a framework of task-specific behaviors that respond synergistically to the surgeon's intentions and motions throughout surgical procedures. By exploiting real-time observations of the microscope view, a priori knowledge of surgical procedures, and pre-operative data used by the surgeon while preparing for the surgery, we hypothesize that the micromanipulator can better understand the goals of a given procedure and deploy individualized aids, in addition to tremor suppression, to further help the surgeon. Specifically, we propose a vision-based control framework of modular virtual fixtures for handheld micromanipulator robots. Virtual fixtures encode constraints such as "maintain tip position", "avoid these areas", "follow a trajectory", and "keep an orientation", whose parameters are derived from visual information, either pre-operatively or in real time, and are enforced by the control system. Combining individual modules allows for complex task-specific behaviors that monitor the surgeon's actions relative to the anatomy and react appropriately to cooperatively accomplish the surgical procedure.
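
As a rough illustration of how modular virtual fixtures might compose, the sketch below assumes each fixture maps the current tip position to a corrective velocity and that the controller sums weighted corrections; the class names, gains, and geometry are hypothetical and not drawn from the proposed framework.

```python
# Hypothetical sketch of composable virtual fixtures: each fixture returns a
# corrective velocity, and a weighted sum is one simple way to combine them.
import numpy as np

class MaintainPosition:
    """Pull the tip toward a target position ("maintain tip position")."""
    def __init__(self, target, gain=5.0):
        self.target, self.gain = np.asarray(target, float), gain
    def correction(self, tip):
        return self.gain * (self.target - tip)

class AvoidRegion:
    """Push the tip out of a spherical no-go region ("avoid these areas")."""
    def __init__(self, center, radius, gain=10.0):
        self.center, self.radius, self.gain = np.asarray(center, float), radius, gain
    def correction(self, tip):
        offset = tip - self.center
        dist = np.linalg.norm(offset)
        if dist >= self.radius or dist == 0.0:
            return np.zeros(3)                                   # outside the region
        return self.gain * (self.radius - dist) * offset / dist  # push outward

def combined_correction(fixtures, weights, tip):
    """Weighted combination of the active fixtures' corrections."""
    return sum(w * f.correction(tip) for f, w in zip(fixtures, weights))

# Example: hold position near a vessel while avoiding a no-go zone.
fixtures = [MaintainPosition([1e-3, 0.0, 0.0]), AvoidRegion([0.0, 0.0, 0.0], 0.5e-3)]
tip_velocity = combined_correction(fixtures, [1.0, 1.0], np.array([0.2e-3, 0.0, 0.0]))
```

In the proposed framework, the targets and forbidden regions parameterizing such fixtures would be derived from visual information, pre-operatively or in real time, rather than hard-coded as in this sketch.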


Particular focus is given to vitreoretinal surgery as a testbed for vision-based control because several new and promising surgical techniques in the eye depend on fine manipulation of delicate retinal structures. Preliminary experiments with Micron, the micromanipulator developed in our lab, demonstrate that vision-based control can improve accuracy and increase usability for difficult retinal operations, such as laser photocoagulation and vessel cannulation. An initial framework for virtual fixtures has been developed and shown to significantly reduce error in synthetic tests when the structure of the surgeon's motions is known. Proposed work includes formalizing the virtual fixtures framework, incorporating elements from model predictive control, improving 3D imaging of retinal structures, and conducting experiments with an experienced retinal surgeon. Results from experiments with ex vivo and in vivo tissue for selected retinal surgical procedures will validate our approach.

Committee: Cameron N. Riviere, Chair

George A. Kantor

George D. Stetten

Gregory D. Hager, Johns Hopkins University