2:00 pm to 12:00 am
Event Location: NSH 1507
Abstract: Surgeons increasingly need to perform complex operations on extremely small anatomy. Many existing and promising new surgeries are effective but difficult or impossible to perform because humans lack the fine control required at sub-millimeter scales. With micromanipulators, surgeons gain higher positioning accuracy and additional dexterity because the instrument suppresses tremor and scales hand motions. While these aids are advantageous, they do not actively consider the operator's goals or intentions and thus cannot provide context-specific behaviors, such as motion scaling around anatomical targets, prevention of unwanted contact with pre-defined tissue regions, compensation for moving anatomy, and other helpful task-dependent actions.
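As a rough illustration of these two generic aids (a sketch, not the controller used in this work), tremor suppression can be viewed as low-pass filtering of the measured hand motion, since physiological tremor lies roughly in the 8-12 Hz band while voluntary surgical motion is much slower, and motion scaling as shrinking the filtered displacement about an anchor point. The sampling rate, cutoff, and scale factor below are assumed values for illustration only.

import numpy as np
from scipy.signal import butter, sosfilt

FS = 1000.0   # assumed sensing rate of the handle tracker (Hz)
CUTOFF = 2.0  # assumed voluntary-motion bandwidth (Hz); tremor is ~8-12 Hz
SCALE = 0.3   # assumed scaling factor (tip moves at 30% of hand speed)

# Second-order Butterworth low-pass filter that passes voluntary
# motion and attenuates the tremor band.
sos = butter(2, CUTOFF, btype="low", fs=FS, output="sos")

def tip_commands(hand_positions, anchor):
    """Map measured handle positions (N x 3 array, mm) to tip goals.

    Tremor is removed by low-pass filtering; the remaining voluntary
    displacement is scaled down about an anchor point.
    """
    voluntary = sosfilt(sos, hand_positions, axis=0)  # tremor suppressed
    return anchor + SCALE * (voluntary - anchor)      # motion scaled

A causal filter like this trades tremor rejection for lag between hand and tip; a deployable controller must compensate for that latency, which the sketch ignores.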
This thesis explores the fusion of visual information with micromanipulator control, enforcing task-specific behaviors that respond in synergy with the surgeon's intentions and motions throughout surgical procedures. By exploiting real-time observations of the microscope view, a priori knowledge of the surgical task, and pre-operative data, we hypothesize that micromanipulators can employ individualized and targeted aids to further assist the surgeon. Specifically, we propose a vision-based virtual-fixture control framework for handheld micromanipulator robots that naturally incorporates tremor suppression and motion scaling. We develop real-time vision systems to track the surgeon's instrument and the anatomy, and design fast new algorithms for retinal analysis. Virtual fixtures constructed from visually tracked anatomy allow complex task-specific behaviors that monitor the surgeon's actions and react appropriately to cooperatively accomplish the procedure.
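As a minimal sketch of how one kind of visually constructed virtual fixture might act (a forbidden-region fixture; the function and parameter names here are hypothetical, and this is not the thesis's actual control law), the commanded tip goal can be projected out of a protected sphere centered on a tracked anatomical point each frame:

import numpy as np

def forbid_region(tip_goal, target, radius):
    """Forbidden-region virtual fixture: keep the commanded tip goal
    outside a protective sphere around visually tracked anatomy.

    tip_goal, target: 3-vectors in the camera/world frame (mm)
    radius: protective radius around the anatomy (mm)
    """
    offset = tip_goal - target
    dist = np.linalg.norm(offset)
    if dist >= radius:
        return tip_goal                        # goal is already safe
    if dist < 1e-9:                            # degenerate: goal at center
        offset, dist = np.array([0.0, 0.0, 1.0]), 1.0
    return target + offset * (radius / dist)   # clamp to the boundary

Guidance fixtures (e.g., drawing the tip toward a target or along a traced path) would follow the same pattern with a different projection; because the target and radius come from the tracked microscope view, the fixture moves with the anatomy.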
Particular focus is given to vitreoretinal surgery, a compelling setting for vision-based control because several new and promising surgical techniques in the eye depend on fine manipulation of tiny, delicate retinal structures. Experiments with Micron, the fully handheld micromanipulator developed in our lab, show that vision-based virtual fixtures significantly increase pointing precision by reducing positioning error during synthetic but medically relevant hold-still and tracing tasks. To evaluate the proposed framework in realistic environments, we consider three demanding retinal procedures: membrane peeling, laser photocoagulation, and vessel cannulation. Preclinical trials on artificial phantoms and on ex vivo and in vivo animal models demonstrate that vision-based control of a micromanipulator significantly improves surgeon performance (p < 0.05).
Committee:
Cameron Riviere, Chair
George Kantor
George Stetten
Gregory Hager, Johns Hopkins University