Model-Mediated Telerobotics: Can we get to Telepresence? - Robotics Institute Carnegie Mellon University

RI Seminar

Gunter Niemeyer
Consulting Professor of Mechanical Engineering, Stanford University
Senior Research Scientist, Willow Garage, Inc.
Tuesday, July 10
12:00 pm to 1:00 pm
Model-Mediated Telerobotics: Can we get to Telepresence?

Event Location: NSH 1305
Bio: Dr. Günter Niemeyer is a senior research scientist at Willow Garage, Inc. and a consulting professor of Mechanical Engineering at Stanford University. His research examines physical human-robot interactions and interaction dynamics, force sensitivity and feedback, teleoperation with and without communication delays, and haptic interfaces. This involves efforts ranging from real-time motor and robot control to user interface design. Dr. Niemeyer received his M.S. and Ph.D. from MIT in the areas of adaptive robot control and bilateral teleoperation, introducing the concept of wave variables. He also held a postdoctoral research position at MIT developing surgical robotics. In 1997 he joined Intuitive Surgical, Inc., where he helped create the da Vinci Minimally Invasive Surgical System. This telerobotic system enables surgeons to perform complex procedures through small (5 to 10 mm) incisions using an immersive interface and is in use at hundreds of hospitals worldwide. He joined the Stanford faculty in the Fall of 2001, directing the Telerobotics Lab and teaching dynamics, controls, and telerobotics. He has been a member of the Willow Garage research group since 2009.

Abstract: From the very beginning of telerobotics, we have envisioned using robots to extend our reach and abilities, empowering us to do things we could not ordinarily do. Fifty years later, the success of telesurgery exemplifies this beautiful synergy of human intelligence with robotic execution. But whether in surgery or elsewhere, when we interact with the world we want to feel the world. So the goal of telepresence remains crucial and challenging: fully immersing the operator into the remote environment,
allowing them to feel as well as see everything.
I will share some experiences pursuing this goal. Traditional wisdom suggests improving transparency: feeding sensor information, and especially force signals, to the operator as quickly and directly as possible. Indeed, several advances have helped tighten this connection, from amplifying high-frequency content to handling delays. But the brute-force nature of these approaches leaves us in a bind between performance and stability.
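One well-known way of handling delay in this force-reflecting connection is the wave-variable transformation introduced in the speaker's doctoral work: velocity and force are encoded into wave signals before transmission, which keeps the delayed channel passive. As a minimal illustration (with the wave impedance b as an assumed tuning parameter, not a value from the talk), the transform and its key power identity look like this:

```python
import math

B = 1.0  # wave impedance: an assumed tuning parameter trading force vs. velocity fidelity

def to_waves(xdot, f, b=B):
    """Encode velocity xdot and force f into forward/return wave variables (u, v)."""
    s = math.sqrt(2.0 * b)
    return (b * xdot + f) / s, (b * xdot - f) / s

def from_waves(u, v, b=B):
    """Decode wave variables back into velocity and force."""
    s = math.sqrt(2.0 * b)
    return (u + v) / s, (u - v) * s / 2.0

# The identity f * xdot == (u^2 - v^2) / 2 means power flow is carried
# explicitly by the waves, so delaying u and v cannot inject energy.
u, v = to_waves(0.3, 2.0)
xdot, f = from_waves(u, v)
print(xdot, f, (u * u - v * v) / 2.0)
```

Transmitting u and v (rather than raw force and velocity) over the delayed link is what preserves stability, at the cost of the very directness that transparency asks for.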

These experiences have led me to believe that we need to seek new paths. I will show some work using simple models to mediate the user-world connection. We project the user’s ever-changing impedance directly to the slave robot, preparing the system for imminent events. And we observe an environment model in real time to display to the operator, predicting interactions possibly even before they happen. While these strategies defy tradition and loosen the controlled connection, they lead to better telepresence and practical systems. They also suggest ways to leverage autonomy, incorporating smarter local robot behaviors.
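The core of the model-mediated idea can be sketched in a toy simulation. In this hypothetical example (every number — the delay, the wall stiffness and position, the two-point wall fit — is invented for illustration, not taken from the talk), the remote site transmits estimated model parameters instead of raw force, and the operator feels force rendered locally from that model, with no round-trip delay in the haptic loop:

```python
from collections import deque

DELAY = 20      # one-way channel delay in control ticks (assumed)
K_ENV = 500.0   # true wall stiffness, N/m (hidden from the operator side)
WALL = 0.10     # true wall position, m (hidden from the operator side)

def slave_contact_force(x):
    """Force the slave robot actually measures against the wall."""
    return K_ENV * max(0.0, x - WALL)

def fit_wall_model(samples):
    """Two-point fit of f = k * (x - x_wall) from contact samples (a stand-in
    for a real online estimator)."""
    (x1, f1), (x2, f2) = samples[0], samples[-1]
    if x2 == x1:
        return None
    k = (f2 - f1) / (x2 - x1)
    return k, x2 - f2 / k

def simulate(n_ticks=200):
    to_slave = deque([0.0] * DELAY)    # delayed master position commands
    to_master = deque([None] * DELAY)  # delayed model-parameter updates
    contact_samples = []
    model = None                       # operator's local copy: (k, x_wall)
    felt = []
    for t in range(n_ticks):
        x_master = 0.001 * t           # operator ramps slowly into the wall
        # Remote site: track the delayed command, sample the real wall.
        to_slave.append(x_master)
        x_slave = to_slave.popleft()
        f = slave_contact_force(x_slave)
        if f > 0.0:
            contact_samples.append((x_slave, f))
        update = fit_wall_model(contact_samples) if len(contact_samples) >= 2 else None
        # Return channel carries model parameters, not raw force signals.
        to_master.append(update)
        incoming = to_master.popleft()
        if incoming is not None:
            model = incoming
        # Operator site: render force from the local model, immediately.
        if model is None:
            felt.append(0.0)
        else:
            k, x_wall = model
            felt.append(k * max(0.0, x_master - x_wall))
    return model, felt

model, felt = simulate()
print("estimated (k, x_wall):", model)
```

Once the model parameters arrive, the operator's haptic rendering no longer waits on the channel at all — the delay only governs how quickly the local model catches up to the true environment, which is the loosened-but-practical connection described above.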