
RI Seminar

Tim Barfoot, Associate Professor, University of Toronto
Friday, October 17
3:30 pm to 4:30 pm
Long-Term Visual Route Following for Mobile Robots

Event Location: NSH 1305
Bio: Dr. Timothy Barfoot (Associate Professor, University of Toronto Institute for Aerospace Studies, UTIAS) holds the Canada Research Chair (Tier II) in Autonomous Space Robotics and works in the area of guidance, navigation, and control of mobile robots for space and terrestrial applications. He is interested in developing methods to allow mobile robots to operate over long periods of time in large-scale, unstructured, three-dimensional environments, using rich onboard sensing (e.g., cameras and laser rangefinders) and computation. Dr. Barfoot took up his position at UTIAS in May 2007, after spending four years at MDA Space Missions, where he developed autonomous vehicle navigation technologies for both planetary rovers and terrestrial applications such as underground mining. He holds an Ontario Early Researcher Award and is a licensed Professional Engineer in the Province of Ontario. He sits on the editorial boards of the International Journal of Robotics Research and the Journal of Field Robotics, and is currently serving as General Chair of Field and Service Robotics (FSR) 2015, which will be held in Toronto.

Abstract: In this talk I will describe a particular approach to visual route following for mobile robots that we have developed, called Visual Teach & Repeat (VT&R), and what I think the next steps are to make this system usable in real-world applications. We can think of VT&R as a simple form of simultaneous localization and mapping (without the loop closures) combined with a path-tracking controller; the idea is to pilot a robot manually along a route once and then have it repeat the route (in its own tracks) autonomously many, many times using only visual feedback. VT&R is useful for applications such as load delivery (mining), sample return (space exploration), and perimeter patrol (security). Although we have demonstrated this technique over more than 500 km of driving on several different robots, there are still many challenges to meet before it is ready for real-world use. These include (i) visual scene changes such as lighting, (ii) physical scene changes such as path obstructions, and (iii) vehicle changes such as tire wear. I'll discuss our progress to date in addressing these issues and the next steps moving forward. There will be lots of videos.
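
To make the teach-and-repeat idea concrete, here is a minimal, self-contained Python sketch of the two phases under toy assumptions: the taught map is reduced to a chain of keyframe poses, and "visual localization" is replaced by a direct nearest-keyframe lookup, where a real system would instead match live camera features against imagery stored with each keyframe. All names and gains below are hypothetical illustrations, not the VT&R implementation discussed in the talk.

# Toy simulation of the teach-and-repeat concept. This is an
# illustration of the idea in the abstract, not the actual VT&R system:
# visual localization is faked by looking up the nearest taught keyframe.

import math
from dataclasses import dataclass

@dataclass
class Keyframe:
    x: float
    y: float
    heading: float  # radians; a real keyframe would also store image features

def teach(path_points, spacing=0.5):
    """Teach pass: record a keyframe every `spacing` metres along a
    manually driven route (here, a scripted list of (x, y) points)."""
    keyframes, last = [], None
    for i, (x, y) in enumerate(path_points):
        if last is None or math.hypot(x - last.x, y - last.y) >= spacing:
            nx, ny = path_points[min(i + 1, len(path_points) - 1)]
            last = Keyframe(x, y, math.atan2(ny - y, nx - x))
            keyframes.append(last)
    return keyframes

def nearest_keyframe(keyframes, x, y):
    # Stand-in for visual localization against the taught map.
    return min(keyframes, key=lambda k: math.hypot(k.x - x, k.y - y))

def repeat(keyframes, start, steps=400, dt=0.05, v=1.0):
    """Repeat pass: localize against the nearest keyframe and steer to
    null out cross-track and heading error (a simple path-tracking law)."""
    x, y, th = start
    for _ in range(steps):
        k = nearest_keyframe(keyframes, x, y)
        # Cross-track error expressed in the keyframe's frame.
        ey = -math.sin(k.heading) * (x - k.x) + math.cos(k.heading) * (y - k.y)
        # Heading error, wrapped to (-pi, pi].
        eth = math.atan2(math.sin(th - k.heading), math.cos(th - k.heading))
        omega = -1.5 * ey - 2.0 * eth  # proportional steering correction
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += omega * dt
    return x, y, th

# Teach a straight 20 m route, then repeat it from a pose that starts
# 1 m off the taught line; the controller pulls the robot back on track.
route = [(i * 0.1, 0.0) for i in range(201)]
kfs = teach(route)
print("final pose after repeat:", repeat(kfs, start=(0.0, 1.0, 0.0)))

The proportional law on cross-track and heading error stands in for the path-tracking controller mentioned in the abstract; the gains (1.5 and 2.0) are arbitrary but give a well-damped return to the taught line in this toy setup.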