Event Time: 1:00 pm to 12:00 am
Event Location: Newell Simon Hall 1305
Abstract: Achieving robust and reliable operation in complex, unstructured environments is a central goal of field robotics. As the environments and scenarios to which robots are applied have grown in complexity, so has the challenge of properly defining preferences among candidate actions and the terrains they cause the robot to traverse. These definitions encode the programmed behavior of the robot; their correctness is therefore of the utmost importance. Current manual approaches to creating and adjusting these preference models have proven tedious and time-consuming, and they rarely produce optimal results outside the simplest of circumstances.
This thesis proposes the application of machine learning techniques to automate the construction and tuning of preference models within complex mobile robot systems. Based on the concept of inverse optimal control, expert examples will be used to learn models that generalize from demonstrated preferences. Learning from demonstration approaches will be developed that offer the possibility of significantly reducing the amount of human interaction necessary to tune a system while increasing its final performance. The performance of these techniques will be validated through extensive robot testing, and human interaction experiments will demonstrate the achieved time savings.
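To give a flavor of the inverse optimal control idea described above, the sketch below shows a minimal margin-based weight update on a toy grid world: a planner's minimum-cost path is compared against an expert demonstration, and per-feature cost weights are nudged so the expert's path becomes the cheapest one. The grid, features, learning rate, and update rule here are illustrative assumptions, not the actual system or algorithm of the thesis.

```python
import heapq

def dijkstra(costs, start, goal):
    """Minimum-cost 4-connected path on a grid; cost is paid on entering a cell."""
    rows, cols = len(costs), len(costs[0])
    dist, prev, seen = {start: 0.0}, {}, set()
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == goal:
            break
        r, c = u
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (r + dr, c + dc)
            if 0 <= v[0] < rows and 0 <= v[1] < cols:
                nd = d + costs[v[0]][v[1]]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

def path_features(path, feats):
    """Sum per-cell feature vectors along a path."""
    total = [0.0] * len(feats[0][0])
    for r, c in path:
        for i, f in enumerate(feats[r][c]):
            total[i] += f
    return total

def learn_weights(feats, expert_path, start, goal, lr=0.1, iters=50):
    """Toy inverse-optimal-control loop: adjust feature weights until the
    planner's cheapest path agrees with the expert demonstration."""
    w = [1.0] * len(feats[0][0])
    for _ in range(iters):
        # Linear cost per cell, clamped positive so the planner is well-defined.
        costs = [[max(sum(wi * fi for wi, fi in zip(w, cell)), 1e-3)
                  for cell in row] for row in feats]
        planned = dijkstra(costs, start, goal)
        fe = path_features(expert_path, feats)
        fp = path_features(planned, feats)
        # Raise cost of features the planner overused relative to the expert,
        # lower cost of features the expert preferred.
        w = [wi + lr * (p - e) for wi, p, e in zip(w, fp, fe)]
    return w
```

On a 3x3 grid with "grass" and "road" features, an expert demonstration that hugs the road drives the road weight down relative to grass until the planner reproduces the demonstrated route; once the planner's path matches the expert's, the update vanishes and the weights stop changing.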
Committee: Tony Stentz, Chair
Drew Bagnell
David Wettergreen
Larry Matthies, Jet Propulsion Laboratory