PhD Thesis Proposal

Adam Villaflor, PhD Student, Robotics Institute, Carnegie Mellon University
Friday, September 16
1:00 pm to 3:30 pm
GHC 4405
Combining Offline Reinforcement Learning with Stochastic Multi-Agent Planning for Autonomous Driving
Abstract:

Fully autonomous vehicles have the potential to greatly reduce vehicular accidents and to revolutionize how people travel and how we transport goods. Many of the major challenges for autonomous driving systems emerge from the numerous traffic situations that require complex interactions with other agents. For the foreseeable future, autonomous vehicles will have to share the road with human drivers and pedestrians, and thus cannot rely on centralized communication to address these interactive scenarios. Therefore, autonomous driving systems need to be able to negotiate with and respond to unknown agents that exhibit uncertain behavior. To tackle these problems, most commercial autonomous driving stacks use a modular approach that splits perception, agent forecasting, and planning into separately engineered modules. Decomposing autonomous driving into smaller modules allows for simplifying abstractions and greater parallelization of the engineering effort.

However, fully separating prediction and planning makes it difficult to reason about how other vehicles will respond to the planned trajectory of the controlled ego-vehicle. Thus, to maintain safety, many modular approaches have to be overly conservative when interacting with other agents. Ideally, we want autonomous vehicles to drive in a natural and confident manner while still maintaining safety. We believe that achieving this behavior requires three major components. First, we need an approach that unifies prediction and planning in a single probabilistic closed-loop planning framework. Second, we need to use a multi-agent formulation in combination with deep learning models that can scale to the complexities of real-world driving and effectively model the interactive, multi-modal distributions of real-world traffic. Finally, we need approaches that can efficiently search the space of potential multi-agent interactions over time in order to produce suitable planned behavior. In this proposal, we will show our current progress in applying deep offline reinforcement learning to autonomous driving and present future work to continue scaling deep learning approaches to more complicated and interactive autonomous driving problems.

Thesis Committee Members:
Jeff Schneider, Chair
John Dolan, Co-Chair
David Held
Philipp Krähenbühl (UT Austin)