Learning-based Lane Following and Changing Behaviors for Autonomous Vehicle
Abstract
This thesis explores learning-based methods for generating human-like lane following and lane changing behaviors in on-road autonomous driving. Our main contributions are: 1) an efficient vision-based end-to-end learning system for on-road driving; 2) a novel attention-based learning architecture with a hierarchical action space that learns lane changing behavior through deep reinforcement learning; 3) an LSTM-based model that predicts vehicle trajectories from demonstrations of human driving.

We first propose an end-to-end imitation learning algorithm that teaches a car to drive on-road from visual input. The basic principle is to construct a neural network that maps from image input to steering angle and acceleration. To improve maneuver stability and training efficiency, we apply transfer learning from related tasks, use an LSTM to incorporate temporal information, augment the input with segmentation results, and add sensor fusion. We evaluate our model in the Udacity simulator and obtain smooth driving performance on unseen curvy maps.

We then extend the lane following task to lane changing using deep reinforcement learning. This model-free approach avoids direct human supervision, reducing the need for extensively annotated training data. The contribution here is that we formulate lane change behavior as a hierarchical action and propose a model that performs deep reinforcement learning in this high-dimensional, structured action space. We also explore attention mechanisms in deep reinforcement learning and observe improved behavior after applying spatial and temporal attention. The overall algorithm is tested and evaluated on the TORCS platform.

Finally, we address trajectory prediction in on-road driving. The aim is to discover when and how people decide to change lanes. We divide the task into predicting the driver's discrete intention and forecasting the subsequent continuous trajectory. We solve this sequential prediction task with an LSTM and further extend the model to capture information about the surrounding environment. We compare and evaluate our predictions against real human driving trajectories in the NGSIM dataset.
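As a concrete illustration of the end-to-end architecture described above, the sketch below shows a CNN feature extractor followed by an LSTM that maps a short sequence of camera frames to steering angle and acceleration. This is a minimal, hypothetical PyTorch sketch, not the thesis implementation; the layer sizes, sequence length, and input resolution are assumptions chosen for illustration only.

# Minimal sketch (assumed architecture, not the thesis code): CNN + LSTM mapping a
# sequence of camera frames to [steering angle, acceleration].
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self, hidden_size=128):
        super().__init__()
        # Convolutional feature extractor applied to each frame independently.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # LSTM aggregates per-frame features over time (temporal information).
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size, batch_first=True)
        # Regression head outputs the two control values.
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])  # predict controls from the last time step

# Example usage: a batch of 4 sequences of 5 RGB frames (resolution is an assumption).
model = EndToEndDriver()
controls = model(torch.randn(4, 5, 3, 66, 200))  # -> shape (4, 2): steering, acceleration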
BibTeX
@mastersthesis{Chen-2018-106042,
author = {Yilun Chen},
title = {Learning-based Lane Following and Changing Behaviors for Autonomous Vehicle},
year = {2018},
month = {May},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-18-26},
keywords = {Autonomous Driving, Deep Learning, Deep Reinforcement Learning, Lane Change Behavior, End-to-end Learning, Vehicle Trajectory Prediction, Driver Intention Prediction},
}