Camera-based Semantic Enhanced Vehicle Segmentation for Planar LIDAR
Abstract
Vehicle segmentation is an important step in perception for autonomous vehicles, providing object-level environmental understanding. Its performance directly affects downstream modules such as decision-making and trajectory planning. However, this task is challenging for planar LIDAR due to its limited vertical field of view (FOV) and sparse, noisy point returns. In addition, directly estimating the 3D location, dimensions, and heading of vehicles from an image is difficult because a monocular camera provides only limited depth information. We propose a method that fuses a vision-based instance segmentation algorithm with a LIDAR-based segmentation algorithm to achieve accurate 2D bird's-eye-view object segmentation. The method combines the advantages of both sensors: the camera helps prevent over-segmentation in the LIDAR data, while LIDAR segmentation removes false positives from the regions of interest in the vision results. A modified T-linkage RANSAC is then applied to remove remaining outliers. Better segmentation in turn yields better orientation estimation. We achieve a promising improvement in average absolute heading error and 2D IoU on both a reduced-resolution KITTI dataset and our Cadillac SRX planar LIDAR dataset.
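As a concrete illustration of the fusion step, the minimal Python sketch below keeps only the LIDAR points whose image projections fall inside a camera instance mask, then rejects remaining outliers with a plain RANSAC line fit on the bird's-eye-view points. All function names, parameters, and thresholds here are illustrative assumptions, and the plain RANSAC is a simple stand-in for the modified T-linkage RANSAC described above, not the authors' implementation.

# Minimal sketch of the camera-LIDAR fusion idea, assuming the LIDAR points
# have already been projected into the image plane. A plain RANSAC line fit
# stands in for the paper's modified T-linkage RANSAC; all names and
# thresholds are illustrative assumptions.
import numpy as np

def fuse_with_instance_mask(points_uv, points_xy, instance_mask):
    # Camera side of the fusion: keep only LIDAR points whose image
    # projection (u, v) lands on a vehicle instance mask. Points outside
    # the mask are treated as vision false positives or background.
    u = points_uv[:, 0].astype(int)
    v = points_uv[:, 1].astype(int)
    h, w = instance_mask.shape
    in_image = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(len(points_uv), dtype=bool)
    keep[in_image] = instance_mask[v[in_image], u[in_image]]
    return points_xy[keep]

def ransac_line_inliers(points_xy, n_iters=200, threshold=0.1, seed=0):
    # LIDAR side: fit a dominant line (e.g. a visible vehicle side) to the
    # 2D bird's-eye-view points and keep its inliers, discarding stray
    # returns that the image mask let through.
    if len(points_xy) < 2:
        return points_xy
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points_xy), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points_xy), size=2, replace=False)
        d = points_xy[j] - points_xy[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        n = np.array([-d[1], d[0]]) / norm          # unit normal of line ij
        dist = np.abs((points_xy - points_xy[i]) @ n)
        inliers = dist < threshold
        if inliers.sum() > best.sum():
            best = inliers
    return points_xy[best]

In such a sketch, the direction of the fitted line gives a heading estimate for the segmented vehicle, which is why a cleaner inlier set translates directly into a lower heading error.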
BibTeX
@conference{Fu-2018-113464,
author = {Chen Fu and Peiyun Hu and Chiyu Dong and Christoph Mertz and John M. Dolan},
title = {Camera-based Semantic Enhanced Vehicle Segmentation for Planar LIDAR},
booktitle = {Proceedings of IEEE Intelligent Transportation Systems Conference (ITSC '18)},
year = {2018},
month = {November},
pages = {3805--3810},
keywords = {autonomous driving, perception, object detection, segmentation, LIDAR},
}