GNSS-denied Ground Vehicle Localization for Off-road Environments with Bird’s-eye-view Synthesis
Abstract:
Global localization is essential for the smooth navigation of autonomous vehicles. To obtain accurate vehicle states, on-board localization systems typically rely on Global Navigation Satellite System (GNSS) modules for consistent and reliable global positioning. However, GNSS signals can be obstructed by natural or artificial barriers, leading to temporary system failures and degraded state estimation. Moreover, off-road driving presents unique challenges for ground vehicles: irregular terrain creates unstable surfaces for traversal and degrades state estimation accuracy. Visual odometry performance may also suffer from the lack of distinct and reliable features in such environments. To address these challenges, we propose a novel learning-based method that synthesizes a local bird’s-eye-view (BEV) image of the surrounding area by aggregating visual features from camera images. The proposed model combines a deformable-attention network with an image rendering head to generate top-down BEV images. The synthesized images are then matched against an aerial map for cross-view vehicle registration in GNSS-denied off-road environments. Extensive real-world experiments validate our method’s advancement over existing GNSS-denied visual localization methods, demonstrating notable improvements in both localization accuracy and registration frequency. Our method also effectively reduces visual-inertial odometry (VIO) drift when integrated with an on-board VIO system via factor graph optimization.
Committee:
Prof. Michael Kaess (chair)
Dr. Wenshan Wang
Easton Potokar