Vision-based Robot Localization by Ground to Satellite Matching in GPS-denied Situations

Anirudh Viswanathan, Bernardo R. Pires, and Daniel Huber
Conference Paper, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 192-198, September 2014

Abstract

This paper studies the problem of matching images captured from an unmanned ground vehicle (UGV) to those from a satellite or high-flying vehicle. We focus on situations where the UGV navigates in remote areas with few man-made structures. This is a difficult problem due to the drastic change in perspective between the ground and aerial imagery and the lack of environmental features for image comparison. We do not rely on GPS, which may be jammed or uncertain. We propose a two-step approach: (1) the UGV images are warped to obtain a bird's eye view of the ground, and (2) this view is compared to a grid of satellite locations using whole-image descriptors. We analyze the performance of a variety of descriptors for different satellite map sizes and various terrain and environment types. We incorporate the air-ground matching into a particle-filter framework for localization using the best-performing descriptor. The results show that vision-based UGV localization from satellite maps is not only possible, but often provides better position estimates than GPS estimates, enabling us to improve the location estimates of Google Street View.
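The sketch below is a rough illustration of the two-step pipeline described in the abstract: the ground image is warped to a bird's-eye view with a planar homography, and the warped view is scored against a grid of satellite tiles using a whole-image descriptor. The OpenCV helpers, the HSV color-histogram descriptor, and the tile-grid layout are placeholders chosen for illustration; they are assumptions, not the descriptors or implementation evaluated in the paper.

# Minimal sketch of the ground-to-satellite matching idea, assuming OpenCV and NumPy.
# The ground-plane correspondences, the histogram descriptor, and the tile grid are
# illustrative stand-ins, not the paper's implementation.
import cv2
import numpy as np

def birds_eye_view(ground_img, src_pts, dst_pts, out_size=(256, 256)):
    # Warp the forward-facing camera image to an approximate top-down view
    # using a planar homography (assumes a locally flat ground plane).
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(ground_img, H, out_size)

def whole_image_descriptor(img, bins=32):
    # Placeholder whole-image descriptor: a normalized 2-D hue/saturation histogram.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def score_satellite_grid(bev_img, satellite_tiles):
    # Compare the warped ground view against every satellite tile; the resulting
    # similarities could serve as measurement weights in a particle filter.
    query = whole_image_descriptor(bev_img)
    scores = {}
    for cell, tile in satellite_tiles.items():
        ref = whole_image_descriptor(tile)
        scores[cell] = cv2.compareHist(query, ref, cv2.HISTCMP_CORREL)
    return scores

For localization, the per-tile similarity scores could be mapped to particle weights (e.g., clamped to be non-negative and normalized) and used in the update step of a standard particle filter; this is only one plausible way to connect the matching output to the filtering framework mentioned in the abstract.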

Notes
DOI: 10.1109/IROS.2014.6942560

BibTeX

@conference{Viswanathan-2014-6013,
author = {Anirudh Viswanathan and Bernardo R. Pires and Daniel Huber},
title = {Vision-based Robot Localization by Ground to Satellite Matching in GPS-denied Situations},
booktitle = {Proceedings of (IROS) IEEE/RSJ International Conference on Intelligent Robots and Systems},
year = {2014},
month = {September},
pages = {192--198},
}