Domain Randomization for Scene-Specific Car Detection and Pose Estimation

Rawal Khirodkar, Donghyun Yoo, and Kris M. Kitani
Conference Paper, Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '19), pp. 1932-1940, January 2019

Abstract

We address the issue of the domain gap that arises when using synthetic data to train a scene-specific object detector and pose estimator. While previous works have shown that the constraints of learning a scene-specific model can be leveraged to create geometrically and photometrically consistent synthetic data, care must be taken to design synthetic content that is as close as possible to the real-world data distribution. In this work, we propose to bridge the domain gap through appearance randomization, generating a wide range of synthetic objects that span the space of realistic images for training. An ablation study is presented to delineate the individual contributions of the different components of the randomization process. We evaluate our method on the VIRAT, UA-DETRAC, and EPFL-Car datasets, where we demonstrate that using scene-specific, domain-randomized synthetic data outperforms fine-tuning off-the-shelf models on limited real data.
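To make the idea of appearance randomization concrete, the sketch below samples randomized rendering parameters (car color, texture, lighting, camera jitter, distractor clutter) that a scene-specific renderer could consume to produce varied synthetic training images. This is an illustrative assumption, not the authors' released code; all names (RenderParams, sample_randomized_scene) and parameter ranges are hypothetical.

```python
# Illustrative sketch of appearance randomization; not the paper's implementation.
import random
from dataclasses import dataclass


@dataclass
class RenderParams:
    car_rgb: tuple            # randomized body color
    texture_id: int           # index into a pool of random textures
    light_azimuth_deg: float  # randomized light direction
    light_elevation_deg: float
    light_intensity: float
    camera_yaw_jitter_deg: float  # small perturbation of the fixed scene camera
    num_distractors: int      # random clutter objects placed in the scene


def sample_randomized_scene(texture_pool_size: int = 50) -> RenderParams:
    """Draw one set of randomized appearance parameters for a synthetic render.

    A scene-specific renderer would combine these parameters with the fixed
    scene geometry and camera calibration to produce one training image.
    """
    return RenderParams(
        car_rgb=tuple(random.random() for _ in range(3)),
        texture_id=random.randrange(texture_pool_size),
        light_azimuth_deg=random.uniform(0.0, 360.0),
        light_elevation_deg=random.uniform(15.0, 90.0),
        light_intensity=random.uniform(0.5, 2.0),
        camera_yaw_jitter_deg=random.uniform(-2.0, 2.0),
        num_distractors=random.randint(0, 10),
    )


if __name__ == "__main__":
    # Generate a small batch of randomized scene configurations.
    for _ in range(3):
        print(sample_randomized_scene())
```

In practice, each sampled configuration would be rendered against the known scene background, with ground-truth boxes and poses obtained for free from the renderer.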

BibTeX

@conference{Khirodkar-2019-122791,
author = {Rawal Khirodkar and Donghyun Yoo and Kris M. Kitani},
title = {Domain Randomization for Scene-Specific Car Detection and Pose Estimation},
booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '19)},
year = {2019},
month = {January},
pages = {1932--1940},
}