Contextual Priming and Feedback for Faster R-CNN

Conference Paper, Proceedings of the European Conference on Computer Vision (ECCV), pp. 330–348, October 2016

Abstract

The field of object detection has seen dramatic performance improvements in the last few years. Most of these gains are attributed to bottom-up, feedforward ConvNet frameworks. In humans, however, top-down information, context, and feedback play an important role in object detection. This paper investigates how to incorporate top-down information and feedback into the state-of-the-art Faster R-CNN framework. Specifically, we propose to: (a) augment Faster R-CNN with a semantic segmentation network; (b) use segmentation for top-down contextual priming; and (c) use segmentation to provide top-down iterative feedback via two-stage training. Our results indicate that all three contributions improve performance on object detection, semantic segmentation, and region proposal generation.
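To make the contextual priming idea concrete, the sketch below shows one plausible way to fuse a segmentation branch back into the shared convolutional features before the region proposal and detection heads. This is a minimal PyTorch illustration under assumed layer sizes and module names (SegmentationBranch, ContextualPriming, the 512-channel feature map), not the authors' original implementation.

```python
# Minimal sketch of "contextual priming": segmentation predictions are
# appended to the shared conv features that feed the RPN and box head,
# giving them a top-down, context-aware signal. Channel counts and module
# names are illustrative assumptions.
import torch
import torch.nn as nn


class SegmentationBranch(nn.Module):
    """Predicts per-pixel class scores from the shared conv features."""

    def __init__(self, in_channels=512, num_classes=21):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, kernel_size=1),  # segmentation logits
        )

    def forward(self, feats):
        return self.head(feats)


class ContextualPriming(nn.Module):
    """Fuses segmentation logits back into the detection features."""

    def __init__(self, feat_channels=512, num_classes=21):
        super().__init__()
        self.seg_branch = SegmentationBranch(feat_channels, num_classes)
        # 1x1 conv mixes the concatenated (features + segmentation) tensor
        # back to the channel count expected by the downstream heads.
        self.fuse = nn.Conv2d(feat_channels + num_classes, feat_channels,
                              kernel_size=1)

    def forward(self, feats):
        seg_logits = self.seg_branch(feats)
        primed = self.fuse(torch.cat([feats, seg_logits], dim=1))
        return primed, seg_logits


# Usage: prime backbone features before passing them to the RPN / RoI heads.
backbone_feats = torch.randn(1, 512, 38, 50)        # e.g. conv5 feature map
primer = ContextualPriming(feat_channels=512, num_classes=21)
primed_feats, seg_logits = primer(backbone_feats)   # feed primed_feats onward
```

The iterative feedback component of the paper would additionally route the segmentation output back to earlier layers over a second training stage; that loop is omitted here for brevity.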

BibTeX

@conference{Shrivastava-2016-113334,
author = {Abhinav Shrivastava and Abhinav Gupta},
title = {Contextual Priming and Feedback for Faster R-CNN},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2016},
month = {October},
pages = {330--348},
}