Learning Background-Aware Correlation Filters for Visual Tracking

H. Kiani Galoogahi, A. Fagg, and S. Lucey
Conference Paper, Proceedings of the International Conference on Computer Vision (ICCV), pp. 1144-1152, October 2017

Abstract

Correlation Filters (CFs) have recently demonstrated excellent performance in rapidly tracking objects under challenging photometric and geometric variations. The strength of the approach comes from its ability to efficiently learn, on the fly, how the object is changing over time. A fundamental drawback of CFs, however, is that the background of the target is not modeled over time, which can result in suboptimal performance. Recent tracking algorithms have attempted to resolve this drawback by either learning CFs from more discriminative deep features (e.g. DeepSRDCF [9] and CCOT [11]) or learning complex deep trackers (e.g. MDNet [28] and FCNT [33]). While such methods have been shown to work well, they suffer from high complexity: extracting deep features or applying deep tracking frameworks is very computationally expensive. This limits the real-time performance of such methods, even on high-end GPUs. This work proposes a Background-Aware CF based on hand-crafted features (HOG [6]) that can efficiently model how both the foreground and background of the object vary over time. Our approach, like conventional CFs, is extremely computationally efficient, and extensive experiments over multiple tracking benchmarks demonstrate the superior accuracy and real-time performance of our method compared to state-of-the-art trackers.
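
For context on the "learning on the fly" that the abstract refers to, the sketch below shows a minimal single-channel MOSSE-style correlation filter in NumPy: a closed-form filter is fit in the Fourier domain and adapted with a running average each frame. This is not the BACF method of the paper, which learns on multi-channel HOG features and crops the filter to the target support with a binary operator inside an ADMM solver; the class name MosseFilter, the learning rate, and the regularizer are illustrative assumptions.

import numpy as np

def gaussian_response(shape, sigma=2.0):
    # Desired correlation output: a 2-D Gaussian peaked at the window centre.
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

class MosseFilter:
    # Minimal single-channel correlation filter (MOSSE-style); standard
    # preprocessing (log transform, cosine window) is omitted for brevity.

    def __init__(self, patch, lam=1e-2, lr=0.125):
        self.lam, self.lr = lam, lr
        Y = np.fft.fft2(gaussian_response(patch.shape))
        X = np.fft.fft2(patch)
        self.A = Y * np.conj(X)            # numerator accumulator
        self.B = X * np.conj(X) + lam      # regularized denominator accumulator

    def respond(self, patch):
        # Correlate with a new patch; the peak's offset from the window
        # centre gives the estimated target translation.
        H_conj = self.A / self.B           # A/B stores the conjugate filter H*
        return np.real(np.fft.ifft2(np.fft.fft2(patch) * H_conj))

    def update(self, patch):
        # Running-average update: the "on the fly" adaptation that lets CF
        # trackers follow appearance changes frame to frame.
        Y = np.fft.fft2(gaussian_response(patch.shape))
        X = np.fft.fft2(patch)
        self.A = (1 - self.lr) * self.A + self.lr * (Y * np.conj(X))
        self.B = (1 - self.lr) * self.B + self.lr * (X * np.conj(X) + self.lam)

In use, the tracker alternates respond() on a grayscale float patch cropped around the last known position to localize the target, then update() on the re-centred patch to adapt. Because the filter is learned only from the target window, the surrounding background is never explicitly modeled, which is exactly the drawback the abstract describes BACF addressing.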

BibTeX

@conference{Galoogahi-2017-121033,
  author = {H. Kiani Galoogahi and A. Fagg and S. Lucey},
  title = {Learning Background-Aware Correlation Filters for Visual Tracking},
  booktitle = {Proceedings of the International Conference on Computer Vision (ICCV)},
  year = {2017},
  month = {October},
  pages = {1144--1152},
}