Background Subtraction for Freely Moving Cameras

Yaser Sheikh, Omar Javed, and Takeo Kanade
Conference Paper, Proceedings of the International Conference on Computer Vision (ICCV), pp. 1219 - 1225, September 2009

Abstract

Background subtraction algorithms define the background as the parts of a scene that are at rest. Traditionally, these algorithms assume a stationary camera and identify moving objects by detecting areas in a video that change over time. In this paper, we extend the concept of 'subtracting' areas at rest to video captured from a freely moving camera. We do not assume that the background is well approximated by a plane or that the camera center remains stationary during motion. The method operates entirely on 2D image measurements, without requiring an explicit 3D reconstruction of the scene. A sparse model of the background is built by robustly estimating a compact trajectory basis from the trajectories of salient features across the video, and the background is 'subtracted' by removing trajectories that lie within the space spanned by the basis. Foreground and background appearance models are then built, and an optimal pixel-wise foreground/background labeling is obtained by efficiently maximizing a posterior function.
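
The 'subtraction' step described in the abstract can be pictured as a subspace test on feature trajectories: trajectories belonging to the rigid background lie (approximately) in a low-dimensional space spanned by a compact trajectory basis, while trajectories with a large residual after projection onto that space are candidate foreground. The sketch below illustrates this idea with a RANSAC-style robust basis estimate. The matrix layout, the function name background_trajectory_basis, the basis size, and the thresholds are illustrative assumptions, not the authors' implementation, which additionally builds appearance models and solves a pixel-wise MAP labeling.

import numpy as np

def background_trajectory_basis(W, n_basis=3, n_iters=500, inlier_thresh=2.0, seed=None):
    # W: (2F, N) matrix of N feature trajectories tracked over F frames,
    # each column stacking the x and y image coordinates of one feature.
    # Returns an orthonormal basis of shape (2F, n_basis) and a boolean
    # mask marking the trajectories explained by that basis (background).
    rng = np.random.default_rng(seed)
    two_f, n_traj = W.shape
    best_inliers = np.zeros(n_traj, dtype=bool)
    for _ in range(n_iters):
        # Hypothesize a basis from a minimal random sample of trajectories.
        sample = rng.choice(n_traj, size=n_basis, replace=False)
        basis, _ = np.linalg.qr(W[:, sample])
        # Per-trajectory residual (roughly pixels per frame) after
        # projecting every trajectory onto the hypothesized subspace.
        residual = W - basis @ (basis.T @ W)
        err = np.linalg.norm(residual, axis=0) / np.sqrt(two_f / 2)
        inliers = err < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit the basis on all inliers via SVD for a least-squares estimate.
    U, _, _ = np.linalg.svd(W[:, best_inliers], full_matrices=False)
    return U[:, :n_basis], best_inliers

Trajectories outside the returned mask are treated as candidate foreground; in the paper, these sparse labels then seed the foreground and background appearance models used to obtain the final per-pixel labeling.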

BibTeX

@conference{Sheikh-2009-122201,
author = {Yaser Sheikh and Omar Javed and Takeo Kanade},
title = {Background Subtraction for Freely Moving Cameras},
booktitle = {Proceedings of the International Conference on Computer Vision (ICCV)},
year = {2009},
month = {September},
pages = {1219--1225},
}