Incorporating Background Invariance into Feature-Based Object Recognition

Workshop Paper, 7th IEEE Workshops on Applications of Computer Vision (WACV/MOTION '05), pp. 37-44, 2005

Abstract

Current feature-based object recognition methods use information derived from local image patches. For robustness, features are engineered for invariance to various transformations, such as rotation, scaling, or affine warping. When patches overlap object boundaries, however, errors in both detection and matching will almost certainly occur due to the inclusion of unwanted background pixels. This is common in real images, which often contain significant background clutter, objects that are not heavily textured, or objects that occupy a relatively small portion of the image. We suggest improvements to the popular Scale Invariant Feature Transform (SIFT) that incorporate local object boundary information. The resulting feature detection and descriptor creation processes are invariant to changes in background. We call this method the Background and Scale Invariant Feature Transform (BSIFT). We demonstrate BSIFT's superior performance in feature detection and matching on synthetic and natural images.
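
As a rough illustration of the idea described in the abstract, the following NumPy sketch builds a gradient-orientation histogram for a patch while masking out background pixels, so the histogram is unchanged when only the background varies. This is not the authors' BSIFT implementation; it assumes a binary foreground mask is already available (e.g., from a segmentation or boundary detector), and all function names and parameters are hypothetical.

import numpy as np

def erode(mask):
    # 4-neighbor erosion: keep a pixel only if all of its in-bounds
    # up/down/left/right neighbors are also foreground.
    m = mask.astype(bool)
    e = m.copy()
    e[1:, :] &= m[:-1, :]
    e[:-1, :] &= m[1:, :]
    e[:, 1:] &= m[:, :-1]
    e[:, :-1] &= m[:, 1:]
    return e

def masked_orientation_histogram(patch, fg_mask, n_bins=8):
    # Gradient-orientation histogram that counts only foreground pixels
    # whose finite-difference stencil also lies inside the foreground,
    # so background pixels cannot influence the result.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2.0 * np.pi)
    # Erode the mask so that no surviving pixel's gradient estimate
    # reaches across the object boundary into the background.
    weights = mag * erode(fg_mask)
    bins = np.minimum((ang * n_bins / (2.0 * np.pi)).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=weights.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Demo: the descriptor is unchanged when only the background half varies.
rng = np.random.default_rng(0)
patch = rng.random((16, 16))
mask = np.zeros((16, 16))
mask[:, :8] = 1.0                       # left half is "object"
patch2 = patch.copy()
patch2[:, 8:] = rng.random((16, 8))     # resample the "background" half
h1 = masked_orientation_histogram(patch, mask)
h2 = masked_orientation_histogram(patch2, mask)
assert np.allclose(h1, h2)              # histogram ignores the background

The erosion step matters in this toy version: finite-difference gradients at the object boundary reach one pixel into the background, so pixels whose stencil crosses the boundary must be dropped for the histogram to be strictly background-invariant.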

BibTeX

@inproceedings{Stein-2005-9099,
  author    = {Andrew Stein and Martial Hebert},
  title     = {Incorporating Background Invariance into Feature-Based Object Recognition},
  booktitle = {Proceedings of the 7th IEEE Workshops on Applications of Computer Vision (WACV/MOTION '05)},
  year      = {2005},
  month     = {January},
  pages     = {37--44},
  keywords  = {object recognition, features, SIFT, BSIFT},
}