Unsupervised Patch-based Context from Millions of Images
Tech. Report CMU-RI-TR-11-38, Robotics Institute, Carnegie Mellon University, December 2011
Abstract
The amount of labeled training data required for image interpretation tasks is a major drawback of current methods. How can we use the gigantic collection of unlabeled images available on the web to aid these tasks? In this paper, we present a simple approach based on the notion of patch-based context to extract useful priors for regions within a query image from a large collection of (6 million) unlabeled images. This contextual prior over image classes acts as a non-redundant, complementary source of knowledge that helps disambiguate confusions in the predictions of local region-level features. We demonstrate our approach on the challenging tasks of region classification and surface layout estimation.
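To make the idea concrete, below is a minimal sketch (not the paper's implementation) of how a contextual prior gathered from retrieved patches might be fused with an ambiguous local prediction. All function names, the log-linear fusion rule, and the toy three-class example are illustrative assumptions, not details taken from the report.

import numpy as np

# Hypothetical sketch: fuse a local region classifier's class scores with a
# contextual prior estimated from patches retrieved from a large unlabeled
# image collection. Names and the fusion rule are illustrative assumptions.

def contextual_prior_from_neighbors(neighbor_label_histograms):
    """Average the class histograms of retrieved patch neighbors to form a
    prior over classes; a small constant keeps every class above zero."""
    prior = np.mean(neighbor_label_histograms, axis=0) + 1e-3
    return prior / prior.sum()

def combine_local_and_context(local_scores, context_prior, alpha=0.5):
    """Simple log-linear combination of local scores and the contextual
    prior; alpha controls how much weight the context receives."""
    local = np.clip(local_scores, 1e-6, None)
    fused = np.exp((1 - alpha) * np.log(local) + alpha * np.log(context_prior))
    return fused / fused.sum()

# Toy usage with 3 classes (e.g., sky / vertical / ground in surface layout).
local_scores = np.array([0.40, 0.35, 0.25])        # ambiguous local prediction
neighbors = np.array([[0.80, 0.10, 0.10],          # histograms of retrieved patches
                      [0.70, 0.20, 0.10],
                      [0.90, 0.05, 0.05]])
prior = contextual_prior_from_neighbors(neighbors)
print(combine_local_and_context(local_scores, prior))  # context resolves the tie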
BibTeX
@techreport{Divvala-2011-7413,
  author      = {Santosh Kumar Divvala and Alexei A. Efros and Martial Hebert and Svetlana Lazebnik},
  title       = {Unsupervised Patch-based Context from Millions of Images},
  year        = {2011},
  month       = {December},
  institution = {Carnegie Mellon University},
  address     = {Pittsburgh, PA},
  number      = {CMU-RI-TR-11-38},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.