
Learning object models from few examples

Conference Paper, Proceedings of SPIE Unmanned Systems Technology XVIII, Vol. 9837, May 2016

Abstract

Current computer vision systems rely primarily on fixed models learned in a supervised fashion, i.e., with extensive manually labelled data. This is appropriate in scenarios in which information about all possible visual queries can be anticipated in advance, but it does not scale to scenarios in which new objects must be added during the operation of the system, as in dynamic interaction with UGVs. For example, the user might have found a new type of object of interest, e.g., a particular vehicle, that needs to be added to the system right away. In such cases, acquiring and annotating the extensive data that the supervised approach requires is not practical. In this paper, we describe techniques for rapidly updating or creating models from sparsely labelled data. The techniques address scenarios in which only a few annotated training samples are available and must be used to generate models suitable for recognition. Such approaches are crucial for on-the-fly insertion of models by users and for on-line learning.
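
To make the few-shot setting described in the abstract concrete, here is a minimal illustrative sketch, not the method proposed in the paper: a new object category is registered from a handful of labelled examples by averaging their precomputed feature vectors into a class prototype and classifying queries by nearest prototype. The feature extractor (e.g., a fixed pretrained network), the class names, and the FewShotNearestMean helper are all assumptions introduced for illustration.

import numpy as np


class FewShotNearestMean:
    """Keeps one mean feature vector ("prototype") per known class.

    Illustrative sketch only; assumes image features are precomputed
    elsewhere and supplied as plain numpy arrays.
    """

    def __init__(self):
        self.prototypes = {}  # class name -> L2-normalised mean feature

    @staticmethod
    def _normalise(v):
        return v / (np.linalg.norm(v) + 1e-8)

    def add_class(self, name, features):
        """Register (or update) a class from a few example features.

        features: array of shape (n_examples, feature_dim); n_examples
        can be as small as 1, which is the few-shot setting.
        """
        mean = np.asarray(features, dtype=np.float64).mean(axis=0)
        self.prototypes[name] = self._normalise(mean)

    def classify(self, feature):
        """Return the class whose prototype has the highest cosine similarity."""
        f = self._normalise(np.asarray(feature, dtype=np.float64))
        scores = {name: float(f @ proto) for name, proto in self.prototypes.items()}
        return max(scores, key=scores.get), scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 128

    # Synthetic stand-in "features" for two existing classes and one new
    # class that a user inserts on the fly from only three annotated examples.
    centres = {name: rng.normal(size=dim) for name in ["car", "person", "new_vehicle"]}
    model = FewShotNearestMean()
    for name, centre in centres.items():
        examples = centre + 0.1 * rng.normal(size=(3, dim))
        model.add_class(name, examples)

    query = centres["new_vehicle"] + 0.1 * rng.normal(size=dim)
    label, _ = model.classify(query)
    print("predicted:", label)

The prototype-based design is one common way to insert a model on the fly: adding a category touches only a dictionary entry, requires no retraining of the feature extractor, and degrades gracefully as more examples arrive and the class mean improves.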

BibTeX

@conference{Misra-2016-122555,
author = {Ishan Misra and Yuxiong Wang and Martial Hebert},
title = {Learning object models from few examples},
booktitle = {Proceedings of SPIE Unmanned Systems Technology XVIII},
year = {2016},
month = {May},
volume = {9837},
}