Robust Regression

Dong Huang, Ricardo Silveira Cabral, and Fernando De la Torre
Conference Paper, Proceedings of the European Conference on Computer Vision (ECCV), pp. 616-630, October 2012

Abstract

Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. Regression methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing regression methods is that samples are directly projected onto a subspace and hence fail to account for outliers, which are common in realistic training sets due to occlusion, specular reflections or noise. It is important to note that in existing regression methods, and discriminative methods in general, the regressor variables X are assumed to be noise free. Because of this assumption, discriminative methods suffer a significant degradation in performance when gross outliers are present.
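As a concrete illustration of this sensitivity (not taken from the paper; the corruption model, sample sizes and variable names below are illustrative assumptions), the following Python/NumPy sketch fits ordinary least-squares regression once on clean features and once on the same features after a handful of samples receive gross corruptions, then compares the two estimates on clean test data:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression problem: y = X w_true + small noise.
n_samples, n_features = 200, 10
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.01 * rng.normal(size=n_samples)

# Corrupt the regressors of a few samples with gross outliers
# (standing in for occlusion, specular reflections or sensor noise).
X_corrupt = X.copy()
outlier_idx = rng.choice(n_samples, size=10, replace=False)
X_corrupt[outlier_idx] += 20.0 * rng.normal(size=(10, n_features))

# Least-squares fits on clean vs. corrupted features.
w_clean, *_ = np.linalg.lstsq(X, y, rcond=None)
w_dirty, *_ = np.linalg.lstsq(X_corrupt, y, rcond=None)

# Evaluate both estimates on clean held-out data.
X_test = rng.normal(size=(1000, n_features))
y_test = X_test @ w_true
print("test MSE, trained on clean features:    ",
      np.mean((X_test @ w_clean - y_test) ** 2))
print("test MSE, trained on corrupted features:",
      np.mean((X_test @ w_dirty - y_test) ** 2))

Even a small fraction of grossly corrupted samples noticeably degrades the least-squares estimate, since the squared loss treats the corrupted regressors as exact measurements.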

Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of Robust Regression (RR) and presents an effective convex approach that uses recent advances in rank minimization. The framework applies to a variety of problems in computer vision, including robust linear discriminant analysis, multi-label classification and head pose estimation from images. Several synthetic and real-world examples are used to illustrate the benefits of RR.
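As a rough sketch of the general idea the abstract alludes to (decomposing the observed features into a low-rank clean component plus sparse gross errors, and coupling that decomposition with the regression loss), one plausible convex surrogate is written below; the symbols D (clean data), E (sparse errors) and W (regressor), and the weights lambda and gamma, are illustrative assumptions rather than the exact objective used in the paper:

\min_{W,\, D,\, E} \; \|Y - W D\|_F^2 \; + \; \lambda \|D\|_* \; + \; \gamma \|E\|_1
\qquad \text{s.t.} \quad X = D + E,

where \|D\|_* is the nuclear norm (sum of singular values), the standard convex relaxation of rank, and the \ell_1 penalty on E promotes sparse, gross errors. Fitting the regressor W to the clean component D rather than to the raw features X is what relaxes the noise-free assumption on the regressor variables.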

BibTeX

@conference{Huang-2012-120905,
author = {Dong Huang and Ricardo Silveira Cabral and Fernando De la Torre},
title = {Robust Regression},
booktitle = {Proceedings of (ECCV) European Conference on Computer Vision},
year = {2012},
month = {October},
pages = {616 - 630},
}