Approximate Grassmannian Intersections: Subspace-Valued Subspace Learning
Abstract
Subspace learning is one of the most foundational tasks in computer vision with applications ranging from dimensionality reduction to data denoising. As geometric objects, subspaces have also been successfully used for efficiently representing certain types of invariant data. However, methods for subspace learning from subspace-valued data have been notably absent due to incompatibilities with standard problem formulations. To fill this void, we introduce Approximate Grassmannian Intersections (AGI), a novel geometric interpretation of subspace learning posed as finding the approximate intersection of constraint sets on a Grassmann manifold. Our approach can naturally be applied to input subspaces of varying dimension while reducing to standard subspace learning in the case of vector-valued data. Despite the nonconvexity of our problem, its globally-optimal solution can be found using a singular value decomposition. Furthermore, we also propose an efficient, general optimization approach that can incorporate additional constraints to encourage properties such as robustness. Alongside standard subspace applications, AGI also enables the novel task of transfer learning via subspace completion. We evaluate our approach on a variety of applications, demonstrating improved invariance and generalization over vector-valued alternatives.
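The abstract states that, despite nonconvexity, the globally optimal solution can be recovered with a singular value decomposition. As a minimal illustrative sketch (not the paper's actual formulation, whose details are not given here), one natural instance of this idea finds the rank-r subspace maximizing total projection energy onto a collection of input subspaces of possibly varying dimension; this is solved by the top-r left singular vectors of the concatenated orthonormal bases. The function name and example data below are hypothetical:

```python
import numpy as np

def approximate_intersection(bases, r):
    """Illustrative approximate intersection of subspaces via SVD.

    bases: list of (d x k_i) matrices with orthonormal columns;
           the dimensions k_i may differ across inputs.
    r:     dimension of the output subspace.

    Returns a (d x r) orthonormal basis spanning the subspace that
    maximizes the total projection energy sum_i ||B_i^T V||_F^2,
    i.e. the top-r left singular vectors of [B_1 | ... | B_n].
    """
    stacked = np.hstack(bases)
    U, _, _ = np.linalg.svd(stacked, full_matrices=False)
    return U[:, :r]

# Example: two planes in R^3 sharing the direction e1.
B1 = np.eye(3)[:, [0, 1]]   # span{e1, e2}
B2 = np.eye(3)[:, [0, 2]]   # span{e1, e3}
v = approximate_intersection([B1, B2], r=1)
# The recovered line aligns with the shared direction e1.
print(abs(v[0, 0]))  # close to 1
```

Note that this sketch handles inputs of varying dimension and, when each input is a single unit vector, reduces to ordinary PCA-style subspace learning, mirroring the reduction to vector-valued data mentioned in the abstract.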
BibTeX
@conference{Murdock-2017-120879,
author = {C. Murdock and F. De la Torre},
title = {Approximate Grassmannian Intersections: Subspace-Valued Subspace Learning},
booktitle = {Proceedings of (ICCV) International Conference on Computer Vision},
year = {2017},
month = {October},
pages = {4318--4326},
}