Blockwise coordinate descent schemes for efficient and effective dictionary learning

Bao-Di Liu, Yuxiong Wang, Bin Shen, Xue Li, Yu-Jin Zhang, and Yan-Jiang Wang
Journal Article, Neurocomputing, Vol. 178, pp. 25-35, February 2016

Abstract

Sparse representation based dictionary learning, usually viewed as a method for rearranging the structure of the original data so that its energy is compact over a non-orthogonal and over-complete dictionary, is widely used in signal processing, pattern recognition, machine learning, statistics, and neuroscience. The current sparse representation framework decouples the optimization problem into two subproblems, i.e., alternating sparse coding and dictionary learning solved by different optimizers, treating the elements of the dictionary and the codes separately. In this paper, we treat the elements of both the dictionary and the codes homogeneously. The original optimization is decoupled directly into several blockwise alternating subproblems rather than the two above, so that sparse coding and dictionary learning are unified in a single optimization. More precisely, the variables involved in the optimization problem are partitioned into several suitable blocks with convexity preserved, making it possible to perform exact blockwise coordinate descent. For each separable subproblem, a closed-form solution is obtained based on the convexity and monotonicity of the parabolic function. The algorithm is thus simple, efficient, and effective. Experimental results show that our algorithm significantly accelerates the learning process. An application to image classification further demonstrates the efficiency of our proposed optimization strategy.
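
The key idea of the abstract, exact blockwise coordinate descent with one closed-form update per block, can be illustrated with a short sketch. The code below is a minimal NumPy illustration, not the paper's reference implementation: it assumes the standard objective min_{D,S} ||X - D S||_F^2 + lam * ||S||_1 with unit-norm atoms, and uses the textbook closed-form block updates (soft-thresholding for each code row, least-squares direction plus renormalization for each dictionary column). All names (bcd_dictionary_learning, lam) are illustrative.

import numpy as np

def bcd_dictionary_learning(X, n_atoms, lam=0.1, n_iters=50, seed=0):
    """Blockwise coordinate descent sketch for
    min_{D,S} ||X - D S||_F^2 + lam * ||S||_1,  s.t. ||d_k||_2 = 1.
    Each row of S and each column of D is one block with an
    exact closed-form update.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms
    S = np.zeros((n_atoms, n))

    R = X - D @ S  # residual, maintained incrementally across blocks
    for _ in range(n_iters):
        for k in range(n_atoms):
            # Add atom k's current contribution back into the residual,
            # so R now excludes block k.
            R += np.outer(D[:, k], S[k])
            # Closed-form code update: with ||d_k|| = 1, each entry of
            # row k minimizes a parabola plus an l1 term, which gives
            # soft-thresholding of the correlations at lam / 2.
            corr = D[:, k] @ R
            S[k] = np.sign(corr) * np.maximum(np.abs(corr) - lam / 2, 0.0)
            # Closed-form dictionary update: the unit-norm minimizer is
            # the least-squares direction R s_k^T, renormalized.
            atom = R @ S[k]
            norm = np.linalg.norm(atom)
            if norm > 1e-12:  # keep the old atom if row k is all zeros
                D[:, k] = atom / norm
            # Remove the updated contribution of block k again.
            R -= np.outer(D[:, k], S[k])
    return D, S

Maintaining the residual R incrementally keeps every block update at one rank-one correction plus a matrix-vector product, which is what makes per-block closed-form updates cheap in practice.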

BibTeX

@article{Liu-2016-122557,
author = {Bao-Di Liu and Yuxiong Wang and Bin Shen and Xue Li and Yu-Jin Zhang and Yan-Jiang Wang},
title = {Blockwise coordinate descent schemes for efficient and effective dictionary learning},
journal = {Neurocomputing},
year = {2016},
month = {February},
volume = {178},
pages = {25--35},
}