Subgradient Methods for Structured Prediction
Abstract
Promising approaches to structured learning problems have recently been developed in the maximum margin framework. Unfortunately, algorithms that are computationally and memory efficient enough to solve large-scale problems have lagged behind. We propose using simple subgradient-based techniques for optimizing a regularized risk formulation of these problems in both online and batch settings, and analyze the theoretical convergence, generalization, and robustness properties of the resulting techniques. These algorithms are simple, memory efficient, fast to converge, and have small regret in the online setting. We also investigate a novel convex regression formulation of structured learning. Finally, we demonstrate the benefits of the subgradient approach on three structured prediction problems.
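To make the approach concrete, the following is a minimal sketch (not the authors' code) of a batch subgradient step on the regularized structured max-margin risk described in the abstract. The helpers `features(x, y)` and `loss_augmented_argmax(w, x, y)` are hypothetical placeholders for the task-specific feature map and loss-augmented inference routine.

```python
import numpy as np

def subgradient_step(w, examples, features, loss_augmented_argmax,
                     lam=1e-3, step=0.1):
    """One subgradient step on
       lam/2 * ||w||^2
       + (1/N) * sum_i [ max_y (w . f(x_i, y) + loss_i(y)) - w . f(x_i, y_i) ]."""
    g = lam * w  # gradient of the regularizer
    for x, y_true in examples:
        # Loss-augmented inference: the most violated labeling under the current w.
        y_hat = loss_augmented_argmax(w, x, y_true)
        # Subgradient of the structured hinge term for this example.
        g += (features(x, y_hat) - features(x, y_true)) / len(examples)
    return w - step * g
```

Run online, the same update is applied per example with a decaying step size, which is what gives the small-regret guarantee mentioned above.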
BibTeX
@conference{Ratliff-2007-9663,
author = {Nathan Ratliff and J. Andrew (Drew) Bagnell and Martin Zinkevich},
title = {Subgradient Methods for Structured Prediction},
booktitle = {Proceedings of 11th International Conference on Artificial Intelligence and Statistics (AISTATS '07)},
year = {2007},
month = {March},
pages = {380--387},
keywords = {structured prediction, optical character recognition, subgradient methods, convex optimization, value function approximation, imitation learning},
}