arXiv Analytics

arXiv:1206.4639 [cs.LG]

Adaptive Regularization for Weight Matrices

Koby Crammer, Gal Chechik

Published 2012-06-18 (Version 1)

Algorithms for learning distributions over weight vectors, such as AROW, were recently shown empirically to achieve state-of-the-art performance on various problems, with strong theoretical guarantees. Extending these algorithms to matrix models poses challenges, since the number of free parameters in the covariance of the distribution scales as $n^4$ with the dimension $n$ of the matrix, and $n$ tends to be large in real applications. We describe, analyze, and experiment with two new algorithms for learning distributions over matrix models. Our first algorithm maintains a diagonal covariance over the parameters and can handle large covariance matrices. The second algorithm factors the covariance to capture inter-feature correlations while keeping the number of parameters linear in the size of the original matrix. We analyze both algorithms in the mistake-bound model and show that our approach achieves superior precision over other algorithms on two tasks: retrieving similar images and ranking similar documents. The factored algorithm is shown to attain a faster convergence rate.
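To make the parameter-count trade-off concrete, the following is a minimal sketch of an AROW-style online update with a diagonal covariance, in the spirit of the first algorithm described above. This is an illustrative assumption, not the paper's exact algorithm: the matrix model is flattened to a vector, and the regularizer `r` and function names are hypothetical.

```python
def arow_diag_update(w, sigma, x, y, r=1.0):
    """One AROW-style online update with a diagonal covariance (sketch).

    w     : list of floats, mean of the weight distribution (flattened matrix)
    sigma : list of floats, diagonal of the covariance (one variance per weight)
    x     : list of floats, input features (flattened)
    y     : +1 or -1, the label
    r     : regularization parameter (illustrative assumption)
    """
    margin = sum(wi * xi for wi, xi in zip(w, x))
    # Confidence of the prediction under the diagonal covariance: x^T Sigma x.
    v = sum(s * xi * xi for s, xi in zip(sigma, x))
    loss = max(0.0, 1.0 - y * margin)  # hinge loss
    if loss > 0.0:
        beta = 1.0 / (v + r)
        alpha = loss * beta
        # Mean update: step along y * Sigma * x, scaled by the loss.
        w = [wi + alpha * y * s * xi for wi, s, xi in zip(w, sigma, x)]
        # Covariance update: shrink variance only along observed features.
        # Storing just the diagonal keeps memory at O(n^2) for an n x n
        # matrix model, instead of O(n^4) for the full covariance.
        sigma = [s - beta * s * s * xi * xi for s, xi in zip(sigma, x)]
    return w, sigma
```

For example, starting from `w = [0, 0]` and unit variances, a single positive example `x = [1, 0]` moves only the first coordinate of the mean and shrinks only the first variance, leaving the unobserved coordinate untouched.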

Related articles: Most relevant | Search more
arXiv:1908.05474 [cs.LG] (Published 2019-08-15)
Adaptive Regularization of Labels
arXiv:2303.13113 [cs.LG] (Published 2023-03-23)
Adaptive Regularization for Class-Incremental Learning
arXiv:2404.03147 [cs.LG] (Published 2024-04-04)
Eigenpruning