arXiv Analytics

arXiv:1206.4678 [cs.LG]

Linear Regression with Limited Observation

Elad Hazan, Tomer Koren

Published 2012-06-18Version 1

We consider the most common variants of linear regression, including Ridge, Lasso, and Support-vector regression, in a setting where the learner is allowed to observe only a fixed number of attributes of each example at training time. We present simple and efficient algorithms for these problems: for Lasso and Ridge regression, they need the same total number of attributes (up to constants) as full-information algorithms to reach a given accuracy. For Support-vector regression, we require exponentially fewer attributes than the state of the art. This resolves an open problem recently posed by Cesa-Bianchi et al. (2010). Experiments validate the theoretical bounds, showing superior performance compared to the state of the art.
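The core idea in this line of work (following Cesa-Bianchi et al.) is that with a limited attribute budget one can still form an unbiased estimate of the squared-loss gradient: sample a few coordinates to estimate the prediction w·x, and independently sample a few more to estimate x itself. Below is a minimal sketch of such an estimator, assuming uniform sampling and a budget of 2k observed attributes per example; it is an illustration of the general technique, not the paper's exact algorithm.

```python
import numpy as np

def limited_obs_grad(w, x_full, y, k, rng):
    """Unbiased estimate of the squared-loss gradient (w.x - y) * x,
    observing only 2*k attributes of x_full.

    Hypothetical illustration: uniform sampling with inverse-probability
    weighting; half the budget estimates the scalar prediction w.x,
    the other (independent) half estimates the vector x."""
    d = len(w)

    # Estimate w.x from k uniformly sampled attributes (importance weight d).
    idx1 = rng.integers(0, d, size=k)
    pred_est = d * np.mean(w[idx1] * x_full[idx1])

    # Estimate x as a k-sparse vector from k independently sampled attributes.
    idx2 = rng.integers(0, d, size=k)
    x_est = np.zeros(d)
    np.add.at(x_est, idx2, d * x_full[idx2] / k)  # handles repeated indices

    # Independence of the two samples gives E[estimate] = (w.x - y) * x.
    return (pred_est - y) * x_est
```

Because the two index samples are independent, the expectation of the product factorizes, so feeding these estimates to stochastic gradient descent yields a valid limited-observation learner; the price of the restricted budget shows up as extra variance rather than bias.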

Related articles:
arXiv:1904.08544 [cs.LG] (Published 2019-04-18)
Memory-Sample Tradeoffs for Linear Regression with Small Error
arXiv:2305.16440 [cs.LG] (Published 2023-05-25)
Representation Transfer Learning via Multiple Pre-trained models for Linear Regression
arXiv:1912.03036 [cs.LG] (Published 2019-12-06)
Improved PAC-Bayesian Bounds for Linear Regression