arXiv:1810.04651 [stat.ME]

Principal component-guided sparse regression

J. Kenneth Tay, Jerome Friedman, Robert Tibshirani

Published 2018-10-10 (Version 1)

We propose a new method for supervised learning, especially suited to wide data where the number of features is much greater than the number of observations. The method combines the lasso ($\ell_1$) sparsity penalty with a quadratic penalty that shrinks the coefficient vector toward the leading principal components of the feature matrix. We call the proposed method the "Lariat". The method can be especially powerful if the features are pre-assigned to groups (such as cell-pathways, assays or protein interaction networks). In that case, the Lariat shrinks each group-wise component of the solution toward the leading principal components of that group. In the process, it also carries out selection of the feature groups. We provide some theory for this method and illustrate it on a number of simulated and real data examples.
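The abstract combines a lasso penalty with a quadratic penalty pulling the coefficients toward the leading principal components of the feature matrix. A minimal sketch of one plausible form of such an objective is below, where the quadratic term penalizes the component of the coefficient vector orthogonal to the top-k principal component directions, and the problem is solved by proximal gradient descent (ISTA). This is an illustrative assumption about the penalty's shape, not the paper's exact definition; the function names are hypothetical.

```python
import numpy as np

def lariat_objective(X, y, beta, lam1, lam2, Vk):
    """Sketched objective: squared-error loss + l1 penalty + quadratic
    penalty on the part of beta outside the span of the top-k PCs.
    Vk (p x k) holds the leading right singular vectors of X (assumed form)."""
    resid = y - X @ beta
    ortho = beta - Vk @ (Vk.T @ beta)  # component of beta orthogonal to the PC span
    return (0.5 * resid @ resid
            + lam1 * np.abs(beta).sum()
            + 0.5 * lam2 * ortho @ ortho)

def fit_lariat(X, y, lam1=0.1, lam2=1.0, k=2, n_iter=500):
    """Proximal gradient (ISTA) on the sketched objective above."""
    n, p = X.shape
    # top-k right singular vectors of X = leading principal component directions
    Vk = np.linalg.svd(X, full_matrices=False)[2][:k].T
    # Lipschitz constant of the smooth part's gradient -> safe step size 1/L
    L = np.linalg.norm(X, 2) ** 2 + lam2
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ beta) + lam2 * (beta - Vk @ (Vk.T @ beta))
        z = beta - grad / L
        # soft-thresholding is the proximal operator of the l1 penalty
        beta = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)
    return beta, Vk
```

With step size 1/L, each ISTA iteration is guaranteed not to increase the objective, so the fitted coefficients score at least as well as the zero vector they start from. The grouped-feature variant described in the abstract would apply the same quadratic shrinkage group-by-group, each group toward its own leading components.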

Related articles:
arXiv:1809.03643 [stat.ME] (Published 2018-09-11)
Threshold factor models for high-dimensional time series
arXiv:1708.04981 [stat.ME] (Published 2017-08-16)
On the number of principal components in high dimensions
arXiv:1606.02234 [stat.ME] (Published 2016-06-07)
Robust bent line regression