arXiv Analytics

arXiv:2005.10119 [stat.ME]

On the use of cross-validation for the calibration of the tuning parameter in the adaptive lasso

Nadim Ballout, Lola Etievant, Vivian Viallon

Published 2020-05-20, Version 1

The adaptive lasso is a popular extension of the lasso, which has been shown to generally enjoy better theoretical performance at no additional computational cost compared to the lasso. The adaptive lasso relies on a weighted version of the $L_1$-norm penalty used in the lasso, where the weights are typically derived from an initial estimate of the parameter vector. Irrespective of the method chosen to obtain this initial estimate, the performance of the corresponding version of the adaptive lasso critically depends on the value of the tuning parameter, which controls the magnitude of the weighted $L_1$-norm in the penalized criterion. In this article, we show that standard cross-validation, although very popular in this context, has a severe defect when applied to the calibration of the tuning parameter in the adaptive lasso. We further propose a simple cross-validation scheme that corrects this defect. Empirical results from a simulation study confirm the superiority of our approach, in terms of both support recovery and prediction error. Although we focus on the adaptive lasso under linear regression models, our work likely extends to other regression models, as well as to the adaptive versions of other penalized approaches, including the group lasso, fused lasso, and data shared lasso.
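The weighted $L_1$ penalty and a fold-wise cross-validation for the tuning parameter can be sketched as follows. This is a minimal illustration, not the authors' exact scheme: the `adaptive_lasso` helper, the toy data, and the choice to recompute the initial (here, OLS) estimate and the weights inside each training fold are assumptions made for the example. It relies on the standard identity that a lasso with penalty $\lambda \sum_j w_j |\beta_j|$ is equivalent to a plain lasso on the rescaled design $X_j / w_j$.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import KFold

def adaptive_lasso(X, y, alpha, w):
    """Adaptive lasso via column rescaling: a weighted L1 penalty
    sum_j w_j |beta_j| equals a plain lasso fit on X / w."""
    coef = Lasso(alpha=alpha, fit_intercept=False).fit(X / w, y).coef_
    return coef / w  # map coefficients back to the original scale

# Toy data (sizes and coefficients are illustrative, not from the paper)
rng = np.random.default_rng(0)
n, p = 120, 8
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:2] = [1.5, -1.0]
y = X @ beta + 0.5 * rng.standard_normal(n)

alphas = np.logspace(-3, 0, 20)
cv_err = np.zeros(len(alphas))
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Recompute the initial estimate, and hence the weights, on the
    # training fold only, so the held-out fold never informs the weights.
    b0 = LinearRegression(fit_intercept=False).fit(X[train], y[train]).coef_
    w = 1.0 / (np.abs(b0) + 1e-8)  # classical adaptive weights
    for i, a in enumerate(alphas):
        b = adaptive_lasso(X[train], y[train], a, w)
        cv_err[i] += np.mean((y[test] - X[test] @ b) ** 2)

alpha_best = alphas[int(np.argmin(cv_err))]
```

Under this fold-wise setup the selected `alpha_best` is calibrated against weights that were never exposed to the validation data, which is the kind of leakage a naive scheme (computing weights once on the full sample) would introduce.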

Comments: 17 pages, 2 figures
Categories: stat.ME
Subjects: 62J07
Related articles:
arXiv:2001.11240 [stat.ME] (Published 2020-01-30)
Assessing the Calibration of Subdistribution Hazard Models in Discrete Time
arXiv:2202.03897 [stat.ME] (Published 2022-02-08)
Inference from Sampling with Response Probabilities Estimated via Calibration
arXiv:1404.0541 [stat.ME] (Published 2014-04-02, updated 2015-05-24)
Don't Fall for Tuning Parameters: Tuning-Free Variable Selection in High Dimensions With the TREX