arXiv:2007.08243 [cs.LG]

Lottery Tickets in Linear Models: An Analysis of Iterative Magnitude Pruning

Bryn Elesedy, Varun Kanade, Yee Whye Teh

Published 2020-07-16 (Version 1)

We analyse the pruning procedure behind the lottery ticket hypothesis (arXiv:1803.03635v5 [cs.LG]), iterative magnitude pruning (IMP), when applied to linear models trained by gradient flow. We begin by presenting sufficient conditions on the statistical structure of the features, under which IMP prunes those features that have the smallest projection onto the data. Following this, we explore IMP as a method for sparse estimation and sparse prediction in noisy settings, with minimal assumptions on the design matrix. The same techniques are then applied to derive corresponding results for threshold pruning. Finally, we present experimental evidence of the regularising effect of IMP. We hope that our work will contribute to a theoretically grounded understanding of lottery tickets and how they emerge from IMP.
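The IMP procedure the abstract refers to alternates training with magnitude-based pruning. A minimal sketch for the linear least-squares setting, using gradient descent as a discrete stand-in for gradient flow, might look as follows; the function name, hyperparameters, and zero initialisation are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def imp_linear(X, y, n_rounds=3, prune_frac=0.3, lr=0.1, steps=5000):
    """Sketch of iterative magnitude pruning (IMP) for least-squares
    linear regression. Gradient descent approximates gradient flow;
    all hyperparameter values here are illustrative assumptions."""
    n, d = X.shape
    mask = np.ones(d, dtype=bool)
    w_init = np.zeros(d)  # rewind point (zero initialisation, for simplicity)
    w = w_init.copy()
    for _ in range(n_rounds):
        # Rewind surviving weights; pruned coordinates stay at zero.
        w = np.where(mask, w_init, 0.0)
        for _ in range(steps):
            grad = X.T @ (X @ w - y) / n  # least-squares gradient
            w -= lr * grad
            w[~mask] = 0.0  # keep pruned coordinates frozen
        # Prune the smallest-magnitude weights among the survivors.
        alive = np.flatnonzero(mask)
        k = max(1, int(prune_frac * alive.size))
        drop = alive[np.argsort(np.abs(w[alive]))[:k]]
        mask[drop] = False
    return w, mask
```

In a noiseless setting where the response depends on few features, repeated rounds of this loop drive the mask toward those features, which is the regime the paper's sufficient conditions characterise.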

Related articles (most relevant):
arXiv:2308.03128 [cs.LG] (Published 2023-08-06)
Iterative Magnitude Pruning as a Renormalisation Group: A Study in The Context of The Lottery Ticket Hypothesis
arXiv:2002.00585 [cs.LG] (Published 2020-02-03)
Proving the Lottery Ticket Hypothesis: Pruning is All You Need
arXiv:1811.01564 [cs.LG] (Published 2018-11-05)
Parallel training of linear models without compromising convergence