
arXiv:1912.08286 [cs.LG]

On the Bias-Variance Tradeoff: Textbooks Need an Update

Brady Neal

Published 2019-12-17 (Version 1)

The main goal of this thesis is to point out that the bias-variance tradeoff does not always hold (e.g. in neural networks). We advocate for this lack of universality to be acknowledged in textbooks and taught in introductory courses that cover the tradeoff. We first review the history of the bias-variance tradeoff, its prevalence in textbooks, and some of the main claims made about it. Through extensive experiments and analysis, we show a lack of a bias-variance tradeoff in neural networks when increasing network width. Our findings seem to contradict the claims of the landmark work by Geman et al. (1992). Motivated by this contradiction, we revisit the experimental measurements in Geman et al. (1992) and argue that there was never strong evidence for a tradeoff in neural networks when varying the number of parameters. We observe a similar phenomenon beyond supervised learning, in a set of deep reinforcement learning experiments. We conclude that textbook and lecture revisions are in order to convey this nuanced modern understanding of the bias-variance tradeoff.
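The experiments described above rest on the classic decomposition of expected squared error into bias^2 + variance (+ noise), estimated by averaging predictions over many independently drawn training sets. As a minimal sketch of that measurement procedure (using a trivial degree-0 estimator, the sample mean, at a single test point; the thesis instead trains neural networks of varying width, and all names here are illustrative):

```python
import math
import random

# Monte Carlo estimate of bias^2 and variance of an estimator at a fixed
# test input x0, per the decomposition E[(f_hat(x0) - f(x0))^2] = bias^2 + variance.
# The "learner" here just averages noisy targets -- a stand-in for the
# networks measured in the thesis.

random.seed(0)
f = math.sin                      # true target function (illustrative choice)
x0 = 1.0                          # fixed test input
n_train, n_trials, noise = 20, 2000, 0.3

preds = []
for _ in range(n_trials):
    # Draw a fresh training set: noisy observations of f at x0.
    ys = [f(x0) + random.gauss(0.0, noise) for _ in range(n_train)]
    preds.append(sum(ys) / n_train)       # the fitted prediction at x0

mean_pred = sum(preds) / n_trials
bias_sq = (mean_pred - f(x0)) ** 2        # squared bias: (E[f_hat] - f)^2
variance = sum((p - mean_pred) ** 2 for p in preds) / n_trials
print(f"bias^2 = {bias_sq:.5f}, variance = {variance:.5f}")
```

For this unbiased estimator, bias^2 is near zero and variance is near noise^2 / n_train; the thesis's point is that for neural networks, repeating this measurement while increasing width does not produce the rising-variance curve the textbook tradeoff predicts.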
