arXiv Analytics

arXiv:2103.01887 [stat.ML]

Self-Regularity of Non-Negative Output Weights for Overparameterized Two-Layer Neural Networks

David Gamarnik, Eren C. Kızıldağ, Ilias Zadik

Published 2021-03-02 (Version 1)

We consider the problem of finding a two-layer neural network with sigmoid, rectified linear unit (ReLU), or binary step activation functions that "fits" a training data set as accurately as possible, as quantified by the training error, and study the following question: \emph{does a low training error guarantee that the norm of the output layer (outer norm) is itself small?} We answer this question affirmatively for the case of non-negative output weights. Using a simple covering number argument, we establish that, under quite mild distributional assumptions on the input/label pairs, any such network achieving a small training error on polynomially many data points necessarily has a well-controlled outer norm. Notably, our results (a) have a polynomial (in $d$) sample complexity, (b) are independent of the number of hidden units (which can potentially be very large), (c) are oblivious to the training algorithm, and (d) require quite mild assumptions on the data (in particular, the input vector $X\in\mathbb{R}^d$ need not have independent coordinates). We then leverage our bounds to establish generalization guarantees for such networks through the \emph{fat-shattering dimension}, a scale-sensitive measure of the complexity of the class to which the network architectures we investigate belong. Notably, our generalization bounds also have a good sample complexity (polynomial in $d$ with a low degree), and are in fact near-linear for some important cases of interest.
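
For concreteness, a minimal formalization of the objects discussed above, under assumed notation (the width $m$, weights $a_j, w_j$, the squared loss, and the $\ell_1$ outer norm are illustrative choices, not taken verbatim from the paper):

% Illustrative setup (assumed notation): a width-m two-layer network with
% activation \sigma (sigmoid, ReLU, or binary step) and non-negative output weights.
\[
  f(x) \;=\; \sum_{j=1}^{m} a_j\, \sigma\!\left(\langle w_j, x\rangle\right),
  \qquad a_j \ge 0, \quad w_j \in \mathbb{R}^d .
\]
% Training error on n samples (X_i, Y_i) and the outer norm of the output layer:
\[
  \widehat{L}(f) \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl(f(X_i) - Y_i\bigr)^2,
  \qquad \|a\|_1 \;=\; \sum_{j=1}^{m} a_j .
\]
% The question studied: does a small \widehat{L}(f), computed on polynomially many
% (in d) samples, force \|a\|_1 to be well controlled, independently of m?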

Comments: 34 pages. Some of the results in the present paper are significantly strengthened versions of certain results appearing in arXiv:2003.10523
Related articles:
arXiv:2102.06548 [stat.ML] (Published 2021-02-12)
Tightening the Dependence on Horizon in the Sample Complexity of Q-Learning
arXiv:2409.01243 [stat.ML] (Published 2024-09-02)
Sample Complexity of the Sign-Perturbed Sums Method
arXiv:1011.5395 [stat.ML] (Published 2010-11-24)
The Sample Complexity of Dictionary Learning