arXiv Analytics

arXiv:2001.05992 [cs.LG]

Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks

Wei Hu, Lechao Xiao, Jeffrey Pennington

Published 2020-01-16 (Version 1)

The selection of initial parameter values for gradient-based optimization of deep neural networks is one of the most impactful hyperparameter choices in deep learning systems, affecting both convergence times and model performance. Yet despite significant empirical and theoretical analysis, relatively little has been proved about the concrete effects of different initialization schemes. In this work, we analyze the effect of initialization in deep linear networks, and provide for the first time a rigorous proof that drawing the initial weights from the orthogonal group speeds up convergence relative to the standard Gaussian initialization with iid weights. We show that for deep networks, the width needed for efficient convergence to a global minimum with orthogonal initializations is independent of the depth, whereas the width needed for efficient convergence with Gaussian initializations scales linearly in the depth. Our results demonstrate how the benefits of a good initialization can persist throughout learning, suggesting an explanation for the recent empirical successes found by initializing very deep non-linear networks according to the principle of dynamical isometry.

Comments: International Conference on Learning Representations (ICLR) 2020
Categories: cs.LG, cs.NE, math.OC, stat.ML
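
To make the contrast in the abstract concrete, here is a minimal sketch (not taken from the paper) comparing the end-to-end map of a deep linear network under orthogonal versus iid Gaussian initialization. The depth, width, and the 1/sqrt(width) Gaussian scaling are illustrative assumptions, as are all function names.

import numpy as np

def init_layers(depth, width, scheme, rng):
    # Build `depth` square weight matrices under the chosen scheme.
    layers = []
    for _ in range(depth):
        if scheme == "orthogonal":
            # Random orthogonal matrix via QR of a Gaussian matrix.
            q, r = np.linalg.qr(rng.standard_normal((width, width)))
            layers.append(q * np.sign(np.diag(r)))  # sign fix -> Haar-distributed
        else:
            # iid Gaussian entries with variance 1/width (fan-in scaling).
            layers.append(rng.standard_normal((width, width)) / np.sqrt(width))
    return layers

def end_to_end(layers):
    # A deep linear network computes the single matrix product W_L ... W_1.
    prod = layers[0]
    for w in layers[1:]:
        prod = w @ prod
    return prod

rng = np.random.default_rng(0)
depth, width = 50, 64
for scheme in ("orthogonal", "gaussian"):
    svals = np.linalg.svd(end_to_end(init_layers(depth, width, scheme, rng)),
                          compute_uv=False)
    print(f"{scheme:>10}: singular values in [{svals.min():.3e}, {svals.max():.3e}]")

With orthogonal layers the product is itself orthogonal, so every singular value equals one at any depth (the dynamical isometry mentioned in the abstract), whereas the spectrum of the iid Gaussian product spreads out and degenerates as depth grows; this is the contrast the paper's convergence analysis makes precise.
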
Related articles:
arXiv:2402.05013 [cs.LG] (Published 2024-02-07)
Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth
arXiv:2306.00267 [cs.LG] (Published 2023-06-01)
Provable Benefit of Mixup for Finding Optimal Decision Boundaries
arXiv:2206.05900 [cs.LG] (Published 2022-06-13)
Provable Benefit of Multitask Representation Learning in Reinforcement Learning