arXiv Analytics

arXiv:2006.06878 [cs.LG]

Optimization Theory for ReLU Neural Networks Trained with Normalization Layers

Yonatan Dukler, Quanquan Gu, Guido Montúfar

Published 2020-06-11 (Version 1)

The success of deep neural networks is in part due to the use of normalization layers. Normalization layers like Batch Normalization, Layer Normalization, and Weight Normalization are ubiquitous in practice, as they improve generalization performance and significantly speed up training. Nonetheless, the vast majority of current deep learning theory and non-convex optimization literature focuses on the un-normalized setting, where the functions under consideration do not exhibit the properties of commonly normalized neural networks. In this paper, we bridge this gap by giving the first global convergence result for two-layer neural networks with ReLU activations trained with a normalization layer, namely Weight Normalization. Our analysis shows how the introduction of normalization layers changes the optimization landscape and can enable faster convergence compared with un-normalized neural networks.
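The setting studied in the paper can be made concrete with a short sketch. The snippet below (an illustration under assumptions, not code from the paper) builds a two-layer ReLU network whose first-layer weights use the Weight Normalization reparametrization w_k = g_k · v_k / ||v_k||; the class name `WeightNormReLUNet`, the widths, and the choice to also train the output weights are all illustrative.

```python
# Minimal sketch (not the authors' code) of a two-layer ReLU network with
# Weight Normalization on the first layer: w_k = g_k * v_k / ||v_k||_2.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightNormReLUNet(nn.Module):
    def __init__(self, in_dim: int, hidden_width: int):
        super().__init__()
        # Direction parameters v_k and scale parameters g_k for each hidden unit.
        self.v = nn.Parameter(torch.randn(hidden_width, in_dim))
        self.g = nn.Parameter(torch.ones(hidden_width))
        # Second-layer (output) weights; two-layer analyses often fix these,
        # here they are trainable for generality (an assumption of this sketch).
        self.a = nn.Parameter(torch.randn(hidden_width) / hidden_width ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Weight-normalized first layer: scale times unit-norm direction.
        w = self.g.unsqueeze(1) * F.normalize(self.v, dim=1)
        hidden = F.relu(x @ w.t())   # ReLU activations
        return hidden @ self.a       # scalar output per example

# Usage: one gradient computation on squared loss for random data.
net = WeightNormReLUNet(in_dim=5, hidden_width=64)
x, y = torch.randn(8, 5), torch.randn(8)
loss = F.mse_loss(net(x), y)
loss.backward()
```

Decoupling each weight vector into a scale g_k and a direction v_k / ||v_k|| is what distinguishes this parametrization from the un-normalized network the paper contrasts against.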

Related articles:
arXiv:2305.05448 [cs.LG] (Published 2023-05-09)
Robust Implicit Regularization via Weight Normalization
arXiv:2101.09306 [cs.LG] (Published 2021-01-22)
Partition-Based Convex Relaxations for Certifying the Robustness of ReLU Neural Networks
arXiv:1809.07122 [cs.LG] (Published 2018-09-19)
Capacity Control of ReLU Neural Networks by Basis-path Norm