arXiv:1907.00560 [cs.LG]

On Symmetry and Initialization for Neural Networks

Ido Nachum, Amir Yehudayoff

Published 2019-07-01 (Version 1)

This work provides an additional step in the theoretical understanding of neural networks. We consider neural networks with one hidden layer and show that when learning symmetric functions, one can choose initial conditions so that standard SGD training efficiently produces generalization guarantees. We verify this empirically and show that these guarantees do not hold when the initial conditions are chosen at random. The proof of convergence investigates the interaction between the two layers of the network. Our results highlight the importance of using symmetry in the design of neural networks.
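The following is a minimal, hypothetical sketch of the idea described in the abstract, not the authors' actual construction: a one-hidden-layer ReLU network whose first layer is initialized with identical all-ones rows, so that every hidden unit depends on a Boolean input only through the symmetric statistic sum(x). The target (the majority function), the network sizes, and the staggered-threshold biases are all our own illustrative choices; to keep the sketch simple we freeze the tied first layer and run full-batch gradient descent on the output layer, whereas the paper analyzes standard SGD on both layers.

```python
# Hypothetical sketch (not the paper's construction): symmetry-aware
# initialization for a one-hidden-layer ReLU network on a symmetric target.
import numpy as np

rng = np.random.default_rng(0)
n, hidden, lr, epochs = 9, 16, 0.005, 4000

# Symmetry-aware initialization: hidden unit j computes relu(sum(x) - t_j),
# with thresholds t_j staggered over the possible values of sum(x).
W1 = np.ones((hidden, n))                 # tied rows -> purely symmetric features
b1 = -np.linspace(0.0, n, hidden)         # staggered thresholds t_j
w2 = rng.normal(scale=0.01, size=hidden)  # small random output weights
b2 = 0.0

# All 2^n Boolean inputs; the target is majority, a symmetric function (+1/-1).
X = ((np.arange(2 ** n)[:, None] >> np.arange(n)) & 1).astype(float)
y = np.where(X.sum(axis=1) > n / 2, 1.0, -1.0)

H = np.maximum(0.0, X @ W1.T + b1)        # frozen symmetric hidden features

def loss(out):
    return float(np.mean((out - y) ** 2))

initial_loss = loss(H @ w2 + b2)
for _ in range(epochs):                   # full-batch gradient descent
    out = H @ w2 + b2
    g = 2.0 * (out - y) / len(X)          # d(mean squared loss)/d(logits)
    w2 -= lr * H.T @ g
    b2 -= lr * g.sum()

out = H @ w2 + b2
final_loss = loss(out)
accuracy = float(np.mean(np.sign(out) == y))
```

Because every hidden feature is a function of sum(x) alone, the network can only represent symmetric functions, and the staggered thresholds give it enough breakpoints to fit the majority step; with a generic random first layer this inductive bias is absent, which is the contrast the abstract draws.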

Related articles:
arXiv:1706.02690 [cs.LG] (Published 2017-06-08)
Principled Detection of Out-of-Distribution Examples in Neural Networks
arXiv:1810.10032 [cs.LG] (Published 2018-10-23)
Some negative results for Neural Networks
arXiv:1810.08591 [cs.LG] (Published 2018-10-19)
A Modern Take on the Bias-Variance Tradeoff in Neural Networks