arXiv Analytics

arXiv:1907.00560 [cs.LG]

On Symmetry and Initialization for Neural Networks

Ido Nachum, Amir Yehudayoff

Published 2019-07-01 (Version 1)

This work provides an additional step in the theoretical understanding of neural networks. We consider neural networks with one hidden layer and show that, when learning symmetric functions, one can choose initial conditions so that standard SGD training efficiently produces generalization guarantees. We verify this empirically and show that the guarantee does not hold when the initial conditions are chosen at random. The proof of convergence investigates the interaction between the two layers of the network. Our results highlight the importance of using symmetry in the design of neural networks.
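A minimal sketch of the kind of setup the abstract describes: a one-hidden-layer ReLU network trained with plain SGD on a symmetric Boolean function (here majority over n bits), comparing a symmetry-aware initialization against a random Gaussian one. The specific initialization used below, identical first-layer weights within each hidden unit so every unit depends only on the sum of the inputs, with only the biases varying, is an illustrative assumption and not necessarily the paper's construction; it is meant only to show the experimental contrast, not to reproduce the paper's results.

# Sketch only: symmetric vs. random initialization for a one-hidden-layer
# network learning a symmetric function. The initialization scheme is an
# assumption for illustration, not the authors' exact construction.
import numpy as np

rng = np.random.default_rng(0)
n, hidden, steps, lr = 20, 64, 5000, 0.05

def majority(X):
    # target: a symmetric function (depends only on the number of ones)
    return (X.sum(axis=1) > n / 2).astype(float) * 2 - 1

def init(symmetric):
    if symmetric:
        # each hidden unit sees only the sum of the inputs; units differ by bias
        W1 = np.ones((hidden, n)) / n
        b1 = np.linspace(-1.0, 1.0, hidden)
    else:
        W1 = rng.normal(0, 1 / np.sqrt(n), (hidden, n))
        b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1 / np.sqrt(hidden), hidden)
    return W1, b1, W2

def run(symmetric):
    W1, b1, W2 = init(symmetric)
    for _ in range(steps):
        x = rng.integers(0, 2, n).astype(float)       # one SGD example
        y = majority(x[None])[0]
        h = np.maximum(0, W1 @ x + b1)                 # hidden layer (ReLU)
        out = W2 @ h
        g = out - y                                    # squared-loss gradient
        W2 -= lr * g * h
        gh = g * W2 * (h > 0)
        W1 -= lr * np.outer(gh, x)
        b1 -= lr * gh
    Xte = rng.integers(0, 2, (2000, n)).astype(float)  # held-out accuracy
    pred = np.sign(np.maximum(0, Xte @ W1.T + b1) @ W2)
    return (pred == majority(Xte)).mean()

print("symmetric init:", run(True))
print("random init   :", run(False))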

Related articles:
arXiv:1706.02690 [cs.LG] (Published 2017-06-08)
Principled Detection of Out-of-Distribution Examples in Neural Networks
arXiv:1807.04225 [cs.LG] (Published 2018-07-11)
Measuring abstract reasoning in neural networks
arXiv:1805.07405 [cs.LG] (Published 2018-05-18)
Processing of missing data by neural networks