arXiv Analytics

arXiv:1712.05577 [cs.LG]

Gradients explode - Deep Networks are shallow - ResNet explained

George Philipp, Dawn Song, Jaime G. Carbonell

Published 2017-12-15 (Version 1)

Whereas it is often believed that techniques such as Adam, batch normalization and, more recently, SeLU nonlinearities "solve" the exploding gradient problem, we show that this is not the case in general: in a range of popular MLP architectures, exploding gradients exist and limit the depth to which networks can be effectively trained, both in theory and in practice. We explain why exploding gradients occur and highlight the *collapsing domain problem*, which can arise in architectures that avoid exploding gradients. ResNets have significantly lower gradients and can thus circumvent the exploding gradient problem, enabling the effective training of much deeper networks; we show that this is a consequence of a surprising mathematical property. By noticing that *any neural network is a residual network*, we devise the *residual trick*, which reveals that introducing skip connections simplifies the network mathematically, and that this simplicity may be the major cause for their success.
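The exploding gradient phenomenon the abstract describes can be observed directly in a toy setting. The following is a minimal illustrative sketch (not the authors' code): it back-propagates a unit-norm gradient through a deep plain ReLU MLP whose weights are initialized slightly above the He-init scale, and records how the gradient norm grows layer by layer. The width, depth, and init scale are arbitrary choices for illustration.

```python
import numpy as np

# Hypothetical setup: a plain (no skip connections) ReLU MLP whose
# weight scale is 20% above He initialization, so each backward step
# multiplies the expected squared gradient norm by roughly 1.2^2 = 1.44.
rng = np.random.default_rng(0)
width, depth = 64, 50
scale = 1.2 * np.sqrt(2.0 / width)
Ws = [rng.normal(0.0, scale, (width, width)) for _ in range(depth)]

# Forward pass, caching the ReLU masks for the backward pass.
h = rng.normal(size=width)
masks = []
for W in Ws:
    pre = W @ h
    masks.append(pre > 0)
    h = np.maximum(pre, 0.0)

# Backward pass: propagate a unit-norm gradient from the top layer
# down to the input, recording its norm at every layer.
g = np.ones(width) / np.sqrt(width)
norms = [np.linalg.norm(g)]
for W, m in zip(reversed(Ws), reversed(masks)):
    g = W.T @ (g * m)          # gradient through ReLU, then the weights
    norms.append(np.linalg.norm(g))

print(f"gradient norm at top layer:    {norms[0]:.2e}")
print(f"gradient norm at bottom layer: {norms[-1]:.2e}")
# The bottom-layer norm is typically orders of magnitude larger,
# growing roughly exponentially in depth.
```

Setting `scale` at or below `np.sqrt(2.0 / width)` instead makes the norm stay flat or decay, which is the knife-edge the paper argues plain deep networks are stuck on.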

Related articles:
arXiv:1709.08524 [cs.LG] (Published 2017-09-25)
Generative learning for deep networks
arXiv:1602.02644 [cs.LG] (Published 2016-02-08)
Generating Images with Perceptual Similarity Metrics based on Deep Networks
arXiv:1511.06485 [cs.LG] (Published 2015-11-20)
Trivializing The Energy Landscape Of Deep Networks