arXiv Analytics

arXiv:1807.08140 [cs.LG]

On the Analysis of Trajectories of Gradient Descent in the Optimization of Deep Neural Networks

Adepu Ravi Sankar, Vishwak Srinivasan, Vineeth N Balasubramanian

Published 2018-07-21 (Version 1)

Theoretical analysis of the error landscape of deep neural networks has garnered significant interest in recent years. In this work, we theoretically study the importance of noise in the trajectories of gradient descent towards optimal solutions in multi-layer neural networks. We show that adding noise (in different ways) to a neural network during training increases the rank of the product of the weight matrices of a multi-layer linear neural network. We then study how adding noise can assist in reaching a global optimum when the product matrix is full-rank (under certain conditions). We establish theoretical connections between the noise induced into the neural network, whether applied to the gradients, the architecture, or the inputs/outputs, and the rank of the product of weight matrices. We corroborate our theoretical findings with empirical results.
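The core observation (that perturbing weight matrices with Gaussian noise makes their product full-rank almost surely) can be illustrated with a small NumPy sketch. This is not the paper's proof or training procedure, just a hypothetical numerical example: a three-layer linear network with a deliberately rank-deficient middle layer, before and after additive noise.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Three-layer linear network W3 @ W2 @ W1 with a rank-2 middle
# layer, so the product matrix is rank-deficient as well.
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, 2)) @ rng.standard_normal((2, d))  # rank 2
W3 = rng.standard_normal((d, d))

rank_clean = np.linalg.matrix_rank(W3 @ W2 @ W1)
print(rank_clean)  # 2: bounded by the rank of the middle factor

# Adding small Gaussian noise to each weight matrix (as noisy
# training effectively does) makes every factor, and hence the
# product, full-rank with probability one.
sigma = 1e-3
noisy_product = (
    (W3 + sigma * rng.standard_normal((d, d)))
    @ (W2 + sigma * rng.standard_normal((d, d)))
    @ (W1 + sigma * rng.standard_normal((d, d)))
)
rank_noisy = np.linalg.matrix_rank(noisy_product)
print(rank_noisy)  # 8: full rank after perturbation
```

The same intuition applies whether the noise enters through the gradients, the architecture, or the inputs/outputs: each mechanism perturbs the effective weight matrices away from the measure-zero set of rank-deficient products.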

Comments: 4 pages + 1 figure (main, excluding references), 5 pages + 4 figures (appendix)
Categories: cs.LG, math.OC, stat.ML
Related articles:
arXiv:1710.10570 [cs.LG] (Published 2017-10-29)
Weight Initialization of Deep Neural Networks(DNNs) using Data Statistics
arXiv:1611.05162 [cs.LG] (Published 2016-11-16)
Net-Trim: A Layer-wise Convex Pruning of Deep Neural Networks
arXiv:1711.06104 [cs.LG] (Published 2017-11-16)
A unified view of gradient-based attribution methods for Deep Neural Networks