arXiv Analytics

arXiv:1710.10570 [cs.LG]

Weight Initialization of Deep Neural Networks (DNNs) using Data Statistics

Saiprasad Koturwar, Shabbir Merchant

Published 2017-10-29 (Version 1)

Deep neural networks (DNNs) form the backbone of almost every state-of-the-art technique in fields such as computer vision, speech processing, and text analysis, and recent advances in computational technology have made their use more practical. Despite the impressive performance of DNNs and these computational advances, very few researchers train their models from scratch: training DNNs remains a difficult and tedious job. The main challenges are the vanishing/exploding gradient problem and the highly non-convex nature of the objective function, which can have millions of variables. The initialization schemes of He and Xavier address the vanishing gradient problem through carefully scaled initial weights. These approaches are effective and achieve good results on standard datasets, but they do not perform as well on more practical datasets. We believe the reason is that they ignore the statistics of the data when initializing the network weights; optimizing such a high-dimensional loss function requires careful initialization. In this work, we propose a data-dependent initialization and analyze its performance against standard initialization techniques such as He and Xavier. We performed our experiments on several practical datasets, and the results show our algorithm's superior classification accuracy.
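To make the contrast concrete, below is a minimal NumPy sketch of the two standard schemes the abstract names (Xavier and He), together with one plausible data-dependent variant that rescales each layer so its pre-activations have unit variance on a batch of real data. The abstract does not specify the authors' exact algorithm, so the `data_dependent_init` function here is an illustrative assumption, not the paper's method.

```python
import numpy as np

def xavier_init(fan_in, fan_out, rng):
    # Glorot & Bengio (2010): weight variance 2 / (fan_in + fan_out)
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def he_init(fan_in, fan_out, rng):
    # He et al. (2015): weight variance 2 / fan_in, suited to ReLU units
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def data_dependent_init(layer_sizes, x_batch, rng):
    """Hypothetical data-dependent scheme (an assumption, not the paper's
    exact algorithm): start from He weights, then rescale each layer so
    its pre-activations on a real data batch have unit variance."""
    weights = []
    h = x_batch
    for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        w = he_init(fan_in, fan_out, rng)
        z = h @ w                        # pre-activations on the real batch
        w = w / (z.std() + 1e-8)         # force unit pre-activation variance
        h = np.maximum(h @ w, 0.0)       # ReLU forward pass feeds next layer
        weights.append(w)
    return weights

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 784))          # stand-in for a batch of flattened images
ws = data_dependent_init([784, 512, 256, 10], x, rng)
print([w.shape for w in ws])
```

Unlike He and Xavier, which fix the weight scale from layer widths alone, this kind of scheme adapts the scale to the actual activation statistics the network will see, which is the intuition the abstract appeals to.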

Related articles:
arXiv:1711.06104 [cs.LG] (Published 2017-11-16)
A unified view of gradient-based attribution methods for Deep Neural Networks
arXiv:1611.05162 [cs.LG] (Published 2016-11-16)
Net-Trim: A Layer-wise Convex Pruning of Deep Neural Networks
arXiv:1603.09260 [cs.LG] (Published 2016-03-30)
Degrees of Freedom in Deep Neural Networks