arXiv Analytics


arXiv:1611.06321 [cs.CV]

Learning the Number of Neurons in Deep Networks

Jose M Alvarez, Mathieu Salzmann

Published 2016-11-19 (Version 1)

Nowadays, the number of layers and the number of neurons in each layer of a deep network are typically set manually. While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, making them impractical for constrained platforms. These networks, however, are known to have many redundant parameters and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80% while retaining or even improving the network accuracy.
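As a rough illustration of the idea described in the abstract, a group sparsity (group-lasso) penalty can be formed by grouping the incoming weights of each neuron and summing the L2 norms of the groups; neurons whose group norm is driven to (near) zero can then be pruned. This is a minimal NumPy sketch, not the paper's actual implementation — the function names, the threshold `eps`, and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def group_sparsity_penalty(W, lam=1e-3):
    # W: weight matrix of one layer, where each ROW holds the incoming
    # weights of a single neuron (one group per neuron).
    # Penalty: lam * sum_g ||w_g||_2  -- drives whole rows toward zero.
    return lam * np.linalg.norm(W, axis=1).sum()

def prune_neurons(W, eps=1e-6):
    # Neurons whose group norm collapses below eps are considered
    # zeroed-out by the regularizer and can be removed from the layer.
    keep = np.linalg.norm(W, axis=1) > eps
    return W[keep], keep

# Toy example: the second neuron's weights have been driven to zero.
W = np.array([[3.0, 4.0],
              [0.0, 0.0]])
penalty = group_sparsity_penalty(W, lam=1.0)   # = ||(3,4)||_2 + 0 = 5.0
W_pruned, kept = prune_neurons(W)              # keeps only the first row
```

The key property, compared with an elementwise L1 penalty, is that the L2 norm over each group encourages entire neurons (rows) to vanish together, which shrinks the architecture rather than merely sparsifying individual weights.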

Related articles:
arXiv:1703.01775 [cs.CV] (Published 2017-03-06)
Building a Regular Decision Boundary with Deep Networks
arXiv:1704.01246 [cs.CV] (Published 2017-04-05)
Estimation of Tissue Microstructure Using a Deep Network Inspired by a Sparse Reconstruction Framework
arXiv:1801.04261 [cs.CV] (Published 2018-01-12)
Deep saliency: What is learnt by a deep network about saliency?