arXiv:1511.05497 [cs.LG]

Learning the Architecture of Deep Neural Networks

Suraj Srinivas, R. Venkatesh Babu

Published 2015-11-17 (Version 1)

Deep neural networks with millions of parameters are at the heart of many state-of-the-art machine learning models today. However, recent works have shown that models with a much smaller number of parameters can perform just as well. In this work, we introduce the problem of architecture learning, i.e., learning the architecture of a neural network along with its weights. We introduce a new trainable parameter called the tri-state ReLU, which helps eliminate unnecessary neurons. We also propose a smooth regularizer that encourages the total number of neurons after elimination to be small. The resulting objective is differentiable and simple to optimize. We experimentally validate our method on both small and large networks, and show that it can learn models with a considerably smaller number of parameters without affecting prediction accuracy.
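The abstract does not spell out the functional form, but one common reading of a tri-state ReLU is a per-neuron activation w · (x if x > 0 else d · x) with trainable gates w, d in [0, 1]: (w, d) = (1, 0) recovers ReLU, (1, 1) the identity, and w = 0 switches the neuron off entirely. The sketch below is illustrative only; the module name TriStateReLU, the clamp-based constraint, and the w(1 − w) binarizing penalty are assumptions about the method, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class TriStateReLU(nn.Module):
    """Hypothetical tri-state ReLU: out = w * (x if x > 0 else d * x),
    with per-neuron trainable gates w, d kept in [0, 1].
    (w, d) = (1, 0) is ReLU, (1, 1) is the identity, w = 0 kills the neuron."""

    def __init__(self, num_neurons):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_neurons))   # on/off gate per neuron
        self.d = nn.Parameter(torch.zeros(num_neurons))  # negative-slope gate

    def forward(self, x):
        # Clamp keeps the gates in [0, 1]; x has shape (batch, num_neurons).
        w = self.w.clamp(0.0, 1.0)
        d = self.d.clamp(0.0, 1.0)
        return w * torch.where(x > 0, x, d * x)

    def regularizer(self):
        """Smooth penalty (an assumption, not the paper's exact term):
        w * (1 - w) pushes each gate toward {0, 1}, and the sum of w
        keeps the number of surviving neurons small."""
        w = self.w.clamp(0.0, 1.0)
        return (w * (1.0 - w)).sum() + w.sum()
```

Under this reading, the training loss would be the task loss plus a weighted sum of the per-layer regularizers (the trade-off weight is hypothetical), and after training, neurons whose gate w has been driven to zero can be pruned outright, shrinking the learned architecture.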

Related articles:
arXiv:2103.04331 [cs.LG] (Published 2021-03-07)
Auto-tuning of Deep Neural Networks by Conflicting Layer Removal
arXiv:1509.08745 [cs.LG] (Published 2015-09-29)
Compression of Deep Neural Networks on the Fly
arXiv:1601.00917 [cs.LG] (Published 2016-01-05)
Distilling Reverse-Mode Automatic Differentiation (DrMAD) for Optimizing Hyperparameters of Deep Neural Networks