arXiv Analytics


arXiv:1807.11459 [cs.CV]

Improving Transferability of Deep Neural Networks

Parijat Dube, Bishwaranjan Bhattacharjee, Elisabeth Petit-Bois, Matthew Hill

Published 2018-07-30, Version 1

Learning from small amounts of labeled data is a challenge in deep learning. It is commonly addressed by transfer learning, where the small target dataset is learned as a transfer task from a larger source dataset. Transfer learning can deliver higher accuracy if the hyperparameters and the source dataset are chosen well. One important hyperparameter is the learning rate for each layer of the neural network. We show through experiments on the ImageNet22k and Oxford Flowers datasets that accuracy improvements in the range of 127% can be obtained by a proper choice of learning rates. We also show that the images-per-label parameter of a dataset can potentially be used to determine the optimal per-layer learning rates that yield the best overall accuracy. We additionally validate this method on a sample of real-world image classification tasks from a public visual recognition API.

Comments: 15 pages, 11 figures, 2 tables, Workshop on Domain Adaptation for Visual Understanding (Joint IJCAI/ECAI/AAMAS/ICML 2018 Workshop)
Keywords: deep learning, transfer learning, finetuning, deep neural network, experimental
Categories: cs.CV, cs.LG, stat.ML
Related articles:
arXiv:1703.06857 [cs.CV] (Published 2017-03-20)
Deep Neural Networks Do Not Recognize Negative Images
arXiv:1605.02699 [cs.CV] (Published 2016-05-09)
A Theoretical Analysis of Deep Neural Networks for Texture Classification
arXiv:1502.02445 [cs.CV] (Published 2015-02-09)
Deep Neural Networks for Anatomical Brain Segmentation