arXiv Analytics


arXiv:1802.03039 [stat.ML]

Imitation networks: Few-shot learning of neural networks from scratch

Akisato Kimura, Zoubin Ghahramani, Koh Takeuchi, Tomoharu Iwata, Naonori Ueda

Published 2018-02-08, Version 1

In this paper, we propose imitation networks, a simple but effective method for training neural networks with a limited amount of training data. Our approach inherits the idea of knowledge distillation, which transfers knowledge from a deep or wide reference model to a shallow or narrow target model. The proposed method employs this idea to mimic the predictions of reference estimators that are much more robust against overfitting than the network we want to train. Unlike almost all previous work on knowledge distillation, which requires a large amount of labeled training data, the proposed method needs only a small amount of training data. Instead, we introduce pseudo training examples that are optimized as part of the model parameters. Experimental results on several benchmark datasets demonstrate that the proposed method outperforms other baselines, such as naive training of the target model and standard knowledge distillation.
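The abstract outlines the core mechanism: distill a robust reference estimator into a small target network while jointly optimizing a set of pseudo training inputs as additional free parameters. Below is a minimal sketch of that idea in PyTorch; it is not the authors' code, and the network sizes, input dimension, loss weighting, and optimizer settings are illustrative assumptions rather than details from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fit_imitation(reference, x_real, y_real,
                  n_pseudo=32, dim=16, n_classes=10, steps=1000):
    # Small target network trained from scratch (sizes are assumptions).
    target = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, n_classes))
    # Pseudo training inputs are free parameters, optimized jointly with the target.
    x_pseudo = nn.Parameter(torch.randn(n_pseudo, dim))
    opt = torch.optim.Adam(list(target.parameters()) + [x_pseudo], lr=1e-3)

    for _ in range(steps):
        opt.zero_grad()
        # Supervised loss on the few labeled examples.
        loss = F.cross_entropy(target(x_real), y_real)
        # Soft predictions of the frozen reference model on real and pseudo inputs.
        # Keeping these detached is a simplification: the pseudo inputs then
        # receive gradients only through the target network.
        with torch.no_grad():
            ref_real = reference(x_real).softmax(dim=-1)
            ref_pseudo = reference(x_pseudo).softmax(dim=-1)
        # Distillation losses: the target imitates the reference's predictions.
        loss = loss + F.kl_div(target(x_real).log_softmax(dim=-1), ref_real,
                               reduction="batchmean")
        loss = loss + F.kl_div(target(x_pseudo).log_softmax(dim=-1), ref_pseudo,
                               reduction="batchmean")
        loss.backward()
        opt.step()
    return target

In this sketch, reference is any callable returning class logits (for example, a larger pre-trained network); x_real and y_real are the small labeled set. The essential point mirrored from the abstract is that x_pseudo sits in the optimizer alongside the target network's weights, so the pseudo examples are shaped by the same training objective rather than drawn from extra data.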

Related articles:
arXiv:1803.10590 [stat.ML] (Published 2018-03-28, updated 2018-11-01)
Feed-forward Uncertainty Propagation in Belief and Neural Networks
arXiv:2009.13500 [stat.ML] (Published 2020-09-28)
A priori estimates for classification problems using neural networks
arXiv:2211.08654 [stat.ML] (Published 2022-11-16)
Prediction and Uncertainty Quantification of SAFARI-1 Axial Neutron Flux Profiles with Neural Networks