arXiv Analytics

arXiv:2006.05525 [cs.LG]

Knowledge Distillation: A Survey

Jianping Gou, Baosheng Yu, Stephen John Maybank, Dacheng Tao

Published 2020-06-09 (Version 1)

In recent years, deep neural networks have been very successful in both industry and academia, especially in applications such as visual recognition and natural language processing. This success largely owes to their scalability to both large-scale data and billions of model parameters. However, deploying these cumbersome models on resource-limited devices, e.g., mobile phones and embedded systems, remains a great challenge, not only because of their high computational complexity but also because of their large storage requirements. To this end, a variety of model compression and acceleration techniques have been developed, such as pruning, quantization, and neural architecture search. Knowledge distillation, a typical model compression and acceleration method, aims to learn a small student model from a large teacher model and has received increasing attention from the community. In this paper, we provide a comprehensive survey of knowledge distillation from the perspectives of knowledge categories, training schemes, distillation algorithms, and applications. Furthermore, we briefly review the challenges in knowledge distillation and offer some insights into future research.
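To make the teacher-student idea concrete, below is a minimal sketch of the classic soft-label distillation loss (temperature-scaled KL divergence combined with the usual cross-entropy), assuming PyTorch. It illustrates the general technique the survey covers rather than any specific algorithm from the paper; the function name and hyperparameter values are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Soft-target KL loss blended with hard-label cross-entropy."""
    # Soften both output distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale the KL term by T^2 so its gradients stay comparable to the CE term.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (temperature ** 2)
    # Standard supervised loss on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In this sketch, `alpha` trades off imitation of the teacher's softened predictions against fitting the ground-truth labels; the survey discusses many variants that distill other forms of knowledge (e.g., intermediate features or relations) beyond output logits.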

Related articles:
arXiv:2006.07556 [cs.LG] (Published 2020-06-13)
Neural Architecture Search using Bayesian Optimisation with Weisfeiler-Lehman Kernel
arXiv:1909.02453 [cs.LG] (Published 2019-09-05)
Best Practices for Scientific Research on Neural Architecture Search
arXiv:1909.03615 [cs.LG] (Published 2019-09-09)
Neural Architecture Search in Embedding Space