{ "id": "2006.05525", "version": "v1", "published": "2020-06-09T21:47:17.000Z", "updated": "2020-06-09T21:47:17.000Z", "title": "Knowledge Distillation: A Survey", "authors": [ "Jianping Gou", "Baosheng Yu", "Stephen John Maybank", "Dacheng Tao" ], "comment": "30 pages, 12 figures", "categories": [ "cs.LG", "stat.ML" ], "abstract": "In recent years, deep neural networks have been very successful in the fields of both industry and academia, especially for the applications of visual recognition and neural language processing. The great success of deep learning mainly owes to its great scalabilities to both large-scale data samples and billions of model parameters. However, it also poses a great challenge for the deployment of these cumbersome deep models on devices with limited resources, e.g., mobile phones and embedded devices, not only because of the great computational complexity but also the storage. To this end, a variety of model compression and acceleration techniques have been developed, such as pruning, quantization, and neural architecture search. As a typical model compression and acceleration method, knowledge distillation aims to learn a small student model from a large teacher model and has received increasing attention from the community. In this paper, we provide a comprehensive survey on knowledge distillation from the perspectives of different knowledge categories, training schemes, distillation algorithms, as well as applications. Furthermore, we briefly review challenges in knowledge distillation and provide some insights on the subject of future study.", "revisions": [ { "version": "v1", "updated": "2020-06-09T21:47:17.000Z" } ], "analyses": { "keywords": [ "model compression", "small student model", "knowledge distillation aims", "neural architecture search", "great computational complexity" ], "note": { "typesetting": "TeX", "pages": 30, "language": "en", "license": "arXiv", "status": "editable" } } }