arXiv:2001.04974 [cs.LG]

Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation

Chuteng Zhou, Prad Kadambi, Matthew Mattina, Paul N. Whatmough

Published 2020-01-14 (Version 1)

The success of deep learning has brought forth a wave of interest in computer hardware design to better meet the high demands of neural network inference. In particular, analog computing hardware, built from electronic, optical, or photonic devices, has been proposed specifically for accelerating neural networks, since it may achieve lower power consumption than conventional digital electronics. However, these proposed analog accelerators suffer from the intrinsic noise generated by their physical components, which makes it challenging to achieve high accuracy on deep neural networks. Hence, for successful deployment on analog accelerators, it is essential to be able to train deep neural networks that are robust to random continuous noise in the network weights, a relatively new challenge in machine learning. In this paper, we advance the understanding of noisy neural networks. We outline how a noisy neural network has reduced learning capacity as a result of the loss of mutual information between its input and output. To combat this, we propose combining knowledge distillation with noise injection during training to obtain more noise-robust networks, which we demonstrate experimentally across different networks and datasets, including ImageNet. Our method achieves models with up to two times greater noise tolerance than the previous best attempts, a significant step towards making analog hardware practical for deep learning.
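The training recipe described in the abstract, injecting weight noise while distilling from a clean teacher, can be illustrated with a short PyTorch snippet. This is a minimal sketch, not the authors' implementation: it assumes additive Gaussian weight noise with standard deviation proportional to each parameter's spread, a noise-free teacher, and a standard Hinton-style soft-target loss; the function name distill_step and the values of eta, T, and alpha are illustrative, not taken from the paper.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_step(student, teacher, x, y, optimizer, eta=0.1, T=4.0, alpha=0.9):
    """One training step: noisy student forward pass, distillation + CE loss."""
    # Save clean weights, then inject additive Gaussian noise in-place
    # (std proportional to each parameter's spread, a common analog-noise model).
    saved = [p.detach().clone() for p in student.parameters()]
    with torch.no_grad():
        for p in student.parameters():
            p.add_(torch.randn_like(p) * eta * p.std())

    with torch.no_grad():
        teacher_logits = teacher(x)      # teacher runs noise-free
    student_logits = student(x)          # student runs with noisy weights

    # Hinton-style soft-target loss plus the usual hard-label term.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, y)
    loss = alpha * kd + (1.0 - alpha) * ce

    optimizer.zero_grad()
    loss.backward()

    # Restore the clean weights before the update, so gradients computed at
    # the noisy point are applied to the underlying noise-free weights.
    with torch.no_grad():
        for p, w in zip(student.parameters(), saved):
            p.copy_(w)
    optimizer.step()
    return loss.item()

# Toy usage: an MNIST-sized teacher distilled into an identically shaped student.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
student = copy.deepcopy(teacher)
opt = torch.optim.SGD(student.parameters(), lr=0.01)
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
print(distill_step(student, teacher, x, y, opt))
```

In this sketch the noise acts purely as a training-time perturbation: the clean weights are restored before the optimizer update, so the model learns parameters whose predictions remain stable when the hardware later adds its own noise.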
