arXiv:2002.05712 [cs.LG]

Cross-Iteration Batch Normalization

Zhuliang Yao, Yue Cao, Shuxin Zheng, Gao Huang, Stephen Lin

Published 2020-02-13, Version 1

A well-known issue of Batch Normalization is its significantly reduced effectiveness in the case of small mini-batch sizes. When a mini-batch contains few examples, the statistics upon which the normalization is defined cannot be reliably estimated from it during a training iteration. To address this problem, we present Cross-Iteration Batch Normalization (CBN), in which examples from multiple recent iterations are jointly utilized to enhance estimation quality. A challenge of computing statistics over multiple iterations is that the network activations from different iterations are not comparable to each other due to changes in network weights. We thus compensate for the network weight changes via a proposed technique based on Taylor polynomials, so that the statistics can be accurately estimated and batch normalization can be effectively applied. On object detection and image classification with small mini-batch sizes, CBN is found to outperform the original batch normalization and a direct calculation of statistics over previous iterations without the proposed compensation technique.
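To make the compensation idea concrete, below is a minimal PyTorch-style sketch of cross-iteration statistics pooling with a first-order Taylor correction, following the abstract's description. It is an illustration under stated assumptions, not the authors' implementation: the class name `CrossIterationBN`, the `window` parameter, and the buffer layout are hypothetical, and the sketch assumes training mode (so gradients of the statistics with respect to the layer's weights are available).

```python
import torch
import torch.nn as nn

class CrossIterationBN(nn.Module):
    """Illustrative sketch: pool per-channel statistics across recent
    iterations, compensating stale statistics for weight updates via a
    first-order Taylor expansion."""

    def __init__(self, conv: nn.Conv2d, window: int = 3, eps: float = 1e-5):
        super().__init__()
        self.conv, self.window, self.eps = conv, window, eps
        self.buffer = []  # recent (mu, nu, dmu/dw, dnu/dw, weight snapshot)
        c = conv.out_channels
        self.gamma = nn.Parameter(torch.ones(c))
        self.beta = nn.Parameter(torch.zeros(c))

    def forward(self, x):
        y = self.conv(x)
        w = self.conv.weight
        mu = y.mean(dim=(0, 2, 3))        # per-channel mean
        nu = (y * y).mean(dim=(0, 2, 3))  # per-channel mean of squares
        # Gradients of the statistics w.r.t. the layer's own weights;
        # output channel c depends only on w[c], so grad of mu.sum()
        # at w[c] equals d(mu[c])/d(w[c]).
        dmu = torch.autograd.grad(mu.sum(), w, retain_graph=True)[0]
        dnu = torch.autograd.grad(nu.sum(), w, retain_graph=True)[0]
        mus, nus = [mu], [nu]
        for mu_p, nu_p, dmu_p, dnu_p, w_p in self.buffer:
            dw = (w - w_p).detach()
            # Approximate what the old statistics would be under the
            # current weights: f(w) ~ f(w_p) + <f'(w_p), w - w_p>.
            mus.append(mu_p + (dmu_p * dw).sum(dim=(1, 2, 3)))
            nus.append(nu_p + (dnu_p * dw).sum(dim=(1, 2, 3)))
        mu_bar = torch.stack(mus).mean(dim=0)
        # Keep the variance estimate valid: E[y^2] >= (E[y])^2.
        nu_bar = torch.maximum(torch.stack(nus).mean(dim=0), mu_bar * mu_bar)
        var = nu_bar - mu_bar * mu_bar
        shape = (1, -1, 1, 1)
        y_hat = (y - mu_bar.view(shape)) / torch.sqrt(var + self.eps).view(shape)
        # Store detached copies for compensation at later iterations.
        self.buffer = [(mu.detach(), nu.detach(), dmu.detach(),
                        dnu.detach(), w.detach().clone())] + self.buffer[: self.window - 1]
        return self.gamma.view(shape) * y_hat + self.beta.view(shape)
```

With `window = 1` this reduces to ordinary batch normalization over the current mini-batch; larger windows pool the compensated statistics of recent iterations, which is the effective-batch-size enlargement the abstract describes.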

Related articles:
arXiv:1402.7025 [cs.LG] (Published 2014-02-26, updated 2014-03-04)
Exploiting the Statistics of Learning and Inference
arXiv:1911.00482 [cs.LG] (Published 2019-11-01)
High-dimensional Nonlinear Profile Monitoring based on Deep Probabilistic Autoencoders
arXiv:1701.02886 [cs.LG] (Published 2017-01-11)
The empirical Christoffel function in Statistics and Machine Learning