arXiv:2002.11102 [cs.LG]

On Feature Normalization and Data Augmentation

Boyi Li, Felix Wu, Ser-Nam Lim, Serge Belongie, Kilian Q. Weinberger

Published 2020-02-25 (Version 1)

Modern neural network training relies heavily on data augmentation for improved generalization. After the initial success of label-preserving augmentations, there has been a recent surge of interest in label-perturbing approaches, which combine features and labels across training samples to smooth the learned decision surface. In this paper, we propose a new augmentation method that leverages the first and second moments extracted and re-injected by feature normalization. We replace the moments of the learned features of one training image by those of another, and also interpolate the target labels. As our approach is fast, operates entirely in feature space, and mixes different signals than prior methods, one can effectively combine it with existing augmentation methods. We demonstrate its efficacy across benchmark data sets in computer vision, speech, and natural language processing, where it consistently improves the generalization performance of highly competitive baseline networks.
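The core operation described in the abstract is compact enough to sketch. The snippet below is an illustrative sketch, not the authors' released code: it extracts per-position mean and standard deviation across channels (as a positional-normalization-style layer would), swaps these moments between randomly paired samples in a batch, and leaves the label interpolation to the loss. The function name, the interpolation weight `lam`, and the specific choice of normalization are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def moment_exchange(features, labels, eps=1e-5):
    # Hypothetical helper illustrating the moment-swap idea.
    # features: (B, C, H, W) activations from an intermediate layer
    # labels:   (B,) integer class labels
    B = features.size(0)
    perm = torch.randperm(B, device=features.device)  # random partner per sample

    # First and second moments as a feature-normalization layer would
    # extract them: here one mean/std per sample and spatial position,
    # computed across channels (a positional-normalization-style choice).
    mean = features.mean(dim=1, keepdim=True)                # (B, 1, H, W)
    std = features.var(dim=1, keepdim=True).add(eps).sqrt()  # (B, 1, H, W)

    # Normalize each sample, then re-inject the partner's moments.
    normalized = (features - mean) / std
    mixed = normalized * std[perm] + mean[perm]

    return mixed, labels[perm]
```

In training, the swapped features flow through the rest of the network unchanged, and the two labels are interpolated in the loss, e.g. `lam * F.cross_entropy(out, y) + (1 - lam) * F.cross_entropy(out, y_perm)` for some weight `lam` in (0, 1); because the operation touches only feature moments, it composes with input-space augmentations already in the pipeline.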

Related articles:
arXiv:1904.09135 [cs.LG] (Published 2019-04-19)
Data Augmentation Using GANs
arXiv:2004.04795 [cs.LG] (Published 2020-04-09)
Exemplar VAEs for Exemplar based Generation and Data Augmentation
arXiv:2207.07875 [cs.LG] (Published 2022-07-16)
On the Importance of Hyperparameters and Data Augmentation for Self-Supervised Learning