arXiv:1605.06465 [cs.CV]

Swapout: Learning an ensemble of deep architectures

Saurabh Singh, Derek Hoiem, David Forsyth

Published 2016-05-20 (Version 1)

We describe Swapout, a new stochastic training method that outperforms ResNets of identical network structure, yielding impressive results on CIFAR-10 and CIFAR-100. Swapout samples from a rich set of architectures that includes dropout, stochastic depth, and residual architectures as special cases. When viewed as a regularization method, swapout inhibits co-adaptation of units not only within a layer, as dropout does, but also across network layers. We conjecture that swapout achieves strong regularization by implicitly tying the parameters across layers. When viewed as an ensemble training method, it samples a much richer set of architectures than existing methods such as dropout or stochastic depth. We propose a parameterization that reveals connections to existing architectures and suggests a much richer set of architectures to explore. We show that our formulation admits an efficient training method, and we validate our conclusions on CIFAR-10 and CIFAR-100, matching state-of-the-art accuracy. Remarkably, our 32-layer wider model performs similarly to a 1001-layer ResNet model.
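The claim that dropout, stochastic depth, and residual networks arise as special cases follows from the paper's parameterization, which mixes a unit's input and its residual transform with independent random masks: Y = Θ₁ ⊙ X + Θ₂ ⊙ F(X), where Θ₁ and Θ₂ are Bernoulli tensors sampled per unit. The sketch below illustrates this rule in PyTorch; the module name Swapout2d, the residual_fn argument, and the deterministic inference rule are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class Swapout2d(nn.Module):
    """Sketch of the swapout mixing rule Y = T1*X + T2*F(X).

    T1 and T2 are independent per-unit Bernoulli masks, so each unit can
    pass its input only (skip), its transform only (feedforward), both
    (residual), or neither (drop), which is how the sampled architectures
    span dropout, stochastic depth, and ResNets as special cases.
    """

    def __init__(self, residual_fn, p1=0.5, p2=0.5):
        super().__init__()
        self.residual_fn = residual_fn  # F(x), e.g. a conv block (assumed)
        self.p1 = p1  # keep probability for the identity path
        self.p2 = p2  # keep probability for the transformed path

    def forward(self, x):
        fx = self.residual_fn(x)
        if self.training:
            # Sample independent Bernoulli masks, one entry per unit.
            t1 = torch.bernoulli(torch.full_like(x, self.p1))
            t2 = torch.bernoulli(torch.full_like(fx, self.p2))
            return t1 * x + t2 * fx
        # Deterministic inference: replace each mask by its expectation.
        # (The paper instead advocates stochastic inference, averaging
        # several sampled forward passes, which it finds more accurate.)
        return self.p1 * x + self.p2 * fx
```

Fixing both masks to 1 recovers a plain residual block; sampling Θ₂ once per layer with Θ₁ = 1 gives stochastic depth; and sharing a single mask, Θ₁ = Θ₂, corresponds to dropout applied to a residual unit's output.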

Related articles:
arXiv:2309.11851 [cs.CV] (Published 2023-09-21)
DEYOv3: DETR with YOLO for Real-time Object Detection
arXiv:1609.09018 [cs.CV] (Published 2016-09-28)
Deep Architectures for Face Attributes
arXiv:2206.06430 [cs.CV] (Published 2022-06-13)
A Training Method For VideoPose3D With Ideology of Action Recognition