arXiv:2302.01327 [cs.CV]

Dual PatchNorm

Manoj Kumar, Mostafa Dehghani, Neil Houlsby

Published 2023-02-02 (Version 1)

We propose Dual PatchNorm: two Layer Normalization layers (LayerNorms), placed before and after the patch embedding layer in Vision Transformers. We demonstrate that Dual PatchNorm outperforms the result of an exhaustive search over alternative LayerNorm placement strategies within the Transformer block itself. In our experiments, this trivial modification often improves accuracy over well-tuned Vision Transformers and never hurts.
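The modification the abstract describes is small enough to show in a few lines: wrap the patch embedding in a LayerNorm over the flattened patch pixels and a LayerNorm over the resulting embeddings. Below is a minimal PyTorch sketch of that idea, assuming a standard non-overlapping patchify step; the class and parameter names (DualPatchNormEmbed, norm_pre, norm_post) are illustrative assumptions, not identifiers from the authors' implementation.

```python
import torch
import torch.nn as nn


class DualPatchNormEmbed(nn.Module):
    """Patch embedding with the two LayerNorms described in the abstract:
    one over the flattened patch pixels before the embedding projection,
    and one over the patch embeddings after it. Illustrative sketch only;
    names are assumptions, not taken from the authors' code.
    """

    def __init__(self, patch_size: int = 16, in_chans: int = 3, embed_dim: int = 768):
        super().__init__()
        patch_dim = in_chans * patch_size * patch_size
        self.patch_size = patch_size
        self.norm_pre = nn.LayerNorm(patch_dim)      # LayerNorm before patch embedding
        self.proj = nn.Linear(patch_dim, embed_dim)  # the patch embedding layer itself
        self.norm_post = nn.LayerNorm(embed_dim)     # LayerNorm after patch embedding

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape  # (batch, channels, height, width)
        p = self.patch_size
        # Cut the image into non-overlapping p x p patches and flatten each one.
        x = x.unfold(2, p, p).unfold(3, p, p)  # (b, c, h/p, w/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)
        # LN -> linear patch projection -> LN: the "Dual PatchNorm" wrapper.
        return self.norm_post(self.proj(self.norm_pre(x)))


# A 224x224 image with 16x16 patches yields 196 tokens of width 768.
tokens = DualPatchNormEmbed()(torch.randn(2, 3, 224, 224))
assert tokens.shape == (2, 196, 768)
```

The rest of the Vision Transformer is unchanged; the two normalizations apply only around the patch embedding, which is what makes the strategy cheap to try on an already well-tuned model.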

Related articles:
arXiv:2104.06637 [cs.CV] (Published 2021-04-14)
Decoupled Spatial-Temporal Transformer for Video Inpainting
Rui Liu et al.
arXiv:2107.08623 [cs.CV] (Published 2021-07-19)
LeViT-UNet: Make Faster Encoders with Transformer for Medical Image Segmentation
arXiv:2209.02197 [cs.CV] (Published 2022-09-06)
LRT: An Efficient Low-Light Restoration Transformer for Dark Light Field Images