arXiv Analytics

arXiv:2111.12994 [cs.CV]

NomMer: Nominate Synergistic Context in Vision Transformer for Visual Recognition

Hao Liu, Xinghua Jiang, Xin Li, Zhimin Bao, Deqiang Jiang, Bo Ren

Published 2021-11-25, updated 2022-03-14 (version 2)

Recently, Vision Transformers (ViTs), with self-attention (SA) as the de facto core ingredient, have demonstrated great potential in the computer vision community. To trade off efficiency against performance, a group of works performs the SA operation only within local patches, abandoning the global contextual information that is indispensable for visual recognition tasks. To solve this issue, subsequent global-local ViTs attempt to marry local SA with global SA in a parallel or alternating fashion. Nevertheless, the exhaustively combined local and global context may be redundant for various visual data, and the receptive field within each layer is fixed. A more graceful alternative is to let global and local context adaptively contribute, per se, to accommodate different visual data. To achieve this goal, we propose in this paper a novel ViT architecture, termed NomMer, which can dynamically Nominate the synergistic global-local context in a vision transforMer. By investigating the working pattern of the proposed NomMer, we further explore what context information is focused on. Benefiting from this "dynamic nomination" mechanism, without bells and whistles, NomMer not only achieves 84.5% Top-1 classification accuracy on ImageNet with only 73M parameters, but also shows promising performance on dense prediction tasks, i.e., object detection and semantic segmentation. The code and models will be made publicly available at https://github.com/TencentYoutuResearch/VisualRecognition-NomMer
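The core idea of mixing local and global context per token can be sketched as follows. This is purely an illustrative toy, not the paper's implementation: the function name `nominate_context`, the window size, and the data-dependent sigmoid gate are all assumptions standing in for NomMer's learned nomination module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product self-attention.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def nominate_context(tokens, window=4):
    """Toy 'dynamic nomination': compute global and local (windowed)
    self-attention, then mix them per token with a data-dependent gate.
    In NomMer the nomination is learned; here a sigmoid of the token
    mean serves only to make the mixing adaptive for illustration."""
    n, d = tokens.shape
    # Global context: full self-attention over all tokens.
    global_ctx = attention(tokens, tokens, tokens)
    # Local context: attention restricted to non-overlapping windows.
    local_ctx = np.zeros_like(tokens)
    for start in range(0, n, window):
        w = tokens[start:start + window]
        local_ctx[start:start + window] = attention(w, w, w)
    # Per-token gate in (0, 1) nominates global vs. local context.
    gate = 1.0 / (1.0 + np.exp(-tokens.mean(axis=-1, keepdims=True)))
    return gate * global_ctx + (1.0 - gate) * local_ctx

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))   # 8 tokens, 16-dim embeddings
out = nominate_context(x)
print(out.shape)  # (8, 16)
```

Because the gate varies per token, each layer's effective receptive field adapts to the input rather than being fixed, which is the motivation the abstract gives for nomination over exhaustively stacking local and global SA.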

Related articles: Most relevant | Search more
arXiv:2203.05922 [cs.CV] (Published 2022-03-11)
Visualizing and Understanding Patch Interactions in Vision Transformer
arXiv:2304.04354 [cs.CV] (Published 2023-04-10)
ViT-Calibrator: Decision Stream Calibration for Vision Transformer
arXiv:2303.14341 [cs.CV] (Published 2023-03-25)
Towards Accurate Post-Training Quantization for Vision Transformer