

arXiv:2410.17628 [cs.LG]

Feature Learning in Attention Mechanisms Is More Compact and Stable Than in Convolution

Baiyuan Chen

Published 2024-10-23, Version 1

Attention and convolution are fundamental techniques in machine learning. While they learn features in different ways (attention mechanisms capture both global and local relationships in the data, whereas convolutional layers focus on local patterns), both are effective for a wide range of tasks. Although the feature learning of each model has been well studied individually, their feature learning dynamics have not been compared directly. In this paper, we compare their Lipschitz continuity with respect to the Wasserstein distance and their covering numbers under similar settings. We demonstrate that attention processes data in a more compact and stable manner. Compactness refers to the lower variance and intrinsic dimensionality of the activation outputs, while stability refers to how much the outputs change in response to changes in the inputs. We validate our findings through experiments using topological data analysis, measuring the 1-, 2-, and infinity-Wasserstein distances between the outputs of each layer of both models. Furthermore, we extend our comparison to Vision Transformers (ViTs) and ResNets, showing that while ViTs have higher output variance, their feature learning is more stable than that of ResNets.
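
The abstract itself contains no code; as a rough illustration of the kind of layer-wise comparison it describes, the sketch below uses the GUDHI library (with the POT backend for Wasserstein distances between persistence diagrams) on randomly generated placeholder activations. The function names, array shapes, and Rips-complex parameters are assumptions for illustration only, not the paper's actual pipeline.

```python
import numpy as np
import gudhi
from gudhi.wasserstein import wasserstein_distance


def persistence_diagram(activations, homology_dim=1, max_edge_length=6.0):
    """Dimension-`homology_dim` Rips persistence diagram of a set of activations."""
    rips = gudhi.RipsComplex(points=activations, max_edge_length=max_edge_length)
    tree = rips.create_simplex_tree(max_dimension=homology_dim + 1)
    tree.persistence()  # compute all persistence pairs
    diagram = tree.persistence_intervals_in_dimension(homology_dim)
    # Drop bars that never die within the filtration (death = inf)
    return diagram[np.isfinite(diagram[:, 1])]


# Placeholder activations standing in for one layer's outputs from two models
# (e.g., an attention block vs. a convolutional block); each row is a sample's
# feature vector. In practice these would be activations collected from the
# corresponding layers of the trained models.
rng = np.random.default_rng(0)
acts_attention = rng.normal(size=(100, 8))
acts_convolution = rng.normal(size=(100, 8))

dgm_a = persistence_diagram(acts_attention)
dgm_c = persistence_diagram(acts_convolution)

w1 = wasserstein_distance(dgm_a, dgm_c, order=1.0, internal_p=2.0)
w2 = wasserstein_distance(dgm_a, dgm_c, order=2.0, internal_p=2.0)
w_inf = gudhi.bottleneck_distance(dgm_a, dgm_c)  # infinity-Wasserstein (bottleneck)

print(f"W1={w1:.4f}  W2={w2:.4f}  W_inf={w_inf:.4f}")
```

Repeating such a comparison layer by layer is one way to track how the two architectures' representations diverge with depth, in the spirit of the experiments described above.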

Related articles:
arXiv:1809.03267 [cs.LG] (Published 2018-09-07)
Feature Learning for Meta-Paths in Knowledge Graphs
arXiv:2403.03375 [cs.LG] (Published 2024-03-05, updated 2024-06-16)
Complexity Matters: Dynamics of Feature Learning in the Presence of Spurious Correlations
arXiv:2011.14522 [cs.LG] (Published 2020-11-30)
Feature Learning in Infinite-Width Neural Networks