arXiv Analytics


arXiv:2204.08624 [cs.LG]

Topology and geometry of data manifold in deep learning

German Magai, Anton Ayzenberg

Published 2022-04-19 (Version 1)

Despite significant advances in deep learning across many application domains, explaining the inner processes of deep learning models remains an important open question. The purpose of this article is to describe and substantiate a geometric and topological view of the learning process of neural networks. Our attention is focused on the internal representations of neural networks and on the dynamics of changes in the topology and geometry of the data manifold across layers. We also propose a method for assessing the generalization ability of neural networks based on topological descriptors. In this paper we use the concepts of topological data analysis and intrinsic dimension, and we present a wide range of experiments on different datasets and different configurations of convolutional neural network architectures. In addition, we consider the geometry of adversarial attacks in the classification task and of spoofing attacks on face recognition systems. Our work is a contribution to the development of explainable and interpretable AI, illustrated through the example of computer vision.
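To make the two kinds of descriptors named in the abstract concrete, the sketch below shows one common way to measure them on a layer's activations: the TwoNN maximum-likelihood estimator of intrinsic dimension (Facco et al., 2017) and approximate Betti counts from persistent homology via the ripser package. This is a minimal illustration under our own assumptions, not the authors' exact pipeline; the names `two_nn_dimension`, `betti_counts`, and the `activations` dictionary are hypothetical placeholders for features extracted from a trained CNN.

```python
# Sketch only: track intrinsic dimension and rough topology of layer activations.
# Assumes `pip install scikit-learn ripser`; the data here is synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from ripser import ripser


def two_nn_dimension(X: np.ndarray) -> float:
    """TwoNN maximum-likelihood estimate of intrinsic dimension."""
    # Distances to the two nearest neighbours of each point (index 0 is the point itself).
    dist, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    mu = dist[:, 2] / dist[:, 1]          # ratio of 2nd to 1st neighbour distance
    return len(mu) / np.sum(np.log(mu))   # MLE of the Pareto exponent = dimension


def betti_counts(X: np.ndarray, maxdim: int = 1, min_persistence: float = 0.1):
    """Count persistent-homology features whose lifetime exceeds a threshold."""
    dgms = ripser(X, maxdim=maxdim)["dgms"]
    counts = []
    for dgm in dgms:
        life = dgm[:, 1] - dgm[:, 0]
        counts.append(int(np.sum(life[np.isfinite(life)] > min_persistence)))
    return counts  # [approx. Betti_0, approx. Betti_1, ...]


if __name__ == "__main__":
    # Hypothetical usage: each entry would hold flattened feature vectors
    # collected from one layer of a trained CNN on a batch of test images.
    rng = np.random.default_rng(0)
    activations = {"layer1": rng.normal(size=(500, 64)),
                   "layer4": rng.normal(size=(500, 512))}
    for name, feats in activations.items():
        print(name, two_nn_dimension(feats), betti_counts(feats))
```

Comparing these numbers layer by layer is one way to observe how a network simplifies the data manifold in depth; the persistence threshold and the choice of layers are free parameters, not values taken from the paper.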

Related articles:
arXiv:2308.13792 [cs.LG] (Published 2023-08-26)
Out-of-distribution detection using normalizing flows on the data manifold
arXiv:1812.05836 [cs.LG] (Published 2018-12-14)
Rethinking Layer-wise Feature Amounts in Convolutional Neural Network Architectures
arXiv:2210.07100 [cs.LG] (Published 2022-10-13)
Dissipative residual layers for unsupervised implicit parameterization of data manifolds