arXiv Analytics

arXiv:1902.08160 [cs.LG]

Topology of Learning in Artificial Neural Networks

Maxime Gabella, Nitya Afambo, Stefania Ebli, Gard Spreemann

Published 2019-02-21 (Version 1)

Understanding how neural networks learn remains one of the central challenges in machine learning research. Starting from random values, the weights of a neural network evolve during training so as to perform a variety of tasks, such as classifying images. Here we study the emergence of structure in the weights by applying methods from topological data analysis. We train simple feedforward neural networks on the MNIST dataset and monitor the evolution of their weights. When initialized to zero, the weights follow trajectories that branch off recurrently, generating trees that describe the growth of the effective capacity of each layer. When initialized to tiny random values, the weights instead evolve smoothly along two-dimensional surfaces. We show that natural coordinates on these learning surfaces correspond to important factors of variation.
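The basic setup described above — train a small feedforward network and record a snapshot of its weights at every step, so the trajectories in weight space can later be fed to topological tools — can be sketched as follows. This is not the authors' code: the synthetic two-class dataset (in place of MNIST), the layer sizes, the learning rate, and the number of steps are all illustrative assumptions.

```python
# Sketch (not the authors' implementation): track the trajectory of every
# weight of a tiny feedforward network during training, producing the raw
# point clouds that topological data analysis would then examine.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MNIST: 2-D points labeled by which side of a
# line they fall on.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer; "tiny random values" initialization, as in the abstract.
W1 = 1e-3 * rng.normal(size=(2, 8))
W2 = 1e-3 * rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
snapshots = []  # one flattened weight vector per training step

for step in range(100):
    # Forward pass.
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)
    # Backward pass for the logistic loss (mean over the batch).
    dp = (p - y) / len(X)
    dW2 = h.T @ dp
    dh = (dp @ W2.T) * (1.0 - h**2)
    dW1 = X.T @ dh
    W1 -= lr * dW1
    W2 -= lr * dW2
    # Record the full weight vector: each row of `traj` below is a point
    # in weight space, and the sequence of rows is the learning trajectory.
    snapshots.append(np.concatenate([W1.ravel(), W2.ravel()]))

traj = np.stack(snapshots)  # shape: (steps, number_of_weights)
```

The array `traj` (here 100 steps by 24 weights) is the kind of object on which one could run standard TDA pipelines, e.g. building a filtered complex on the trajectory points and computing persistent homology.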

Related articles:
arXiv:2108.01724 [cs.LG] (Published 2021-08-03)
Approximating Attributed Incentive Salience In Large Scale Scenarios. A Representation Learning Approach Based on Artificial Neural Networks
arXiv:2004.07692 [cs.LG] (Published 2020-04-16)
A Hybrid Objective Function for Robustness of Artificial Neural Networks -- Estimation of Parameters in a Mechanical System
arXiv:1904.12770 [cs.LG] (Published 2019-04-29)
A Review of Modularization Techniques in Artificial Neural Networks