arXiv:2201.11813 [cs.LG]

Eigenvalues of Autoencoders in Training and at Initialization

Benjamin Dees, Susama Agarwala, Corey Lowman

Published 2022-01-27 (Version 1)

In this paper, we investigate the evolution of autoencoders near their initialization. In particular, we study the distribution of the eigenvalues of the Jacobian matrices of autoencoders early in the training process, trained on the MNIST data set. We find that autoencoders that have not been trained have eigenvalue distributions that are qualitatively different from those of autoencoders trained for a long time ($>$100 epochs). Additionally, we find that even at early epochs, these eigenvalue distributions rapidly become qualitatively similar to those of the fully trained autoencoders. We also compare the eigenvalues at initialization to pertinent theoretical work on the eigenvalues of random matrices and the products of such matrices.
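The quantity studied in the abstract can be sketched concretely. For a one-hidden-layer autoencoder the Jacobian of the input-to-output map has a closed form, and its eigenvalues at a random initialization can be computed directly. The architecture, sizes, and initialization scheme below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy autoencoder 784 -> 32 -> 784 with tanh activation;
# the paper's exact architecture and initialization are not specified here.
n_in, n_hid = 784, 32
W1 = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_hid, n_in))
b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 1.0 / np.sqrt(n_hid), (n_in, n_hid))
b2 = np.zeros(n_in)

def jacobian(x):
    # For f(x) = W2 tanh(W1 x + b1) + b2, the Jacobian is
    # J(x) = W2 diag(1 - tanh^2(W1 x + b1)) W1, an n_in x n_in matrix.
    h = np.tanh(W1 @ x + b1)
    return W2 @ np.diag(1.0 - h**2) @ W1

x = rng.normal(size=n_in)              # stand-in for a flattened MNIST image
eigs = np.linalg.eigvals(jacobian(x))  # complex in general: J is not symmetric

# J factors through the 32-dimensional bottleneck, so its rank is at most 32
# and all but at most 32 eigenvalues are (numerically) zero.
n_nonzero = np.count_nonzero(np.abs(eigs) > 1e-6)
print(eigs.shape, n_nonzero)
```

Note that because the Jacobian is a product of random-like matrices at initialization, its spectrum is the object one would compare against random matrix theory, as the abstract suggests.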

Comments: 11 pages, 3 figures
Categories: cs.LG