arXiv Analytics


arXiv:1806.02012 [cs.LG]

A Peek Into the Hidden Layers of a Convolutional Neural Network Through a Factorization Lens

Uday Singh Saini, Evangelos E. Papalexakis

Published 2018-06-06, Version 1

Despite their increasing popularity and success in a variety of supervised learning problems, deep neural networks are extremely hard to interpret and debug: given an already trained deep neural network and a set of test inputs, how can we gain insight into how those inputs interact with different layers of the network? Furthermore, can we characterize a given deep neural network based on its observed behavior on different inputs? In this paper we propose a novel factorization-based approach to understanding how different deep neural networks operate. In our preliminary results, we identify fascinating patterns that link the factorization rank (typically used as a measure of interestingness in unsupervised data analysis) with how well or poorly the deep network has been trained. Finally, our proposed approach can help provide visual insights into how high-level, interpretable patterns of the network's input behave inside the hidden layers of the deep network.
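The abstract does not spell out the factorization machinery, but the "factorization lens" idea can be illustrated: stack a layer's activations over a batch of test inputs into a nonnegative matrix (e.g. post-ReLU outputs), factorize it at several ranks, and watch how well each rank explains the activations. A minimal sketch under stated assumptions: the `nmf` helper (multiplicative-update NMF) and the synthetic activation matrix are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def nmf(V, rank, n_iter=300, seed=0, eps=1e-9):
    """Nonnegative matrix factorization V ~ W @ H via multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Standard Lee-Seung multiplicative update rules (keep factors nonnegative).
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical stand-in for one hidden layer's activations over a test batch:
# rows = test inputs, columns = hidden units (nonnegative, as after a ReLU).
rng = np.random.default_rng(1)
activations = rng.random((8, 3)) @ rng.random((3, 10))  # synthetic, true rank 3

for r in (1, 2, 3):
    W, H = nmf(activations, r)
    err = np.linalg.norm(activations - W @ H)
    print(f"rank {r}: reconstruction error {err:.4f}")
```

Sweeping the rank and tracking reconstruction quality is the kind of diagnostic the abstract hints at: the paper's preliminary results tie the factorization rank to how well or poorly the network has been trained.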

Related articles:
arXiv:1912.05687 [cs.LG] (Published 2019-12-11)
REFINED (REpresentation of Features as Images with NEighborhood Dependencies): A novel feature representation for Convolutional Neural Networks
arXiv:1809.01564 [cs.LG] (Published 2018-09-05)
Traffic Density Estimation using a Convolutional Neural Network
arXiv:1809.04440 [cs.LG] (Published 2018-09-10)
Convolutional Neural Networks for Fast Approximation of Graph Edit Distance