{ "id": "1806.02012", "version": "v1", "published": "2018-06-06T05:27:38.000Z", "updated": "2018-06-06T05:27:38.000Z", "title": "A Peek Into the Hidden Layers of a Convolutional Neural Network Through a Factorization Lens", "authors": [ "Uday Singh Saini", "Evangelos E. Papalexakis" ], "categories": [ "cs.LG", "cs.CV", "stat.ML" ], "abstract": "Despite their increasing popularity and success in a variety of supervised learning problems, deep neural networks are extremely hard to interpret and debug: Given and already trained Deep Neural Net, and a set of test inputs, how can we gain insight into how those inputs interact with different layers of the neural network? Furthermore, can we characterize a given deep neural network based on it's observed behavior on different inputs? In this paper we propose a novel factorization based approach on understanding how different deep neural networks operate. In our preliminary results, we identify fascinating patterns that link the factorization rank (typically used as a measure of interestingness in unsupervised data analysis) with how well or poorly the deep network has been trained. Finally, our proposed approach can help provide visual insights on how high-level. interpretable patterns of the network's input behave inside the hidden layers of the deep network.", "revisions": [ { "version": "v1", "updated": "2018-06-06T05:27:38.000Z" } ], "analyses": { "keywords": [ "convolutional neural network", "hidden layers", "factorization lens", "networks input behave inside", "deep network" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }