arXiv Analytics

arXiv:1711.06104 [cs.LG]

A unified view of gradient-based attribution methods for Deep Neural Networks

Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Gross

Published 2017-11-16 (Version 1)

Understanding the flow of information in Deep Neural Networks is a challenging problem that has gained increasing attention over the last few years. While several methods have been proposed to explain network predictions, only a few attempts have been made to analyze them from a theoretical perspective. In this work we analyze various state-of-the-art attribution methods and prove previously unexplored connections between them. We also show how some methods can be reformulated and more conveniently implemented. Finally, we perform an empirical evaluation of six attribution methods on a variety of tasks and architectures and discuss their strengths and limitations.
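To make the idea of a gradient-based attribution method concrete, here is a minimal sketch of the common "gradient * input" formulation, which the paper relates to several other methods. The toy model, weights, and the use of finite differences in place of automatic differentiation are illustrative assumptions, not details from the paper:

```python
# Hedged sketch: "gradient * input" attribution for a toy model.
# Real implementations would use a deep network and autodiff (e.g. a
# framework's backward pass); here a hypothetical one-unit ReLU model
# and a central finite difference stand in for both.

def model(x):
    # Toy network: a single ReLU unit over two inputs (weights are made up).
    w = [2.0, -1.0]
    z = sum(wi * xi for wi, xi in zip(w, x))
    return max(z, 0.0)

def grad_times_input(f, x, eps=1e-6):
    # Approximate each partial derivative of f at x by central differences,
    # then multiply elementwise by the input to obtain the attribution.
    attributions = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        g = (f(xp) - f(xm)) / (2 * eps)
        attributions.append(g * x[i])
    return attributions

attr = grad_times_input(model, [1.0, 0.5])
# For this input the ReLU is active, so the gradient is just the weights
# and the attributions are [2.0 * 1.0, -1.0 * 0.5] = [2.0, -0.5].
```

Note that for this piecewise-linear model the attributions sum to the output (2.0 - 0.5 = 1.5), an instance of the completeness-style properties the paper uses to compare methods.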

Comments: Accepted at NIPS 2017 - Workshop Interpreting, Explaining and Visualizing Deep Learning
Categories: cs.LG, stat.ML
Related articles:
arXiv:1611.05162 [cs.LG] (Published 2016-11-16)
Net-Trim: A Layer-wise Convex Pruning of Deep Neural Networks
arXiv:1710.10570 [cs.LG] (Published 2017-10-29)
Weight Initialization of Deep Neural Networks (DNNs) using Data Statistics
arXiv:1603.09260 [cs.LG] (Published 2016-03-30)
Degrees of Freedom in Deep Neural Networks