arXiv:1707.09641 [cs.LG]

Visual Explanations for Convolutional Neural Networks via Input Resampling

Benjamin J. Lengerich, Sandeep Konam, Eric P. Xing, Stephanie Rosenthal, Manuela Veloso

Published: 2017-07-30 (Version 1)

The predictive power of neural networks often comes at the cost of model interpretability. Several techniques have been developed for explaining model outputs in terms of input features; however, such interpretations are difficult to translate into actionable insight. Here, we propose a framework to analyze predictions in terms of the model's internal features by inspecting information flow through the network. Given a trained network and a test image, we select neurons by two metrics, both measured over a set of images created by perturbations to the input image: (1) magnitude of the correlation between the neuron activation and the network output and (2) precision of the neuron activation. We show that the former metric selects neurons that exert large influence over the network output, while the latter metric selects neurons that activate on generalizable features. By comparing the sets of neurons selected by these two metrics, our framework suggests a way to investigate the internal attention mechanisms of convolutional neural networks.

Comments: Presented at ICML Workshop on Visualization for Deep Learning
Categories: cs.LG, stat.ML
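
The abstract describes ranking internal neurons by two statistics computed over a set of perturbed copies of a test image. Below is a minimal sketch of that idea, assuming an occlusion-style perturbation scheme, a hypothetical forward(img) helper that returns per-neuron activations together with the network's scalar class score, and one plausible reading of activation "precision"; the paper's exact perturbation scheme and metric definitions may differ.

```python
import numpy as np

def occlusion_perturbations(image, patch=8, stride=8):
    """Generate perturbed copies of `image` by zeroing square patches
    (one simple perturbation choice; the paper's exact scheme may differ)."""
    h, w = image.shape[:2]
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            perturbed = image.copy()
            perturbed[y:y + patch, x:x + patch] = 0.0
            yield perturbed

def rank_neurons(image, forward, threshold=0.0):
    """Score internal neurons over a perturbation set.

    `forward(img)` is an assumed helper returning (activations, output_score),
    where `activations` is a 1-D array of neuron activations and
    `output_score` is the network's scalar score for the predicted class.
    Returns (|correlation| per neuron, precision per neuron).
    """
    acts, outs = [], []
    for perturbed in occlusion_perturbations(image):
        a, y = forward(perturbed)
        acts.append(a)
        outs.append(y)
    acts = np.stack(acts)       # shape: (num_perturbations, num_neurons)
    outs = np.asarray(outs)     # shape: (num_perturbations,)

    # Metric 1: magnitude of the correlation between each neuron's
    # activation and the network output across the perturbation set.
    acts_c = acts - acts.mean(axis=0)
    outs_c = outs - outs.mean()
    denom = acts_c.std(axis=0) * outs_c.std() + 1e-12
    correlation = np.abs((acts_c * outs_c[:, None]).mean(axis=0) / denom)

    # Metric 2: one plausible reading of "precision of the neuron activation":
    # among perturbations where the neuron fires (activation > threshold),
    # the fraction for which the network output stays high.
    fired = acts > threshold
    high_output = outs > np.median(outs)
    precision = (fired & high_output[:, None]).sum(axis=0) / np.maximum(fired.sum(axis=0), 1)

    return correlation, precision
```

Comparing the neurons ranked highest by each metric, as the abstract suggests, would then contrast neurons that drive the output with neurons that fire reliably on generalizable features.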