arXiv:2001.02522 [cs.LG]

On Interpretability of Artificial Neural Networks

Fenglei Fan, Jinjun Xiong, Ge Wang

Published 2020-01-08Version 1

Deep learning has achieved great success in many important areas dealing with text, images, video, graphs, and so on. However, the black-box nature of deep artificial neural networks has become the primary obstacle to their public acceptance and wide popularity in critical applications such as diagnosis and therapy. Given the huge potential of deep learning, interpreting neural networks has become one of the most critical research directions. In this paper, we systematically review recent studies on understanding the mechanisms of neural networks and shed light on some future directions of interpretability research. (This work is still in progress.)

Related articles:
arXiv:2312.16191 [cs.LG] (Published 2023-12-22)
SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning
arXiv:1811.10469 [cs.LG] (Published 2018-11-21)
How to improve the interpretability of kernel learning
arXiv:1910.03081 [cs.LG] (Published 2019-10-07)
On the Interpretability and Evaluation of Graph Representation Learning