arXiv Analytics
arXiv:2006.02587 [cs.LG]

XGNN: Towards Model-Level Explanations of Graph Neural Networks

Hao Yuan, Jiliang Tang, Xia Hu, Shuiwang Ji

Published 2020-06-03, Version 1

Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information, and have achieved promising performance on many graph tasks. However, GNNs are mostly treated as black boxes and lack human-intelligible explanations; without such explanations, they cannot be fully trusted or deployed in certain application domains. In this work, we propose a novel approach, known as XGNN, to interpret GNNs at the model level. Our approach provides high-level insights and a generic understanding of how GNNs work. In particular, we propose to explain GNNs by training a graph generator so that the generated graph patterns maximize a certain prediction of the model. We formulate graph generation as a reinforcement learning task, where at each step the graph generator predicts how to add an edge to the current graph. The graph generator is trained via a policy gradient method based on information from the trained GNNs. In addition, we incorporate several graph rules to encourage the generated graphs to be valid. Experimental results on both synthetic and real-world datasets show that our proposed method helps understand and verify trained GNNs. Furthermore, our results indicate that the generated graphs can provide guidance on how to improve the trained GNNs.
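The abstract describes a reinforcement-learning loop: a generator adds one edge per step, receives a reward from the trained GNN's prediction plus a validity term, and is updated with a policy gradient. The following is a minimal sketch of that loop, not the paper's implementation: the "trained GNN" is replaced by a hypothetical stand-in scorer that rewards triangles, the policy is a flat softmax over candidate edges, and the degree-cap validity rule is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5  # assumption: generate graphs over a fixed small node set


def gnn_score(adj):
    # Stand-in for the trained GNN's class probability p(c | G).
    # Hypothetical target pattern: graphs containing many triangles.
    triangles = np.trace(adj @ adj @ adj) / 6.0
    return triangles / 10.0


def validity_penalty(adj):
    # Graph rule in the spirit of the paper: penalize invalid graphs
    # (here, an assumed maximum node degree of 3).
    return -1.0 if adj.sum(axis=1).max() > 3 else 0.0


# Policy: one learnable logit per candidate undirected edge.
edges = [(i, j) for i in range(N) for j in range(i + 1, N)]
logits = np.zeros(len(edges))

lr, steps_per_episode = 0.1, 4
for episode in range(200):
    adj = np.zeros((N, N))
    grads = np.zeros_like(logits)
    for _ in range(steps_per_episode):
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        k = rng.choice(len(edges), p=probs)       # generator picks an edge
        i, j = edges[k]
        adj[i, j] = adj[j, i] = 1.0
        g = -probs                                # grad of log softmax:
        g[k] += 1.0                               # 1[m = k] - p_m
        grads += g
    reward = gnn_score(adj) + validity_penalty(adj)
    logits += lr * reward * grads                 # REINFORCE update
```

After training, sampling (or taking the top-probability edges) yields graph patterns that the stand-in scorer rates highly; in XGNN the analogous output is the pattern that maximizes the trained GNN's target prediction.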

Related articles: Most relevant | Search more
arXiv:1910.07567 [cs.LG] (Published 2019-10-16)
Active Learning for Graph Neural Networks via Node Feature Propagation
arXiv:1903.11960 [cs.LG] (Published 2019-03-28)
Learning Discrete Structures for Graph Neural Networks
arXiv:1908.07110 [cs.LG] (Published 2019-08-19)
Graph Neural Networks with High-order Feature Interactions