arXiv Analytics

arXiv:1905.04497 [cs.LG]

Stability Properties of Graph Neural Networks

Fernando Gama, Joan Bruna, Alejandro Ribeiro

Published 2019-05-11 (Version 1)

Data stemming from networks exhibit an irregular support, whereby each data element is related to the others through arbitrary pairwise relationships determined by the network. Graph neural networks (GNNs) have emerged as information processing architectures that exploit the particularities of this underlying support. The use of nonlinearities in GNNs, coupled with the fact that filters are learned from data, raises mathematical challenges that have precluded the development of theoretical results explaining the remarkable performance of GNNs. In this work, we prove the property of stability, which states that a small change in the support of the data leads to a small (bounded) change in the output of the GNN. More specifically, we prove that the difference between the outputs of a GNN computed on two different graphs is bounded in proportion to the difference between the graphs and the design parameters of the GNN, as long as the trained filters are integral Lipschitz. We exploit this result to provide insight into the crucial effect that nonlinearities have in obtaining an architecture that is both stable and selective, a feat that is impossible to achieve using only linear filters.
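To make the stability claim concrete: the filters in question are polynomials of a graph shift operator S (e.g. an adjacency or Laplacian matrix), and a filter with frequency response h(lambda) = sum_k h_k lambda^k is integral Lipschitz when |lambda h'(lambda)| is bounded by a constant. The snippet below is a minimal numerical sketch of the underlying idea, not code from the paper: every name in it (graph_filter, the filter taps, the perturbation model) is ours, and for simplicity it uses a small additive perturbation of S rather than the paper's relative-perturbation model modulo node permutations.

    import numpy as np

    rng = np.random.default_rng(0)

    n = 20                                 # number of nodes
    A = rng.random((n, n)) < 0.2           # random adjacency pattern
    S = np.triu(A, 1).astype(float)
    S = S + S.T                            # symmetric graph shift operator

    def graph_filter(S, x, h):
        """Polynomial graph filter: y = sum_k h[k] * S^k @ x."""
        y = np.zeros_like(x)
        Sk = np.eye(S.shape[0])            # S^0 = I
        for hk in h:
            y += hk * (Sk @ x)
            Sk = Sk @ S                    # advance to the next power of S
        return y

    h = np.array([1.0, 0.5, 0.25, 0.125])  # fixed low-order filter taps
    x = rng.standard_normal(n)              # input graph signal

    eps = 1e-2
    E = eps * rng.standard_normal((n, n))
    E = (E + E.T) / 2                        # small symmetric perturbation
    S_hat = S + E                            # perturbed support

    y = graph_filter(S, x, h)
    y_hat = graph_filter(S_hat, x, h)

    # Stability in this sketch: the (normalized) output difference should
    # scale with the size of the perturbation, measured by its operator norm.
    print(np.linalg.norm(y - y_hat) / np.linalg.norm(x))
    print(np.linalg.norm(E, 2))

Shrinking eps shrinks the printed output difference proportionally, which is the linear-filter analogue of the bound proved in the paper; the paper's contribution is showing that this behavior survives the pointwise nonlinearities of a full GNN when the filters are integral Lipschitz.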
