arXiv:2006.05057 [cs.LG]

Black-Box Adversarial Attacks on Graph Neural Networks with Limited Node Access

Jiaqi Ma, Shuangrui Ding, Qiaozhu Mei

Published 2020-06-09 (Version 1)

We study black-box attacks on graph neural networks (GNNs) under a novel and realistic constraint: attackers have access to only a subset of nodes in the network, and they can perturb only a small number of them. A node selection step is therefore essential. We demonstrate that the structural inductive biases of GNN models can be an effective source for this type of attack. Specifically, by exploiting the connection between the backward propagation of GNNs and random walks, we show that common gradient-based white-box attacks can be generalized to the black-box setting by relating the gradient to an importance score similar to PageRank. In practice, we find that attacks based on this importance score indeed increase the classification loss by a large margin, but they fail to significantly increase the mis-classification rate. Our theoretical and empirical analyses suggest a discrepancy between the loss and the mis-classification rate: the latter exhibits a diminishing-return pattern as the number of attacked nodes increases. We therefore propose a greedy procedure that corrects the importance score to account for this diminishing-return pattern. Experimental results show that the proposed procedure significantly increases the mis-classification rate of common GNNs on real-world data, without access to model parameters or predictions.
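The abstract describes the approach only at a high level. The following is a minimal illustrative sketch of the two ideas it names: ranking nodes by a PageRank-like importance score and greedily correcting that score for diminishing returns. It is not the authors' actual algorithm; the function select_attack_nodes, the use of networkx PageRank as a stand-in for the gradient-derived score, and the neighbor discount factor are all hypothetical choices made here for illustration.

```python
# Hypothetical sketch (not the paper's method): pick attack nodes by a
# PageRank-like importance score, then greedily down-weight the scores
# of neighbors of already-chosen nodes to mimic a diminishing-return
# correction when attacked neighborhoods overlap.
import networkx as nx

def select_attack_nodes(G: nx.Graph, budget: int, discount: float = 0.5):
    """Greedily select `budget` nodes to attack."""
    # PageRank as an illustrative proxy for the gradient-based
    # importance score described in the abstract.
    scores = nx.pagerank(G, alpha=0.85)
    selected = []
    for _ in range(budget):
        # Pick the highest-scoring node not yet selected.
        node = max((n for n in G if n not in selected), key=scores.get)
        selected.append(node)
        # Diminishing-return correction: shrink the scores of its
        # neighbors, since attacking nodes with overlapping receptive
        # fields tends to flip many of the same predictions.
        for nbr in G.neighbors(node):
            scores[nbr] *= discount
    return selected

if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(select_attack_nodes(G, budget=3))
```

The neighbor discount is one simple way to encode the diminishing-return intuition: once a high-influence node is attacked, nearby nodes whose influence largely overlaps with it contribute little additional mis-classification, so their scores are reduced before the next greedy pick.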

Related articles:
arXiv:1908.07110 [cs.LG] (Published 2019-08-19)
Graph Neural Networks with High-order Feature Interactions
arXiv:2006.02587 [cs.LG] (Published 2020-06-03)
XGNN: Towards Model-Level Explanations of Graph Neural Networks
arXiv:2006.00144 [cs.LG] (Published 2020-05-30)
Understanding the Message Passing in Graph Neural Networks via Power Iteration