arXiv Analytics

arXiv:1912.07721 [cs.LG]

Adversarial Model Extraction on Graph Neural Networks

David DeFazio, Arti Ramesh

Published 2019-12-16 (Version 1)

Along with the advent of deep neural networks came various methods of exploitation, such as fooling the classifier or contaminating its training data. Another such attack is model extraction, in which an adversary, given only API access to a black-box neural network, extracts the underlying model. This is done by querying the model so that its responses reveal enough information for the adversary to reconstruct it. While several works have achieved impressive results with neural network extraction in the propositional domain, this problem has not yet been considered over the relational domain, where data samples are no longer assumed to be independent and identically distributed (i.i.d.). Graph Neural Networks (GNNs) are a popular deep learning framework for machine learning tasks over relational data. In this work, we formalize an instance of GNN extraction, present a solution with preliminary results, and discuss our assumptions and future directions.
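
For illustration, a minimal sketch of the generic extraction setup the abstract describes follows: a surrogate GNN is fit to a black-box victim's query responses. This is not the paper's method; the two-layer GCN architecture, the `victim_query` callable, and the KL-divergence objective against queried soft labels are all assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """Two-layer GCN using the standard propagation H' = A_hat @ H @ W."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, out_dim)

    def forward(self, a_hat, x):
        # a_hat: (N, N) normalized adjacency with self-loops; x: (N, F) features
        h = F.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)

def extract_surrogate(victim_query, a_hat, x, epochs=200, lr=0.01):
    """Fit a surrogate GNN to a victim's responses (hypothetical helper).

    victim_query is assumed to return class logits for every node, mimicking
    API access to the black-box model.
    """
    with torch.no_grad():
        # Query the victim once over the available graph; keep soft labels.
        targets = F.softmax(victim_query(a_hat, x), dim=1)
    surrogate = SimpleGCN(x.size(1), 16, targets.size(1))
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        # Match the surrogate's predictive distribution to the victim's.
        loss = F.kl_div(F.log_softmax(surrogate(a_hat, x), dim=1),
                        targets, reduction="batchmean")
        loss.backward()
        opt.step()
    return surrogate
```

Training against queried soft labels rather than hard predictions follows common practice in extraction and distillation work, since class probabilities carry more information per query; whether the paper itself uses this objective is not stated in the abstract.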

Comments: AAAI Workshop on Deep Learning on Graphs: Methodologies and Applications (DLGMA), 2020
Categories: cs.LG, stat.ML
Related articles:
arXiv:1905.04497 [cs.LG] (Published 2019-05-11)
Stability Properties of Graph Neural Networks
arXiv:1910.12241 [cs.LG] (Published 2019-10-27)
Pre-train and Learn: Preserve Global Information for Graph Neural Networks
arXiv:2006.06830 [cs.LG] (Published 2020-06-11)
Data Augmentation for Graph Neural Networks