arXiv:1905.06018 [cs.LG]

Can Graph Neural Networks Go "Online"? An Analysis of Pretraining and Inference

Lukas Galke, Iacopo Vagliano, Ansgar Scherp

Published 2019-05-15 (Version 1)

Large-scale graph data in real-world applications is often not static but dynamic, i.e., new nodes and edges appear over time. Current graph convolution approaches are promising, especially when all of the graph's nodes and edges are available during training. When unseen nodes and edges are inserted after training, it has not yet been evaluated whether up-training or re-training from scratch is preferable. We construct an experimental setup in which we insert previously unseen nodes and edges after training and conduct a limited number of inference epochs. In this setup, we compare adapting pretrained graph neural networks against retraining from scratch. Our results show that pretrained models yield high accuracy scores on the unseen nodes and that pretraining is preferable to retraining from scratch. Our experiments represent a first step toward evaluating and developing truly online variants of graph neural networks.
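To make the setup concrete, below is a minimal sketch (not the authors' code) of one way to reproduce it with PyTorch Geometric: a two-layer GCN is pretrained on a "seen" subgraph, previously unseen nodes and edges are then inserted, and the up-trained model is compared against one retrained from scratch under the same small epoch budget. The synthetic graph, the seen/unseen split, the architecture, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.utils import subgraph

torch.manual_seed(0)

# Synthetic stand-in graph: 200 nodes with random features, labels, and edges.
num_nodes, num_feats, num_classes = 200, 16, 4
x = torch.randn(num_nodes, num_feats)
y = torch.randint(0, num_classes, (num_nodes,))
edge_index = torch.randint(0, num_nodes, (2, 1000))

# The first 150 nodes are "seen" during pretraining; the rest arrive later.
seen = torch.arange(150)
unseen = torch.arange(150, num_nodes)
seen_edges, _ = subgraph(seen, edge_index, relabel_nodes=True, num_nodes=num_nodes)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(num_feats, 32)
        self.conv2 = GCNConv(32, num_classes)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

def train(model, x, edge_index, y, labeled, epochs):
    # Cross-entropy on the labeled nodes only; message passing uses the whole
    # (sub)graph passed in via edge_index.
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x, edge_index)[labeled], y[labeled])
        loss.backward()
        opt.step()

@torch.no_grad()
def unseen_accuracy(model):
    # Evaluate on the inserted (unseen) nodes, using the full updated graph.
    pred = model(x, edge_index).argmax(dim=-1)
    return (pred[unseen] == y[unseen]).float().mean().item()

# (a) Pretrain on the seen subgraph, then up-train for a few epochs after
#     the unseen nodes and edges have been inserted.
pretrained = GCN()
train(pretrained, x[seen], seen_edges, y[seen], torch.arange(len(seen)), epochs=200)
train(pretrained, x, edge_index, y, labeled=seen, epochs=5)

# (b) Retrain from scratch on the full graph with the same small budget.
scratch = GCN()
train(scratch, x, edge_index, y, labeled=seen, epochs=5)

print(f"up-trained   accuracy on unseen nodes: {unseen_accuracy(pretrained):.3f}")
print(f"from-scratch accuracy on unseen nodes: {unseen_accuracy(scratch):.3f}")
```

The key design choice in this sketch is that both conditions receive the same limited post-insertion epoch budget, so any accuracy gap on the unseen nodes can be attributed to the pretrained weights rather than to extra training.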

Comments: 5 pages, 1 figure, Representation Learning on Graphs and Manifolds Workshop at the International Conference on Learning Representations (ICLR), 2019
Categories: cs.LG, stat.ML
Related articles:
arXiv:2002.11501 [cs.LG] (Published 2020-02-25)
Dual Graph Representation Learning
arXiv:2302.04451 [cs.LG] (Published 2023-02-09)
Generalization in Graph Neural Networks: Improved PAC-Bayesian Bounds on Graph Diffusion
arXiv:2312.08651 [cs.LG] (Published 2023-12-14)
Towards Inductive Robustness: Distilling and Fostering Wave-induced Resonance in Transductive GCNs Against Graph Adversarial Attacks