
arXiv:2310.10362 [cs.LG]

Self-Pro: Self-Prompt and Tuning Framework for Graph Neural Networks

Chenghua Gong, Xiang Li, Jianxiang Yu, Cheng Yao, Jiaqi Tan, Chengcheng Yu

Published 2023-10-16, updated 2024-05-29 (Version 2)

Graphs have become an important modeling tool for Web applications, and graph neural networks (GNNs) have achieved great success in graph representation learning. However, their performance heavily relies on a large amount of supervision. Recently, "pre-train, fine-tune" has become the dominant paradigm for addressing label dependency and poor generalization. However, pre-training strategies differ between homophilic and heterophilic graphs, and the objectives of downstream tasks also vary. This creates a gap between pretexts and downstream tasks, leading to "negative transfer" and poor performance. Inspired by prompt learning in natural language processing, many studies have turned to prompting to bridge this gap and fully leverage the pre-trained model. However, existing graph prompting methods are tailored to homophily and neglect the inherent heterophily of graphs. Moreover, most of them rely on randomly initialized prompts, which harms stability. We therefore propose Self-Prompt, a prompting framework for graphs based on the model and the data itself. We first introduce asymmetric graph contrastive learning as the pretext to handle heterophily and align the objectives of the pretext and downstream tasks. We then reuse a component from pre-training as a self-adapter and introduce self-prompts derived from the graph itself for task adaptation. Finally, we conduct extensive experiments on 11 benchmark datasets to demonstrate the framework's superiority. Our code is available at https://github.com/gongchenghua/Self-Pro.
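The abstract's pipeline (an asymmetric contrastive pretext followed by a prompt derived from the graph itself rather than random initialization) can be sketched roughly as follows. This is a hypothetical NumPy illustration, not the authors' implementation: the two-view asymmetry (deep propagation vs. raw features), the InfoNCE-style loss, and names such as `encode` and `contrastive_loss` are all assumptions for exposition.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def encode(X, A_norm, W, hops):
    # Minimal linear GNN encoder: propagate `hops` times, then project.
    # hops=0 yields a pure feature view (the "asymmetric" branch).
    H = X
    for _ in range(hops):
        H = A_norm @ H
    return H @ W

def contrastive_loss(Z1, Z2, tau=0.5):
    # InfoNCE-style objective: the same node in the two views is the
    # positive pair; all other nodes are negatives.
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = Z1 @ Z2.T / tau
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
n, f, d = 6, 8, 4                       # nodes, input dim, embedding dim
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                             # random undirected toy graph
X = rng.normal(size=(n, f))             # node features
W = rng.normal(size=(f, d)) * 0.1       # shared projection weights

A_norm = normalize_adj(A)

# Asymmetric views: a smoothed (propagated) view against a raw feature
# view, so the objective does not assume homophily.
Z_online = encode(X, A_norm, W, hops=2)
Z_target = encode(X, A_norm, W, hops=0)
loss = contrastive_loss(Z_online, Z_target)

# Self-prompt: instead of a randomly initialized prompt vector, derive it
# from the graph's own aggregated features and add it to every node.
prompt = (A_norm @ X).mean(axis=0)
X_prompted = X + prompt
```

In a real system the encoder would be a trained GNN and the prompt would be tuned for the downstream task; the point here is only the shape of the idea: two asymmetric views feeding one contrastive loss, and a prompt seeded from the data rather than from noise.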

Comments: Accepted at ECML-PKDD 2024
Categories: cs.LG, cs.AI
Related articles:
arXiv:2210.12598 [cs.LG] (Published 2022-10-23)
GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections
arXiv:2301.10956 [cs.LG] (Published 2023-01-26)
Graph Neural Networks can Recover the Hidden Features Solely from the Graph Structure
arXiv:2309.17002 [cs.LG] (Published 2023-09-29)
Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks
Hao Chen et al.