Disttack: Graph Adversarial Attacks Toward Distributed GNN Training
arXiv (2024)
Abstract
Graph Neural Networks (GNNs) have emerged as potent models for graph
learning. Distributing the training process across multiple computing nodes is
the most promising solution to address the challenges of ever-growing
real-world graphs. However, current adversarial attack methods on GNNs neglect
the characteristics and applications of the distributed scenario, leading to
suboptimal performance and inefficiency in attacking distributed GNN training.
In this study, we introduce Disttack, the first framework of adversarial
attacks for distributed GNN training that leverages the characteristics of
frequent gradient updates in a distributed system. Specifically, Disttack
corrupts distributed GNN training by injecting adversarial attacks into one
single computing node. The attacked subgraphs are precisely perturbed to induce
an abnormal gradient ascent in backpropagation, disrupting gradient
synchronization between computing nodes and thus leading to a significant
performance decline of the trained GNN. We evaluate Disttack on four large
real-world graphs by attacking five widely adopted GNNs. Experimental results
demonstrate that, compared with the state-of-the-art attack method, Disttack
amplifies model accuracy degradation by 2.75× and achieves a 17.33× speedup
on average while remaining unnoticeable.
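The abstract's core mechanism can be illustrated with a toy sketch of synchronous data-parallel training: each node computes a gradient on its local shard, the gradients are averaged, and a single compromised node reverses its gradient to induce the "abnormal gradient ascent" the paper describes. This is a minimal illustration under assumed simplifications (a least-squares objective instead of a GNN, a simple sign-flip attack), not Disttack's actual perturbation algorithm.

```python
# Minimal sketch (NOT the paper's algorithm): one compromised node
# corrupting synchronous gradient averaging. The toy least-squares
# objective and the sign-flip attack are illustrative assumptions.
import numpy as np

def local_gradient(w, X, y):
    # Gradient of the mean-squared-error loss on one node's data shard.
    return X.T @ (X @ w - y) / len(y)

def synchronized_step(w, shards, lr=0.1, attacked=None):
    # Each node computes its local gradient; the attacked node reverses
    # its gradient (abnormal gradient ascent) before the all-reduce
    # average, as in synchronous data-parallel training.
    grads = []
    for i, (X, y) in enumerate(shards):
        g = local_gradient(w, X, y)
        if i == attacked:
            g = -g  # adversarial node pushes the shared model uphill
        grads.append(g)
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
shards = []
for _ in range(4):  # four computing nodes, each with a local shard
    X = rng.normal(size=(32, 2))
    shards.append((X, X @ w_true))

def total_loss(w):
    return sum(0.5 * np.mean((X @ w - y) ** 2) for X, y in shards)

w_clean = np.zeros(2)
w_bad = np.zeros(2)
for _ in range(50):
    w_clean = synchronized_step(w_clean, shards)
    w_bad = synchronized_step(w_bad, shards, attacked=0)

# The run with one compromised node converges to a worse model.
print(total_loss(w_clean), total_loss(w_bad))
```

Because the poisoned gradient enters the same all-reduce average as every honest node's gradient, the corruption propagates to the globally synchronized model, which is why attacking a single node suffices in the distributed setting.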