Understanding Universal Adversarial Attack and Defense on Graph

International Journal on Semantic Web and Information Systems (2022)

Abstract
Compared with traditional machine learning models, graph neural networks (GNNs) have distinct advantages in processing unstructured data. However, their vulnerability to adversarial perturbations cannot be ignored. A graph universal adversarial attack is a special type of attack on graphs that can attack any targeted victim node by flipping the edges connected to a small set of anchor nodes. In this paper, the authors propose the forward-derivative-based graph universal adversarial attack (FDGUA). First, they show that a single node used as training data is sufficient to generate an effective continuous attack vector. They then discretize the continuous attack vector based on the forward derivative. FDGUA achieves impressive attack performance: on the Cora dataset, only three anchor nodes yield an attack success rate above 80%. Moreover, the authors propose the first graph universal adversarial training (GUAT) method to defend against universal adversarial attacks. Experiments show that GUAT effectively improves the robustness of GNNs without degrading model accuracy.
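The sketch below is not the authors' code; it only illustrates, under assumptions, the core mechanic the abstract describes: a single discretized attack vector over a fixed set of anchor nodes decides which victim-anchor edges are flipped, and the same perturbation is reused against every victim. The two-layer GCN, the random toy graph, and the hard-coded attack vector are hypothetical placeholders; the actual FDGUA derives the attack vector from forward derivatives, which is not reproduced here.

```python
# Minimal sketch (assumptions, not the authors' method) of a graph universal
# adversarial attack: flip edges between the victim and anchor nodes where the
# universal attack vector is 1, then check whether the GCN prediction changes.
import torch
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} used by a standard GCN."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt


def gcn_forward(adj, x, w1, w2):
    """Two-layer GCN logits: A_hat ReLU(A_hat X W1) W2."""
    a_hat = normalize_adj(adj)
    return a_hat @ F.relu(a_hat @ x @ w1) @ w2


def apply_universal_flip(adj, victim, anchors, attack_vec):
    """Flip victim-anchor edges wherever the (discretized) attack vector is 1."""
    adj_pert = adj.clone()
    for a, flip in zip(anchors, attack_vec):
        if flip > 0.5:
            adj_pert[victim, a] = 1.0 - adj_pert[victim, a]
            adj_pert[a, victim] = adj_pert[victim, a]
    return adj_pert


# Toy demonstration on a random graph (hypothetical data, not Cora).
torch.manual_seed(0)
n, d, c = 12, 8, 3                           # nodes, feature dim, classes
adj = (torch.rand(n, n) < 0.2).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)
x = torch.randn(n, d)
w1, w2 = torch.randn(d, 16), torch.randn(16, c)

anchors = [0, 1, 2]                          # three anchor nodes, as in the abstract
attack_vec = torch.tensor([1.0, 1.0, 1.0])   # placeholder discretized attack vector

flipped = 0
for victim in range(3, n):                   # reuse the same perturbation on every victim
    clean_pred = gcn_forward(adj, x, w1, w2)[victim].argmax()
    adv_adj = apply_universal_flip(adj, victim, anchors, attack_vec)
    adv_pred = gcn_forward(adv_adj, x, w1, w2)[victim].argmax()
    flipped += int(clean_pred != adv_pred)

print(f"attack success rate on toy graph: {flipped / (n - 3):.2f}")
```

The universal character of the attack is what the loop makes explicit: the attack vector is computed once (here simply hard-coded) and applied unchanged to every victim node, which is also the setting a defense such as GUAT would train against.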
Keywords
Class-Discriminative Graph Universal Attack, Graph Adversarial, Graph Neural Networks, Graph Universal Attack