Poisoning $\times$ Evasion: Symbiotic Adversarial Robustness for Graph Neural Networks
arXiv (2023)
Abstract
It is well-known that deep learning models are vulnerable to small input
perturbations. Such perturbed instances are called adversarial examples.
Adversarial examples are commonly crafted to fool a model either at training
time (poisoning) or test time (evasion). In this work, we study the symbiosis
of poisoning and evasion. We show that combining both threat models can
substantially improve the devastating efficacy of adversarial attacks.
Specifically, we study the robustness of Graph Neural Networks (GNNs) under
structure perturbations and devise a memory-efficient adaptive end-to-end
attack for the novel threat model using first-order optimization.
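The abstract names a first-order, gradient-based attack on the graph structure but does not spell out the procedure. As a rough, hypothetical illustration (not the paper's actual algorithm, and not its memory-efficient variant), the sketch below shows one greedy first-order structure-perturbation step: score every potential edge flip by the gradient of the training loss with respect to a dense adjacency matrix, then apply the single most damaging flip. It assumes PyTorch; TinyGCN, flip_worst_edge, and all shapes are illustrative names invented here.

# Minimal sketch of a first-order structure-perturbation step on a GNN.
# NOT the paper's method: a generic gradient-based edge-flip heuristic,
# assuming PyTorch and a small dense graph. All names are hypothetical.

import torch
import torch.nn.functional as F


class TinyGCN(torch.nn.Module):
    """Two-layer GCN operating on a dense adjacency matrix."""

    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = torch.nn.Linear(hid_dim, out_dim, bias=False)

    def forward(self, adj, x):
        # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
        a_hat = adj + torch.eye(adj.size(0), dtype=adj.dtype)
        d_inv_sqrt = a_hat.sum(1).clamp(min=1e-8).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        h = F.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(h)


def flip_worst_edge(model, adj, x, labels):
    """One first-order step: flip the edge whose flip most increases the loss.

    For an existing edge (A_ij = 1) a flip changes A by -1, so the linearized
    loss change is -grad_ij; for an absent edge (A_ij = 0) it is +grad_ij.
    Both cases collapse into the score grad * (1 - 2A).
    """
    adj = adj.clone().requires_grad_(True)
    loss = F.cross_entropy(model(adj, x), labels)
    grad = torch.autograd.grad(loss, adj)[0]
    score = grad * (1 - 2 * adj.detach())
    i, j = divmod(torch.argmax(score).item(), adj.size(0))
    perturbed = adj.detach().clone()
    perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]
    return perturbed


# Illustrative usage on random data (hypothetical shapes):
# model = TinyGCN(16, 32, 4)
# adj = (torch.rand(50, 50) < 0.05).float(); adj = ((adj + adj.T) > 0).float()
# x, y = torch.randn(50, 16), torch.randint(0, 4, (50,))
# adj_attacked = flip_worst_edge(model, adj, x, y)

Greedy one-flip-at-a-time selection is the classic first-order heuristic for discrete structure attacks; note that this sketch materializes the full dense n-by-n adjacency gradient, which is exactly the memory cost that a memory-efficient attack such as the paper's would need to avoid.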