Adversarial Attacks and Defenses on Graphs
SIGKDD (2021)
Abstract
Deep neural networks (DNNs) have achieved significant performance in various tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations on the input, called adversarial attacks.
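To make the idea of a small adversarial perturbation concrete, here is a minimal sketch of a generic gradient-sign attack on a linear classifier. This is an illustration only, not the graph-specific attacks this survey covers; the model, weights, and `epsilon` value are all hypothetical.

```python
import numpy as np

def fgsm_perturb(x, w, y, epsilon):
    """Gradient-sign perturbation of input x for a linear logistic model.

    Hypothetical example: loss L = log(1 + exp(-y * w.x)) for label y in {-1, +1};
    its gradient w.r.t. x is -y * sigmoid(-y * w.x) * w. Moving x a small step
    epsilon in the sign of that gradient increases the loss.
    """
    margin = y * np.dot(w, x)
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w
    return x + epsilon * np.sign(grad)

# Toy data: the clean input is classified correctly (positive margin).
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.1])
x_adv = fgsm_perturb(x, w, y=1.0, epsilon=0.1)
# The perturbed input stays within epsilon of x but has a smaller margin,
# i.e. it is pushed toward the decision boundary.
```

The perturbation here is bounded coordinate-wise by `epsilon`, which is what makes such attacks hard to detect: the adversarial input looks nearly identical to the clean one.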