On Evaluating Neural Network Backdoor Defenses

arXiv (2020)

Abstract
Deep neural networks (DNNs) demonstrate superior performance in various fields, including scrutiny and security. However, recent studies have shown that DNNs are vulnerable to backdoor attacks. Several defenses have been proposed to protect DNNs against such backdoor attacks. In this work, we conduct a critical analysis of these existing defenses and identify their common pitfalls, prepare a comprehensive database of backdoor attacks, and perform a side-by-side evaluation of the existing defenses against this database. Finally, we lay out general guidelines to help researchers develop more robust defenses in the future and avoid the common mistakes of the past.
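To make the threat model concrete, the sketch below shows a minimal BadNets-style data-poisoning backdoor: a small trigger patch is stamped onto a fraction of the training images, which are then relabeled to an attacker-chosen target class. This is an illustrative example only, not one of the specific attacks catalogued in the paper's database; the function name and parameters are hypothetical.

```python
# Illustrative sketch of a BadNets-style data-poisoning backdoor
# (hypothetical helper; not the paper's attack database or code).
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.1,
                   trigger_size=3, trigger_value=1.0, seed=0):
    """Stamp a small square trigger on a random subset of images and
    relabel those samples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_frac * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger patch in the bottom-right corner of each image.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    labels[idx] = target_label
    return images, labels, idx

# Example: poison 10% of a toy 28x28 grayscale dataset toward class 7.
x = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
x_p, y_p, poisoned_idx = poison_dataset(x, y, target_label=7)
print(f"poisoned {len(poisoned_idx)} of {len(x)} samples")
```

A model trained on such poisoned data behaves normally on clean inputs but predicts the target class whenever the trigger is present, which is the behavior backdoor defenses attempt to detect or remove.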
Keywords
neural network