Invisible Backdoor Attacks on Deep Neural Networks Via Steganography and Regularization

IEEE Transactions on Dependable and Secure Computing (2021)

Cited 85 | Views 226
Abstract
Deep neural networks (DNNs) have been proven vulnerable to backdoor attacks, in which hidden features (patterns) are trained into an otherwise normal model and are activated only by specific inputs (called triggers), tricking the model into producing unexpected behavior. In this article, we create covert and scattered triggers for backdoor attacks, invisible backdoors, where triggers can fool both...
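The abstract describes hiding trigger patterns via steganography so they remain invisible to inspection. As an illustrative sketch only (not the authors' exact method), the classic least-significant-bit (LSB) embedding below hides a trigger bit string in an image while changing each affected pixel value by at most 1; the function and variable names here are hypothetical:

```python
import numpy as np

def embed_lsb_trigger(image, bits):
    """Hide a bit string in the least-significant bits of pixel values.

    Each written pixel changes by at most 1, so the stego image is
    visually indistinguishable from the original.
    """
    flat = image.flatten().astype(np.uint8)
    assert len(bits) <= flat.size, "trigger longer than image capacity"
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear LSB, then write trigger bit
    return flat.reshape(image.shape)

# Toy 4x4 grayscale image and an 8-bit trigger pattern
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
trigger_bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb_trigger(img, trigger_bits)

# The trigger is recoverable from the LSBs, yet pixels barely change
recovered = [int(v & 1) for v in stego.flatten()[:8]]
```

A backdoored model would be trained to associate the hidden bit pattern (rather than a visible patch) with the attacker's target label, which is what makes such triggers hard to spot by visual inspection.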
Keywords
Training, Data models, Machine learning, Perturbation methods, Neural networks, Inspection, Image color analysis