Stealthy dynamic backdoor attack against neural networks for image classification

APPLIED SOFT COMPUTING (2023)

Abstract
Deep Neural Networks (DNNs) are vulnerable to backdoor attacks, whereby external entities such as third-party model providers can inject backdoors into a network for illicit purposes. Current attack methods rely on manipulated images with embedded triggers to infiltrate carrier networks, but they suffer from detectable distortions and limited integration into neural networks. To probe the potential vulnerabilities of these models further, this study introduces a backdoor attack strategy that leverages deep learning steganography through a Generative Adversarial Network (GAN). Our approach uses steganography to create manipulated images, capitalizing on the pronounced sensitivity of neural networks to minute perturbations. The network is then trained de novo on these manipulated images, producing a backdoor-infused model. Experimental results show that the backdoor can be effectively integrated into the models, yielding high attack success rates while evading both contemporary state-of-the-art defense mechanisms and human inspection. Our source code is publicly available at https://github.com/DLAIResearch/NNSDB.
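To make the poisoning pipeline concrete, the sketch below illustrates the general flavor of steganographic backdoor training in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation (see the linked repository for that): the paper employs GAN-based deep steganography, whereas this sketch substitutes a simple least-significant-bit (LSB) embedding, and the helper names (embed_trigger_lsb, PoisonedCIFAR10), the target class, the poisoning rate, and the CIFAR-10/ResNet-18 setup are all hypothetical choices.

```python
# Minimal sketch of backdoor injection via steganographic data poisoning.
# Assumption: the paper uses GAN-based deep steganography; a simple
# least-significant-bit (LSB) embedding stands in here as an illustration.
import numpy as np
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF
from torch.utils.data import DataLoader
from torchvision import datasets, models

TARGET_CLASS = 0   # hypothetical attacker-chosen target label
POISON_EVERY = 10  # hypothetical poisoning rate: 1 in 10 training samples

def embed_trigger_lsb(img):
    """Hide a constant payload in the least significant bit of every pixel.
    Stand-in for the paper's GAN-based steganographic encoder; the
    resulting perturbation is imperceptible to human inspection."""
    return ((img & 0xFE) | 0x01).astype(np.uint8)

class PoisonedCIFAR10(datasets.CIFAR10):
    """CIFAR-10 wrapper that poisons a fixed subset of training images
    and relabels them to the attacker's target class."""
    def __getitem__(self, idx):
        img, label = self.data[idx], self.targets[idx]  # HWC uint8, int
        if idx % POISON_EVERY == 0:
            img, label = embed_trigger_lsb(img), TARGET_CLASS
        return TF.to_tensor(img), label  # CHW float in [0, 1]

def train_backdoored_model(epochs=10):
    """Train a classifier de novo on the poisoned dataset, so the
    backdoor is baked into the weights rather than patched in later."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    loader = DataLoader(PoisonedCIFAR10("./data", train=True, download=True),
                        batch_size=128, shuffle=True)
    model = models.resnet18(num_classes=10).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model  # clean inputs behave normally; stego inputs -> TARGET_CLASS
```

At inference time, an attacker would apply the same embedding to any input to trigger the target-class prediction, while unmodified inputs continue to be classified normally, which is what makes such backdoors hard to detect by accuracy checks alone.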
Keywords
Information hiding, Backdoor attack, Deep Neural Network, Generative Adversarial Network, AI security