A Poisoning Attack Against the Recognition Model Trained by the Data Augmentation Method

ML4CS (2) (2020)

Abstract
Training pipelines commonly preprocess the training set with data augmentation. Targeting this training mode, this paper proposes a poisoning attack scheme that carries out the attack effectively. For a traffic sign recognition system, the decision boundary is shifted through data poisoning so that the model misclassifies the target sample. In this scheme, a "backdoor" belonging to the attacker is embedded in the poisoned samples, allowing the attacker to manipulate the recognition model (i.e., the target sample is classified into the attacker's chosen category). The attack is difficult to detect because the victim treats a poisoned sample as a healthy one. Experimental results show that the scheme successfully attacks models trained with data augmentation, realizes the targeted attack against the selected sample, and achieves a high success rate. We hope this work raises awareness of the important issues of data reliability and data provenance.
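The abstract does not give implementation details of the poisoning step. As a rough illustration of the backdoor-style poisoning it describes, the Python sketch below (all function names, parameters, and the pixel-pattern trigger are hypothetical assumptions, not taken from the paper) stamps a small trigger patch onto a fraction of training images and relabels them as the attacker's target class before the victim's data-augmentation pipeline and training run.

import numpy as np

def add_trigger(image, patch_size=4, value=255):
    # Stamp a small bright square in the bottom-right corner as the backdoor trigger.
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    # Inject the trigger into a fraction of the training set and relabel those
    # samples as the attacker-chosen target class.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels

# Toy example: poison 5% of a synthetic 43-class traffic-sign dataset so that
# images carrying the trigger are pushed toward class 0 at training time.
if __name__ == "__main__":
    X = np.random.randint(0, 256, size=(1000, 32, 32, 3), dtype=np.uint8)
    y = np.random.randint(0, 43, size=1000)
    X_poisoned, y_poisoned = poison_dataset(X, y, target_class=0)

At inference time, the attacker would then apply the same trigger to a chosen input to steer the trained model toward the target class; the clean-looking majority of the data keeps overall accuracy high, which is what makes the poisoning hard to notice.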
Keywords
recognition model, poisoning attack, data