Detection of Backdoors in Trained Classifiers Without Access to the Training Set

IEEE Transactions on Neural Networks and Learning Systems (2022)

Cited by 37 | Views 65
Abstract
With wide deployment of deep neural network (DNN) classifiers, there is great potential for harm from adversarial learning attacks. Recently, a special type of data poisoning (DP) attack, known as a backdoor (or Trojan), was proposed. These attacks do not seek to degrade classification accuracy, but rather to have the classifier learn to classify to a target class …
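To make the attack described in the abstract concrete, here is a minimal, hypothetical sketch of how a backdoor poisoning sample might be constructed: a small trigger patch is stamped into an image and its label is switched to the attacker's target class, so a classifier trained on such samples keeps normal accuracy on clean inputs but predicts the target class whenever the trigger appears. The patch location, size, and value are illustrative assumptions, not details from this paper.

```python
import numpy as np

def apply_backdoor_trigger(image, target_class, patch_value=1.0, patch_size=3):
    """Stamp a square trigger patch into the image corner and relabel
    the sample to the attacker's target class.

    The bottom-right corner placement, `patch_size`, and `patch_value`
    are illustrative choices, not taken from the paper.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value  # trigger patch
    return poisoned, target_class

# Poison a dummy 8x8 grayscale image, relabeling it to class 7.
clean = np.zeros((8, 8))
poisoned, label = apply_backdoor_trigger(clean, target_class=7)
```

A defender without access to the training set, as in this paper's setting, must detect such a planted backdoor from the trained classifier alone.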
Keywords
Training, Perturbation methods, Databases, Tuning, Trojan horses, Toxicology, Testing