B³: Backdoor Attacks against Black-box Machine Learning Models

ACM Transactions on Privacy and Security (2023)

Abstract
Backdoor attacks aim to inject backdoors into victim machine learning models at training time, such that the backdoored model retains the prediction accuracy of the original model on clean inputs but misbehaves on inputs stamped with the trigger. Backdoor attacks are feasible because resource-limited users usually download sophisticated models from model zoos or query models through MLaaS rather than training a model from scratch, which gives a malicious third party the opportunity to supply a backdoored model. In general, the more precious the provided model (e.g., one trained on a rare dataset), the more popular it is with users. In this article, from the perspective of a malicious model provider, we propose a black-box backdoor attack, named B³, in which neither the victim model (including its architecture, parameters, and hyperparameters) nor the training data is available to the adversary. To enable backdoor attacks in this black-box scenario, we design a cost-effective model extraction method that leverages a carefully constructed query dataset to steal the functionality of the victim model within a limited query budget. Because the trigger is key to a successful backdoor attack, we develop a novel trigger generation algorithm that strengthens the bond between the trigger and the targeted misclassification label through the neuron with the highest impact on that label. Extensive experiments have been conducted on various simulated deep learning models and on the commercial API of Alibaba Cloud Compute Service. We demonstrate that B³ achieves a high attack success rate while maintaining high prediction accuracy on benign inputs. We also show that B³ is robust against state-of-the-art backdoor defenses such as model pruning and Neural Cleanse (NC).
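The attack as described decomposes into two steps that can be pictured in code. First, extraction: the adversary spends a limited query budget labeling a carefully constructed query set through the victim's prediction API and distills the returned soft labels into a substitute model. The following is a minimal sketch of that idea only, not the authors' implementation; `query_victim`, the substitute architecture, and all hyperparameters are assumptions.

```python
# Minimal sketch of black-box model extraction via distillation.
# `query_victim` stands in for the victim's prediction API and is
# assumed to return class probabilities for a batch of inputs.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

def extract(substitute, query_set, query_victim, epochs=10, lr=1e-3):
    # Spend the query budget once: collect the victim's soft labels.
    with torch.no_grad():
        victim_probs = torch.cat(
            [query_victim(x) for (x,) in
             DataLoader(TensorDataset(query_set), batch_size=64)])
    loader = DataLoader(TensorDataset(query_set, victim_probs),
                        batch_size=64, shuffle=True)
    opt = torch.optim.Adam(substitute.parameters(), lr=lr)
    for _ in range(epochs):
        for x, p in loader:
            opt.zero_grad()
            # KL divergence to the victim's output distribution
            # distills its functionality into the substitute.
            loss = F.kl_div(F.log_softmax(substitute(x), dim=1),
                            p, reduction="batchmean")
            loss.backward()
            opt.step()
    return substitute
```

Second, trigger generation: on the (white-box) substitute, locate the internal neuron with the greatest influence on the target label and optimize a small trigger patch to excite it, tying the trigger to the targeted misclassification. Again a hedged sketch; the layer choice, patch placement, and optimizer settings are hypothetical, not the paper's algorithm verbatim.

```python
# Minimal sketch of key-neuron-guided trigger generation
# (assumed details, not the paper's exact algorithm).
import torch

def generate_trigger(model, layer, target_label,
                     image_shape=(3, 32, 32), patch=8,
                     steps=200, lr=0.1):
    model.eval()
    acts = {}
    # Forward hook captures the chosen layer's activations.
    handle = layer.register_forward_hook(
        lambda m, i, o: acts.__setitem__("a", o))

    # Estimate each neuron's impact on the target logit from the
    # gradient of that logit w.r.t. the layer's activations.
    probe = torch.rand(1, *image_shape, requires_grad=True)
    logit = model(probe)[0, target_label]
    grad = torch.autograd.grad(logit, acts["a"])[0]
    key_neuron = grad.abs().flatten().argmax()

    # Optimize only a bottom-right patch so the key neuron fires
    # strongly; pixel values are kept in [0, 1].
    trigger = torch.rand(1, *image_shape, requires_grad=True)
    mask = torch.zeros_like(trigger)
    mask[..., -patch:, -patch:] = 1.0
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(trigger * mask)                    # hook refreshes acts["a"]
        loss = -acts["a"].flatten()[key_neuron]  # maximize activation
        loss.backward()
        opt.step()
        trigger.data.clamp_(0.0, 1.0)

    handle.remove()
    return (trigger * mask).detach(), mask
```

Since the victim's internals are unavailable, both steps presumably run on the extracted substitute, so the trigger is crafted offline and then transferred to the black-box victim at inference time.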
Keywords
backdoor attacks, machine learning, B³, black-box