BAGEL: Backdoor Attacks against Federated Contrastive Learning

CoRR (2023)

Abstract
Federated Contrastive Learning (FCL) is an emerging privacy-preserving paradigm for distributed learning on unlabeled data. In FCL, distributed parties collaboratively learn a global encoder from unlabeled data, and this global encoder can be widely used as a feature extractor to build models for many downstream tasks. However, due to its distributed nature, FCL is also vulnerable to many security threats (e.g., backdoor attacks), which existing work has seldom investigated. In this paper, we present a pioneering study of backdoor attacks against FCL, illustrating how backdoor attacks mounted by distributed local clients act on downstream tasks. Specifically, in our system, malicious clients can inject a backdoor into the global encoder by uploading poisoned local updates, so that downstream models built on this global encoder also inherit the backdoor. We further investigate how to inject backdoors into multiple downstream models through two different backdoor attacks, namely the centralized attack and the decentralized attack. Experimental results show that both the centralized and the decentralized attacks inject backdoors into downstream models effectively, achieving high attack success rates. Finally, we evaluate two defense methods against the proposed backdoor attacks in FCL; the results indicate that the decentralized backdoor attack is stealthier and harder to defend against.
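To make the attack vector concrete, below is a minimal, self-contained sketch of how a poisoned local update can survive federated averaging and steer the global encoder toward a backdoor. This is an illustration under simplifying assumptions, not the paper's implementation: the toy parameter vector, the benign_update/poisoned_update helpers, the backdoor direction, and the boost factor are all hypothetical.

```python
# Toy simulation: one malicious client injects a backdoor into a
# federated-averaging round by uploading a scaled (poisoned) update.
# Everything here is a hypothetical sketch, not the BAGEL attack itself.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8            # toy stand-in for the encoder's parameter vector
NUM_CLIENTS = 5
MALICIOUS_ID = 0

def benign_update(global_params):
    # Honest client: a small, noisy local training step.
    return global_params + 0.01 * rng.normal(size=DIM)

def poisoned_update(global_params, backdoor_direction, boost=5.0):
    # Malicious client: pulls the encoder toward a backdoor-aligned
    # target, scaling the update so it survives averaging over clients.
    target = global_params + backdoor_direction
    return global_params + boost * (target - global_params)

global_params = np.zeros(DIM)
backdoor_direction = 0.1 * np.ones(DIM)  # hypothetical trigger-aligned shift

for rnd in range(10):
    updates = []
    for cid in range(NUM_CLIENTS):
        if cid == MALICIOUS_ID:
            updates.append(poisoned_update(global_params, backdoor_direction))
        else:
            updates.append(benign_update(global_params))
    global_params = np.mean(updates, axis=0)  # FedAvg-style aggregation

# The global encoder drifts toward the backdoor direction round by round,
# which is why downstream models built on it inherit the backdoor.
alignment = global_params @ backdoor_direction / np.linalg.norm(backdoor_direction)
print(f"backdoor alignment after training: {alignment:.3f}")
```

Because the poisoned update is averaged with many benign ones, the attacker scales it up so its contribution persists after aggregation; any downstream classifier built on the resulting encoder then inherits the planted behavior.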