APFed: Anti-Poisoning Attacks in Privacy-Preserving Heterogeneous Federated Learning

IEEE Trans. Inf. Forensics Secur. (2023)

Abstract
Federated learning (FL) is an emerging paradigm of privacy-preserving distributed machine learning that effectively deals with the privacy leakage problem by utilizing cryptographic primitives. However, preventing poisoning attacks in distributed settings has recently become a major FL concern. Indeed, an adversary can manipulate multiple edge nodes and submit malicious gradients to disrupt the global model's availability. Currently, most existing works assume an Independently Identical Distribution (IID) setting and identify malicious gradients in plaintext. However, we demonstrate that current works cannot handle the challenges of data heterogeneity scenarios and that publishing unencrypted gradients poses significant privacy leakage risks. Therefore, we develop APFed, a layered privacy-preserving defense mechanism that significantly mitigates the effects of poisoning attacks in data heterogeneity scenarios. Specifically, we exploit homomorphic encryption (HE) as the underlying technique and employ the coordinate-wise median as the benchmark. Subsequently, we propose a secure cosine similarity scheme to identify poisonous gradients, and we innovatively use clustering as part of the defense mechanism and develop a hierarchical aggregation that enhances our scheme's robustness in IID and non-IID scenarios. Extensive evaluations on two benchmark datasets demonstrate that APFed outperforms existing defense strategies while reducing the communication overhead by replacing the expensive remote communication method with inexpensive intra-cluster communication.
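The core filtering idea described above can be sketched in plaintext: compute a coordinate-wise median of the submitted gradients as a benchmark, then flag gradients whose cosine similarity to that benchmark is low. This is only an illustrative sketch, not APFed's actual protocol; the paper performs the comparison under HE, and the function name and threshold below are hypothetical.

```python
import numpy as np

def filter_gradients(gradients, threshold=0.0):
    """Plaintext sketch of median-benchmark cosine filtering.

    APFed evaluates this kind of similarity check under homomorphic
    encryption; here everything is in the clear for illustration.
    `threshold` is a hypothetical cutoff, not a value from the paper.
    """
    grads = np.stack(gradients)               # shape: (n_clients, dim)
    benchmark = np.median(grads, axis=0)      # coordinate-wise median
    # Cosine similarity of each client gradient to the benchmark.
    denom = np.linalg.norm(grads, axis=1) * np.linalg.norm(benchmark)
    sims = grads @ benchmark / denom
    kept = [g for g, s in zip(gradients, sims) if s >= threshold]
    return kept, sims

# Honest clients push in roughly the same direction; one attacker
# submits a sign-flipped gradient (a simple poisoning strategy).
honest = [np.array([1.0, 1.0]), np.array([0.9, 1.1]), np.array([1.1, 0.9])]
poisoned = [np.array([-1.0, -1.0])]
kept, sims = filter_gradients(honest + poisoned)
```

With these toy inputs the sign-flipped gradient has negative similarity to the median benchmark and is excluded, while all three honest gradients are kept. The median (rather than the mean) is used as the benchmark because it remains close to the honest direction even when a minority of submissions are adversarial.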
Keywords
attacks, learning, APFed, anti-poisoning, privacy-preserving