Privacy-Preserving Byzantine-Robust Federated Learning

COMPUTER STANDARDS & INTERFACES (2022)

Abstract
The robustness of federated learning has become a major concern, since Byzantine adversaries, who may upload false data owing to unreliable communication channels, corrupted hardware, or even malicious attacks, can be concealed among the distributed workers. Meanwhile, it has been shown that membership inference attacks and model inversion attacks against federated learning can leak private training data. To address these challenges, we propose a privacy-preserving Byzantine-robust federated learning scheme (PBFL) that accounts for both the robustness of federated learning and the privacy of the workers. PBFL builds on an existing Byzantine-robust federated learning algorithm and combines it with distributed Paillier encryption and zero-knowledge proofs to guarantee privacy and to filter out anomalous parameters uploaded by Byzantine adversaries. Finally, we prove that our scheme provides a higher level of privacy protection than previous Byzantine-robust federated learning algorithms.
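To make the homomorphic-aggregation idea in the abstract concrete, the sketch below shows plain (single-key) Paillier encryption letting a server sum workers' encrypted gradient updates without seeing any individual plaintext. This is an illustrative assumption, not the paper's actual construction: PBFL uses a *distributed* Paillier scheme combined with zero-knowledge proofs and Byzantine-robust filtering, none of which are modeled here. All names and parameters are chosen for the demo.

```python
# Toy single-key Paillier sketch (assumption: simplified stand-in for the
# distributed Paillier scheme used in PBFL). Key sizes are deliberately
# tiny for readability; real deployments need >= 2048-bit moduli.
import random
from math import gcd

# Small fixed primes for demonstration only -- insecure in practice.
p, q = 65521, 65537
n = p * q
n2 = n * n
g = n + 1                                      # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)

def _L(x):
    return (x - 1) // n

mu = pow(_L(pow(g, lam, n2)), -1, n)           # modular inverse (Python >= 3.8)

def encrypt(m):
    """Encrypt an integer gradient value m (0 <= m < n)."""
    r = random.randrange(2, n)                 # fresh randomness per ciphertext
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (_L(pow(c, lam, n2)) * mu) % n

def aggregate(ciphertexts):
    """Homomorphic sum: the product of Paillier ciphertexts decrypts
    to the sum of the underlying plaintexts (mod n)."""
    acc = 1
    for c in ciphertexts:
        acc = (acc * c) % n2
    return acc

# Three workers upload encrypted (quantized) gradient values; the server
# aggregates them without learning any individual contribution.
updates = [encrypt(12), encrypt(7), encrypt(23)]
print(decrypt(aggregate(updates)))  # -> 42
```

In the full scheme, no single party holds the decryption key: it is shared across parties (distributed Paillier), and zero-knowledge proofs let the aggregator verify that uploaded ciphertexts are well-formed so that anomalous Byzantine updates can be filtered without decrypting them individually.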
Keywords
Federated learning, Privacy, Homomorphic encryption, Zero-knowledge proof