FedGR: A Lossless-Obfuscation Approach for Secure Federated Learning

2021 IEEE Global Communications Conference (GLOBECOM), 2021

Abstract
Federated learning is a promising new technology in the field of artificial intelligence. However, the unprotected model gradient parameters exchanged in federated learning may reveal sensitive participant information. To address this problem, we present a secure federated learning framework called FedGR. We use Paillier homomorphic encryption to design a new gradient security replacement algorithm, which severs the connection between the gradient parameters and users' sensitive data. In addition, we revisit the previous work of Aono and Hayashi (IEEE TIFS 2017) and show that their method places too heavy a computational burden on the user's local device. We then prove that FedGR has the following characteristics that address this problem: 1) the system does not leak any information to the server; 2) the accuracy of the federated training results yielded by our system matches that of ordinary deep learning systems; 3) the proposed approach greatly reduces the user's local computing overhead.
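The gradient-replacement scheme relies on the additive homomorphism of Paillier encryption: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can aggregate encrypted gradients without learning any individual value. Below is a minimal, self-contained sketch of that property with toy parameters; the key sizes, variable names, and fixed-point handling are illustrative assumptions, not the paper's actual protocol.

```python
# Toy Paillier cryptosystem illustrating the additive homomorphism used to
# aggregate gradients under encryption. Toy primes only -- NOT secure.
import math
import random

def paillier_keygen(p=293, q=433):
    """Generate a toy key pair; g = n + 1 simplifies decryption."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # valid because g = n + 1
    return (n, n + 1), (lam, mu, n)   # (public key, private key)

def encrypt(pub, m):
    """Enc(m) = g^m * r^n mod n^2 with random r coprime to n."""
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    lam, mu, n = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

pub, priv = paillier_keygen()
g1, g2 = 15, 27                       # two clients' (fixed-point scaled) gradient values
c = (encrypt(pub, g1) * encrypt(pub, g2)) % (pub[0] ** 2)
print(decrypt(priv, c))               # prints 42: the server sums gradients blindly
```

Because the ciphertext product decrypts to the plaintext sum, the server never sees any single client's gradient, which is the property the abstract's "lossless obfuscation" claim builds on.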
Keywords
Federated Learning, Lossless Obfuscation, Privacy Preserving, Homomorphic Encryption