A security-friendly privacy-preserving solution for federated learning.

Comput. Commun. (2023)

Abstract
Federated learning is a privacy-aware collaborative machine learning method in which clients jointly construct a global model by training locally on their own data and sending only their local model updates to the server. Although it enhances privacy by letting clients collaborate without sharing their training data, it remains vulnerable to sophisticated privacy attacks because of possible information leakage from the local model updates sent to the server. To prevent such attacks, secure aggregation protocols are generally proposed so that the server can access only the aggregated result rather than the individual local model updates. However, such secure aggregation approaches may prevent the execution of defense mechanisms against security attacks on model training, such as poisoning and backdoor attacks, because the server cannot access the individual local model updates and therefore cannot analyze them to detect anomalies resulting from these attacks. Thus, federated learning needs solutions that satisfy privacy and security at the same time, or new privacy-preserving solutions that allow the server to perform some analysis on the local model updates without violating privacy. In this paper, we introduce a novel security-friendly privacy solution for federated learning based on multi-hop communication to hide clients' identities. Our solution ensures that forwardee clients on the path between the source client and the server can neither alter model updates nor contribute more than one local model update to the global model construction in a single FL round. We then propose two approaches that additionally make the solution robust against malicious packet dropping by forwardee clients.
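The abstract does not specify the exact mechanism used to stop forwardee clients from tampering with relayed updates or injecting extra contributions, so the following is only a minimal illustrative sketch of the general idea: each client signs its local update under a fresh per-round pseudonymous key before relaying it over multiple hops, and the server rejects tampered packets and accepts at most one update per key per round. All names (`make_signed_update`, `forward`, `server_aggregate`) are hypothetical, and the use of Ed25519 signatures via the `cryptography` package is an assumption, not the paper's protocol.

```python
# Hypothetical sketch: multi-hop relaying of signed local updates.
# Assumes the `cryptography` package; not the paper's actual construction.
import pickle
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_signed_update(update, round_id):
    """Client side: sign the serialized update under a fresh per-round key."""
    key = Ed25519PrivateKey.generate()  # pseudonymous key, not linked to the client's identity
    payload = pickle.dumps((round_id, update))
    return {"round": round_id,
            "update": update,
            "pubkey": key.public_key(),
            "sig": key.sign(payload)}


def forward(packet):
    """Forwardee side: relay the packet unchanged; any alteration breaks the signature."""
    return packet  # a malicious hop that modifies `update` is caught by the server


def server_aggregate(packets, round_id):
    """Server side: keep only untampered, once-per-key updates and average them."""
    accepted, seen_keys = [], set()
    for p in packets:
        key_bytes = p["pubkey"].public_bytes(serialization.Encoding.Raw,
                                             serialization.PublicFormat.Raw)
        if p["round"] != round_id or key_bytes in seen_keys:
            continue  # stale round or duplicate contribution under the same key
        try:
            p["pubkey"].verify(p["sig"], pickle.dumps((p["round"], p["update"])))
        except InvalidSignature:
            continue  # update was altered by a forwardee along the path
        seen_keys.add(key_bytes)
        accepted.append(p["update"])
    n = len(accepted)
    return [sum(vals) / n for vals in zip(*accepted)] if n else None


# Toy usage: two honest clients, each update relayed through one forwardee hop.
pkts = [forward(make_signed_update([0.1, 0.2], round_id=1)),
        forward(make_signed_update([0.3, 0.4], round_id=1))]
print(server_aggregate(pkts, round_id=1))  # -> [0.2, 0.3]
```

Note that this sketch only illustrates tamper detection and the one-update-per-round restriction; the paper's two approaches for handling malicious packet drops by forwardee clients are not modeled here.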
Keywords
Federated learning, Privacy, Security attacks, Poisoning attacks, Multi-hop communication