SAFER: Sparse secure Aggregation for FEderated leaRning

arXiv (2021)

Abstract
Federated learning enables training a common machine learning model across separate, privately held datasets via distributed model training. During federated training, only intermediate model parameters are transmitted to a central server, which aggregates them into a new common model, so only these intermediate parameters, rather than the training data itself, are exposed. However, some attacks (e.g., membership inference) can infer properties of the private data from these intermediate model parameters, so the aggregation of the client-specific model parameters must be performed securely. Additionally, communication cost is often the bottleneck of federated systems, especially for large neural networks, so limiting the number and size of communications is necessary to train large neural architectures efficiently. In this article, we present an efficient and secure protocol for performing secure aggregation over compressed model updates in the context of collaborative, few-party federated learning, a setting common in medical, healthcare, and biotechnical uses of federated systems. By making compression-based federated techniques amenable to secure computation, we develop a secure aggregation protocol between multiple servers with very low communication and computation costs and no preprocessing overhead. Our experiments demonstrate the efficiency of this new approach for secure federated training of deep convolutional neural networks.
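The paper's actual protocol is not given in this abstract; as a rough illustration of the general idea it describes (secure aggregation of sparsified client updates via multiple non-colluding servers), the following is a minimal Python sketch using top-k sparsification and additive secret sharing over a finite field. The field modulus, the fixed-point scaling, and all function names here are illustrative assumptions, not the paper's construction; note also that this simplified version reveals each client's nonzero coordinate indices to the servers, a leakage a real protocol would need to address.

```python
import random

PRIME = 2**61 - 1  # field modulus (illustrative choice, assumed large enough)
SCALE = 10**6      # fixed-point scaling factor for encoding float updates

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries; return {index: value}."""
    idx = sorted(range(len(update)), key=lambda i: abs(update[i]), reverse=True)[:k]
    return {i: update[i] for i in idx}

def share(value, n_servers):
    """Split one fixed-point value into n_servers additive shares mod PRIME."""
    fixed = round(value * SCALE) % PRIME
    shares = [random.randrange(PRIME) for _ in range(n_servers - 1)]
    shares.append((fixed - sum(shares)) % PRIME)  # shares sum to `fixed` mod PRIME
    return shares

def reconstruct(shares):
    """Recombine additive shares and decode back to a signed float."""
    total = sum(shares) % PRIME
    if total > PRIME // 2:  # map field element back to the signed range
        total -= PRIME
    return total / SCALE

# Simulated run: two clients, three servers, top-2 sparsification.
clients = [[0.5, -1.2, 0.01, 3.0], [2.0, 0.1, -0.7, 0.05]]
n_servers, k = 3, 2
server_sums = [dict() for _ in range(n_servers)]  # per-server: index -> share sum
for upd in clients:
    for idx, val in top_k_sparsify(upd, k).items():
        for s, sh in enumerate(share(val, n_servers)):
            server_sums[s][idx] = (server_sums[s].get(idx, 0) + sh) % PRIME

# Each server publishes only its per-index share sums; combining them yields
# the aggregate of the sparsified updates, never any individual update.
all_idx = set().union(*server_sums)
agg = {i: reconstruct([srv.get(i, 0) for srv in server_sums]) for i in all_idx}
```

In this toy run, no single server ever sees a client's plaintext coordinate value, only a uniformly random share of it, yet the combined per-index sums recover the exact aggregate of the sparsified updates.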
Keywords
federated learning, aggregation, sparse