Combining split and federated architectures for efficiency and privacy in deep learning

CoNEXT '20: The 16th International Conference on emerging Networking EXperiments and Technologies, Barcelona, Spain, December 2020

Abstract
Distributed learning systems are increasingly being adopted for a variety of applications as centralized training becomes infeasible. Several architectures have emerged to divide and conquer the computational load, or to run privacy-aware deep learning models, using split or federated learning. Each architecture has benefits and drawbacks. In this work, we compare the efficiency and privacy performance of two distributed learning architectures that combine the principles of split and federated learning, aiming to get the best of both. In particular, our design goals are to reduce the computational power required by each client in federated learning and to parallelize split learning. We share some initial lessons learned from our implementation, which leverages the PySyft and PyGrid libraries.
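To make the combined architecture concrete, the sketch below illustrates the general split-plus-federated training pattern the abstract describes: each client runs only the early layers of the model, the server runs the rest, and the client-side sub-models are periodically averaged across clients. This is a minimal illustration in plain PyTorch, not the paper's actual PySyft/PyGrid implementation; the split point, model shapes, two-client setup, and synthetic data are all assumptions made for the example.

```python
# Minimal sketch of split + federated training (SplitFed-style).
# Assumptions: the cut layer, model sizes, client count, and synthetic
# data are illustrative only; the paper's system uses PySyft/PyGrid.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_client_part():
    # Early layers: run on the (resource-constrained) client device.
    return nn.Sequential(nn.Linear(20, 32), nn.ReLU())

def make_server_part():
    # Remaining layers: run on the server, sparing client compute.
    return nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

num_clients = 2
clients = [make_client_part() for _ in range(num_clients)]
server = make_server_part()
loss_fn = nn.CrossEntropyLoss()
server_opt = torch.optim.SGD(server.parameters(), lr=0.1)

for rnd in range(3):                        # federated rounds
    for client in clients:                  # clients could run in parallel
        x = torch.randn(8, 20)              # client's private batch
        y = torch.randint(0, 2, (8,))
        client_opt = torch.optim.SGD(client.parameters(), lr=0.1)

        # Split forward: only the cut-layer activation ("smashed data")
        # leaves the client, never the raw inputs.
        smashed = client(x)
        smashed_srv = smashed.detach().requires_grad_(True)
        out = server(smashed_srv)
        loss = loss_fn(out, y)

        # Server-side backward, then the cut-layer gradient is returned
        # to the client to finish its local backward pass.
        server_opt.zero_grad()
        client_opt.zero_grad()
        loss.backward()
        server_opt.step()
        smashed.backward(smashed_srv.grad)
        client_opt.step()

    # Federated averaging of the client-side sub-models after each round.
    with torch.no_grad():
        avg = copy.deepcopy(clients[0].state_dict())
        for key in avg:
            avg[key] = torch.stack(
                [c.state_dict()[key] for c in clients]).mean(0)
        for c in clients:
            c.load_state_dict(avg)
    print(f"round {rnd}: last loss {loss.item():.4f}")
```

In this pattern, the per-client cost is limited to the forward and backward pass of the small client-side sub-model, while the inner client loop can be parallelized across clients, matching the two design goals stated in the abstract.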