Towards Data-Independent Knowledge Transfer in Model-Heterogeneous Federated Learning

IEEE Transactions on Computers (2023)

Cited 10 | Views 70
Abstract
Federated Distillation (FD) extends classic Federated Learning (FL) to a more general training framework that enables model-heterogeneous collaborative learning through Knowledge Distillation (KD) across multiple clients and the server. However, existing KD-based algorithms usually require a set of shared input samples on which each client produces soft predictions for distillation. Worse still, selecting such samples manually requires careful deliberation or prior knowledge of clients' private data distributions, which conflicts with the privacy-preserving character of classic FL. In this paper, we propose a novel training framework that achieves data-independent knowledge transfer by designing a distributed generative adversarial network (GAN) between the server and clients that synthesizes shared feature representations to facilitate FD training. Specifically, we deploy a generator on the server and reuse each local model as a federated discriminator, forming a lightweight, efficient distributed GAN that automatically synthesizes simulated global feature representations for distillation. Moreover, since the synthesized feature representations are usually more faithful to the global data distribution, faster and better training convergence can be obtained. Extensive experiments on different tasks and heterogeneous models demonstrate the effectiveness of the proposed framework in terms of model accuracy and communication overhead.
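The distillation round described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's actual implementation: the generator, the heterogeneous client heads, and the simple averaging of soft predictions into a global teacher are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical server-side generator: maps noise to shared feature
# representations (replacing the manually selected shared samples).
W_gen = rng.normal(size=(8, 16))
def generate_features(noise):
    return np.tanh(noise @ W_gen)

# Hypothetical heterogeneous clients: each local model, reused as a
# federated discriminator, maps features to class logits with its own head.
client_heads = [rng.normal(size=(16, 10)) for _ in range(3)]

# One FD round: the server synthesizes feature representations, each client
# produces soft predictions on them, and the server aggregates those
# predictions into a global teacher signal for distillation.
noise = rng.normal(size=(32, 8))
features = generate_features(noise)
soft_preds = [softmax(features @ head) for head in client_heads]
teacher = np.mean(soft_preds, axis=0)  # soft targets, shape (32, 10)
```

No private data leaves the clients in this loop: only soft predictions on server-synthesized features are exchanged, which is the data-independence property the abstract claims.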
Keywords
knowledge, transfer, learning, data-independent, model-heterogeneous