Reprogrammable-FL: Improving Utility-Privacy Tradeoff in Federated Learning via Model Reprogramming

2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 2023

Abstract
Model reprogramming (MR) is an emerging and powerful technique for cross-domain machine learning: it enables a model well-trained on a source task to be used for a different target task without finetuning the model weights. In this work, we propose Reprogrammable-FL, the first framework adapting MR to the setting of differentially private federated learning (FL), and demonstrate that it significantly improves the utility-privacy tradeoff compared to standard transfer learning methods (full/partial finetuning) and training from scratch in FL. Experimental results on several deep neural networks and datasets show accuracy improvements of more than 60% under the same privacy budget. The code repository can be found at https://github.com/IBM/reprogrammble-FL.
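The abstract's core idea, that a frozen source model can serve a new target task through a trainable input transformation and a fixed output label mapping, can be illustrated with a minimal sketch. This is an assumption-laden toy (a random linear "source model", an additive input program `delta`, and a modulo label map), not the paper's actual method or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "source model": a fixed random linear classifier
# over 10 source classes, standing in for a pretrained network whose
# weights are never updated.
W_src = rng.standard_normal((16, 10))

def source_model(x):
    """Frozen source model: returns logits over the 10 source classes."""
    return x @ W_src

# Model reprogramming, illustratively:
# 1) a trainable additive "input program" delta applied to target inputs
#    (this is the only thing a client would learn/update);
# 2) a fixed many-to-one mapping from source labels to 2 target labels.
delta = np.zeros(16)              # trainable input perturbation (init 0)
label_map = np.array([0, 1] * 5)  # source class i -> target class i % 2

def reprogrammed_predict(x):
    """Predict a target label using the frozen source model."""
    logits = source_model(x + delta)  # source weights stay untouched
    src_class = int(np.argmax(logits))
    return int(label_map[src_class])  # remap to the target label space
```

In an FL setting, only `delta` (and possibly the label mapping) would be trained and communicated, which is why MR pairs naturally with differential privacy: the trainable parameter count, and hence the noise needed for a given privacy budget, stays small.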