GI-SMN: Gradient Inversion Attack against Federated Learning without Prior Knowledge
arXiv (2024)
Abstract
Federated learning (FL) has emerged as a privacy-preserving machine learning
approach where multiple parties share gradient information rather than original
user data. Recent work has demonstrated that gradient inversion attacks can
exploit the gradients of FL to recreate the original user data, posing
significant privacy risks. However, these attacks make strong assumptions about
the attacker, such as the ability to alter the model structure or parameters,
access to batch normalization statistics, or prior knowledge of the original
training set. Consequently, these attacks are impractical in real-world
scenarios. To this end, we propose a novel Gradient Inversion attack based on a
Style Migration Network (GI-SMN), which breaks through the strong assumptions
made by previous gradient inversion attacks. The optimization space is reduced
by refining the latent code and using regularization terms to facilitate
gradient matching. GI-SMN can reconstruct user data with high similarity in
batches. Experimental results demonstrate that GI-SMN outperforms
state-of-the-art gradient inversion attacks in both visual quality and
similarity metrics. Moreover, it can overcome gradient pruning and differential
privacy defenses.
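The gradient-matching objective at the heart of such attacks can be sketched as follows. This is a minimal illustration only: the single linear layer, the label `t` being known to the attacker, and the finite-difference optimizer are simplifying assumptions for exposition, not the paper's actual style-migration-network pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "client" model: y = W @ x with squared-error loss L = ||W @ x - t||^2
# for a single sample (x, t).  Shapes and the known label t are assumptions.
W = rng.normal(size=(2, 3))
x_true = rng.normal(size=3)
t = rng.normal(size=2)

def grad_W(x):
    """Gradient of L w.r.t. the weights W for input x (what FL clients share)."""
    r = W @ x - t                 # residual
    return 2.0 * np.outer(r, x)   # dL/dW = 2 r x^T

g_shared = grad_W(x_true)         # gradient uploaded by the client

def match_loss(x):
    """Squared distance between the dummy gradient and the shared gradient."""
    d = grad_W(x) - g_shared
    return float(np.sum(d * d))

def num_grad(f, x, h=1e-5):
    """Central-difference gradient of f at x (stand-in for autodiff)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Attacker: start from a random dummy input and descend on the matching loss,
# with simple backtracking so each accepted step strictly decreases the loss.
x = rng.normal(size=3)
lr = 0.05
loss = match_loss(x)
init_loss = loss
for _ in range(2000):
    step = num_grad(match_loss, x)
    while lr > 1e-12 and match_loss(x - lr * step) >= loss:
        lr *= 0.5                 # shrink until the loss decreases
    x_new = x - lr * step
    new_loss = match_loss(x_new)
    if new_loss < loss:
        x, loss = x_new, new_loss
        lr = min(lr * 2.0, 1.0)   # cautiously re-grow the step size
```

In the paper's setting this dummy input is instead parameterized by the latent code of a generative network, which (together with the regularization terms) shrinks the search space enough to make batch reconstruction feasible.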