Batch data recovery from gradients based on generative adversarial networks

Yunbo Huang, Yuwen Chen, José-Fernán Martínez-Ortega, Haiyang Yu, Zhen Yang

Neural Computing and Applications (2024)

Abstract
In the federated learning scenario, private data are kept local and gradients are shared to train the global model. Because gradients are computed from the private training data, features of that data are encoded in them. Prior work has shown that private training data can be reconstructed from gradients. However, only small batches of images can be recovered, and the reconstruction quality, especially for large batch sizes, is unsatisfactory. To improve reconstruction quality for large batches of images, a generative gradient inversion attack based on a regularization term, called fDLG, is designed. First, a regularization term that suppresses drastic variations within image regions is proposed, based on the observation that changes between neighboring pixels are gradual; it encourages the synthesized dummy image to be piecewise smooth. Second, generative adversarial networks are trained to improve the quality of the attack, with the global model used as the discriminator. Simulations show that large batches of images (128 images on CIFAR100, 256 images on MNIST) can be faithfully reconstructed at high resolution, and even large images from ImageNet can be reconstructed.
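To make the attack concrete, below is a minimal PyTorch-style sketch of a gradient-matching (DLG-style) optimization step with a total-variation-like smoothness prior on the dummy batch, which is one common way to realize the piecewise-smoothness regularization described above. The function names (total_variation, gradient_inversion_step), the tv_weight value, and the optimization details are illustrative assumptions, not the paper's exact fDLG formulation, and the GAN component with the global model as discriminator is omitted.

import torch

def total_variation(x):
    # Smoothness prior: penalize differences between neighboring pixels
    # so the synthesized dummy images stay piecewise smooth.
    # x has shape (batch, channels, height, width).
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw

def gradient_inversion_step(model, loss_fn, target_grads, dummy_x, dummy_y,
                            optimizer, tv_weight=1e-2):
    # One optimization step: make the gradients produced by the dummy batch
    # match the observed (shared) gradients, regularized by the smoothness prior.
    # dummy_x is a learnable image tensor; dummy_y are the (inferred or
    # jointly optimized) labels; target_grads are the leaked gradients.
    optimizer.zero_grad()
    pred = model(dummy_x)
    task_loss = loss_fn(pred, dummy_y)
    dummy_grads = torch.autograd.grad(task_loss, model.parameters(),
                                      create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, target_grads))
    total = grad_diff + tv_weight * total_variation(dummy_x)
    total.backward()
    optimizer.step()
    return total.item()

In practice the dummy batch (and optionally the labels) is registered as the optimizer's parameters and this step is iterated until the gradient-matching loss converges; the paper's fDLG additionally drives the synthesis through a trained generator rather than optimizing raw pixels directly.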
Keywords
Data leakage, Federated learning, Deep learning, Data privacy