Detection and Attribution of Models Trained on Generated Data

ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)(2024)

Abstract
Generative Adversarial Networks (GANs) have become widely used in model training, as they can improve performance and/or protect sensitive information by generating data. However, this also raises potential risks, as malicious GANs may compromise or sabotage models by poisoning their training data. It is therefore important to verify the origin of a model's training data for accountability purposes. In this work, we take the first step in the forensic analysis of models trained on GAN-generated data. Specifically, we first detect whether a model was trained on GAN-generated or real data. We then attribute models trained on GAN-generated data to their respective source GANs. We conduct extensive experiments on three datasets, using four popular GAN architectures and four common model architectures. Empirical results show the remarkable performance of our detection and attribution methods. Furthermore, we conduct a more in-depth study and reveal that models trained on different data sources exhibit different decision boundaries and behaviours.
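The abstract describes a two-stage forensic pipeline: first a binary detector decides whether a suspect model was trained on GAN-generated or real data, then an attributor assigns GAN-trained models to one of the candidate source GANs. A minimal sketch of that two-stage structure, with synthetic stand-in features and logistic-regression classifiers (the paper's actual feature extraction and classifiers are not specified here; all names below are illustrative assumptions):

```python
# Hedged sketch of the detect-then-attribute pipeline from the abstract.
# Feature extraction from a suspect model (e.g., statistics of its
# outputs on probe inputs) is abstracted as a fixed-length vector;
# fake_model_features is a purely illustrative stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_model_features(source: int, n: int = 50, dim: int = 16) -> np.ndarray:
    """Stand-in features for models trained on a given data source:
    0 = real data, 1..4 = one of four GANs. Each source gets its own
    cluster mean so the toy problem is separable."""
    return rng.normal(loc=source, scale=0.5, size=(n, dim))

# Toy training set: models trained on real data (label 0) and on
# four different GANs (labels 1-4), mirroring the four architectures
# used in the paper's experiments.
X = np.vstack([fake_model_features(s) for s in range(5)])
y = np.repeat(np.arange(5), 50)

# Stage 1 (detection): was the model trained on generated data at all?
detector = LogisticRegression(max_iter=1000).fit(X, (y > 0).astype(int))

# Stage 2 (attribution): among GAN-trained models, which source GAN?
gan_mask = y > 0
attributor = LogisticRegression(max_iter=1000).fit(X[gan_mask], y[gan_mask])

# Run the pipeline on a probe model whose features came from GAN 3.
probe = fake_model_features(source=3, n=1)
if detector.predict(probe)[0] == 1:
    print("GAN-trained; attributed to source GAN", attributor.predict(probe)[0])
else:
    print("trained on real data")
```

The two-stage design means attribution only ever runs on models the detector has already flagged as GAN-trained, so the attributor can be trained purely on GAN-trained examples.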
Keywords
Generative Adversarial Networks (GANs), GAN-trained models, forensic analysis, accountability