Exploiting Type I Adversarial Examples to Hide Data Information: A New Privacy-Preserving Approach

IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE (2024)

Abstract
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are either produced by corrupting benign examples with imperceptible perturbations or changed substantially while still yielding the original prediction results. The latter case is termed the Type I adversarial example and has received limited attention in the literature. In this paper, we introduce two methods, termed HRG and GAG, to generate Type I adversarial examples and apply them to privacy-preserving Machine Learning as a Service (MLaaS). Existing methods for privacy-preserving MLaaS are mostly based on cryptographic techniques, which often incur additional communication and computation overhead, whereas using Type I adversarial examples to hide users' private data is a brand-new exploration. Specifically, HRG utilizes the high-level representations of DNNs to guide generators, while GAG leverages a generative adversarial network to transform original images. Our solution does not involve any model modification and allows DNNs to run directly on the transformed data, thus incurring no additional communication or computation overhead. Extensive experiments on MNIST, CIFAR-10, and ImageNet show that HRG can perfectly hide images into noise while achieving accuracy close to the original, and that GAG can generate natural images that are completely different from the originals with only a small loss of accuracy.
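To make the Type I idea concrete, the sketch below shows one plausible training objective for a representation-guided transformation: keep the classifier's high-level features of the transformed image close to those of the original (so the prediction is preserved), while pushing the transformed image far away in pixel space (so the original content is hidden). This is a minimal, hypothetical formulation for illustration only; the function names, the MSE-based terms, and the weighting parameter `lam` are assumptions, not the paper's exact HRG or GAG losses.

```python
import torch
import torch.nn.functional as F

def type1_objective(x, x_transformed, feature_extractor, lam=1.0):
    """Hypothetical Type I-style loss.

    x              -- batch of original images
    x_transformed  -- batch of generator outputs (same shape as x)
    feature_extractor -- frozen DNN head producing high-level representations
    lam            -- trade-off weight (assumed, not from the paper)
    """
    # High-level representations should stay similar, so the target
    # classifier still produces the original prediction.
    feat_orig = feature_extractor(x).detach()
    feat_new = feature_extractor(x_transformed)
    feature_term = F.mse_loss(feat_new, feat_orig)

    # Pixel-space distance should be large, so the transformed image
    # looks nothing like the original and hides its content.
    pixel_term = F.mse_loss(x_transformed, x)

    # Minimize this value: match features, maximize pixel-level change.
    return feature_term - lam * pixel_term
```

In practice a generator network would be optimized against this kind of objective (with the classifier frozen), and the MLaaS user would upload only the transformed images.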
Keywords
Type I adversarial examples, deep neural networks, privacy-preserving MLaaS