When Deep Fool Meets Deep Prior: Adversarial Attack on Super-Resolution Network

MM '18: ACM Multimedia Conference, Seoul, Republic of Korea, October 2018

Abstract
This paper investigates the vulnerability of the deep prior used in deep learning based image restoration. In particular, image super-resolution, which relies on strong prior information to regularize the solution space and plays an important role in image pre-processing for future viewing and analysis, is shown to be vulnerable to well-designed adversarial examples. We formulate the adversarial example generation process as an optimization problem and, given a super-resolution model, design three types of attacks based on the downstream tasks: (i) a style transfer attack; (ii) a classification attack; (iii) a caption attack. Another interesting property of our design is that the attack is hidden behind the super-resolution process, so that the usability of the low-resolution images is not significantly affected. We show that this vulnerability to adversarial examples brings risks to pre-processing modules such as super-resolution deep neural networks, which is of paramount significance for the security of the whole system. Our results also shed light on the potential security issues of pre-processing modules and raise concerns regarding the corresponding countermeasures for adversarial examples.
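
To make the optimization formulation concrete, below is a minimal PGD-style sketch of how such an attack could be set up, assuming a differentiable PyTorch super-resolution model and images normalized to [0, 1]. Here `sr_model`, `task_loss`, and the step/budget parameters are hypothetical placeholders, not the paper's actual method or values; a targeted variant (e.g., forcing a specific caption) would instead perform gradient descent on a target loss.

```python
import torch

def sr_adversarial_attack(sr_model, task_loss, lr_image,
                          steps=100, epsilon=4/255, alpha=1/255):
    """PGD-style sketch: find a small perturbation of the low-resolution
    input so that the super-resolved output misleads a downstream task
    (classification, captioning, or style transfer)."""
    sr_model.eval()
    for p in sr_model.parameters():        # attack the input, not the weights
        p.requires_grad_(False)

    delta = torch.zeros_like(lr_image, requires_grad=True)
    for _ in range(steps):
        sr_output = sr_model(lr_image + delta)   # super-resolve perturbed LR input
        loss = task_loss(sr_output)              # downstream-task loss on SR output
        loss.backward()
        with torch.no_grad():
            # Untargeted attack: gradient ascent on the task loss,
            # projected back onto an L-infinity ball of radius epsilon.
            delta += alpha * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
            delta.clamp_(-lr_image, 1 - lr_image)  # keep pixels in [0, 1]
        delta.grad.zero_()
    return (lr_image + delta).detach()
```

Because the perturbation budget is enforced in the low-resolution domain, the adversarial LR image stays visually close to the original, which matches the paper's observation that the attack is hidden behind the super-resolution step.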
Keywords
Deep prior, super-resolution, adversarial attack, style transfer, image classification, caption