Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking
CoRR (2024)
Abstract
In Virtual Reality (VR), adversarial attacks remain a significant security
threat. Most deep learning-based methods for physical and digital adversarial
attacks focus on enhancing attack performance by crafting adversarial examples
that contain large printable distortions that are easy for human observers to
identify. However, these methods rarely constrain the naturalness and visual
plausibility of the generated attack image, resulting in attacks that are
noticeable and unnatural. To address this challenge, we propose a framework
that incorporates style transfer to craft adversarial inputs in natural styles,
exhibiting minimal detectability and maximal natural appearance while
maintaining superior attack capability.
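The trade-off the abstract describes, attack strength versus natural appearance, can be illustrated with a hedged toy sketch. This is not the paper's actual method (which uses Stable Diffusion and style transfer); it is a hypothetical gradient attack on a linear classifier where a naturalness penalty (here, a simple distance-to-original term standing in for a style constraint) is weighed against the attack objective. All names and parameters below are illustrative assumptions.

```python
import numpy as np

def toy_attack(x, w, b, y, steps=100, lr=0.1, nat_weight=0.1):
    """Toy adversarial attack on a linear classifier logit = w @ x + b.

    Minimizes  y * logit  (pushing the true-class score down) plus
    nat_weight * ||delta||^2, a stand-in for a naturalness constraint:
    larger nat_weight keeps the result closer to the original input.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        # gradient of y * (w @ (x + delta) + b) w.r.t. delta is y * w;
        # gradient of nat_weight * ||delta||^2 is 2 * nat_weight * delta
        grad = y * w + 2 * nat_weight * delta
        delta -= lr * grad
    return x + delta
```

With a small `nat_weight` the perturbation is free to flip the classifier's decision; raising it forces the output to stay near the clean input, mirroring the paper's goal of balancing attack success against a natural-looking result.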
Keywords
Computing methodologies—Artificial intelligence—Computer vision; Computing methodologies—Computer graphics—Image manipulation—Image processing