Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking

CoRR (2024)

Abstract
In Virtual Reality (VR), adversarial attacks remain a significant security threat. Most deep learning-based methods for physical and digital adversarial attacks focus on enhancing attack performance by crafting adversarial examples with large, printable distortions that are easy for human observers to identify. However, attackers rarely impose limitations on the naturalness and visual comfort of the generated attack image, resulting in noticeable and unnatural attacks. To address this challenge, we propose a framework that incorporates style transfer to craft adversarial inputs with natural styles, exhibiting minimal detectability and a maximally natural appearance while maintaining superior attack capability.
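The abstract describes the approach only at a high level. Below is a minimal, illustrative PyTorch sketch (not the paper's implementation) of one way style transfer can be folded into adversarial example crafting: a Gram-matrix style loss and a targeted misclassification loss are optimized jointly over the pixels of an input image. The victim model (ResNet-18), the chosen VGG-16 feature layers, and all loss weights are assumptions for illustration; ImageNet input normalization is omitted for brevity.

```python
# Illustrative sketch: style-constrained adversarial optimization.
# Assumed components: torchvision ResNet-18 as the victim classifier and
# VGG-16 features for a Gram-matrix style loss. Hyperparameters are arbitrary.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

victim = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval().to(device)
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval().to(device)
for p in list(victim.parameters()) + list(vgg.parameters()):
    p.requires_grad_(False)

def gram(feat):
    # Gram matrix of a feature map: captures style statistics.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def vgg_features(x, layers=(3, 8, 15, 22)):
    # Collect intermediate VGG-16 activations (relu1_2 ... relu4_3).
    feats, h = [], x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in layers:
            feats.append(h)
    return feats

def attack(content_img, style_img, target_class, steps=200, lr=0.02,
           w_adv=1.0, w_style=1e4, w_content=1.0):
    """Optimize image pixels so the victim predicts `target_class`
    while the image matches the Gram statistics of `style_img`."""
    x = content_img.clone().to(device).requires_grad_(True)
    style_grams = [gram(f) for f in vgg_features(style_img.to(device))]
    content_feats = vgg_features(content_img.to(device))
    target = torch.tensor([target_class], device=device)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        adv_loss = F.cross_entropy(victim(x), target)            # targeted misclassification
        feats = vgg_features(x)
        style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_grams))
        content_loss = F.mse_loss(feats[-1], content_feats[-1])  # preserve coarse content
        (w_adv * adv_loss + w_style * style_loss + w_content * content_loss).backward()
        opt.step()
        x.data.clamp_(0, 1)                                      # keep a valid image
    return x.detach()

# Usage example with placeholder random images in place of real content/style inputs.
content = torch.rand(1, 3, 224, 224)
style = torch.rand(1, 3, 224, 224)
adv = attack(content, style, target_class=42, steps=10)
```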
Keywords
Computing methodologies—Artificial intelligence—Computer vision; Computing methodologies—Computer graphics—Image manipulation—Image processing