Improving Adversarial Transferability of Visual-Language Pre-training Models through Collaborative Multimodal Interaction
CoRR (2024)
Abstract
Despite the substantial advancements in Vision-Language Pre-training (VLP)
models, their susceptibility to adversarial attacks poses a significant
challenge. Existing work rarely studies the transferability of attacks on VLP
models, leaving a substantial performance gap relative to white-box attacks. We
observe that prior work overlooks the interaction mechanisms between
modalities, which play a crucial role in understanding the intricacies of VLP
models. In response, we propose a novel attack, called Collaborative Multimodal
Interaction Attack (CMI-Attack), leveraging modality interaction through
embedding guidance and interaction enhancement. Specifically, CMI-Attack
perturbs text at the embedding level while preserving semantics, and uses
interaction image gradients to strengthen the constraints on text and image
perturbations. Significantly, in the image-text retrieval task on the Flickr30K
dataset, CMI-Attack raises the transfer success rates from ALBEF to TCL,
CLIP_ViT and CLIP_CNN by 8.11%-16.75% over
state-of-the-art methods. Moreover, CMI-Attack also demonstrates superior
performance in cross-task generalization scenarios. Our work addresses the
underexplored realm of transfer attacks on VLP models, shedding light on the
importance of modality interaction for enhanced adversarial robustness.
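To make the two ingredients named in the abstract concrete, below is a minimal PyTorch sketch, not the authors' released CMI-Attack code: `img_encoder`, `vocab_emb`, `text_grad`, the HotFlip-style first-order candidate scoring, and all hyper-parameters are illustrative assumptions. The first function realizes an embedding-level text attack that stays semantically close by only considering nearest-neighbour embeddings; the second runs PGD on the image, steered by the gradient of image-text similarity, i.e. the modality interaction.

```python
# Hypothetical sketch of a modality-interaction attack; NOT the paper's implementation.
import torch
import torch.nn.functional as F

def embedding_guided_text_attack(text_emb, vocab_emb, text_grad, k=5):
    """Swap each token embedding for a close vocabulary neighbour that most
    increases the adversarial loss, approximated to first order by text_grad
    (a HotFlip-style criterion, assumed here for illustration)."""
    # cosine similarity of every token to the whole vocabulary: (seq_len, V)
    sims = F.normalize(text_emb, dim=-1) @ F.normalize(vocab_emb, dim=-1).T
    neighbours = sims.topk(k, dim=-1).indices             # (seq_len, k) close tokens
    cand = vocab_emb[neighbours]                          # (seq_len, k, dim)
    # first-order estimate of the loss change caused by each substitution
    scores = ((cand - text_emb.unsqueeze(1)) * text_grad.unsqueeze(1)).sum(-1)
    best = scores.argmax(-1)                              # (seq_len,)
    return cand[torch.arange(cand.size(0)), best]         # adversarial embeddings

def interaction_image_attack(image, text_emb, img_encoder, eps=8/255, steps=10):
    """PGD on the image, steered by the gradient of image-text similarity so
    the perturbation exploits the cross-modal interaction."""
    delta = torch.zeros_like(image, requires_grad=True)
    alpha = 1.25 * eps / steps                            # illustrative step size
    for _ in range(steps):
        img_emb = img_encoder(image + delta)              # (B, dim)
        # push the matched image-text pair apart in the shared embedding space
        loss = -F.cosine_similarity(
            img_emb, text_emb.mean(0, keepdim=True), dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()            # gradient ascent step
            delta.clamp_(-eps, eps)                       # L-inf budget
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

In the full method the two steps would be interleaved so that text and image perturbations reinforce each other through the shared embedding space; they are shown separately here for brevity.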