Stealthy 3D Poisoning Attack on Video Recognition Models

IEEE Transactions on Dependable and Secure Computing (2023)

Abstract
Deep Neural Networks (DNNs) have been proven to be vulnerable to poisoning attacks that poison the training data with a trigger pattern and thus manipulate the trained model into misclassifying data instances. In this article, we study poisoning attacks on video recognition models. We reveal the major limitations of state-of-the-art poisoning attacks in stealthiness and attack effectiveness: (i) a frame-by-frame poisoning trigger may cause temporal inconsistency among the video frames, which can be leveraged to easily detect the attack; (ii) the feature collision-based method for crafting poisoned videos can lack both generalization and transferability. To address these limitations, we propose a novel stealthy and efficient poisoning attack framework with the following advantages: (i) we design a 3D poisoning trigger as a natural-looking texture, which maintains temporal consistency and remains imperceptible to humans; (ii) we formulate an ensemble attack oracle as the optimization objective for crafting poisoned videos, which constructs convex polytope-like adversarial subspaces in the feature space and thus generalizes better; (iii) our poisoning attack readily extends to the black-box setting with good transferability. We have experimentally validated the effectiveness of our attack (e.g., up to $95\%$ attack success rate while poisoning less than $\sim 0.5\%$ of the training set).
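To make the two mechanisms concrete, the following is a minimal PyTorch sketch, assuming a clip tensor of shape (T, C, H, W) and a set of substitute feature extractors. The function names, the blending weight `alpha`, and the normalized collision loss are illustrative assumptions, not the paper's exact formulation.

```python
import torch

# Hypothetical sketch of the two ingredients described in the abstract:
# (1) blending a temporally consistent 3D trigger into a clip, and
# (2) an ensemble feature-collision objective over substitute models.

def apply_3d_trigger(video: torch.Tensor, trigger: torch.Tensor,
                     alpha: float = 0.1) -> torch.Tensor:
    """Blend a low-amplitude 3D texture trigger into a video clip.

    video:   float tensor in [0, 1], shape (T, C, H, W)
    trigger: float tensor in [-1, 1], same shape; a smooth texture that
             varies coherently over time, so frames stay consistent
    alpha:   blending strength; small values keep the trigger imperceptible
    """
    return torch.clamp(video + alpha * trigger, 0.0, 1.0)


def ensemble_collision_loss(models, poison: torch.Tensor,
                            target: torch.Tensor) -> torch.Tensor:
    """Average normalized feature-collision loss over substitute models.

    Colliding the poison with the target in several feature spaces at once
    encourages a convex polytope-like adversarial region, which the abstract
    argues generalizes and transfers better than a single-model collision.
    """
    loss = torch.zeros((), device=poison.device)
    for f in models:  # each f maps a batched clip to a feature vector
        fp = f(poison.unsqueeze(0))
        ft = f(target.unsqueeze(0)).detach()
        loss = loss + (fp - ft).pow(2).sum() / ft.pow(2).sum()
    return loss / len(models)
```

In this reading, the trigger and the poisoned clips would be optimized against the ensemble loss; averaging over several substitute models is also what would give the attack its black-box transferability.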
Keywords
Poisoning attack, video recognition, machine learning security