Efficient text-based evolution algorithm to hard-label adversarial attacks on text

Journal of King Saud University: Computer and Information Sciences (2023)

Abstract
Deep neural networks, which play a pivotal role in fields such as image, text, and audio processing, are vulnerable to adversarial attacks. The vast majority of current textual adversarial attacks operate in a black-box soft-label setting, relying on the model's gradient information or confidence scores. By contrast, mounting an attack using only the model's predicted top label (the hard-label setting) is both more challenging and more realistic. Existing hard-label attack methods rely on population-based genetic optimization algorithms, which consume a large number of queries, a considerable shortcoming. To solve this problem, we propose a new textual black-box hard-label adversarial attack algorithm based on the idea of population differential evolution, called the text-based differential evolution (TDE) algorithm. First, the method judges the importance of the words in an initial rough adversarial example; only the keywords in the sentence are then operated on, while the remaining words are gradually replaced with the original words so as to reduce the number of substituted words in the sentence. During this replacement process, the method evaluates the semantic similarity of candidate adversarial examples and deposits high-quality individuals into the population. Second, the optimization of adversarial examples is combined and guided according to word importance. Compared with existing methods based on genetic-algorithm guidance, our method avoids a large number of meaningless repeated queries and significantly improves both the overall attack efficiency of the algorithm and the semantic quality of the generated adversarial examples.
We experimented with multiple datasets on three text tasks, namely sentiment classification, natural language inference, and toxic comment detection, and also performed experimental comparisons on models and APIs in realistic scenarios. For example, in the Google Cloud commercial API adversarial attack experiment, compared with the existing hard-label method, our method reduces the average number of queries required for the attack from 6986 to 176 and increases semantic similarity from 0.844 to 0.876. Extensive experimental data show that our approach not only significantly reduces the number of queries but also significantly outperforms existing methods in terms of the quality of the adversarial examples.
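The word-restoration step described in the abstract, in which non-essential substitutions are rolled back to the original words as long as the hard-label oracle still returns the adversarial label, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `predict_label` stands in for a real model or commercial API, and `similarity` is a toy token-overlap proxy for the sentence-level semantic similarity measure the paper uses.

```python
def predict_label(text):
    """Hypothetical hard-label oracle: returns only the top predicted class.
    Stand-in for a real model or a commercial API such as Google Cloud NLP."""
    return 1 if "awful" in text else 0

def similarity(a, b):
    """Toy semantic-similarity proxy (token overlap ratio).
    A real attack would use a sentence encoder instead."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)

def restore_words(original, adversarial, adv_label):
    """Greedily replace substituted words back with the originals,
    keeping a substitution only if it is needed to preserve the
    adversarial label under the hard-label oracle."""
    orig_words = original.split()
    adv_words = adversarial.split()
    changed = [i for i, (o, a) in enumerate(zip(orig_words, adv_words)) if o != a]
    for i in changed:
        trial = adv_words[:]
        trial[i] = orig_words[i]
        # Keep the restoration only if the example still fools the oracle.
        if predict_label(" ".join(trial)) == adv_label:
            adv_words = trial
    return " ".join(adv_words)
```

Each restored word costs one oracle query, so a sentence with k substituted positions needs only k queries for this pass; the restored example is closer to the original text and therefore scores higher on semantic similarity.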
Keywords
Natural language processing,Machine learning,Language model,Adversarial attack,Black-box attack,Hard-label