What Machines See Is Not What They Get: Fooling Scene Text Recognition Models With Adversarial Text Images

CVPR 2020

Citations: 36 | Views: 234
Abstract
Research on scene text recognition (STR) has made remarkable progress in recent years with the development of deep neural networks (DNNs). Recent studies on adversarial attacks have verified that DNN models designed for non-sequential tasks (e.g., classification, segmentation, and retrieval) can be easily fooled by adversarial examples. STR, moreover, is an application closely tied to security. However, few studies have considered the safety and reliability of STR models, which make sequential predictions. In this paper, we make the first attempt at attacking state-of-the-art DNN-based STR models. Specifically, we propose a novel and efficient optimization-based method that can be naturally integrated into different sequential prediction schemes, i.e., connectionist temporal classification (CTC) and the attention mechanism. We apply the proposed method to five state-of-the-art STR models in both targeted and untargeted attack modes. Comprehensive results on 7 real-world datasets and 2 synthetic datasets consistently show the vulnerability of these STR models, with a significant performance drop. Finally, we also test our attack method on a real-world STR engine, Baidu OCR, which demonstrates the practical potential of our method.
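The abstract describes an optimization-based attack that perturbs a text image to change a sequential recognizer's output. As a rough illustration of the general idea (not the paper's actual method), the following sketch runs a PGD-style untargeted attack on a hypothetical toy per-frame linear classifier standing in for a CTC-based recognizer: it ascends the loss of the clean prediction under an L∞ budget `eps`. All names, the toy model, and the hyperparameters are assumptions for illustration only.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical stand-in recognizer: a per-frame linear classifier
# (NOT the paper's model). x holds T feature frames of dimension D;
# logits are (T, C) over C character classes.
rng = np.random.default_rng(0)
T, D, C = 4, 8, 5
W = rng.normal(size=(C, D))
x = rng.normal(size=(T, D))
y = np.argmax(x @ W.T, axis=1)  # model's clean per-frame prediction

def loss_and_grad(x_in, y_true):
    """Mean per-frame cross-entropy of y_true, and its gradient w.r.t. x_in."""
    p = softmax(x_in @ W.T)                       # (T, C)
    loss = -np.log(p[np.arange(T), y_true] + 1e-12).mean()
    p[np.arange(T), y_true] -= 1.0                # dL/dlogits = p - onehot
    grad = (p / T) @ W                            # chain rule through W
    return loss, grad

# PGD-style untargeted attack: maximize the loss of the clean labels
# under an L-infinity perturbation budget eps.
eps, step, iters = 0.5, 0.1, 20
x_adv = x.copy()
for _ in range(iters):
    _, g = loss_and_grad(x_adv, y)
    x_adv = x_adv + step * np.sign(g)             # signed gradient ascent
    x_adv = x + np.clip(x_adv - x, -eps, eps)     # project into the eps-ball

y_adv = np.argmax(x_adv @ W.T, axis=1)
print("clean labels:", y.tolist())
print("adv labels:  ", y_adv.tolist())
```

A targeted variant would instead descend the loss of a chosen target sequence; extending this to real CTC or attention decoders requires backpropagating through the full sequence loss, as the paper's framework does.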
Keywords
security issues, optimization-based method, attention mechanism, targeted attack modes, untargeted attack modes, real-world STR engine, fooling scene text recognition models, adversarial text images, deep neural networks, adversarial attack, non-sequential tasks, sequential prediction schemes, DNN-based STR models, connectionist temporal classification, CTC, Baidu OCR