Memory-Efficient Models for Scene Text Recognition via Neural Architecture Search

2020 IEEE Winter Applications of Computer Vision Workshops (WACVW)(2020)

Abstract
Meta-learning techniques based on neural architecture search (NAS) show excellent performance in designing learning models for deep neural networks. In particular, when NAS is applied to design a convolutional neural network (CNN) for image recognition, the resulting networks outperform hand-designed models on public benchmark datasets such as CIFAR10 and ImageNet. Nevertheless, NAS has rarely been applied to real-world problems, i.e., recognition problems with limited datasets. To apply NAS to a new image recognition field, we propose a method in which the NAS technique requires no proxy task for the scene text recognition (STR) framework. Specifically, we define an architecture space for the CNN-based modules of the STR framework and apply the ProxylessNAS method, which enables end-to-end training while the meta-learner designs a new model on only a single commonly used GPU (approximately 100 GPU hours). To evaluate the STR model obtained by the proposed NAS method, seven STR benchmark datasets were used. The obtained model achieves performance comparable to that of the ideal model in terms of both accuracy and number of parameters. We thus confirm that NAS-based model design can be effectively applied to STR scenarios.
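The ProxylessNAS idea referenced in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy (the candidate operations and function names below are illustrative, not the paper's actual architecture space): for each searchable block, architecture parameters are turned into path probabilities via softmax, and exactly one candidate path is sampled (binarized), so only that operation's activations need to be kept during training.

```python
import math
import random

# Hypothetical candidate operations for one searchable block.
# Real ProxylessNAS-style spaces use e.g. convolutions with
# different kernel sizes and expansion ratios.
CANDIDATE_OPS = {
    "identity": lambda x: x,
    "double":   lambda x: 2 * x,
    "square":   lambda x: x * x,
}

def softmax(alphas):
    """Convert architecture parameters into path probabilities."""
    m = max(alphas)
    exps = [math.exp(a - m) for a in alphas]
    total = sum(exps)
    return [e / total for e in exps]

def sample_path(alphas, rng=random):
    """Binarize the over-parameterized block: sample exactly one
    active path according to softmax(alphas)."""
    probs = softmax(alphas)
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1  # guard against floating-point rounding

def mixed_op(x, alphas, rng=random):
    """Forward pass of the block: only the sampled op is executed,
    which is what keeps memory usage at the single-path level."""
    names = list(CANDIDATE_OPS)
    i = sample_path(alphas, rng)
    return names[i], CANDIDATE_OPS[names[i]](x)
```

During search, the architecture parameters `alphas` would be updated alongside the network weights; after search, the block is replaced by its highest-probability operation, which is why no separate proxy task or full super-network evaluation is needed at deployment.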
Keywords
memory-efficient models, neural architecture search, meta-learning techniques, learning models, deep neural networks, convolutional neural network, public benchmark datasets, hand-designed models, NAS technique, scene text recognition framework, image recognition field, architecture space, CNN-based modules, ProxylessNAS method, end-to-end training, meta learners design, ideal model, model design, STR benchmark datasets