Self-supervised Pre-training of Text Recognizers
arXiv (2024)
Abstract
In this paper, we investigate self-supervised pre-training methods for
document text recognition. Large unlabeled datasets can now be collected
for many research tasks, including text recognition, but annotating them is
costly, which motivates methods that exploit unlabeled data. We study
self-supervised pre-training methods based on masked label prediction using
three different approaches: Feature Quantization, VQ-VAE, and Post-Quantized
AE. We also investigate joint-embedding approaches with VICReg and NT-Xent
objectives, for which we propose an image shifting technique that prevents a
model collapse in which the model relies solely on positional encoding and
completely ignores the input image. We perform our experiments on a historical
handwritten dataset (Bentham) and a historical printed dataset, mainly to
investigate the benefits of the self-supervised pre-training techniques under
different amounts of annotated target-domain data. We use transfer learning as
a strong baseline. The evaluation shows that self-supervised pre-training on
data from the target domain is very effective, but it struggles to outperform
transfer learning from closely related domains. This paper is one of the first
studies to explore self-supervised pre-training in document text recognition,
and we believe it will serve as a cornerstone for future research in this
area. Our implementation of the investigated methods is publicly available
at https://github.com/DCGM/pero-pretraining.
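
The abstract names the NT-Xent objective only in passing. For reference, the sketch below shows a standard NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss in PyTorch, as commonly defined in the SimCLR literature; the function name, the temperature default, and the tensor shapes are illustrative assumptions, not taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: (N, D) embeddings of two augmented views of the same N inputs.
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # L2-normalize, (2N, D)
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # a sample is never its own negative
    # The positive for row i is the other view of the same input.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In a text-recognition setting, z1 and z2 would hold embeddings of two views of the same text-line image; per the abstract, the authors' image shifting technique is what keeps such a joint-embedding objective from collapsing onto positional encoding, though its details are not given here.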