Wukong-Reader: Multi-modal Pre-training for Fine-grained Visual Document Understanding

Haoli Bai, Zhiguang Liu, Xiangjian Meng, Wentao Li, Shuang Liu, Nian Xie, Rongfu Zheng, Handong Wang, Li’an Hou, Wei Jiang, Xin Jiang, Qun Li

arXiv (Cornell University), 2022

Abstract
Unsupervised pre-training on millions of digital-born or scanned documents has shown promising advances in visual document understanding (VDU). While various vision-language pre-training objectives are studied in existing solutions, the document textline, as an intrinsic granularity in VDU, has seldom been explored so far. A document textline usually contains words that are spatially and semantically correlated, which can be easily obtained from OCR engines. In this paper, we propose Wukong-Reader, trained with new pre-training objectives to leverage the structural knowledge nested in document textlines. We introduce textline-region contrastive learning to achieve fine-grained alignment between the visual regions and texts of document textlines. Furthermore, masked region modeling and textline-grid matching are also designed to enhance the visual and layout representations of textlines. Experiments show that our Wukong-Reader has superior performance on various VDU tasks such as information extraction. The fine-grained alignment over textlines also empowers Wukong-Reader with promising localization ability.
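The textline-region contrastive objective described above pairs the visual feature of each textline region with the text feature of the same textline. A minimal sketch of such an alignment loss is shown below, assuming a symmetric InfoNCE formulation over paired region and text embeddings; the function name, tensor shapes, and temperature value are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def textline_region_contrastive_loss(region_feats: torch.Tensor,
                                      text_feats: torch.Tensor,
                                      temperature: float = 0.07) -> torch.Tensor:
    """Illustrative InfoNCE-style loss aligning textline visual regions with textline texts.

    region_feats: (N, D) pooled visual features of N textline regions (e.g., RoI-pooled).
    text_feats:   (N, D) encoded text features of the same N textlines.
    The i-th region and i-th text form a positive pair; all other pairs are negatives.
    """
    region_feats = F.normalize(region_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)

    # (N, N) similarity matrix between every region and every textline text.
    logits = region_feats @ text_feats.t() / temperature
    targets = torch.arange(region_feats.size(0), device=logits.device)

    # Symmetric cross-entropy over both matching directions.
    loss_region_to_text = F.cross_entropy(logits, targets)
    loss_text_to_region = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_region_to_text + loss_text_to_region)
```

Under this reading, the loss encourages each textline's visual region embedding to be closest to its own text embedding within a batch, which is what gives the model its fine-grained localization ability; the other two objectives (masked region modeling and textline-grid matching) are additional pre-training tasks and are not sketched here.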
Keywords
visual document