Efficient End-to-End Visual Document Understanding with Rationale Distillation
arXiv (2023)
Abstract
Understanding visually situated language requires interpreting complex
layouts of textual and visual elements. Pre-processing tools, such as optical
character recognition (OCR), can map document image inputs to textual tokens,
then large language models (LLMs) can reason over text. However, such methods
have high computational and engineering complexity. Can small pretrained
image-to-text models accurately understand visual documents through similar
recognition and reasoning steps instead? We propose Rationale Distillation
(RD), which incorporates the outputs of OCR tools, LLMs, and larger multimodal
models as intermediate "rationales", and trains a small student model to
predict both rationales and answers. On three visual document understanding
benchmarks representing infographics, scanned documents, and figures, our
Pix2Struct (282M parameters) student model finetuned with RD outperforms the
base model by 4-5%.
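The abstract describes training a student to predict intermediate rationales (e.g., OCR text or LLM outputs) before the final answer. A minimal sketch of how such a joint decoding target might be constructed is shown below; the function name and the "Rationale: ... Answer: ..." template are illustrative assumptions, not the paper's exact format.

```python
# Hedged sketch of Rationale Distillation target construction.
# The template below is an assumption; the paper's actual prompt/target
# formats may differ.
def build_rd_target(rationale: str, answer: str) -> str:
    """Concatenate an intermediate rationale (e.g. OCR output or an
    LLM-generated explanation) with the gold answer into a single
    sequence, so the student model learns to predict both."""
    return f"Rationale: {rationale} Answer: {answer}"

# Example: a pseudo-rationale extracted from a document image, plus the answer.
target = build_rd_target("Total revenue 2021: $4.2M", "$4.2M")
print(target)
```

At training time, a sequence like `target` would serve as the decoder output for an image-to-text student such as Pix2Struct, replacing the plain answer-only target.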