Text in the Dark: Extremely Low-Light Text Image Enhancement
arXiv (2024)
Abstract
Extremely low-light text images are common in natural scenes, making scene
text detection and recognition challenging. One solution is to enhance these
images using low-light image enhancement methods before text extraction.
However, previous methods rarely address the importance of low-level
features, which are crucial for optimal performance on downstream scene
text tasks. Further research is also hindered by the lack
of extremely low-light text datasets. To address these limitations, we propose
a novel encoder-decoder framework with an edge-aware attention module to focus
on scene text regions during enhancement. Our proposed method uses novel text
detection and edge reconstruction losses to emphasize low-level scene text
features, leading to successful text extraction. Additionally, we present a
Supervised Deep Curve Estimation (Supervised-DCE) model to synthesize extremely
low-light images based on publicly available scene text datasets such as
ICDAR15 (IC15). We also labeled texts in the extremely low-light See In the
Dark (SID) and ordinary LOw-Light (LOL) datasets to allow for objective
assessment of extremely low-light image enhancement through scene text tasks.
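The Supervised-DCE synthesis idea builds on deep curve estimation, where an image is darkened or brightened by iteratively applying a learned quadratic adjustment curve. The following is a minimal NumPy sketch of that curve family only, not the authors' model; the function name `apply_curve` and the fixed scalar `alpha` are illustrative stand-ins for the per-pixel curve parameters a network would predict.

```python
import numpy as np

def apply_curve(x, alpha, iterations=4):
    # Iterative quadratic adjustment curve in the style of Deep Curve
    # Estimation: x <- x + alpha * x * (1 - x). With alpha < 0 this
    # darkens the image, which is how low-light data can be synthesized.
    # `alpha` here is a scalar for simplicity; a real model predicts
    # per-pixel, per-iteration curve maps.
    x = np.asarray(x, dtype=float)
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return np.clip(x, 0.0, 1.0)
```

With `alpha = 0` the curve is the identity, and the quadratic term vanishes at 0 and 1, so the output stays within the valid intensity range.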
Extensive experiments show that our model outperforms state-of-the-art methods
in terms of both image quality and scene text metrics on the widely-used LOL,
SID, and synthetic IC15 datasets. Code and dataset will be released publicly at
https://github.com/chunchet-ng/Text-in-the-Dark.
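To make the edge-aware attention idea concrete, here is a minimal NumPy sketch of one plausible reading: derive a spatial edge map and use it to re-weight features toward text boundaries. This is not the paper's implementation; `sobel_edges` (a fixed Sobel filter) and `edge_aware_attention` are hypothetical stand-ins for the learned edge branch and attention module described in the abstract.

```python
import numpy as np

def sobel_edges(img):
    # Sobel gradient magnitude as a crude edge map (a fixed-filter
    # stand-in for a learned edge-reconstruction branch).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def edge_aware_attention(features, img):
    # Normalize edge magnitude to [0, 1] and use it as a spatial
    # attention map that boosts features near edges (e.g. text strokes)
    # while leaving flat regions unchanged (residual-style modulation).
    edges = sobel_edges(img)
    attn = edges / (edges.max() + 1e-8)
    return features * (1.0 + attn)
```

In the actual model, such a map would modulate intermediate encoder-decoder features so that the enhancement network preserves the low-level stroke detail that downstream text detectors rely on.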