Revitalizing Legacy Video Content: Deinterlacing with Bidirectional Information Propagation
CoRR(2023)
Abstract
Due to old CRT display technology and limited transmission bandwidth, early
film and TV broadcasts commonly used interlaced scanning. This meant each field
contained only half of the information. Since modern displays require full
frames, this has spurred research into deinterlacing, i.e., restoring the
missing information in legacy video content. In this paper, we present a
deep-learning-based method for deinterlacing animated and live-action content.
Our proposed method supports bidirectional spatio-temporal information
propagation across multiple scales to leverage information in both space and
time. More specifically, we design a Flow-guided Refinement Block (FRB) which
performs feature refinement including alignment, fusion, and rectification.
Additionally, our method can process multiple fields simultaneously, reducing
per-frame processing time, and potentially enabling real-time processing. Our
experimental results demonstrate that our proposed method achieves superior
performance compared to existing methods.
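To make the setting concrete, here is a toy NumPy sketch (not the paper's method) of how interlaced scanning splits a frame into two half-height fields, and why a naive "weave" reconstruction only suffices when nothing moves between fields:

```python
import numpy as np

# A toy 8-row "frame"; interlaced scanning stores alternating rows
# in two separate fields, so each field holds half the information.
frame = np.arange(8 * 4).reshape(8, 4)
top_field = frame[0::2]       # even rows (field 1)
bottom_field = frame[1::2]    # odd rows (field 2)

# Naive "weave": interleave the fields back together. This is exact
# only if the scene is static between the two field captures; under
# motion, the missing rows must be inferred, which is what learned
# deinterlacing methods aim to do.
woven = np.empty_like(frame)
woven[0::2] = top_field
woven[1::2] = bottom_field
assert np.array_equal(woven, frame)
```

Each field has half the vertical resolution of the frame, which is the missing information a deinterlacer must restore.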
Keywords
legacy video content, information propagation, bidirectional