An End-to-End Video Coding Method via Adaptive Vision Transformer

INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE (2024)

Abstract
Deep learning-based video coding methods have demonstrated superior performance compared to classical video coding standards in recent years. The vast majority of existing deep video coding (DVC) networks are based on convolutional neural networks (CNNs), whose main drawback is that the limited receptive field prevents them from effectively capturing long-range dependencies and recovering local detail. How to better capture and process the overall structure as well as the local texture information of a video is therefore the core issue in the coding task. Notably, the transformer employs a self-attention mechanism that captures dependencies between any two positions in the input sequence without being constrained by distance, which directly addresses the problem described above. In this paper, we propose end-to-end transformer-based adaptive video coding (TAVC). First, we compress the motion vectors and residuals with compression networks built on the vision transformer (ViT) and design a ViT-based motion compensation network. Second, because video coding must adapt to inputs of different resolutions, we introduce a position encoding generator (PEG) as adaptive position encoding (APE) to maintain translation invariance across video coding tasks at different resolutions. Experiments show that, under the multiscale structural similarity (MS-SSIM) metric, the proposed method outperforms conventional engineering codecs such as x264, x265, and VTM-15.2 by a significant margin, and it also achieves a clear improvement over CNN-based DVC methods. Under the peak signal-to-noise ratio (PSNR) metric, TAVC likewise achieves good performance.
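To illustrate the adaptive position encoding idea mentioned in the abstract, the sketch below shows a minimal PEG-style module in the spirit of conditional positional encodings: a depthwise convolution over the token grid produces position information that adapts to any input resolution. This is not the authors' code; the class name, hyperparameters, and residual wiring are illustrative assumptions.

```python
# Minimal sketch of a position encoding generator (PEG), assuming a
# depthwise-convolution formulation; details differ from the TAVC paper.
import torch
import torch.nn as nn


class PositionEncodingGenerator(nn.Module):
    """Adaptive position encoding via a 3x3 depthwise convolution."""

    def __init__(self, dim: int):
        super().__init__()
        # Depthwise conv: one 3x3 filter per channel; padding keeps H x W unchanged.
        self.proj = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, tokens: torch.Tensor, height: int, width: int) -> torch.Tensor:
        # tokens: (batch, height * width, dim) -- a flattened patch/token grid.
        b, n, c = tokens.shape
        assert n == height * width, "token count must match the spatial grid"
        feat = tokens.transpose(1, 2).reshape(b, c, height, width)
        # Residual connection: the conv output acts as the positional signal.
        feat = self.proj(feat) + feat
        return feat.flatten(2).transpose(1, 2)


if __name__ == "__main__":
    peg = PositionEncodingGenerator(dim=96)
    # The same module handles token grids of different resolutions without retraining.
    for h, w in [(32, 32), (45, 60)]:
        x = torch.randn(2, h * w, 96)
        print(peg(x, h, w).shape)  # torch.Size([2, h*w, 96])
```

Because the positional signal is computed from the tokens themselves rather than from a fixed-length learned table, such a module keeps its translation behavior when the coding network is fed frames of different resolutions, which is the property the abstract attributes to the PEG/APE design.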
Keywords
Deep video coding,Swin transformer,motion estimation,position encoding