A General Method to Incorporate Spatial Information into Loss Functions for GAN-based Super-resolution Models
CoRR (2024)
Abstract
Generative Adversarial Networks (GANs) have shown strong performance on
super-resolution problems because they can generate more visually realistic
images and video frames. However, these models often introduce side effects
into their outputs, such as unexpected artifacts and noise. To reduce these
artifacts and enhance the perceptual quality of the results, in this paper we
propose a general method that can be applied to most GAN-based
super-resolution (SR) models by introducing essential spatial information into
the training process. We extract spatial information from the input data and
incorporate it into the training loss, making the corresponding loss
spatially adaptive (SA), and then use it to guide training. We show that the
proposed approach is independent of both the method used to extract the
spatial information and the specific SR task or model. It consistently steers
training toward visually pleasing SR images and video frames, substantially
mitigating artifacts and noise and ultimately enhancing perceptual quality.
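The abstract does not specify how the spatial information is extracted or how it enters the loss, so the following is only a minimal illustrative sketch of the general idea: derive a per-pixel weight map from the image content (here, hypothetically, a gradient-magnitude map) and use it to reweight a standard pixel loss, making that loss spatially adaptive. All function names and the choice of gradient magnitude as the spatial feature are assumptions for illustration, not the paper's method.

```python
import numpy as np

def spatial_weight_map(img, eps=1e-8):
    """Hypothetical spatial-information extractor.

    Uses gradient magnitude as a stand-in for 'spatial information';
    edge/texture regions receive weights > 1, flat regions weight ~1.
    """
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return 1.0 + mag / (mag.max() + eps)

def spatially_adaptive_l1(sr, hr):
    """Per-pixel L1 loss reweighted by the spatial map of the reference.

    Structurally important pixels contribute more to the training signal,
    which is the general mechanism the paper's SA losses rely on.
    """
    w = spatial_weight_map(hr)
    return float(np.mean(w * np.abs(sr.astype(np.float64) - hr)))
```

In a real GAN-based SR pipeline, the same reweighting could be applied to the content, perceptual, or adversarial loss terms; the abstract's claim is precisely that the scheme is agnostic to which extractor and which loss are chosen.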