Learning Spatial And Spectral Features Via 2D-1D Generative Adversarial Network For Hyperspectral Image Super-Resolution

2019 IEEE International Conference on Image Processing (ICIP)

Cited 3 | Views 17
Abstract
Three-dimensional (3D) convolutional networks have been shown to exploit spatial context and spectral information simultaneously for super-resolution (SR). However, such networks cannot practically be made very deep because of the long training times and GPU memory demands of 3D convolution. Instead, in this paper, spatial context and spectral information in hyperspectral images (HSIs) are explored separately, using two-dimensional (2D) and one-dimensional (1D) convolutions. To this end, a novel 2D-1D generative adversarial network architecture (2D-1D-HSRGAN) is proposed for SR of HSIs. Specifically, the generator consists of a spatial network and a spectral network: the spatial network is trained with a least-absolute-deviations loss to capture spatial context via 2D convolution, while the spectral network is trained with a spectral angle mapper (SAM) loss to extract spectral information via 1D convolution. Experimental results on two real HSIs demonstrate that the proposed 2D-1D-HSRGAN clearly outperforms several state-of-the-art algorithms.
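As a rough illustration of the 2D-1D split described in the abstract, the sketch below (PyTorch) separates spatial refinement, using 2D convolutions paired with a least-absolute-deviations (L1) loss, from spectral refinement, using 1D convolutions along the band axis paired with a SAM loss. This is a minimal sketch under assumptions: the layer widths, depths, and residual connections are illustrative, and the upsampling stage and adversarial discriminator of the full 2D-1D-HSRGAN are omitted since the abstract does not specify them.

```python
import torch
import torch.nn as nn

def sam_loss(pred, target, eps=1e-8):
    # Spectral angle mapper: mean angle between predicted and reference
    # spectra at each pixel. pred, target: (B, C, H, W) with C bands.
    dot = (pred * target).sum(dim=1)
    norm = pred.norm(dim=1) * target.norm(dim=1) + eps
    return torch.acos((dot / norm).clamp(-1 + eps, 1 - eps)).mean()

class SpatialNet(nn.Module):
    # 2D convolutions over the spatial dimensions (paired with an L1 loss
    # in the paper's spatial network). Depth/width are assumptions.
    def __init__(self, bands, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(bands, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, bands, 3, padding=1),
        )
    def forward(self, x):
        return x + self.body(x)  # residual spatial refinement

class SpectralNet(nn.Module):
    # 1D convolutions along the spectral axis (paired with the SAM loss
    # in the paper's spectral network). Depth/width are assumptions.
    def __init__(self, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv1d(width, 1, 3, padding=1),
        )
    def forward(self, x):
        b, c, h, w = x.shape
        spectra = x.permute(0, 2, 3, 1).reshape(-1, 1, c)  # one spectrum per pixel
        spectra = spectra + self.body(spectra)             # residual spectral refinement
        return spectra.reshape(b, h, w, c).permute(0, 3, 1, 2)

class Generator(nn.Module):
    # Hypothetical composition: spatial 2D stage followed by spectral 1D stage.
    def __init__(self, bands):
        super().__init__()
        self.spatial, self.spectral = SpatialNet(bands), SpectralNet()
    def forward(self, x):
        return self.spectral(self.spatial(x))
```

In this reading, the 2D branch only ever convolves over height and width while the 1D branch only ever convolves over the band index, which is what keeps the parameter count and memory footprint well below that of an equivalently deep 3D network.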
Keywords
Hyperspectral images, super-resolution, generative adversarial network