Remote Sensing, Vol. 18, Pages 44: RSSRGAN: A Residual Separable Generative Adversarial Network for Remote Sensing Image Super-Resolution Reconstruction



Remote Sensing doi: 10.3390/rs18010044

Authors:
Xiangyu Fu
Dongyang Wu
Shanshan Xu

With the advancement of remote sensing technology, high-resolution images are widely used in computer vision. However, image quality is often degraded by hardware limitations and environmental interference. This paper proposes a Residual Separable Super-Resolution Reconstruction Generative Adversarial Network (RSSRGAN) for remote sensing image super-resolution. The model aims to enhance the resolution and edge information of low-resolution images without requiring hardware improvements. The main contributions are: (1) an optimized generator that improves the residual dense network, introduces depthwise separable convolutions, and removes BN layers, thereby increasing training efficiency; (2) two PatchGAN discriminators designed to enhance multi-scale detail capture; and (3) a content loss and a joint perceptual loss added on top of the adversarial loss to improve global feature representation. Experimental results show that, compared with the SRGAN model widely used in remote sensing (exemplified here by a satellite-specific SRGAN), the proposed model improves PSNR by approximately 18.8%, SSIM by 8.0%, and the PI metric by 13.6%, while reducing MSE by 3.6%. It effectively enhances object information, color, and brightness in images, making it well suited to remote sensing image super-resolution.
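The reported gains are in standard reconstruction metrics; as a reference point, PSNR is computed directly from the MSE between the reconstruction and the ground-truth image. A minimal NumPy sketch (the function name and test images are illustrative, not from the paper):

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(max_value ** 2 / mse)

# Toy example: a uniform error of 10 grey levels gives MSE = 100.
ref = np.full((8, 8), 128.0)
rec = ref + 10.0
print(round(psnr(ref, rec), 2))  # → 28.13
```

Higher PSNR (and SSIM) and lower MSE indicate a reconstruction closer to the ground truth, which is the direction of the improvements the abstract reports.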



Source: www.mdpi.com