Remote Sensing, Vol. 17, Pages 1229: MFAFNet: Multi-Scale Feature Adaptive Fusion Network Based on DeepLab V3+ for Cloud and Cloud Shadow Segmentation
Remote Sensing doi: 10.3390/rs17071229
Authors:
Yijia Feng
Zhiyong Fan
Ying Yan
Zhengdong Jiang
Shuai Zhang
The accurate segmentation of clouds and cloud shadows is crucial for meteorological monitoring, climate change research, and environmental management. However, existing segmentation models often suffer from loss of fine detail, blurred boundaries, and false positives or negatives. To address these challenges, this paper proposes an improved model based on DeepLab V3+. First, to enhance the model’s ability to extract fine-grained features, a Hybrid Strip Pooling Module (HSPM) is introduced in the encoding stage, effectively preserving local details and reducing information loss. Second, a Global Context Attention Module (GCAM) is incorporated into the Atrous Spatial Pyramid Pooling (ASPP) module to establish pixel-wise long-range dependencies and thereby integrate global semantic information. In the decoding stage, a Three-Branch Adaptive Feature Fusion Module (TB-AFFM) is designed to merge multi-scale features from the backbone network and the ASPP module. Finally, an innovative loss function is employed in the experiments, significantly improving the accuracy of cloud and cloud shadow segmentation. Experimental results demonstrate that the proposed model outperforms existing methods on cloud and cloud shadow segmentation tasks, achieving more precise segmentation.
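The abstract does not specify the internals of the HSPM or TB-AFFM, but the two underlying ideas it names, strip pooling (pooling along one spatial axis at a time, which suits long, thin structures such as cloud edges and shadows) and adaptive weighted fusion of multi-scale features, can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch under those assumptions, not the paper's actual modules; the function names and the sigmoid/softmax gating choices are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def strip_pool(x):
    """Strip-pooling sketch for a feature map x of shape (C, H, W).

    Averages over one spatial axis at a time, broadcasts the two strip
    contexts back to (C, H, W), and gates the input with their sum.
    """
    strip_h = x.mean(axis=2, keepdims=True)   # (C, H, 1): average over width
    strip_w = x.mean(axis=1, keepdims=True)   # (C, 1, W): average over height
    return x * sigmoid(strip_h + strip_w)     # broadcast to (C, H, W) and gate

def adaptive_fuse(feats, scores):
    """Adaptive fusion sketch: combine same-shape feature maps with
    softmax-normalized branch weights (scores would be learned in practice)."""
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, feats))
```

For three branches with equal scores, `adaptive_fuse` reduces to a plain average; a learned scoring network would instead let the decoder emphasize whichever scale (backbone detail vs. ASPP context) is most informative per image.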