Remote Sensing, Vol. 17, Pages 1847: Adaptive Spectral Correlation Learning Neural Network for Hyperspectral Image Classification

Remote Sensing doi: 10.3390/rs17111847

Authors:
Wei-Ye Wang
Yang-Jun Deng
Yuan-Ping Xu
Ben-Jun Guo
Chao-Long Zhang
Heng-Chao Li

Hyperspectral imagery (HSI), with its rich spectral information across continuous wavelength bands, has become indispensable for fine-grained land cover classification in remote sensing applications. Although some existing deep neural networks have exploited the rich spectral information in HSIs for land cover classification through adaptive learning modules, these modules were usually designed as auxiliary submodules rather than as basic structural units for building backbones, and they failed to adaptively model the spectral correlations between adjacent and nonadjacent spectral bands from both local and global perspectives. To address these issues, a new adaptive spectral-correlation learning neural network (ASLNN) is proposed for HSI classification. Taking advantage of group convolution and ConvLSTM3D layers, a new adaptive spectral correlation learning block (ASBlock) is designed as a basic network unit to construct the backbone of a spatial–spectral feature extraction model, which learns the spectral information and extracts spectrally enhanced deep spatial–spectral features. Then, a 3D Gabor filter is utilized to extract heterogeneous spatial–spectral features, and a simple but effective gated asymmetric fusion block (GAFBlock) is further built to align and integrate these two heterogeneous feature types, thereby achieving competitive classification performance for HSIs. Experimental results on four common hyperspectral data sets validate the effectiveness of the proposed method. Specifically, when 10, 10, 10, and 25 samples from each class are selected for training, ASLNN achieves the highest overall accuracy (OA) of 81.12%, 85.88%, 80.62%, and 97.97% on the four data sets, outperforming the other methods by more than 1.70%, 3.21%, 3.78%, and 2.70% in OA, respectively.
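The 3D Gabor filtering step mentioned above modulates a Gaussian envelope with a sinusoidal carrier across the two spatial axes and the spectral axis. As a minimal NumPy sketch of how such a kernel can be generated (the kernel size, bandwidth `sigma`, and carrier frequencies `freq` here are illustrative choices, not the parameters used in the paper):

```python
import numpy as np

def gabor_3d_kernel(size=7, sigma=2.0, freq=(0.2, 0.0, 0.1)):
    """Real-valued 3D Gabor kernel: an isotropic Gaussian envelope
    multiplied by a cosine carrier whose orientation in the
    (row, column, band) volume is set by the frequency triple `freq`."""
    r = np.arange(size) - size // 2          # symmetric coordinates around 0
    x, y, z = np.meshgrid(r, r, r, indexing="ij")
    envelope = np.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * (freq[0] * x + freq[1] * y + freq[2] * z))
    return envelope * carrier

kernel = gabor_3d_kernel()
print(kernel.shape)  # (7, 7, 7)
```

Convolving an HSI cube with a bank of such kernels (varying `freq` and `sigma`) yields the heterogeneous spatial–spectral responses that a fusion module like the GAFBlock would then align with the learned deep features.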

