ABSTRACT
An effective detection network is required to extract building information from remote sensing images and to relieve the poor detection performance caused by the loss of detailed features. Firstly, based on the SegFormer network, we embed in the decoder a transposed convolution upsampling module that fuses multiple normalization and activation layers. This step alleviates missing feature semantics through dilated sampling and padding, while the cascaded normalization and activation layers act as regularization that suppresses over-fitting and keeps feature classification stable. Secondly, an atrous spatial pyramid pooling decoding module is fused to capture multi-scale contextual information, overcoming the loss of local building detail and the lack of long-range information. Ablation and comparison experiments are performed on the AISD, MBD, and WHU remote sensing image datasets. The robustness and validity of the improved mechanism are demonstrated by the ablation control groups. In comparative experiments with the HRNet, PSPNet, U-Net, and DeepLabv3+ networks and the original detection algorithm, the mIoU on the AISD, MBD, and WHU datasets is improved by 17.68%, 30.44%, and 15.26%, respectively. The experimental results show that the proposed method outperforms comparison methods such as U-Net; it better preserves the integrity of building edges and reduces missed and false detections.
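The atrous spatial pyramid pooling (ASPP) idea mentioned above can be illustrated with a minimal NumPy sketch: parallel convolutions share the same small kernel but sample the input at different dilation rates, so the receptive field grows without adding weights. This is only a toy illustration of the general ASPP mechanism; the dilation rates and kernel are placeholder choices, not the paper's actual configuration.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """Naive 2-D convolution with dilation `rate` (stride 1, zero padding
    chosen so the output matches the input size)."""
    k = kernel.shape[0]
    pad = rate * (k - 1) // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # Sample the input at dilated offsets: the receptive field grows
            # with `rate` while the number of weights stays k * k.
            patch = xp[i:i + rate * (k - 1) + 1:rate,
                       j:j + rate * (k - 1) + 1:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def aspp(x, kernel, rates=(1, 6, 12, 18)):
    """Toy ASPP: parallel dilated convolutions at several rates, stacked
    along a new 'channel' axis to fuse multi-scale context."""
    return np.stack([dilated_conv2d(x, kernel, r) for r in rates])

x = np.random.rand(32, 32)        # toy single-channel feature map
k = np.ones((3, 3)) / 9.0         # placeholder averaging kernel
feats = aspp(x, k)
print(feats.shape)                # (4, 32, 32): one map per dilation rate
```

In a real network each branch would have learned weights and the stacked maps would be fused by a 1x1 convolution; here the point is only how dilation enlarges context at fixed kernel size.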
ABSTRACT
Hyperspectral image (HSI) classification has long been recognised as a difficult task and is therefore a research hotspot in remote sensing image processing and analysis; many studies have been conducted to better extract spectral and spatial features. This study tracks the variation of the spectrum in hyperspectral images from a sequential-data perspective to obtain more distinguishable features. Drawing on the characteristics of optical flow, it introduces an optical flow technique to extract the spectral flow that denotes the spectral variation, implemented as a dense optical flow extraction method based on deep matching. Finally, the extracted spectral flow is combined with the original spectral features and input into a commonly used support vector machine (SVM) classifier to complete the classification. Extensive classification experiments on three benchmark HSI test sets show that the classification accuracy obtained with the spectral flow extracted in this study (SpectralFlow) is higher than that of traditional spatial feature extraction methods, texture feature extraction methods, and the latest deep-learning-based methods. Furthermore, the proposed method produces finer classification thematic maps, demonstrating strong potential for practical application.
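The feature-fusion step described above (spectral flow concatenated with the original spectrum) can be sketched in NumPy. Note the "flow" here is a deliberately simplified stand-in, a band-to-band difference along the spectral axis, whereas the paper extracts dense optical flow via deep matching; the function name and array shapes are illustrative assumptions only.

```python
import numpy as np

def spectral_flow_features(cube):
    """Hypothetical, simplified stand-in for the paper's optical-flow step:
    treat the spectrum at each pixel as a sequence and take the band-to-band
    difference as a crude 'flow' along the spectral axis, then fuse it with
    the original spectrum.

    cube: (H, W, B) hyperspectral image.
    returns: (H, W, 2B - 1) array of original spectrum + spectral flow.
    """
    flow = np.diff(cube, axis=-1)                 # (H, W, B-1): spectral variation
    return np.concatenate([cube, flow], axis=-1)  # fused per-pixel feature vector

cube = np.random.rand(4, 4, 10)   # toy 10-band image
feats = spectral_flow_features(cube)
print(feats.shape)                # (4, 4, 19)
```

The fused per-pixel vectors would then be flattened and passed to an SVM classifier, as the abstract describes.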