Results 1 - 2 of 2
1.
IEEE Trans Image Process ; 31: 3449-3462, 2022.
Article in English | MEDLINE | ID: mdl-35511853

ABSTRACT

The difficulty of obtaining sufficient labeled samples has long been one of the factors preventing deep learning models from achieving high accuracy in hyperspectral image (HSI) classification. To reduce the dependence of deep learning models on training samples, meta learning methods have been introduced, effectively improving classification accuracy in small sample set scenarios. However, existing meta learning methods still need to construct a labeled source data set from several pre-collected HSIs and must use a large number of labeled samples for meta-training, which is time-consuming and labor-intensive. To solve this problem, this paper proposes a novel unsupervised meta learning method with multiview constraints for HSI small sample set classification. Specifically, the proposed method first builds an unlabeled source data set from unlabeled HSIs. Then, multiple spatial-spectral multiview features of each unlabeled sample are generated to construct tasks for unsupervised meta learning. Finally, the designed residual relation network is used for meta-training and for small sample set classification based on a voting strategy. Compared with existing supervised meta learning methods for HSI classification, our method uses only unlabeled HSIs for meta learning, which significantly reduces the number of labeled samples required in the whole classification process. To verify the effectiveness of the proposed method, extensive experiments are carried out on 8 public HSIs in cross-domain and in-domain classification scenarios. The statistical results demonstrate that, compared with existing supervised meta learning methods and other advanced classification models, the proposed method achieves competitive or better classification performance in small sample set scenarios.
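The abstract does not give implementation details, but the core idea of building unsupervised meta-learning episodes from unlabeled data can be sketched as follows: each unlabeled HSI patch is treated as its own pseudo-class, and randomly perturbed views of that patch stand in for the spatial-spectral multiview features. Everything here (function name, patch shapes, the noise-based augmentation) is an illustrative assumption, not the authors' code.

```python
# Hypothetical sketch: build one N-way K-shot episode from unlabeled HSI patches.
import numpy as np

def make_unsupervised_task(unlabeled_patches, n_way=5, k_shot=1, q_query=3, rng=None):
    """Sample one meta-learning episode from unlabeled patches.

    unlabeled_patches: array of shape (N, H, W, B), B = spectral bands.
    Each selected patch becomes a pseudo-class; its augmented views form
    the support and query sets (band noise stands in for true multiview features).
    """
    rng = rng or np.random.default_rng()
    augment = lambda x: x + rng.normal(0.0, 0.01, size=x.shape)

    chosen = rng.choice(len(unlabeled_patches), size=n_way, replace=False)
    support, query, s_labels, q_labels = [], [], [], []
    for pseudo_label, idx in enumerate(chosen):
        views = [augment(unlabeled_patches[idx]) for _ in range(k_shot + q_query)]
        support += views[:k_shot]
        query += views[k_shot:]
        s_labels += [pseudo_label] * k_shot
        q_labels += [pseudo_label] * q_query
    return (np.stack(support), np.array(s_labels),
            np.stack(query), np.array(q_labels))

# Example: 100 random 9x9 patches with 200 spectral bands.
patches = np.random.rand(100, 9, 9, 200).astype("float32")
xs, ys, xq, yq = make_unsupervised_task(patches)
print(xs.shape, ys, xq.shape, yq)
```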

2.
IEEE Trans Image Process ; 31: 3095-3110, 2022.
Article in English | MEDLINE | ID: mdl-35404817

ABSTRACT

In this study, we develop a novel deep hierarchical vision transformer (DHViT) architecture for joint classification of hyperspectral and light detection and ranging (LiDAR) data. Current classification methods have limitations in the heterogeneous feature representation and information fusion of multi-modality remote sensing data (e.g., hyperspectral and LiDAR data); these shortcomings restrict collaborative classification accuracy. The proposed deep hierarchical vision transformer architecture exploits both the powerful long-range dependency modeling capability and the strong cross-domain generalization ability of the transformer network, which is based exclusively on the self-attention mechanism. Specifically, a spectral sequence transformer handles the long-range dependencies along the spectral dimension of hyperspectral images, because all diagnostic spectral bands contribute to land cover classification. Thereafter, a spatial hierarchical transformer structure extracts hierarchical spatial features from the hyperspectral and LiDAR data, which are also crucial for classification. Furthermore, a cross attention (CA) feature fusion pattern adaptively and dynamically fuses heterogeneous features from the multi-modality data, and this context-aware fusion mode further improves collaborative classification performance. Comparative experiments and ablation studies are conducted on three benchmark hyperspectral and LiDAR datasets; the DHViT model yields average overall classification accuracies of 99.58%, 99.55%, and 96.40% on the three datasets, respectively, which confirms the effectiveness and superior performance of the proposed method.
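The cross attention (CA) fusion idea can be illustrated with a minimal sketch: token features from the hyperspectral branch act as queries and attend over LiDAR token features, so each spectral-spatial token adaptively gathers complementary elevation cues. This is an assumption-laden illustration in PyTorch, not the published DHViT code; module name, dimensions, and the residual-plus-norm layout are choices made here for clarity.

```python
# Illustrative cross-attention fusion of hyperspectral and LiDAR tokens (not DHViT's code).
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=64, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, hsi_tokens, lidar_tokens):
        # Queries come from the HSI branch; keys/values come from the LiDAR branch,
        # so the fusion weights are computed dynamically per token pair.
        fused, _ = self.attn(query=hsi_tokens, key=lidar_tokens, value=lidar_tokens)
        return self.norm(hsi_tokens + fused)  # residual connection keeps HSI content

# Example: batch of 8, 49 tokens per modality, 64-dim embeddings.
hsi = torch.randn(8, 49, 64)
lidar = torch.randn(8, 49, 64)
print(CrossAttentionFusion()(hsi, lidar).shape)  # torch.Size([8, 49, 64])
```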
