Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-38875091

ABSTRACT

Multisource remote sensing data classification is a challenging research topic: addressing the inherent heterogeneity between multimodal data while exploiting their complementarity is crucial. Existing deep learning models usually adopt feature-level fusion designs directly, most of which, however, fail to overcome the impact of heterogeneity, limiting their performance. As such, a multimodal joint classification framework, called global clue-guided cross-memory quaternion transformer network (GCCQTNet), is proposed for multisource data classification, i.e., joint classification of hyperspectral images (HSIs) and synthetic aperture radar (SAR)/light detection and ranging (LiDAR) data. First, a three-branch structure is built to extract the local and global features, where an independent squeeze-expansion-like fusion (ISEF) structure is designed to update the local and global representations by considering the global information as an agent, suppressing the negative impact of multimodal heterogeneity layer by layer. A cross-memory quaternion transformer (CMQT) structure is further constructed to model the complex inner relationships between the intramodality and intermodality features, capturing more discriminative fusion features that fully characterize multimodal complementarity. Finally, a cross-modality comparative learning (CMCL) structure is developed to impose a consistency constraint on global information learning, which, in conjunction with a classification head, is used to guide the end-to-end training of GCCQTNet. Extensive experiments on three public multisource remote sensing datasets illustrate the superiority of our GCCQTNet with regard to other state-of-the-art methods.
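The global-agent idea behind the ISEF structure can be sketched in a few lines. The following is a minimal, hypothetical numpy illustration (not the paper's implementation): a shared global vector gates the channels of each modality's local features, and is then updated from the gated streams, so both branches are steered by the same global clue.

```python
import numpy as np

def isef_fusion(hsi_feat, sar_feat, global_agent):
    """Hypothetical sketch of a squeeze-expansion-like fusion step:
    a shared global agent gates each modality's local features, then
    is updated from the gated representations.
    hsi_feat, sar_feat: (n_tokens, d) arrays; global_agent: (d,)."""
    # Squeeze: derive a per-channel sigmoid gate from the global agent
    gate = 1.0 / (1.0 + np.exp(-global_agent))            # (d,)
    # Expand: re-weight both modalities with the same gate, damping
    # channels the shared global clue considers unreliable
    hsi_out = hsi_feat * gate
    sar_out = sar_feat * gate
    # Update the agent from the mean of both gated streams
    new_agent = 0.5 * (hsi_out.mean(axis=0) + sar_out.mean(axis=0))
    return hsi_out, sar_out, new_agent

rng = np.random.default_rng(0)
hsi = rng.normal(size=(16, 8))
sar = rng.normal(size=(16, 8))
agent = np.zeros(8)
h, s, agent = isef_fusion(hsi, sar, agent)
print(h.shape, s.shape, agent.shape)  # (16, 8) (16, 8) (8,)
```

Stacking such a step layer by layer is what lets the shared agent progressively suppress cross-modal heterogeneity before the transformer-based fusion.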

2.
Nano Lett ; 24(9): 2789-2797, 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38407030

ABSTRACT

Two-dimensional materials are expected to play an important role in next-generation electronic and optoelectronic devices. Recently, twisted bilayer graphene and transition metal dichalcogenides have attracted significant attention due to their unique physical properties and potential applications. In this study, we describe the use of optical microscopy to collect the color space of chemical vapor deposition (CVD)-grown molybdenum disulfide (MoS2) and the application of a semantic segmentation convolutional neural network (CNN) to accurately and rapidly identify the thicknesses of MoS2 flakes. A second CNN model is trained to provide precise predictions of the twist angle of CVD-grown bilayer flakes. This model harnessed a dataset comprising over 10,000 synthetic images, encompassing geometries spanning from hexagonal to triangular shapes. Subsequent validation of the deep learning predictions on twist angles was executed through second harmonic generation and Raman spectroscopy. Our results introduce a scalable methodology for automated inspection of twisted atomically thin CVD-grown bilayers.
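The core signal the segmentation CNN exploits is that optical contrast on a substrate shifts with layer count. A crude numpy stand-in (hypothetical reference colors, nearest-color labeling rather than a trained CNN) makes the idea concrete:

```python
import numpy as np

# Hypothetical RGB reference colors for each thickness class; in the
# described pipeline these distinctions are learned by a semantic
# segmentation CNN from labeled optical micrographs.
REF_COLORS = np.array([
    [200, 160, 220],   # 0: bare substrate
    [150, 120, 200],   # 1: monolayer
    [110,  90, 170],   # 2: bilayer
])

def classify_thickness(image):
    """Per-pixel thickness label by nearest reference color.
    image: (H, W, 3) uint8 array; returns (H, W) int labels."""
    diff = image[..., None, :].astype(float) - REF_COLORS  # (H, W, K, 3)
    dist = np.square(diff).sum(axis=-1)                    # (H, W, K)
    return dist.argmin(axis=-1)

img = np.full((4, 4, 3), 150, dtype=np.uint8)  # roughly monolayer-colored
img[:, :2] = [110, 90, 170]                    # left half bilayer-colored
labels = classify_thickness(img)
print(labels[0, 0], labels[0, 3])  # 2 1
```

A trained CNN additionally uses spatial context (flake edges, shape priors), which is why it outperforms plain per-pixel color thresholds on noisy micrographs.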

3.
IEEE Trans Neural Netw Learn Syst ; 33(2): 747-761, 2022 Feb.
Article in English | MEDLINE | ID: mdl-33085622

ABSTRACT

The problem of effectively exploiting the information in multiple data sources has become a relevant but challenging research topic in remote sensing. In this article, we propose a new approach to exploit the complementarity of two data sources: hyperspectral images (HSIs) and light detection and ranging (LiDAR) data. Specifically, we develop a new dual-channel spatial, spectral, and multiscale attention convolutional long short-term memory neural network (called dual-channel A3CLNN) for feature extraction and classification of multisource remote sensing data. Spatial, spectral, and multiscale attention mechanisms are first designed for HSI and LiDAR data in order to learn spectral- and spatial-enhanced feature representations and to represent multiscale information for different classes. In the designed fusion network, a novel composite attention learning mechanism (combined with a three-level fusion strategy) is used to fully integrate the features of these two data sources. Finally, inspired by the idea of transfer learning, a novel stepwise training strategy is designed to yield a final classification result. Our experimental results, conducted on several multisource remote sensing datasets, demonstrate that the newly proposed dual-channel A3CLNN exhibits better feature representation ability (leading to more competitive classification performance) than other state-of-the-art methods.
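The spectral attention component mentioned above can be illustrated in isolation. This is a toy numpy sketch under assumed shapes, not the A3CLNN architecture itself: each hyperspectral band is re-weighted by a softmax over a global per-band statistic, so informative bands are emphasized before fusion with LiDAR features.

```python
import numpy as np

def spectral_attention(hsi_cube):
    """Toy spectral attention step: weight each band by a softmax over
    its globally pooled response. hsi_cube: (H, W, B) array; returns
    the re-weighted cube and the (B,) attention weights."""
    band_energy = hsi_cube.mean(axis=(0, 1))   # global average pool per band
    e = np.exp(band_energy - band_energy.max())
    weights = e / e.sum()                      # softmax over the B bands
    return hsi_cube * weights, weights

rng = np.random.default_rng(1)
cube = rng.random((8, 8, 5))                   # small synthetic HSI patch
out, w = spectral_attention(cube)
print(out.shape, w.shape)  # (8, 8, 5) (5,)
```

In the full network such weights would be produced by learned layers and combined with spatial and multiscale attention; the softmax pooling pattern shown here is only the common skeleton of that family of mechanisms.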
