Sensors (Basel) ; 23(4)2023 Feb 16.
Article in English | MEDLINE | ID: mdl-36850805

ABSTRACT

Multimodal fusion approaches that combine data from dissimilar sensors can better exploit human-like reasoning and strategies for situational awareness. The performance of a six-layer convolutional neural network (CNN) and an 18-layer ResNet architecture is compared across a variety of fusion methods using synthetic aperture radar (SAR) and electro-optical (EO) imagery to classify military targets. The dataset used is the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset, comprising original measured SAR data and synthetic EO data. We compare the classification performance of both networks using each data modality individually, feature-level fusion, decision-level fusion, and a novel fusion method based on the three RGB input channels of a residual neural network (ResNet). In the proposed input-channel fusion method, the SAR and EO images are fed separately to two of the three input channels, while the third channel is fed a zero vector. We find that the input-channel fusion method using ResNet consistently achieves higher classification accuracy in every equivalent scenario.
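The input-channel fusion described above can be sketched as follows. This is a minimal illustration, assuming co-registered, equally sized single-channel SAR and EO images; the helper name `channel_fusion` is hypothetical and not from the paper:

```python
import numpy as np

def channel_fusion(sar: np.ndarray, eo: np.ndarray) -> np.ndarray:
    """Build a three-channel input from a paired SAR and EO image.

    Channel 0 receives the SAR image, channel 1 the EO image, and
    channel 2 an all-zero array, matching the three-channel RGB
    input expected by a stock ResNet (hypothetical sketch).
    """
    if sar.shape != eo.shape:
        raise ValueError("SAR and EO images must be co-registered and equally sized")
    zeros = np.zeros_like(sar)
    # Stack along a new leading channel axis: shape (3, H, W)
    return np.stack([sar, eo, zeros], axis=0)
```

The resulting `(3, H, W)` array can be batched and passed to any standard three-channel ResNet without modifying its first convolutional layer, which is the practical appeal of this fusion scheme.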
