IDAF: Iterative Dual-Scale Attentional Fusion Network for Automatic Modulation Recognition.
Sensors (Basel). 2023 Sep 28;23(19).
Article in En | MEDLINE | ID: mdl-37836964
Recently, deep learning models have been widely applied to automatic modulation recognition and have become a hot topic due to their excellent end-to-end learning capabilities. However, current methods are mostly based on uni-modal inputs, which suffer from incomplete information and local optimization. To complement the advantages of different modalities, we focus on multimodal fusion and introduce an iterative dual-scale attentional fusion (iDAF) method to integrate multimodal data. Firstly, two feature maps with different receptive field sizes are constructed using local and global embedding layers. Secondly, the feature maps are fed iteratively into the iterative dual-channel attention module (iDCAM), whose two branches capture the details of high-level features and the global weights of each modal channel, respectively. The iDAF not only extracts the recognition characteristics of each specific domain but also complements the strengths of the different modalities to obtain a richer view. Our iDAF achieves a recognition accuracy of 93.5% at 10 dB and 62.32% over the full signal-to-noise ratio (SNR) range. Comparative experiments and ablation studies demonstrate the effectiveness and superiority of iDAF.
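To make the fusion idea in the abstract concrete, the following is a minimal PyTorch sketch of a dual-scale (local per-time-step plus global per-channel) attention block that fuses two modality feature maps and refines the fusion over a few iterations. The class name DualScaleAttention, the reduction ratio, the sigmoid gating, the number of iterations, and the layer layout are assumptions for illustration only; they are not taken from the paper and may differ from the actual iDAF/iDCAM design.

# Hypothetical sketch of dual-scale attentional fusion; details assumed, not from the paper.
import torch
import torch.nn as nn


class DualScaleAttention(nn.Module):
    """Fuses two modality feature maps of shape (B, C, T) with local and global attention."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = max(channels // reduction, 1)
        # Local branch: point-wise convolutions keep the temporal dimension,
        # so attention weights can vary per time step (small receptive field).
        self.local_branch = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=1),
            nn.BatchNorm1d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv1d(hidden, channels, kernel_size=1),
        )
        # Global branch: global average pooling collapses time, producing one
        # weight per channel (global receptive field).
        self.global_branch = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Conv1d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(hidden, channels, kernel_size=1),
        )

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor, iterations: int = 2):
        # Start from a simple sum of the two modality features.
        fused = x_a + x_b
        for _ in range(iterations):
            # Combine per-step (local) and per-channel (global) attention logits.
            logits = self.local_branch(fused) + self.global_branch(fused)
            weights = torch.sigmoid(logits)
            # Convex combination of the two modalities, refined iteratively.
            fused = weights * x_a + (1.0 - weights) * x_b
        return fused


if __name__ == "__main__":
    # Example: fuse two feature maps (e.g., derived from I/Q samples and a spectrogram).
    block = DualScaleAttention(channels=64)
    iq_feat = torch.randn(8, 64, 128)
    spec_feat = torch.randn(8, 64, 128)
    out = block(iq_feat, spec_feat)
    print(out.shape)  # torch.Size([8, 64, 128])

The sigmoid gate makes the fusion a convex combination of the two modalities, so neither input can be amplified unboundedly; iterating the gate lets the weights adapt to the already-fused representation, which is one plausible reading of the "iterative" aspect described above.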
Full text:
1
Database:
MEDLINE
Language:
En
Journal:
Sensors (Basel)
Publication year:
2023
Document type:
Article
Country of affiliation:
China