Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks.
Sensors (Basel); 21(19). 2021 Sep 25.
Article
In English | MEDLINE | ID: mdl-34640730
Deep learning models, especially recurrent neural networks (RNNs), have recently been applied successfully to automatic modulation classification (AMC) problems. However, deep neural networks are usually overparameterized, i.e., most of the connections between neurons are redundant. The large model size hinders the deployment of deep neural networks in applications such as Internet-of-Things (IoT) networks. Therefore, reducing parameters via sparse learning without compromising network performance is often desirable, since it alleviates the computational and storage burdens of deep learning models. In this paper, we propose a sparse learning algorithm that can directly train a sparsely connected neural network based on the statistics of weight magnitude and gradient momentum. We first used the MNIST and CIFAR10 datasets to demonstrate the effectiveness of this method. Subsequently, we applied it to RNNs with different pruning strategies on recurrent and non-recurrent connections for AMC problems. Experimental results demonstrated that the proposed method can effectively reduce the parameters of the neural networks while maintaining model performance. Moreover, we show that appropriate sparsity can further improve the network's generalization ability.
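The abstract does not give the exact scoring rule, so the following minimal PyTorch sketch only illustrates the general idea of pruning by weight magnitude and gradient momentum: each connection is scored by a blend of its weight magnitude and the magnitude of its momentum buffer, and only the highest-scoring fraction is kept. The function name, the blending parameter alpha, and keep_ratio are placeholders for illustration, not the paper's algorithm.

import torch
import torch.nn as nn

def prune_by_magnitude_and_momentum(weight, momentum, keep_ratio=0.1, alpha=0.5):
    """Return a binary mask keeping the keep_ratio fraction of connections
    with the highest combined score. alpha balances weight magnitude against
    gradient-momentum magnitude (both hyperparameters are assumptions, not
    values taken from the paper)."""
    score = alpha * weight.abs() + (1.0 - alpha) * momentum.abs()
    k = max(1, int(keep_ratio * score.numel()))
    threshold = torch.topk(score.flatten(), k, largest=True).values.min()
    return (score >= threshold).float()

if __name__ == "__main__":
    layer = nn.Linear(128, 64)
    # Stand-in for the optimizer's momentum buffer (e.g., SGD momentum or
    # Adam's first-moment estimate); random here purely for illustration.
    momentum_buffer = torch.randn_like(layer.weight)

    mask = prune_by_magnitude_and_momentum(layer.weight.data, momentum_buffer,
                                           keep_ratio=0.1)
    layer.weight.data.mul_(mask)  # zero out the pruned connections
    print(f"Remaining connections: {int(mask.sum())} / {mask.numel()}")

In practice such a mask would be recomputed or maintained during training (the paper trains a sparsely connected network directly); the one-shot masking above is only meant to show how a magnitude-plus-momentum criterion can select which connections to keep.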
Full text: 1
Database: MEDLINE
Main subject: Neural Networks, Computer / Internet of Things
Language: En
Publication year: 2021
Document type: Article