Speech Emotion Recognition Using Convolutional Neural Networks and Multi-Head Convolutional Transformer.
Ullah, Rizwan; Asif, Muhammad; Shah, Wahab Ali; Anjam, Fakhar; Ullah, Ibrar; Khurshaid, Tahir; Wuttisittikulkij, Lunchakorn; Shah, Shashi; Ali, Syed Mansoor; Alibakhshikenari, Mohammad.
Affiliations
  • Ullah R; Wireless Communication Ecosystem Research Unit, Department of Electrical Engineering, Chulalongkorn University, Bangkok 10330, Thailand.
  • Asif M; Department of Electrical Engineering, Main Campus, University of Science & Technology, Bannu 28100, Pakistan.
  • Shah WA; Department of Electrical Engineering, Namal University, Mianwali 42250, Pakistan.
  • Anjam F; Department of Electrical Engineering, Main Campus, University of Science & Technology, Bannu 28100, Pakistan.
  • Ullah I; Department of Electrical Engineering, Kohat Campus, University of Engineering and Technology Peshawar, Kohat 25000, Pakistan.
  • Khurshaid T; Department of Electrical Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea.
  • Wuttisittikulkij L; Wireless Communication Ecosystem Research Unit, Department of Electrical Engineering, Chulalongkorn University, Bangkok 10330, Thailand.
  • Shah S; Wireless Communication Ecosystem Research Unit, Department of Electrical Engineering, Chulalongkorn University, Bangkok 10330, Thailand.
  • Ali SM; Department of Physics and Astronomy, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia.
  • Alibakhshikenari M; Department of Signal Theory and Communications, Universidad Carlos III de Madrid, Leganés, 28911 Madrid, Spain.
Sensors (Basel); 23(13); 2023 Jul 07.
Article in English | MEDLINE | ID: mdl-37448062
Speech emotion recognition (SER) is a challenging task in human-computer interaction (HCI) systems. A key challenge in SER is extracting emotional features effectively from a speech utterance. Despite promising results, recent studies generally do not leverage advanced fusion algorithms to generate effective representations of emotional features in speech utterances. To address this problem, we describe the fusion of spatial and temporal feature representations of speech emotion by parallelizing convolutional neural networks (CNNs) and a Transformer encoder for SER. Two stacked CNNs learn spatial feature representations in parallel with a Transformer encoder that learns temporal feature representations, simultaneously expanding the filter depth and reducing the feature maps to yield an expressive hierarchical feature representation at a lower computational cost. We use the RAVDESS dataset to recognize eight different speech emotions and augment it with Additive White Gaussian Noise (AWGN) to increase its variability and minimize model overfitting. With the fused spatial and temporal feature representations of the CNNs and the Transformer, the SER model achieves 82.31% accuracy for eight emotions on a hold-out set. The system is also evaluated on the IEMOCAP dataset, achieving 79.42% recognition accuracy for five emotions. Experimental results on the RAVDESS and IEMOCAP datasets show the effectiveness of the presented SER system and demonstrate an absolute performance improvement over state-of-the-art (SOTA) models.
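To make the described pipeline concrete, the following PyTorch sketch illustrates the two ideas in the abstract: AWGN augmentation of a waveform and a fusion model with parallel CNN branches (spatial) alongside a Transformer encoder (temporal). This is a minimal illustrative sketch, not the authors' implementation; all layer sizes, kernel choices, and names (add_awgn, ParallelCNNTransformerSER) are assumptions for demonstration.

```python
import torch
import torch.nn as nn

def add_awgn(waveform: torch.Tensor, snr_db: float = 15.0) -> torch.Tensor:
    """Augment a waveform with Additive White Gaussian Noise at a target SNR (dB).
    (Hypothetical helper; the paper does not specify its exact augmentation code.)"""
    signal_power = waveform.pow(2).mean()
    noise_power = signal_power / (10 ** (snr_db / 10))
    return waveform + torch.randn_like(waveform) * noise_power.sqrt()

class ParallelCNNTransformerSER(nn.Module):
    """Parallel spatial (CNN) and temporal (Transformer) branches fused for SER.
    Input: log-mel spectrogram of shape (batch, 1, n_mels, time)."""
    def __init__(self, n_mels: int = 64, n_classes: int = 8, d_model: int = 64):
        super().__init__()
        # Spatial branch 1: 3x3 kernels, expanding filter depth, shrinking maps.
        self.cnn1 = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Spatial branch 2: 5x5 kernels for coarser spectro-temporal patterns.
        self.cnn2 = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 5, padding=2), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Temporal branch: each time frame becomes a token of dimension n_mels.
        self.proj = nn.Linear(n_mels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Fusion: concatenate the three branch embeddings, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(32 + 32 + d_model, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s1 = self.cnn1(x).flatten(1)                       # (batch, 32)
        s2 = self.cnn2(x).flatten(1)                       # (batch, 32)
        tokens = self.proj(x.squeeze(1).transpose(1, 2))   # (batch, time, d_model)
        t = self.transformer(tokens).mean(dim=1)           # (batch, d_model)
        return self.classifier(torch.cat([s1, s2, t], dim=1))

# Usage: a stand-in log-mel batch through the model (4 clips, 64 mels, 200 frames).
model = ParallelCNNTransformerSER()
spec = torch.randn(4, 1, 64, 200)
logits = model(spec)   # shape (4, 8): scores for eight emotion classes
```

In this reading of the abstract, "fusion" is late fusion by concatenation of the pooled branch embeddings; whether the paper fuses earlier in the network is not stated in the record, so the concatenation point here is an assumption.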
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Speech / Neural Networks, Computer Limit: Humans Language: En Publication year: 2023 Document type: Article
