Speech emotion recognition based on transfer learning from the FaceNet framework.
Liu, Shuhua; Zhang, Mengyu; Fang, Ming; Zhao, Jianwei; Hou, Kun; Hung, Chih-Cheng.
Affiliation
  • Liu S; Northeast Normal University, Changchun, Jilin Province 130117, China.
  • Zhang M; Northeast Normal University, Changchun, Jilin Province 130117, China.
  • Fang M; Northeast Normal University, Changchun, Jilin Province 130117, China.
  • Zhao J; Northeast Normal University, Changchun, Jilin Province 130117, China.
  • Hou K; Northeast Normal University, Changchun, Jilin Province 130117, China.
  • Hung CC; College of Computing and Software Engineering, Kennesaw State University, Marietta, Georgia 30060, USA.
J Acoust Soc Am 149(2): 1338, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33639796
Speech plays an important role in human-computer emotional interaction. FaceNet, used in face recognition, achieves great success due to its excellent feature extraction. In this study, we adopt the FaceNet model and improve it for speech emotion recognition. To apply this model to our work, speech signals are divided into segments at a given time interval, and the signal segments are transformed into a discrete waveform diagram and a spectrogram. Subsequently, the waveform and spectrogram are separately fed into FaceNet for end-to-end training. Our empirical study shows that pretraining FaceNet on spectrograms is effective. Hence, we pretrain the network on the CASIA dataset and then fine-tune it on the IEMOCAP dataset with waveforms. The network derives the maximum transfer-learning benefit from the CASIA dataset because of the high accuracy achieved on it, which may be attributable to its clean signals. Our preliminary experimental results show accuracies of 68.96% and 90% on the emotion benchmark datasets IEMOCAP and CASIA, respectively. Cross-training is then conducted on the datasets, and comprehensive experiments are performed. Experimental results indicate that the proposed approach outperforms state-of-the-art methods on the IEMOCAP dataset among single-modal approaches.
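The preprocessing the abstract describes (dividing the speech signal into fixed-interval segments and converting each segment into a spectrogram before feeding it to the network) can be sketched as follows. This is a minimal, self-contained illustration, not the authors' implementation: the segment length, frame size, hop size, and the toy sine-tone input are illustrative assumptions, and a real pipeline would use an audio/DSP library rather than a naive DFT.

```python
import cmath
import math

def split_segments(signal, seg_len):
    """Divide a 1-D signal into fixed-length segments (the paper splits
    speech at a given time interval); a trailing partial segment is dropped."""
    return [signal[i:i + seg_len]
            for i in range(0, len(signal) - seg_len + 1, seg_len)]

def dft_magnitudes(frame):
    """Naive DFT magnitude spectrum of one frame (first half of the bins)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectrogram(segment, frame_len, hop):
    """Magnitude spectrogram: one DFT column per Hann-windowed frame."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (frame_len - 1))
              for i in range(frame_len)]
    cols = []
    for start in range(0, len(segment) - frame_len + 1, hop):
        frame = [segment[start + i] * window[i] for i in range(frame_len)]
        cols.append(dft_magnitudes(frame))
    return cols  # shape: (num_frames, frame_len // 2)

# Toy "speech" signal: one second of a 440 Hz tone sampled at 8 kHz
# (illustrative stand-in for a real utterance).
sr = 8000
signal = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
segments = split_segments(signal, seg_len=2000)      # 0.25 s segments (assumed)
spec = spectrogram(segments[0], frame_len=256, hop=128)
print(len(segments), len(spec), len(spec[0]))        # 4 segments, 14 frames, 128 bins
```

Each spectrogram column would then be stacked into an image-like array and passed to FaceNet for end-to-end training, as the abstract outlines.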
Subjects

Full text: 1 Database: MEDLINE Main subject: Speech / Neural Networks, Computer Limits: Humans Language: English Journal: J Acoust Soc Am Publication year: 2021 Document type: Article Country of affiliation: China