End-to-End Automatic Pronunciation Error Detection Based on Improved Hybrid CTC/Attention Architecture.
Zhang, Long; Zhao, Ziping; Ma, Chunmei; Shan, Linlin; Sun, Huazhi; Jiang, Lifen; Deng, Shiwen; Gao, Chang.
Affiliation
  • Zhang L; College of Computer and Information Engineering, Tianjin Normal University, Tianjin 300387, China.
  • Zhao Z; College of Computer and Information Engineering, Tianjin Normal University, Tianjin 300387, China.
  • Ma C; College of Computer and Information Engineering, Tianjin Normal University, Tianjin 300387, China.
  • Shan L; College of Fine Arts and Design, Tianjin Normal University, Tianjin 300387, China.
  • Sun H; College of Computer and Information Engineering, Tianjin Normal University, Tianjin 300387, China.
  • Jiang L; College of Computer and Information Engineering, Tianjin Normal University, Tianjin 300387, China.
  • Deng S; School of Mathematical Sciences, Harbin Normal University, Harbin 150080, China.
  • Gao C; School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China.
Sensors (Basel); 20(7), 2020 Mar 25.
Article in English | MEDLINE | ID: mdl-32218379
ABSTRACT
Advanced automatic pronunciation error detection (APED) algorithms are usually built on state-of-the-art automatic speech recognition (ASR) techniques. With the development of deep learning, end-to-end ASR technology has gradually matured and achieved positive practical results, which offers a new opportunity to update APED algorithms. We first constructed an end-to-end ASR system based on the hybrid connectionist temporal classification and attention (CTC/attention) architecture. An adaptive parameter was used to enhance the complementarity of the connectionist temporal classification (CTC) model and the attention-based seq2seq model, further improving the performance of the ASR system. The improved ASR system was then applied to the APED task for Mandarin, with good results. This new APED method makes forced alignment and segmentation unnecessary, and it does not require multiple complex models, such as an acoustic model or a language model. It is convenient and straightforward, and it will be a suitable general solution for L1-independent computer-assisted pronunciation training (CAPT). Furthermore, we find that, in terms of accuracy metrics, our proposed system based on the improved hybrid CTC/attention architecture is close to the state-of-the-art ASR system based on the deep neural network-deep neural network (DNN-DNN) architecture, and it performs more strongly on the F-measure metrics, which are especially suitable for the requirements of the APED task.
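The two ideas summarized in the abstract can be sketched compactly. Hybrid CTC/attention training typically combines the two branch losses as a weighted sum, L = λ·L_CTC + (1 − λ)·L_attention (the paper's contribution is making this weight adaptive rather than fixed); and alignment-free APED can flag errors by comparing the recognized phone sequence against the canonical one. The sketch below is illustrative only: the loss values, the phone inventory, and the use of a simple edit-distance alignment (`difflib`) are assumptions, not the paper's actual implementation.

```python
import difflib

def hybrid_loss(ctc_loss, att_loss, lam):
    """Hybrid CTC/attention objective: lam weights the CTC branch,
    (1 - lam) the attention-based seq2seq branch. The paper adapts
    lam during training; here it is passed in explicitly."""
    return lam * ctc_loss + (1.0 - lam) * att_loss

def detect_pronunciation_errors(canonical, recognized):
    """Alignment-free error detection: align the canonical phone
    sequence with the ASR output by edit distance and return the
    canonical phones involved in any mismatch."""
    sm = difflib.SequenceMatcher(a=canonical, b=recognized)
    errors = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":
            errors.extend(canonical[i1:i2])
    return errors

# Hypothetical loss values for one batch
print(hybrid_loss(12.0, 8.0, 0.5))  # 10.0

# Hypothetical Mandarin example: learner produces "zh" where "z" is expected
canonical  = ["n", "i", "h", "ao", "z", "ai"]
recognized = ["n", "i", "h", "ao", "zh", "ai"]
print(detect_pronunciation_errors(canonical, recognized))  # ['z']
```

Because the detector works directly on the decoded phone sequence, no forced alignment or segmentation step is needed, which is the convenience the abstract highlights.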
Full text: 1 Databases: MEDLINE Study type: Diagnostic_studies Language: En Journal: Sensors (Basel) Year: 2020 Document type: Article Country of affiliation: China