Neural speech enhancement with unsupervised pre-training and mixture training.
Neural Netw 2023 Jan; 158: 216-227.
Article in En | MEDLINE | ID: mdl-36463693
ABSTRACT
Supervised neural speech enhancement methods require large amounts of paired noisy and clean speech data. Since collecting adequate paired data from real-world applications is infeasible, supervised methods are typically trained on simulated data. However, the mismatch between simulated and in-the-wild data causes inconsistent performance when the system is deployed in real-world applications. Unsupervised speech enhancement methods address this mismatch by directly using in-the-wild noisy data without access to the corresponding clean speech, so simulated paired data is not necessary. However, unsupervised speech enhancement does not yet perform on par with supervised learning. To address both problems, this work proposes an unsupervised pre-training and mixture training algorithm that combines the advantages of supervised and unsupervised learning. Specifically, the proposed approach first uses large volumes of unpaired noisy and clean speech for unsupervised pre-training. The in-the-wild noisy data and a small amount of simulated paired data are then used for mixture training to optimize the pre-trained model. Experimental results show that the proposed method outperforms other state-of-the-art supervised and unsupervised learning methods.
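The two-stage schedule the abstract describes (unsupervised pre-training on unpaired data, then mixture training that blends a supervised loss on a small simulated paired set with an unsupervised loss on in-the-wild noisy data) can be sketched as follows. This is a minimal illustration, not the paper's model: the tiny linear "enhancer", the reconstruction objective used as the unsupervised loss, and the mixing weight `alpha` are all assumptions standing in for the deep network and objectives of the actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny linear "enhancer": y_hat = W @ x. Real systems use deep
# networks on spectrogram features; this sketch only mirrors the schedule.
dim = 8
W = rng.normal(scale=0.1, size=(dim, dim))

def enhance(W, x):
    return W @ x

def mse(a, b):
    return float(np.mean((a - b) ** 2))

lr = 0.05

# ---- Stage 1: unsupervised pre-training on unpaired data ---------------
# Assumption: reconstruction of unpaired speech stands in for the paper's
# unsupervised pre-training objective.
unpaired = rng.normal(size=(256, dim))
for _ in range(200):
    x = unpaired[rng.integers(len(unpaired))]
    grad = 2 * np.outer(enhance(W, x) - x, x) / dim  # d/dW of mean sq. error
    W -= lr * grad

# ---- Stage 2: mixture training -----------------------------------------
# A small simulated paired set (noisy, clean) plus in-the-wild noisy data.
clean = rng.normal(size=(32, dim))
noisy = clean + 0.3 * rng.normal(size=(32, dim))
wild_noisy = rng.normal(size=(64, dim))

alpha = 0.7  # assumed weight between supervised and unsupervised losses
for _ in range(300):
    i = rng.integers(len(clean))
    # supervised gradient on a simulated (noisy, clean) pair
    g_sup = 2 * np.outer(enhance(W, noisy[i]) - clean[i], noisy[i]) / dim
    # unsupervised (reconstruction) gradient on in-the-wild noisy speech
    xw = wild_noisy[rng.integers(len(wild_noisy))]
    g_uns = 2 * np.outer(enhance(W, xw) - xw, xw) / dim
    W -= lr * (alpha * g_sup + (1 - alpha) * g_uns)

paired_loss = mse(np.array([enhance(W, n) for n in noisy]), clean)
```

Starting stage 2 from the pre-trained `W` rather than from scratch is the point of the schedule: the large unpaired corpus does most of the work, and the small paired set only has to steer the model toward denoising.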
Collection: 01-internacional
Database: MEDLINE
Main subject: Speech / Algorithms
Language: En
Journal: Neural Netw
Journal subject: NEUROLOGIA
Year: 2023
Document type: Article