Results 1 - 3 of 3
1.
J Voice ; 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39142922

ABSTRACT

OBJECTIVES: Sound pressure and exhaled flow have been identified as important factors associated with higher particle emissions. The aim of this study was to assess how different vocalizations affect particle generation independently of other factors. DESIGN: Experimental study. METHODS: Thirty-three experienced singers repeated two different sentences at normal loudness and while whispering. The first sentence consisted mainly of consonants such as /k/ and /t/ as well as open vowels, while the second sentence also included the /s/ sound and contained primarily closed vowels. Particle emission was measured using a condensation particle counter (CPC, 3775 TSI Inc.) and an aerodynamic particle sizer (APS, 3321 TSI Inc.). The CPC measured the number concentration of particles larger than 4 nm and mainly reflects the number of particles smaller than 0.5 µm, since these dominate the total number concentration. The APS measured particle size distribution and number concentration in the 0.5-10 µm range, and the data were divided into >1 µm and <1 µm particle size ranges. Generalized linear mixed-effects models were constructed to assess the factors affecting particle generation. RESULTS: Whispering produced more particles than speaking, and sentence 1 produced more particles than sentence 2 while speaking. Sound pressure level had an effect on particle production independently of vocalization. The effect of exhaled airflow was not statistically significant. CONCLUSIONS: Based on our results, the type of vocalization has a significant effect on particle production independently of other factors such as sound pressure level.

2.
JASA Express Lett ; 4(6)2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38847582

ABSTRACT

The automatic classification of phonation types in the singing voice is essential for tasks such as the identification of singing style. This study proposes wavelet scattering network (WSN)-based features for the classification of phonation types in the singing voice. The WSN, which closely resembles auditory physiological models, generates acoustic features that characterize information related to pitch, formants, and timbre. Hence, WSN-based features can effectively capture the discriminative information across phonation types in the singing voice. The experimental results show that the proposed WSN-based features improved phonation classification accuracy by at least 9% compared to state-of-the-art features.
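A first-order scattering coefficient is obtained by band-pass filtering the signal with a wavelet, taking the complex modulus, and low-pass averaging the result, which yields features that are stable to small time shifts. A toy plain-Python sketch of that idea (illustrative only, not the paper's WSN; production implementations such as Kymatio use properly normalized Morlet filter banks and multiple scattering orders):

```python
import math

def morlet(t, scale):
    """Sample of a simple Morlet-like complex wavelet at time t (toy filter)."""
    g = math.exp(-((t / scale) ** 2) / 2.0)
    return (g * math.cos(5.0 * t / scale), g * math.sin(5.0 * t / scale))

def scatter1(signal, scales, support=8):
    """Toy first-order scattering-style features: |signal * wavelet| averaged.

    Returns one translation-stable coefficient per scale.
    """
    feats = []
    for s in scales:
        taps = [morlet(k - support, s) for k in range(2 * support + 1)]
        mods = []
        for n in range(len(signal)):
            re = im = 0.0
            for k, (wr, wi) in enumerate(taps):
                idx = n + k - support
                if 0 <= idx < len(signal):
                    re += signal[idx] * wr
                    im += signal[idx] * wi
            mods.append(math.hypot(re, im))  # modulus discards phase
        feats.append(sum(mods) / len(mods))  # averaging acts as the low-pass
    return feats
```

The modulus-then-average structure is what makes scattering coefficients insensitive to small translations while retaining spectral envelope information tied to pitch, formants, and timbre.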

3.
IEEE J Biomed Health Inform ; 28(8): 4951-4962, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38669173

ABSTRACT

Many acoustic features and machine learning models have been studied to build automatic detection systems to distinguish dysarthric speech from healthy speech. These systems can help to improve the reliability of diagnosis. However, speech recorded for diagnosis in real-life clinical conditions can differ from the training data of the detection system in terms of, for example, recording conditions, speaker identity, and language. These mismatches may lead to a reduction in detection performance in practical applications. In this study, we investigate the use of the wav2vec2 model as a feature extractor together with a support vector machine (SVM) classifier to build automatic detection systems for dysarthric speech. The performance of the wav2vec2 features is evaluated in two cross-database scenarios, language-dependent and language-independent, to study their generalizability to unseen speakers, recording conditions, and languages before and after fine-tuning the wav2vec2 model. The results revealed that the fine-tuned wav2vec2 features showed better generalization in both scenarios and gave an absolute accuracy improvement of 1.46%-8.65% compared to the non-fine-tuned wav2vec2 features.
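The two cross-database scenarios differ only in whether the train and test databases share a language, and testing never uses the training database itself. A small sketch of that pairing logic (illustrative only; the database names are hypothetical, not the corpora used in the paper):

```python
# Illustrative sketch of the two cross-database evaluation scenarios:
# language-dependent (train and test databases share a language) and
# language-independent (they do not). Database names are hypothetical.

def cross_database_pairs(databases, language_dependent):
    """databases: dict mapping database name -> language code.

    Returns (train_db, test_db) pairs for the requested scenario.
    """
    pairs = []
    for train, train_lang in databases.items():
        for test, test_lang in databases.items():
            if train == test:
                continue  # cross-database: never test on the training corpus
            same_language = train_lang == test_lang
            if same_language == language_dependent:
                pairs.append((train, test))
    return pairs

dbs = {"db_en_a": "en", "db_en_b": "en", "db_fi": "fi"}
print(cross_database_pairs(dbs, language_dependent=True))
# -> [('db_en_a', 'db_en_b'), ('db_en_b', 'db_en_a')]
```

In each pair, features (e.g., from a pretrained model) would be extracted for both databases, a classifier fitted on the training database, and accuracy reported on the held-out database, probing generalization to unseen speakers, recording conditions, and, in the independent scenario, languages.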


Subjects
Dysarthria ; Support Vector Machine ; Humans ; Dysarthria/physiopathology ; Dysarthria/diagnosis ; Male ; Female ; Signal Processing, Computer-Assisted ; Adult ; Young Adult ; Databases, Factual ; Middle Aged ; Algorithms