Results 1 - 5 of 5
1.
J Voice; 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38216386

ABSTRACT

OBJECTIVES: This study aimed to establish an artificial intelligence (AI) system to classify vertical level differences between the vocal folds during vocalization and to evaluate the accuracy of the classification. METHODS: We designed models with different depths between the right and left vocal folds using an excised canine larynx. Video files for the data set were obtained using a high-speed camera system and a global-shutter color complementary metal-oxide-semiconductor camera. The data set was divided into training, validation, and testing subsets; 20,000 images were used for building the model and 8,000 images for testing. To perform deep learning multiclass classification and estimate the vertical level difference, we introduced DenseNet121-ConvLSTM. RESULTS: The model was trained several times with different numbers of epochs. The best results were achieved at 100 epochs with a batch size of 16. The proposed DenseNet121-ConvLSTM model achieved classification accuracies of 99.5% for training and 88.0% for testing. After verification with an external data set, the overall accuracy, precision, recall, and F1-score were 90.8%, 91.6%, 90.9%, and 91.2%, respectively. CONCLUSIONS: The newly developed AI system may be an easy and accurate method for classifying superior and inferior vertical level differences between the vocal folds, and it may help in assessing vertical level differences in patients with unilateral vocal fold paralysis.
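The abstract names the architecture (DenseNet121 combined with ConvLSTM) and the training settings (100 epochs, batch size 16) but not how the two networks are wired together. The sketch below shows one plausible wiring in TensorFlow/Keras; the frame count, image size, number of classes, ConvLSTM filter count, and pooling head are assumptions, not details taken from the paper.

```python
# Hedged sketch of a DenseNet121 + ConvLSTM video classifier.
# NUM_FRAMES, IMG_SIZE, NUM_CLASSES and the head are assumed values.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, IMG_SIZE, NUM_CLASSES = 8, 224, 3  # assumptions

# Per-frame feature extractor: ImageNet-pretrained DenseNet121 without its top.
backbone = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(IMG_SIZE, IMG_SIZE, 3))

inputs = layers.Input(shape=(NUM_FRAMES, IMG_SIZE, IMG_SIZE, 3))
x = layers.TimeDistributed(backbone)(inputs)           # (frames, 7, 7, 1024)
x = layers.ConvLSTM2D(64, (3, 3), padding="same")(x)   # temporal aggregation
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# The abstract reports training with batch size 16 for 100 epochs, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=100, batch_size=16)
```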

2.
Folia Phoniatr Logop; 2023 Nov 10.
Article in English | MEDLINE | ID: mdl-37952516

ABSTRACT

INTRODUCTION: This study aimed to develop, validate, and analyze the reliability of the Korean version of the Voice Handicap Index-Throat (VHI-Tk). METHODS: This prospective study included 103 patients in the case group with voice problems (18 with functional dysphonia, 44 with a laryngeal mass, 18 with a neurological voice disorder, and 23 with throat problems) and 27 participants in the control group without voice problems. All participants completed the following questionnaires at their initial visit: the Korean version of the Voice Handicap Index (K-VHI), the VHI-Tk, and the Korean version of the Voice Symptom Scale (K-VoiSS). Patients in the case group re-completed the VHI-Tk to assess test-retest reliability. Finally, a one-way analysis of variance was performed to assess differences in VHI-Tk scores among the four diagnosis types in the case group. RESULTS: VHI-Tk scores in the case group were significantly higher than in the control group. The VHI-Tk was significantly correlated with the subscales of the K-VHI and K-VoiSS. The VHI-Tk showed significant test-retest reliability, and its internal consistency was good to excellent (Cronbach's alpha range: 0.895-0.901). There was a significant difference in mean VHI-Tk scores across the four diagnosis types (throat problems group > neurological voice disorder group). CONCLUSION: We validated the VHI-Tk questionnaire for measuring self-perceived voice and throat problems among Koreans. A larger sample size and more varied diagnosis types are required in future studies to fully validate the VHI-T for use across cultures.
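For reference, the reliability and group-comparison statistics named in this abstract (Cronbach's alpha, test-retest correlation, one-way ANOVA) can be computed along the following lines. This is only a sketch: the file name, column names, and the use of a Pearson correlation for test-retest reliability are assumptions, not the paper's actual analysis pipeline.

```python
# Hedged sketch of the questionnaire statistics; all data and columns are hypothetical.
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one row per respondent, one column per questionnaire item."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# df holds VHI-Tk item responses; total_t1/total_t2 are the two administrations
# used for test-retest; "diagnosis" is the four-level grouping variable.
df = pd.read_csv("vhi_tk_responses.csv")                  # hypothetical file
alpha = cronbach_alpha(df.filter(like="item_"))           # internal consistency
r, p_r = stats.pearsonr(df["total_t1"], df["total_t2"])   # test-retest reliability

groups = [g["total_t1"].values for _, g in df.groupby("diagnosis")]
f_stat, p_anova = stats.f_oneway(*groups)                 # one-way ANOVA across diagnoses
print(f"alpha={alpha:.3f}, test-retest r={r:.3f}, ANOVA p={p_anova:.4f}")
```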

3.
J Voice; 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36464574

ABSTRACT

OBJECTIVES: This study aimed to investigate reference values for cepstral peak prominence (CPP) and smoothed CPP (CPPS) measured with Praat in Korean speakers with normal, healthy, and pathological voices. METHODS: A total of 4,524 Korean participants with vocally healthy (n = 410) and dysphonic (n = 4,114) voices took part in this study. The speech tasks consisted of a sustained vowel /a/ and a reading of the Korean passage "Walk". CPP and CPPS values were measured quickly and automatically for the sustained vowel and continuous speech tasks using a Praat script. In addition, three experienced speech-language pathologists (SLPs) rated the severity of dysphonia using the GRBAS scale (grade, roughness, breathiness, asthenia, strain) and the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V). RESULTS: The three SLPs showed high inter- and intra-rater reliability in the auditory-perceptual evaluation. Significant differences in CPP and CPPS were found between the healthy and pathological voice groups for both voice tasks (P < 0.01). The measured CPP and CPPS values varied with the laryngeal pathology. In the receiver operating characteristic (ROC) curve analysis, the cut-off values for CPP_Vowel (CPP_V), CPPS_V, CPP_Sentence (CPP_S), and CPPS_S were <21.5, <12.0, <19.7, and <10.1, respectively. The ROC analysis also confirmed that CPP and CPPS had excellent diagnostic accuracy for distinguishing disordered voices (area under the ROC curve: 0.951-0.966). CONCLUSION: We investigated reference values for CPP and CPPS measured with Praat for Korean speakers and confirmed that cepstral analysis is a promising tool for differentiating pathological voices.
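The study measures CPP and CPPS with a Praat script; as a rough illustration of what the measure is, the sketch below re-implements the textbook definition of CPP (the height of the cepstral peak above a regression line through the cepstrum) in plain NumPy. It will not reproduce Praat's values exactly, and the windowing, quefrency ranges, the omitted smoothing step, and the file name are all assumptions.

```python
# Hedged sketch of an approximate CPP computation; not Praat's implementation.
import numpy as np
import soundfile as sf  # assumes a mono sustained-vowel recording

def cepstral_peak_prominence(x, fs, f0_min=60.0, f0_max=330.0):
    """Approximate CPP (dB) for one voiced frame x sampled at fs Hz."""
    x = np.asarray(x, dtype=float) * np.hanning(len(x))
    spectrum_db = 20 * np.log10(np.abs(np.fft.fft(x)) + 1e-12)
    cepstrum_db = 20 * np.log10(np.abs(np.fft.fft(spectrum_db)) + 1e-12)
    quefrency = np.arange(len(x)) / fs                     # seconds
    # Peak search restricted to quefrencies of plausible voice F0.
    band = (quefrency >= 1.0 / f0_max) & (quefrency <= 1.0 / f0_min)
    peak_idx = int(np.argmax(np.where(band, cepstrum_db, -np.inf)))
    # Straight-line fit to the cepstrum (the trend); CPP is the peak height above it.
    fit = (quefrency >= 0.001) & (quefrency <= 1.0 / f0_min)
    slope, intercept = np.polyfit(quefrency[fit], cepstrum_db[fit], 1)
    return cepstrum_db[peak_idx] - (slope * quefrency[peak_idx] + intercept)

audio, fs = sf.read("sustained_a.wav")                     # hypothetical file
print(f"CPP (approx.) = {cepstral_peak_prominence(audio, fs):.1f} dB")
```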

4.
J Voice; 2022 Sep 05.
Article in English | MEDLINE | ID: mdl-36075802

ABSTRACT

OBJECTIVES: The purpose of this study was to improve the classification accuracy for the diagnosis of glottic cancer by comparing decision tree ensemble learning, a method for increasing classification accuracy on relatively small datasets, with a convolutional neural network (CNN) algorithm. METHODS: The Pusan National University Hospital (PNUH) dataset was used to build the classifiers, and the Pusan National University Yangsan Hospital (PNUYH) dataset was used to verify their performance. For the diagnosis of glottic cancer, deep learning-based CNN models were built and used to classify laryngeal image and voice data. Classification accuracy was then obtained by applying decision tree ensemble learning to the probabilities produced by the CNN classifiers, using the classification and regression tree (CART) method. Finally, we compared the classification accuracy of decision tree ensemble learning, fusing the laryngeal image and voice decision tree classifiers, with that of the individual CNN classifiers. RESULTS: The laryngeal image and voice classification models achieved accuracies of 81.03% and 99.18%, respectively, on the PNUH training dataset. However, the accuracy of the CNN classifiers decreased to 73.88% for voice and 68.92% for laryngeal images on the external PNUYH dataset. To address this, decision tree ensemble learning on the laryngeal image and voice outputs was used, and classification accuracy was improved by integrating the laryngeal image and voice data of the same person. The individual laryngeal image and voice decision tree models achieved accuracies of 87.88% and 89.06%, respectively, and fusing the laryngeal image and voice decision tree results yielded a classification accuracy of 95.31%. CONCLUSION: Our results suggest that decision tree ensemble learning, which trains multiple classifiers, is useful for increasing classification accuracy despite a small dataset. Although a large amount of data is essential for AI analysis, high diagnostic classification accuracy can be expected when an integrated approach combining various input data is taken.
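As an illustration of the fusion step described above, the sketch below feeds the class probabilities of two per-modality classifiers into a CART-based decision-tree ensemble with scikit-learn. The random probability arrays stand in for the CNN softmax outputs, and bagged CART trees are one common ensembling choice; the paper's exact ensemble construction is not reproduced here.

```python
# Hedged sketch: fusing per-modality CNN probabilities with a CART tree ensemble.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200
# Stand-ins for the softmax outputs of the image and voice CNNs,
# one row per patient, two columns per model (benign vs. cancer probability).
p_image = rng.random((n, 2)); p_image /= p_image.sum(axis=1, keepdims=True)
p_voice = rng.random((n, 2)); p_voice /= p_voice.sum(axis=1, keepdims=True)
y = rng.integers(0, 2, size=n)                      # hypothetical labels

X_fused = np.hstack([p_image, p_voice])             # same-person fusion of both modalities
ensemble = BaggingClassifier(DecisionTreeClassifier(),  # CART base learners
                             n_estimators=100, random_state=0)
ensemble.fit(X_fused[:150], y[:150])
print("held-out accuracy:",
      accuracy_score(y[150:], ensemble.predict(X_fused[150:])))
```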

5.
Rapid Commun Mass Spectrom; 31(12): 1023-1030, 2017 Jun 30.
Article in English | MEDLINE | ID: mdl-28401729

ABSTRACT

RATIONALE: The theoretical enthalpy calculated for the overall protonation reaction (electron transfer plus hydrogen transfer) in positive-mode (+) atmospheric-pressure photoionization (APPI) was compared with experimental results for 49 aromatic compounds. A linear relationship was observed between the calculated ΔH and the relative abundance of the protonated peak, and the parameter gives reasonable predictions for all the aromatic hydrocarbon compounds used in this study. METHODS: The parameter was devised by combining experimental MS data with high-level theoretical calculations. A (+) APPI Q Exactive Orbitrap mass spectrometer was used to obtain MS data for each solution. Density functional theory (DFT) calculations were performed with the B3LYP exchange-correlation functional and the standard 6-311+G(df,2p) basis set. RESULTS: All molecules with ΔH < 0 kcal/mol for the overall protonation reaction with toluene clusters produced protonated ions, regardless of the desolvation temperature. For molecules with ΔH > 0 kcal/mol, molecular ions were more abundant at typical APPI desolvation temperatures (300°C), while protonated ions became comparable or dominant at higher temperatures (400°C). The toluene cluster size was an important factor in predicting the ionization behavior of aromatic hydrocarbon ions in (+) APPI. CONCLUSIONS: The data in this study clearly show that the theoretically calculated reaction enthalpy (ΔH) of protonation with toluene dimers can be used to predict the protonation behavior of aromatic compounds. For compounds with a negative ΔH, the types of ions generated could be predicted very well from the ΔH value, and ΔH can also explain the overall protonation behavior of compounds with ΔH > 0. Copyright © 2017 John Wiley & Sons, Ltd.
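For concreteness, the "overall protonation reaction (electron transfer plus hydrogen transfer)" can be written out as below. This is a sketch of the usual dopant-assisted APPI scheme with a toluene dimer; the exact cluster bookkeeping and thermochemical reference states used in the paper are assumptions here.

```latex
% Sketch: overall protonation of an analyte M by a toluene-dimer radical cation,
% split into its electron-transfer (ET) and hydrogen-transfer (HT) steps.
\begin{align*}
\text{electron transfer:}\quad
  &\mathrm{M} + (\mathrm{C_7H_8})_2^{\bullet+} \longrightarrow \mathrm{M}^{\bullet+} + (\mathrm{C_7H_8})_2 \\
\text{hydrogen transfer:}\quad
  &\mathrm{M}^{\bullet+} + (\mathrm{C_7H_8})_2 \longrightarrow [\mathrm{M{+}H}]^{+} + \mathrm{C_7H_7^{\bullet}} + \mathrm{C_7H_8} \\
\text{overall:}\quad
  &\mathrm{M} + (\mathrm{C_7H_8})_2^{\bullet+} \longrightarrow [\mathrm{M{+}H}]^{+} + \mathrm{C_7H_7^{\bullet}} + \mathrm{C_7H_8} \\[4pt]
\Delta H_{\mathrm{overall}}
  &= \Delta H_{\mathrm{ET}} + \Delta H_{\mathrm{HT}}
   = \sum_{\text{products}} H^{\mathrm{DFT}} - \sum_{\text{reactants}} H^{\mathrm{DFT}}
\end{align*}
```

On this reading, a negative overall ΔH favors the protonated ion [M+H]+, while a positive ΔH favors the molecular ion M•+ at typical desolvation temperatures, consistent with the observations reported above.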
