Results 1 - 3 of 3
1.
Biomed Eng Online ; 23(1): 15, 2024 Feb 04.
Article in English | MEDLINE | ID: mdl-38311731

ABSTRACT

Automatic speech assessments have the potential to dramatically improve ALS clinical practice and facilitate patient stratification for ALS clinical trials. Acoustic speech analysis has demonstrated the ability to capture a variety of relevant speech motor impairments, but implementation has been hindered both by the nature of lab-based assessments (requiring travel and time for patients) and by the opacity of some acoustic feature analysis methods. These and other challenges have also obscured the ability to distinguish different ALS disease stages/severities. Validation of automated acoustic analysis tools could enable detection of early signs of ALS, and these tools could be deployed to screen and monitor patients without requiring clinic visits. Here, we sought to determine whether acoustic features gathered using an automated assessment app could detect ALS, as well as different levels of speech impairment severity resulting from ALS. Speech samples (readings of a standardized, 99-word passage) from 119 ALS patients with varying degrees of disease severity and 22 neurologically healthy participants were analyzed, and 53 acoustic features were extracted. Patients were stratified into early and late stages of disease (ALS-early/ALS-E and ALS-late/ALS-L) based on the ALS Functional Rating Scale-Revised bulbar score (FRS-bulb; median [interquartile range]: 11 [3]). The data were analyzed using a sparse Bayesian logistic regression classifier. This relatively small set of acoustic features distinguished ALS patients from controls well (area under the receiver-operating characteristic curve, AUROC = 0.85), separated ALS-E patients from control participants well (AUROC = 0.78), and separated ALS-E from ALS-L patients reasonably well (AUROC = 0.70). These results highlight the potential of automated acoustic analyses to detect and stratify ALS.
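
A minimal sketch of the classification step described above, assuming a matrix of 53 acoustic features per speaker and binary ALS/control labels. The study used a sparse Bayesian logistic regression; an L1-penalized logistic regression from scikit-learn stands in here as a commonly available analogue that also drives most feature weights to zero, and all data below are randomly generated placeholders, not study data:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(141, 53))          # placeholder: 119 ALS + 22 controls, 53 acoustic features
y = np.r_[np.ones(119), np.zeros(22)]   # placeholder labels (1 = ALS, 0 = control)

# The L1 penalty zeroes out most coefficients, approximating the sparsity
# of the Bayesian model used in the study.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)

# Cross-validated probabilities give an out-of-sample AUROC estimate.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print(f"AUROC: {roc_auc_score(y, proba):.2f}")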


Subjects
Amyotrophic Lateral Sclerosis, Humans, Amyotrophic Lateral Sclerosis/diagnosis, Bayes Theorem, Speech, Speech Disorders/diagnosis, ROC Curve
2.
IEEE J Transl Eng Health Med ; 12: 382-389, 2024.
Article in English | MEDLINE | ID: mdl-38606392

ABSTRACT

Acoustic features extracted from speech can help with the diagnosis of neurological diseases and monitoring of symptoms over time. Temporal segmentation of audio signals into individual words is an important pre-processing step needed prior to extracting acoustic features. Machine learning techniques could be used to automate speech segmentation via automatic speech recognition (ASR) and sequence-to-sequence alignment. While state-of-the-art ASR models achieve good performance on healthy speech, their performance significantly drops when evaluated on dysarthric speech. Fine-tuning ASR models on impaired speech can improve performance for dysarthric speakers, but it requires representative clinical data, which is difficult to collect and may raise privacy concerns. This study explores the feasibility of using two augmentation methods to increase ASR performance on dysarthric speech: 1) healthy individuals varying their speaking rate and loudness (as is often done in assessments of pathological speech); 2) synthetic speech with variations in speaking rate and accent (to ensure more diverse vocal representations and fairness). Experimental evaluations showed that fine-tuning a pre-trained ASR model with data from these two sources outperformed a model fine-tuned only on real clinical data and matched the performance of a model fine-tuned on the combination of real clinical data and synthetic speech. When evaluated on held-out acoustic data from 24 individuals with various neurological diseases, the best-performing model achieved an average word error rate of 5.7% and a mean correct count accuracy of 94.4%. In segmenting the data into individual words, a mean intersection-over-union of 89.2% was obtained against manual parsing (ground truth). It can be concluded that emulated and synthetic augmentations can significantly reduce the need for real clinical data of dysarthric speech when fine-tuning ASR models and, in turn, for speech segmentation.
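
A minimal sketch of the segmentation evaluation described above: mean intersection-over-union (IoU) between ASR-aligned word intervals and manually parsed ground truth. The interval pairs are assumed to be already matched one-to-one, and the function names and boundary times below are illustrative, not from the study:

def interval_iou(pred, true):
    """IoU of two (start, end) time intervals in seconds."""
    inter = max(0.0, min(pred[1], true[1]) - max(pred[0], true[0]))
    union = (pred[1] - pred[0]) + (true[1] - true[0]) - inter
    return inter / union if union > 0 else 0.0

def mean_iou(pred_words, true_words):
    """Average IoU over matched (predicted, manual) word intervals."""
    scores = [interval_iou(p, t) for p, t in zip(pred_words, true_words)]
    return sum(scores) / len(scores)

# Illustrative word boundaries, (start, end) in seconds:
pred = [(0.10, 0.52), (0.60, 1.05), (1.20, 1.66)]
true = [(0.08, 0.50), (0.62, 1.10), (1.18, 1.70)]
print(f"Mean IoU: {mean_iou(pred, true):.3f}")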


Subjects
Speech Perception, Speech, Humans, Speech Recognition Software, Dysarthria/diagnosis, Speech Disorders
3.
Digit Health ; 9: 20552076231219102, 2023.
Article in English | MEDLINE | ID: mdl-38144173

ABSTRACT

Background and objective: Amyotrophic lateral sclerosis (ALS) frequently causes speech impairments, which can be valuable early indicators of decline. Automated acoustic assessment of speech in ALS is attractive, and there is a pressing need to validate such tools in line with best practices, including analytical and clinical validation. We hypothesized that analyses performed using a novel speech assessment pipeline would correspond strongly to those performed using lab-standard practices, and that acoustic features from the novel pipeline would correspond to clinical outcomes of interest in ALS.

Methods: We analyzed data from three standard speech assessment tasks (vowel phonation, passage reading, and diadochokinesis) in 122 ALS patients. Data were analyzed automatically using a pipeline developed by Winterlight Labs, which yielded 53 acoustic features. First, for analytical validation, the data were also analyzed using a lab-standard pipeline for comparison, followed by univariate analysis (Spearman correlations between individual features in the Winterlight and in-lab datasets) and multivariate analysis (sparse canonical correlation analysis, SCCA). Subsequently, clinical validation was performed, including univariate analysis (Spearman correlations between automated acoustic features and clinical measures) and multivariate analysis (interpretable autoencoder-based dimensionality reduction).

Results: Analytical validity was demonstrated by substantial univariate correlations (Spearman's ρ > 0.70) between corresponding pairs of features from the automated and lab-based datasets, as well as by interpretable SCCA feature groups. Clinical validity was supported by strong univariate correlations between automated features and clinical measures (Spearman's ρ > 0.70), as well as by associations between multivariate outputs and clinical measures.

Conclusion: This novel, automated speech assessment feature set shows substantial promise as a valid tool for analyzing impaired speech in ALS patients and for the further development of these technologies.
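
A minimal sketch of the univariate analytical-validation step described above: per-feature Spearman correlations between an automated feature set and its lab-standard counterpart, screened at the ρ > 0.70 threshold mentioned in the abstract. The feature names and data below are simulated placeholders, not Winterlight outputs:

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
features = [f"feature_{i:02d}" for i in range(53)]    # hypothetical feature names
lab = pd.DataFrame(rng.normal(size=(122, 53)), columns=features)
auto = lab + rng.normal(scale=0.3, size=lab.shape)    # simulated automated-pipeline output

# Per-feature Spearman correlation between the two pipelines.
rho = {name: spearmanr(auto[name], lab[name]).correlation for name in features}
validated = [name for name, r in rho.items() if r > 0.70]
print(f"{len(validated)}/{len(features)} features exceed Spearman's rho = 0.70")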
