1.
Speech Prosody ; 2022: 120-124, 2022 May.
Article in English | MEDLINE | ID: mdl-36444200

ABSTRACT

The prosody of patients with neurodegenerative disease is often impaired. We investigated changes to two prosodic cues in patients: the pitch contour and the duration of prepausal words. We analyzed recordings of picture descriptions produced by patients with neurodegenerative conditions involving cognitive (n=223), motor (n=68), or mixed cognitive and motor impairments (n=109), and by healthy controls (HC; n=28). A speech activity detector identified pauses. Words were aligned to the acoustic signal, and pitch values were normalized in scale and duration. Pitch analyses showed that the ending (90th-100th percentile) of prepausal words had a lower pitch in the mixed and motor groups than in the cognitive group and HC. The pitch contour from the midpoint of words to the end rose steeply for HC, but rose only gently or stayed flat for patients. This suggests that HC signaled the continuation of their description after the pause with a rising contour, whereas patients either failed to keep describing the picture because of cognitive impairment or could not raise pitch because of motor impairment. Prepausal words were longer than non-prepausal words, with no significant differences between the groups, suggesting that prepausal lengthening is preserved in patients.
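The two pitch measures the abstract describes — scale- and duration-normalized contours, the mean pitch of the final decile of a prepausal word, and the slope from the word midpoint to its end — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are my own, and it assumes each word's F0 contour is already available as an array in Hz.

```python
import numpy as np

def normalize_contour(f0_hz, n_points=100):
    # Scale normalization: convert to semitones relative to the word's
    # median F0. Duration normalization: resample to a fixed length.
    f0 = np.asarray(f0_hz, dtype=float)
    f0 = f0[f0 > 0]                                  # drop unvoiced frames
    semitones = 12 * np.log2(f0 / np.median(f0))
    x_old = np.linspace(0.0, 1.0, len(semitones))
    x_new = np.linspace(0.0, 1.0, n_points)
    return np.interp(x_new, x_old, semitones)

def final_decile_mean(contour):
    # Mean pitch over the word's ending (90th-100th duration percentile),
    # the region found to be lower in the mixed and motor groups.
    return contour[int(0.9 * len(contour)):].mean()

def midpoint_to_end_slope(contour):
    # Least-squares slope of the contour from the word midpoint to the end
    # (steeply rising for HC, gentle or flat for patients).
    half = contour[len(contour) // 2:]
    x = np.linspace(0.5, 1.0, len(half))
    return np.polyfit(x, half, 1)[0]
```

For example, a word whose F0 rises linearly from 100 Hz to 150 Hz yields a positive midpoint-to-end slope, while a monotone word yields a slope near zero.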

2.
Article in English | MEDLINE | ID: mdl-33748328

ABSTRACT

Hypernasality is a common characteristic symptom across many motor-speech disorders. For voiced sounds, hypernasality introduces an additional resonance in the lower frequencies and, for unvoiced sounds, there is reduced articulatory precision due to air escaping through the nasal cavity. However, the acoustic manifestation of these symptoms is highly variable, making hypernasality estimation very challenging, both for human specialists and automated systems. Previous work in this area relies on either engineered features based on statistical signal processing or machine learning models trained on clinical ratings. Engineered features often fail to capture the complex acoustic patterns associated with hypernasality, whereas metrics based on machine learning are prone to overfitting to the small disease-specific speech datasets on which they are trained. Here we propose a new set of acoustic features that capture these complementary dimensions. The features are based on two acoustic models trained on a large corpus of healthy speech. The first acoustic model aims to measure nasal resonance from voiced sounds, whereas the second acoustic model aims to measure articulatory imprecision from unvoiced sounds. To demonstrate that the features derived from these acoustic models are specific to hypernasal speech, we evaluate them across different dysarthria corpora. Our results show that the features generalize even when training on hypernasal speech from one disease and evaluating on hypernasal speech from another disease (e.g., training on Parkinson's disease, evaluation on Huntington's disease), and when training on neurologically disordered speech but evaluating on cleft palate speech.
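The evaluation protocol described above — pool the two model-derived scores into utterance-level features, train on hypernasal speech from one disease, and test on another — can be sketched as below. This is a hedged illustration under assumed interfaces, not the paper's system: the frame-level nasal-resonance and imprecision scores are taken as given (here, plain arrays), and the pooling and classifier are deliberately simple stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def utterance_features(voiced_scores, unvoiced_scores):
    # Pool frame-level scores from the two (hypothetical) acoustic models
    # into one utterance-level vector: a nasal-resonance summary over
    # voiced frames and an articulatory-imprecision summary over unvoiced
    # frames.
    return np.array([np.mean(voiced_scores), np.mean(unvoiced_scores)])

def cross_corpus_auc(X_train, y_train, X_test, y_test):
    # Cross-disease generalization check: fit on one corpus (e.g.
    # Parkinson's disease), score on another (e.g. Huntington's disease),
    # and report AUC on the held-out corpus.
    clf = LogisticRegression().fit(X_train, y_train)
    return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
```

A feature that is specific to hypernasality, rather than overfit to one dataset, should keep its AUC high when the training and test corpora come from different diseases.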
