Results 1 - 5 of 5
1.
Ear Hear; 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38414136

ABSTRACT

OBJECTIVES: Self-assessment of perceived communication difficulty has been used in clinical and research practice for decades. Such questionnaires routinely assess an individual's perceived ability to understand speech, particularly in background noise. Despite this emphasis on perceived performance in noise, speech recognition in routine audiologic practice is measured by word recognition in quiet (WRQ). Moreover, surprisingly little data exist comparing speech understanding in noise (SIN) abilities to perceived communication difficulty. Here, we address these issues by examining audiometric thresholds, WRQ scores, QuickSIN signal-to-noise ratio (SNR) loss, and perceived auditory disability as measured by the five questions of the Speech, Spatial and Qualities of Hearing Scale (SSQ12) devoted to speech understanding (SSQ12-Speech5).

DESIGN: We examined data from 1633 patients who underwent audiometric assessment at the Stanford Ear Institute. All individuals completed the SSQ12 questionnaire, pure-tone audiometry, and speech assessment consisting of ear-specific WRQ and ear-specific QuickSIN. Only individuals with hearing threshold asymmetries ≤10 dB HL in their high-frequency pure-tone average (HFPTA) were included. Our primary objectives were to (1) examine the relationship between audiometric variables and SSQ12-Speech5 scores, (2) determine the amount of variance in SSQ12-Speech5 scores that could be predicted from audiometric variables, and (3) predict which patients were likely to report greater perceived auditory disability according to the SSQ12-Speech5.

RESULTS: Performance on the SSQ12-Speech5 indicated greater perceived auditory disability with more severe degrees of hearing loss and greater QuickSIN SNR loss. Degree of hearing loss and QuickSIN SNR loss accounted for modest but significant variance in SSQ12-Speech5 scores after accounting for age. In contrast, WRQ scores did not significantly contribute to the predictive power of the model. Degree of hearing loss and QuickSIN SNR loss also showed moderate diagnostic accuracy for determining which patients were likely to report SSQ12-Speech5 scores indicating greater perceived auditory disability.

CONCLUSIONS: Taken together, these data indicate that audiometric factors, including degree of hearing loss (i.e., HFPTA) and QuickSIN SNR loss, are predictive of SSQ12-Speech5 scores, though notable variance remains unaccounted for after considering these factors. HFPTA and QuickSIN SNR loss, but not WRQ scores, accounted for a significant amount of variance in SSQ12-Speech5 scores and were largely effective at predicting which patients were likely to report greater perceived auditory disability on the SSQ12-Speech5. This provides further evidence that speech-in-noise measures have greater clinical utility than WRQ in most instances, as they relate more closely to measures of perceived auditory disability.
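The hierarchical analysis described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the file name and column names (ssq12_speech5, age, hfpta, quicksin_snr_loss) are assumptions, and the sketch simply shows how much variance HFPTA and QuickSIN SNR loss add beyond age.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("audiology_ssq12.csv")  # hypothetical file: one row per patient

# Step 1: age alone
m_age = smf.ols("ssq12_speech5 ~ age", data=df).fit()

# Step 2: add degree of hearing loss (HFPTA) and QuickSIN SNR loss
m_full = smf.ols("ssq12_speech5 ~ age + hfpta + quicksin_snr_loss", data=df).fit()

print(f"R^2, age only:   {m_age.rsquared:.3f}")
print(f"R^2, full model: {m_full.rsquared:.3f}")
print(f"Variance added by HFPTA + QuickSIN: {m_full.rsquared - m_age.rsquared:.3f}")
```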

2.
Ear Hear; 44(6): 1548-1561, 2023.
Article in English | MEDLINE | ID: mdl-37703127

ABSTRACT

OBJECTIVES: For decades, monosyllabic word recognition in quiet (WRQ) has been the default test of speech recognition in routine audiologic assessment. The continued use of WRQ scores is noteworthy in part because difficulty understanding speech in noise (SIN) is perhaps the most common complaint of individuals with hearing loss. The easiest way to integrate SIN measures into routine clinical practice would be for SIN to replace WRQ assessment as the primary test of speech perception. To facilitate this goal, we predicted classifications of WRQ scores from the QuickSIN signal-to-noise ratio (SNR) loss and hearing thresholds.

DESIGN: We examined data from 5808 patients who underwent audiometric assessment at the Stanford Ear Institute. All individuals completed pure-tone audiometry and speech assessment consisting of monaural WRQ and monaural QuickSIN. We then performed multiple logistic regression to determine whether classification of WRQ scores could be predicted from pure-tone thresholds and QuickSIN SNR losses.

RESULTS: Many patients displayed significant challenges on the QuickSIN despite having excellent WRQ scores. Performance on both measures decreased with hearing loss; however, decrements in performance were observed with less hearing loss for the QuickSIN than for WRQ. Most importantly, we demonstrate that classification of good or excellent word-recognition scores in quiet can be predicted with high accuracy by the high-frequency pure-tone average and the QuickSIN SNR loss.

CONCLUSIONS: Taken together, these data suggest that SIN measures provide more information than WRQ. More importantly, the predictive power of our model suggests that SIN can replace WRQ in most instances by providing guidelines as to when performance in quiet is likely to be excellent and does not need to be measured. Making this subtle but profound shift in clinical practice would enable routine audiometric testing to be more sensitive to patient concerns and may benefit both clinicians and researchers.
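A sketch of the kind of model described above: a logistic regression predicting whether WRQ will be good or excellent from the high-frequency pure-tone average and QuickSIN SNR loss. It is not the published model; the data file, column names, and the 80% cutoff used to define "good or excellent" WRQ are assumptions made for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("audiology_wrq.csv")                    # hypothetical per-ear data
df["wrq_good"] = (df["wrq_percent"] >= 80).astype(int)   # assumed cutoff for good/excellent WRQ

# Multiple logistic regression: WRQ class from HFPTA and QuickSIN SNR loss
model = smf.logit("wrq_good ~ hfpta + quicksin_snr_loss", data=df).fit()
print(model.summary())

# How well the model separates good/excellent from poorer WRQ scores
pred = model.predict(df)
print("AUC:", round(roc_auc_score(df["wrq_good"], pred), 3))
```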


Subject(s)
Deafness, Speech Perception, Humans, Speech, Noise, Hearing, Pure-Tone Audiometry
3.
Ear Hear; 44(6): 1540-1547, 2023.
Article in English | MEDLINE | ID: mdl-37707393

ABSTRACT

OBJECTIVES: Measures of speech in noise, such as the QuickSIN, are increasingly common tests of speech perception in audiologic practice. However, the effect of vestibular schwannoma (VS) on speech-in-noise abilities is unclear. Here, we compare the predictive ability of interaural QuickSIN asymmetry for detecting VS against other measures of audiologic asymmetry.

METHODS: We conducted a retrospective review of patients at our institution who received QuickSIN testing in addition to the standard audiologic battery between September 2015 and February 2019. Records of patients with radiographically confirmed, unilateral, pretreatment VSs were identified. The remaining records, excluding those with conductive pathologies, were used as controls. The predictive abilities of various measures of audiologic asymmetry for detecting VS were statistically compared.

RESULTS: Our search yielded 73 unique VS patients and 2423 controls. Receiver operating characteristic curve analysis showed that QuickSIN asymmetry was more sensitive and specific than pure-tone average asymmetry and word-recognition-in-quiet asymmetry for detecting VS. Multiple logistic regression revealed that QuickSIN asymmetry was more predictive of VS (odds ratio [OR] = 1.23, 95% confidence interval [CI] [1.10, 1.38], p < 0.001) than pure-tone average asymmetry (OR = 1.04, 95% CI [1.00, 1.07], p = 0.025) or word-recognition-in-quiet asymmetry (OR = 1.03, 95% CI [0.99, 1.06], p = 0.064).

CONCLUSION: Between-ear asymmetries in the QuickSIN appear to be more efficient than traditional measures of audiologic asymmetry for identifying patients with VS. These results suggest that speech-in-noise testing could be integrated into clinical practice without hindering the ability to identify retrocochlear pathology.
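The ROC and odds-ratio comparison described above could be reproduced along the following lines. This is an illustrative sketch only; the data file and the asymmetry column names (quicksin_asym, pta_asym, wrq_asym, vs) are assumed, not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("asymmetry_data.csv")   # hypothetical: one row per patient, vs = 1 for schwannoma

# ROC performance of each interaural asymmetry measure on its own
for measure in ["quicksin_asym", "pta_asym", "wrq_asym"]:
    print(measure, "AUC =", round(roc_auc_score(df["vs"], df[measure]), 3))

# Multiple logistic regression with all three measures; odds ratios per unit of asymmetry
fit = smf.logit("vs ~ quicksin_asym + pta_asym + wrq_asym", data=df).fit()
summary = pd.concat([np.exp(fit.params).rename("OR"),
                     np.exp(fit.conf_int()).rename(columns={0: "CI low", 1: "CI high"})],
                    axis=1)
print(summary)
```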


Subject(s)
Acoustic Neuroma, Speech Perception, Humans, Speech, Acoustic Neuroma/diagnosis, Noise, Reference Values, Retrospective Studies
4.
J Speech Lang Hear Res; 65(12): 4852-4865, 2022 Dec 12.
Article in English | MEDLINE | ID: mdl-36472938

ABSTRACT

PURPOSE: An extra moment after a sentence is spoken may be important for listeners with hearing loss to mentally repair misperceptions during listening. The current audiologic test battery cannot distinguish between a listener who repaired a misperception and a listener who heard the speech accurately with no need for repair. This study aims to develop a behavioral method to identify individuals who are at risk of relying on a quiet moment after a sentence.

METHOD: Forty-three individuals with hearing loss (32 cochlear implant users, 11 hearing aid users) heard sentences that were followed by either 2 s of silence or 2 s of babble noise. Both high- and low-context sentences were used in the task.

RESULTS: Some individuals showed notable benefit in accuracy scores (particularly for high-context sentences) when given an extra moment of silent time following the sentence. This benefit was highly variable across individuals and sometimes absent altogether. However, the group-level patterns of results were mainly explained by the use of context and successful perception of the words preceding sentence-final words.

CONCLUSIONS: These results suggest that some, but not all, individuals improve their speech recognition score by relying on a quiet moment after a sentence, and that this fragility of speech recognition cannot be assessed using one isolated utterance at a time. Reliance on a quiet moment to repair perceptions would potentially impede perception of an upcoming utterance, making continuous communication in real-world scenarios difficult, especially for individuals with hearing loss. The methods used in this study, with some simple modifications if necessary, could potentially identify patients with hearing loss who retroactively repair mistakes, using clinically feasible methods that can ultimately lead to better patient-centered hearing health care.

SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21644801
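One way to quantify the individual variability described above is to compute a per-listener "repair benefit": the accuracy difference between the silence and babble conditions, computed separately for high- and low-context sentences. The sketch below assumes a simple trial-level data layout (subject, context, condition, correct) that is not from the study.

```python
import pandas as pd

trials = pd.read_csv("sentence_trials.csv")  # hypothetical columns: subject, context, condition, correct

# Accuracy per listener, sentence context, and post-sentence condition
acc = (trials
       .groupby(["subject", "context", "condition"])["correct"]
       .mean()
       .unstack("condition"))                # columns: "babble", "silence"

# Repair benefit: how much accuracy improves when a quiet moment follows the sentence
acc["benefit"] = acc["silence"] - acc["babble"]
print(acc.groupby(level="context")["benefit"].describe())
```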


Subject(s)
Cochlear Implantation, Cochlear Implants, Hearing Loss, Speech Perception, Humans, Speech Intelligibility, Cochlear Implantation/methods
5.
J Acoust Soc Am; 146(5): 3373, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31795696

ABSTRACT

When hearing an ambiguous speech sound, listeners tend to perceive it as a phoneme that would complete a real word rather than a nonsense word. For example, a sound that could be heard as either /b/ or /ɡ/ is perceived as /b/ when followed by "_ack" but as /ɡ/ when followed by "_ap." Because the target sound is acoustically identical in both environments, this effect demonstrates the influence of top-down lexical processing on speech perception. Degradations in the auditory signal were hypothesized to render speech stimuli more ambiguous and therefore to promote increased lexical bias. Stimuli included three speech continua distinguished by spectral cues of varying speeds: stop formant transitions (fast), fricative spectra (medium), and vowel formants (slow). Stimuli were presented to listeners with cochlear implants (CIs) and to listeners with normal hearing, either with clear spectral quality or with varying amounts of spectral degradation imposed by a noise vocoder. Results indicated an increased lexical bias effect with degraded speech and for CI listeners, for whom the effect size was related to segment duration. This method can probe an individual's reliance on top-down processing even at the level of simple lexical/phonetic perception.
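The size of a lexical bias effect like the one described here is commonly quantified as a shift in the category boundary of the identification function between the two lexical contexts. The sketch below illustrates that computation under assumed column names (step along the continuum, context, resp_g for a /ɡ/ response); it is not the paper's analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

resp = pd.read_csv("continuum_responses.csv")  # hypothetical: step, context ("_ack"/"_ap"), resp_g (0/1)

# Fit a logistic psychometric function per lexical context and find the 50% boundary
boundaries = {}
for ctx, sub in resp.groupby("context"):
    fit = smf.logit("resp_g ~ step", data=sub).fit(disp=0)
    b0, b1 = fit.params["Intercept"], fit.params["step"]
    boundaries[ctx] = -b0 / b1                 # continuum step where P(/g/ response) = 0.5

print("Category boundary by context:", boundaries)
print("Lexical bias (boundary shift):", boundaries["_ap"] - boundaries["_ack"])
```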


Subject(s)
Cochlear Implants, Hearing Loss/physiopathology, Phonetics, Speech Perception, Adult, Bias, Cues, Female, Hearing Loss/rehabilitation, Humans, Male, Speech Acoustics