Results 1 - 20 of 28
1.
J Speech Lang Hear Res ; 67(5): 1602-1623, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38569080

ABSTRACT

PURPOSE: The purpose of this study was to explore potential differences in suprathreshold auditory function among native and nonnative speakers of English as a function of age. METHOD: Retrospective analyses were performed on three large data sets containing suprathreshold auditory tests completed by 5,572 participants who were self-identified native and nonnative speakers of English between the ages of 18-65 years, including a binaural tone detection test, a digit identification test, and a sentence recognition test. RESULTS: The analyses show a significant interaction between increasing age and participant group on tests involving speech-based stimuli (digit strings, sentences) but not on the binaural tone detection test. For both speech tests, differences in speech recognition emerged between groups during early adulthood, and increasing age had a more negative impact on word recognition for nonnative compared to native participants. Age-related declines in performance were 2.9 times faster for digit strings and 3.3 times faster for sentences for nonnative participants compared to native participants. CONCLUSIONS: This set of analyses extends the existing literature by examining interactions between aging and self-identified native English speaker status in several auditory domains in a cohort of adults spanning young adulthood through middle age. The finding that older nonnative English speakers in this age cohort may have greater-than-expected deficits on speech-in-noise perception may have clinical implications on how these individuals should be diagnosed and treated for hearing difficulties.
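
To make the reported decline ratios concrete, here is a minimal sketch (not the authors' analysis code) of how group-specific age slopes for a recognition score can be estimated and one slope expressed as a multiple of the other; the arrays are illustrative placeholders, not study data.

```python
# Minimal sketch: fit a separate linear age slope per listener group and
# report the ratio of slopes (e.g., "nonnative decline is ~3x native decline").
import numpy as np

def age_slope(ages, scores):
    """Least-squares slope of score vs. age (score units per year)."""
    slope, _intercept = np.polyfit(ages, scores, deg=1)
    return slope

# Hypothetical example values only (not data from the study)
native_ages, native_scores = np.array([20, 35, 50, 65]), np.array([92, 90, 88, 86])
nonnat_ages, nonnat_scores = np.array([20, 35, 50, 65]), np.array([90, 85, 79, 73])

ratio = age_slope(nonnat_ages, nonnat_scores) / age_slope(native_ages, native_scores)
print(f"Nonnative decline is {ratio:.1f}x the native decline per year")
```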


Subjects
Noise, Speech Perception, Humans, Adult, Middle Aged, Young Adult, Speech Perception/physiology, Aged, Adolescent, Male, Female, Retrospective Studies, Aging/psychology, Aging/physiology, Age Factors, Language, Auditory Threshold/physiology
2.
Ear Hear ; 45(1): 130-141, 2024.
Article in English | MEDLINE | ID: mdl-37599415

ABSTRACT

OBJECTIVES: Estimated prevalence of functional hearing and communication deficits (FHCDs), characterized by abnormally low speech recognition and binaural tone detection in noise or an abnormally high degree of self-perceived hearing difficulties, dramatically increases in active-duty service members (SMs) who have hearing thresholds slightly above the normal range and self-report to have been close to an explosive blast. Knowing the exact nature of the underlying auditory-processing deficits that contribute to FHCD would not only provide a better characterization of the effects of blast exposure on the human auditory system, but also allow clinicians to prescribe appropriate therapies to treat or manage patient complaints. DESIGN: Two groups of SMs were initially recruited: (1) a control group (N = 78) with auditory thresholds ≤20 dB HL between 250 and 8000 Hz, no history of blast exposure, and who passed a short FHCD screener, and (2) a group of blast-exposed SMs (N = 26) with normal to near-normal auditory thresholds between 250 and 4000 Hz, and who failed the FHCD screener (cutoffs based on the study by Grant et al.). The two groups were then compared on a variety of audiometric, behavioral, cognitive, and electrophysiological measures. These tests were selected to characterize various aspects of auditory system processing from the cochlear to the cortex. A third, smaller group of blast-exposed SMs who performed within normal limits on the FHCD screener were also recruited (N = 11). This third subject group was unplanned at the onset of the study and was added to evaluate the effects of blast exposure on hearing and communication regardless of performance on the FHCD screener. RESULTS: SMs in the blast-exposed group with FHCD performed significantly worse than control participants on several metrics that measured peripheral and mostly subcortical auditory processing. Cognitive processing was mostly unaffected by blast exposure with the exception of cognitive tests of language-processing speed and working memory. Blast-exposed SMs without FHCD performed similarly to the control group on tests of peripheral and brainstem processing, but performed similarly to blast-exposed SMs with FHCD on measures of cognitive processing. Measures derived from EEG recordings of the frequency-following response revealed that blast-exposed SMs who exhibited FHCD demonstrated increased spontaneous neural activity, reduced amplitude of the envelope-following response, poor internal signal to noise ratio, reduced response stability, and an absent or delayed onset response, compared with the other two participant groups. CONCLUSIONS: Degradation in the neural encoding of acoustic stimuli is likely a major contributing factor leading to FHCD in blast-exposed SMs with normal to near-normal audiometric thresholds. Blast-exposed SMs, regardless of their performance on the FHCD screener, exhibited a deficit in language-processing speed and working memory, which could lead to difficulties in decoding rapid speech and in understanding speech in challenging speech communication settings. Further tests are needed to align these findings with clinical treatment protocols being used for patients with suspected auditory-processing disorders.


Subjects
Hearing Loss, Speech Perception, Humans, Hearing, Auditory Perception/physiology, Hearing Tests, Auditory Threshold
3.
Trends Hear ; 27: 23312165231156673, 2023.
Article in English | MEDLINE | ID: mdl-36794551

ABSTRACT

Closed-set consonant identification, measured using nonsense syllables, has been commonly used to investigate the encoding of speech cues in the human auditory system. Such tasks also evaluate the robustness of speech cues to masking from background noise and their impact on auditory-visual speech integration. However, extending the results of these studies to everyday speech communication has been a major challenge due to acoustic, phonological, lexical, contextual, and visual speech cue differences between consonants in isolated syllables and in conversational speech. In an attempt to isolate and address some of these differences, recognition of consonants spoken in multisyllabic nonsense phrases (e.g., aBaSHaGa spoken as /ɑbɑʃɑɡɑ/) produced at an approximately conversational syllabic rate was measured and compared with consonant recognition using Vowel-Consonant-Vowel bisyllables spoken in isolation. After accounting for differences in stimulus audibility using the Speech Intelligibility Index, consonants spoken in sequence at a conversational syllabic rate were found to be more difficult to recognize than those produced in isolated bisyllables. Specifically, place- and manner-of-articulation information was transmitted better in isolated nonsense syllables than for multisyllabic phrases. The contribution of visual speech cues to place-of-articulation information was also lower for consonants spoken in sequence at a conversational syllabic rate. These data imply that auditory-visual benefit based on models of feature complementarity from isolated syllable productions may over-estimate real-world benefit of integrating auditory and visual speech cues.


Subjects
Speech Perception, Humans, Noise, Speech Intelligibility, Acoustics, Cues (Psychology), Phonetics
4.
J Acoust Soc Am ; 151(6): 3866, 2022 06.
Article in English | MEDLINE | ID: mdl-35778214

ABSTRACT

Although the behavioral pure-tone threshold audiogram is considered the gold standard for quantifying hearing loss, assessment of speech understanding, especially in noise, is more relevant to quality of life but is only partly related to the audiogram. Metrics of speech understanding in noise are therefore an attractive target for assessing hearing over time. However, speech-in-noise assessments have more potential sources of variability than pure-tone threshold measures, making it a challenge to obtain results reliable enough to detect small changes in performance. This review examines the benefits and limitations of speech-understanding metrics and their application to longitudinal hearing assessment, and identifies potential sources of variability, including learning effects, differences in item difficulty, and between- and within-individual variations in effort and motivation. We conclude by recommending the integration of non-speech auditory tests, which provide information about aspects of auditory health that have reduced variability and fewer central influences than speech tests, in parallel with the traditional audiogram and speech-based assessments.


Subjects
Hearing Tests, Quality of Life, Auditory Threshold, Hearing, Noise/adverse effects
5.
Ear Hear ; 42(6): 1615-1626, 2021.
Article in English | MEDLINE | ID: mdl-34108398

ABSTRACT

OBJECTIVES: Over the past decade, U.S. Department of Defense and Veterans Affairs audiologists have reported large numbers of relatively young adult patients who have normal to near-normal audiometric thresholds but who report difficulty understanding speech in noisy environments. Many of these service members also reported having experienced exposure to explosive blasts as part of their military service. Recent studies suggest that some blast-exposed patients with normal to near-normal-hearing thresholds not only have an awareness of increased hearing difficulties, but also poor performance on various auditory tasks (sound source localization, speech recognition in noise, binaural integration, gap detection in noise, etc.). The purpose of this study was to determine the prevalence of functional hearing and communication deficits (FHCD) among healthy Active-Duty service men and women with normal to near-normal audiometric thresholds. DESIGN: To estimate the prevalence of such FHCD in the overall military population, performance of roughly 3400 Active-Duty service members with hearing thresholds mostly within the normal range were measured on 4 hearing tests and a brief 6-question survey to assess FHCD. Subjects were subdivided into 6 groups depending on the severity of the blast exposure (3 levels: none, far away, or close enough to feel heat or pressure) and hearing thresholds (2 levels: audiometric thresholds of 20 dB HL or better, slight elevation in 1 or more thresholds between 500 and 4000 Hz in either ear). RESULTS: While the probability of having hearing difficulty was low (≈4.2%) for the overall population tested, that probability increased by 2 to 3 times if the service member was blast-exposed from a close distance or had slightly elevated hearing thresholds (>20 dB HL). Service members having both blast exposure and mildly elevated hearing thresholds exhibited up to 4 times higher risk for performing abnormally on auditory tasks and more than 5 times higher risk for reporting abnormally low ratings on the subjective questionnaire, compared with service members with no history of blast exposure and audiometric thresholds ≤20 dB HL. Blast-exposed listeners were roughly 2.5 times more likely to experience subjective or objective hearing deficits than those with no-blast history. CONCLUSIONS: These elevated rates of abnormal performance suggest that roughly 33.6% of Active-Duty service members (or approximately 423,000) with normal to near-normal-hearing thresholds (i.e., H1 profile) are at some risk for FHCD, and about 5.7% (approximately 72,000) are at high risk, but are currently untested and undetected within the current fitness-for-duty standards. Service members identified as "at risk" for FHCD according to the metrics used in the present study, in spite of their excellent hearing thresholds, require further testing to determine whether they have sustained damage to peripheral and early-stage auditory processing (bottom-up processing), damage to cognitive processes for speech (top-down processing), or both. Understanding the extent of damage due to noise and blast exposures and the balance between bottom-up processing deficits and top-down deficits will likely lead to better therapeutic strategies.
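
The risk multipliers reported above follow the usual relative-risk logic: the proportion failing in an exposed subgroup divided by the proportion failing in a reference group. A minimal sketch with hypothetical counts (not the study's data or analysis code):

```python
# Minimal sketch: relative risk of failing an auditory screener for an exposed
# subgroup vs. an unexposed reference group, computed from simple counts.
def relative_risk(exposed_fail, exposed_total, ref_fail, ref_total):
    """Risk ratio = P(fail | exposed) / P(fail | reference)."""
    return (exposed_fail / exposed_total) / (ref_fail / ref_total)

# Hypothetical counts, loosely shaped like the ~4.2% baseline mentioned above
print(relative_risk(exposed_fail=25, exposed_total=250,   # 10% in exposed group
                    ref_fail=42, ref_total=1000))          # 4.2% in reference group
```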


Subjects
Hearing Loss, Speech Perception, Auditory Threshold, Female, Hearing, Hearing Tests, Humans, Male, Prevalence, Young Adult
6.
J Acoust Soc Am ; 147(5): 3712, 2020 05.
Article in English | MEDLINE | ID: mdl-32486805

ABSTRACT

The relative importance of individual frequency regions for speech intelligibility has been firmly established for broadband auditory-only (AO) conditions. Yet, speech communication often takes place face-to-face. This study tested the hypothesis that under auditory-visual (AV) conditions, where visual information is redundant with high-frequency auditory cues, lower frequency regions will increase in relative importance compared to AO conditions. Frequency band-importance functions for consonants were measured for eight hearing-impaired and four normal-hearing listeners. Speech was filtered into four 1/3-octave bands each separated by an octave to minimize energetic masking. On each trial, the signal-to-noise ratio (SNR) in each band was selected randomly from a 10-dB range. AO and AV band-importance functions were estimated using three logistic-regression analyses: a primary model relating performance to the four independent SNRs; a control model that also included band-interaction terms; and a different set of four control models, each examining one band at a time. For both listener groups, the relative importance of the low-frequency bands increased under AV conditions, consistent with earlier studies using isolated speech bands. All three analyses showed similar results, indicating the absence of cross-band interactions. These results suggest that accurate prediction of AV speech intelligibility may require different frequency-importance functions than for AO conditions.
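
The primary analysis described above lends itself to a compact illustration: per-trial SNRs in the four bands predict trial correctness via logistic regression, and the normalized coefficients serve as relative band weights. The sketch below uses synthetic data and an assumed set of "true" weights purely for illustration; it is not the study's model or its coefficients.

```python
# Minimal sketch of a band-importance analysis via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials = 2000
band_snrs = rng.uniform(-5, 5, size=(n_trials, 4))        # 10-dB range per band
true_weights = np.array([0.5, 0.3, 0.15, 0.05])            # assumed importances
p_correct = 1 / (1 + np.exp(-(band_snrs @ true_weights)))  # synthetic listener
correct = rng.random(n_trials) < p_correct

model = LogisticRegression().fit(band_snrs, correct)
importance = model.coef_.ravel() / model.coef_.sum()        # relative importance
print(importance)
```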


Subjects
Speech Intelligibility, Speech Perception, Auditory Threshold, Cues (Psychology), Hearing, Recognition (Psychology)
7.
Ear Hear ; 41(4): 825-837, 2020.
Article in English | MEDLINE | ID: mdl-31569118

ABSTRACT

OBJECTIVES: The present study investigated presentation modality differences in lexical encoding and working memory representations of spoken words of older, hearing-impaired adults. Two experiments were undertaken: a memory-scanning experiment and a stimulus gating experiment. The primary objective of experiment 1 was to determine whether memory encoding and retrieval and scanning speeds are different for easily identifiable words presented in auditory-visual (AV), auditory-only (AO), and visual-only (VO) modalities. The primary objective of experiment 2 was to determine if memory encoding and retrieval speed differences observed in experiment 1 could be attributed to the early availability of AV speech information compared with AO or VO conditions. DESIGN: Twenty-six adults over age 60 years with bilateral mild to moderate sensorineural hearing loss participated in experiment 1, and 24 adults who took part in experiment 1 participated in experiment 2. An item recognition reaction-time paradigm (memory-scanning) was used in experiment 1 to measure (1) lexical encoding speed, that is, the speed at which an easily identifiable word was recognized and placed into working memory, and (2) retrieval speed, that is, the speed at which words were retrieved from memory and compared with similarly encoded words (memory scanning) presented in AV, AO, and VO modalities. Experiment 2 used a time-gated word identification task to test whether the time course of stimulus information available to participants predicted the modality-related memory encoding and retrieval speed results from experiment 1. RESULTS: The results of experiment 1 revealed significant differences among the modalities with respect to both memory encoding and retrieval speed, with AV fastest and VO slowest. These differences motivated an examination of the time course of stimulus information available as a function of modality. Results from experiment 2 indicated the encoding and retrieval speed advantages for AV and AO words compared with VO words were mostly driven by the time course of stimulus information. The AV advantage seen in encoding and retrieval speeds is likely due to a combination of robust stimulus information available to the listener earlier in time and lower attentional demands compared with AO or VO encoding and retrieval. CONCLUSIONS: Significant modality differences in lexical encoding and memory retrieval speeds were observed across modalities. The memory scanning speed advantage observed for AV compared with AO or VO modalities was strongly related to the time course of stimulus information. In contrast, lexical encoding and retrieval speeds for VO words could not be explained by the time-course of stimulus information alone. Working memory processes for the VO modality may be impacted by greater attentional demands and less information availability compared with the AV and AO modalities. Overall, these results support the hypothesis that the presentation modality for speech inputs (AV, AO, or VO) affects how older adult listeners with hearing loss encode, remember, and retrieve what they hear.


Subjects
Speech Perception, Aged, Deafness, Sensorineural Hearing Loss, Humans, Short-Term Memory, Mental Recall, Middle Aged
8.
J Acoust Soc Am ; 146(4): EL381, 2019 10.
Article in English | MEDLINE | ID: mdl-31671963

ABSTRACT

Effects of temporal distortions on consonant perception were measured using locally time-reversed nonsense syllables. Consonant recognition was measured in both audio and audio-visual modalities for assessing whether the addition of visual speech cues can recover consonant errors caused by time reversing. The degradation in consonant recognition depended highly on the manner of articulation, with sibilant fricatives, affricates, and nasals showing the least degradation. Because consonant errors induced by time reversing were primarily in voicing and place-of-articulation (mostly limited to stop-plosives and non-sibilant fricatives), undistorted visual speech cues could resolve only about half the errors (i.e., only place-of-articulation errors).
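
Local time reversal itself is a simple operation: the waveform is cut into short windows and each window is played backwards, preserving long-term structure while distorting fine temporal cues. The sketch below is an assumed implementation (segment duration and signals are illustrative), not the authors' stimulus-generation code.

```python
# Minimal sketch: locally time-reverse a signal in fixed-length segments.
import numpy as np

def locally_time_reverse(signal, fs, segment_ms=60):
    seg_len = int(fs * segment_ms / 1000)
    out = signal.copy()
    for start in range(0, len(signal), seg_len):
        out[start:start + seg_len] = signal[start:start + seg_len][::-1]
    return out

# Example with a synthetic 1-s signal at 16 kHz (placeholder for real speech)
fs = 16000
x = np.random.default_rng(1).standard_normal(fs)
y = locally_time_reverse(x, fs, segment_ms=60)
```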


Subjects
Speech Acoustics, Speech Perception, Visual Perception, Adult, Speech Audiometry, Cues (Psychology), Female, Humans, Male, Middle Aged, Phonetics, Recognition (Psychology), Computer-Assisted Signal Processing, Sound Spectrography
9.
Ear Hear ; 40(2): 426-436, 2019.
Article in English | MEDLINE | ID: mdl-30134353

ABSTRACT

OBJECTIVE: The clinical evaluation of hearing loss, using a pure-tone audiogram, is not adequate to assess the functional hearing capabilities (or handicap) of a patient, especially the speech-in-noise communication difficulties. The primary objective of this study was to measure the effect of elevated hearing thresholds on the recognition performance in various functional speech-in-noise tests that cover acoustic scenes of different complexities and to identify the subset of tests that (a) were sensitive to individual differences in hearing thresholds and (b) provide complementary information to the audiogram. A secondary goal was to compare the performance on this test battery with the self-assessed performance level of functional hearing abilities. DESIGN: In this study, speech-in-noise performance of normal-hearing listeners and listeners with hearing loss (audiometric configuration ranging from near-normal hearing to moderate-severe hearing loss) was measured on a battery of 12 different tests designed to evaluate speech recognition in a variety of speech and masker conditions, and listening tasks. The listening conditions were designed to measure the ability to localize and monitor multiple speakers or to take advantage of masker modulation, spatial separation between the target and the masker, and a restricted vocabulary. RESULTS: Listeners with hearing loss had significantly worse performance than the normal-hearing control group when speech was presented in the presence of a multitalker babble or in the presence of a single competing talker. In particular, the ability to take advantage of modulation benefit and spatial release from masking was significantly affected even with a mild audiometric loss. Elevated thresholds did not have a significant effect on the performance in the spatial awareness task. A composite score of all 12 tests was considered as a global metric of the overall speech-in-noise performance. Perceived hearing difficulties of subjects were better correlated with the composite score than with the performance on a standardized clinical speech-in-noise test. Regression analysis showed that scores from a subset of these tests, which could potentially take less than 10 min to administer, when combined with the better-ear pure-tone average and the subject's age, accounted for as much as 93.2% of the variance in the composite score. CONCLUSIONS: A test that measures speech recognition in the presence of a spatially separated competing talker would be useful in measuring suprathreshold speech-in-noise deficits that cannot be readily predicted from standard audiometric evaluation. Including such a test can likely reduce the gap between patient complaints and their clinical evaluation.


Subjects
Attention, Hearing Loss/physiopathology, Noise, Spatial Behavior, Speech Perception, Acoustic Stimulation, Adolescent, Adult, Female, Hearing Tests, Humans, Male, Middle Aged, Signal-to-Noise Ratio, Speech Reception Threshold Test, Young Adult
10.
Ear Hear ; 39(3): 449-456, 2018.
Article in English | MEDLINE | ID: mdl-29570117

ABSTRACT

OBJECTIVE: To evaluate the speech-in-noise performance of listeners with different levels of hearing loss in a variety of complex listening environments. DESIGN: The quick speech-in-noise (QuickSIN)-based test battery was used to measure the speech recognition performance of listeners with different levels of hearing loss. Subjective estimates of speech reception thresholds (SRTs) corresponding to 100% and 0% speech intelligibility, respectively, were obtained using a method of adjustment before objective measurement of the actual SRT corresponding to 50% speech intelligibility in every listening condition. RESULTS: Of the seven alternative listening conditions, two conditions, one involving time-compressed, reverberant speech (TC+Rev), and the other (N0Sπ) having in-phase noise masker (N0) and out-of-phase target (Sπ), were found to be substantially more sensitive to the effect of hearing loss than the standard QuickSIN test. The performance in these two conditions also correlated with self-reported difficulties in attention/concentration during speech communication and in localizing the sound source, respectively. Hearing thresholds could account for about 50% or less variance in SRTs in any listening condition. Subjectively estimated SRTs (SRTs corresponding to 0% and 100% speech intelligibility) were highly correlated with the objective SRT measurements (SRT corresponding to 50% speech intelligibility). CONCLUSIONS: A test battery that includes the TC+Rev and the N0Sπ conditions would be useful in identifying individuals with hearing loss with speech-in-noise deficits in everyday communication.
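
An SRT corresponding to 50% intelligibility is commonly read off a psychometric function fit to scores measured at several SNRs. The sketch below shows that general idea with a logistic fit and hypothetical data points; it is an illustration of the concept, not the QuickSIN scoring procedure or the method of adjustment used in the study.

```python
# Minimal sketch: fit a logistic psychometric function and solve for the 50% point.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, srt, slope):
    return 1.0 / (1.0 + np.exp(-slope * (snr - srt)))

snrs = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
prop_correct = np.array([0.05, 0.20, 0.55, 0.85, 0.98])    # hypothetical scores

(srt, slope), _ = curve_fit(psychometric, snrs, prop_correct, p0=[0.0, 0.5])
print(f"Estimated SRT (50% point): {srt:.1f} dB SNR")
```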


Subjects
Sensorineural Hearing Loss/diagnosis, Hearing Tests/methods, Speech Perception, Acoustic Stimulation, Adult, Audiometry, Auditory Threshold, Evaluation Studies as Topic, Humans, Male, Middle Aged, Noise, Perceptual Masking, Young Adult
11.
Hear Res ; 349: 90-97, 2017 06.
Article in English | MEDLINE | ID: mdl-28111321

ABSTRACT

Since 1992, the Speech Recognition in Noise Test, or SPRINT, has been the standard speech-in-noise test for assessing auditory fitness-for-duty of US Army Soldiers with hearing loss. The original SPRINT test consisted of 200 monosyllabic words presented at a Signal-to-Noise Ratio (SNR) of +9 dB in the presence of a six-talker babble noise. Normative data for the test was collected on 319 hearing impaired Soldiers, and a procedure for making recommendations about the disposition of military personnel on the basis of their SPRINT score and their years of experience was developed and implemented as part of US Army policy. In 2013, a new 100-word version of the test was developed that eliminated words that were either too easy or too hard to make meaningful distinctions among hearing impaired listeners. This paper describes the development of the original 200-word SPRINT test, along with a description of the procedure used to reduce the 200-word test to 100 words and the results of a validation study conducted to evaluate how well the shortened 100-word test is able to capture the results from the full 200-word version of the SPRINT.
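
The item-screening idea behind the shortened list can be illustrated simply: words answered correctly by nearly all or nearly no listeners carry little information about individual differences, so items are kept only when their group-level proportion correct falls in a middle range. The cutoffs and example words below are assumptions for illustration, not the procedure actually used to shorten the SPRINT.

```python
# Minimal sketch: keep items whose proportion correct is neither floor nor ceiling.
def select_items(prop_correct_by_word, low=0.20, high=0.90):
    """Return words whose group-level proportion correct lies in [low, high]."""
    return [w for w, p in prop_correct_by_word.items() if low <= p <= high]

example = {"wheat": 0.98, "ditch": 0.72, "mob": 0.55, "vine": 0.12}  # hypothetical
print(select_items(example))   # -> ['ditch', 'mob']
```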


Subjects
Noise-Induced Hearing Loss/diagnosis, Military Medicine, Military Personnel/psychology, Occupational Noise/adverse effects, Occupational Diseases/diagnosis, Occupational Exposure/adverse effects, Perceptual Masking, Speech Perception, Speech Reception Threshold Test/methods, Acoustic Stimulation, Auditory Threshold, Hearing, Noise-Induced Hearing Loss/etiology, Noise-Induced Hearing Loss/physiopathology, Noise-Induced Hearing Loss/psychology, Humans, Occupational Diseases/etiology, Occupational Diseases/physiopathology, Occupational Diseases/psychology, Predictive Value of Tests, Reproducibility of Results, Retrospective Studies, Veterans/psychology, Work Capacity Evaluation
12.
J Acoust Soc Am ; 136(2): 859-66, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25096119

ABSTRACT

This study compared modulation benefit for phoneme recognition obtained by normal-hearing (NH) and aided hearing-impaired (HI) listeners. Consonant and vowel recognition scores were measured using nonsense syllables in the presence of a steady-state noise and four vocoded speech maskers. Vocoded maskers were generated by modulating the steady-state noise, in either one or six frequency channels, with the speech envelope extracted from the speech of either a single talker or a four-talker babble. Aided HI listeners obtained lower consonant recognition scores than NH listeners in all masker conditions. Vowel recognition scores for aided HI listeners were comparable to NH scores, except in the six-channel vocoded masker conditions where they were relatively lower. Analysis using the extended speech intelligibility index developed by Rhebergen, Versfeld, and Dreschler [(2006). J. Acoust. Soc. Am. 120(6), 3988-3997] suggested that the signal-to-noise ratio deficit observed in aided HI listeners was largely due to uncompensated audibility loss. There was no significant difference between modulation masking release obtained by NH and aided HI listeners for both consonant and vowel recognition.
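
The masker construction described above, modulating steady-state noise with the speech envelope in one or several frequency channels, can be sketched as follows. Filter design, channel spacing, and the placeholder signals are assumptions for illustration, not the authors' stimulus code.

```python
# Minimal sketch: build a "vocoded" masker by modulating band-matched noise
# with the per-channel speech envelope and summing the channels.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocoded_masker(speech, noise, fs, n_channels=6, f_lo=100, f_hi=7000):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)        # log-spaced channels
    masker = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, speech)))      # channel envelope
        masker += env * sosfiltfilt(sos, noise)              # modulate band noise
    return masker / np.max(np.abs(masker))                   # normalize level

fs = 16000
rng = np.random.default_rng(2)
speech = rng.standard_normal(fs)        # placeholder for a real speech waveform
noise = rng.standard_normal(fs)
masker = vocoded_masker(speech, noise, fs, n_channels=6)
```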


Subjects
Hearing Aids, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/rehabilitation, Phonetics, Recognition (Psychology), Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Adult, Aged, Speech Audiometry, Auditory Threshold, Case-Control Studies, Comprehension, Female, Humans, Male, Middle Aged, Persons With Hearing Impairments/psychology, Sound Spectrography, Speech Intelligibility
15.
J Am Acad Audiol ; 24(4): 258-73; quiz 337-8, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23636208

ABSTRACT

BACKGROUND: Traditional audiometric measures, such as pure-tone thresholds or unaided word-recognition in quiet, appear to be of marginal use in predicting speech understanding by hearing-impaired (HI) individuals in background noise with or without amplification. Suprathreshold measures of auditory function (tolerance of noise, temporal and frequency resolution) appear to contribute more to success with amplification and may describe more effectively the distortion component of hearing. However, these measures are not typically measured clinically. When combined with measures of audibility, suprathreshold measures of auditory distortion may provide a much more complete understanding of speech deficits in noise by HI individuals. PURPOSE: The primary goal of this study was to investigate the relationship among measures of speech recognition in noise, frequency selectivity, temporal acuity, modulation masking release, and informational masking in adult and elderly patients with sensorineural hearing loss to determine whether peripheral distortion for suprathreshold sounds contributes to the varied outcomes experienced by patients with sensorineural hearing loss listening to speech in noise. RESEARCH DESIGN: A correlational study. STUDY SAMPLE: Twenty-seven patients with sensorineural hearing loss and four adults with normal hearing were enrolled in the study. DATA COLLECTION AND ANALYSIS: The data were collected in a sound attenuated test booth. For speech testing, subjects' verbal responses were scored by the experimenter and entered into a custom computer program. For frequency selectivity and temporal acuity measures, subject responses were recorded via a touch screen. Simple correlation, step-wise multiple linear regression analyses and a repeated analysis of variance were performed. RESULTS: Results showed that the signal-to-noise ratio (SNR) loss could only be partially predicted by a listener's thresholds or audibility measures such as the Speech Intelligibility Index (SII). Correlations between SII and SNR loss were higher using the Hearing-in-Noise Test (HINT) than the Quick Speech-in-Noise test (QSIN) with the SII accounting for 71% of the variance in SNR loss for the HINT but only 49% for the QSIN. However, listener age and the addition of suprathreshold measures improved the prediction of SNR loss using the QSIN, accounting for nearly 71% of the variance. CONCLUSIONS: Two standard clinical speech-in-noise tests, QSIN and HINT, were used in this study to obtain a measure of SNR loss. When administered clinically, the QSIN appears to be less redundant with hearing thresholds than the HINT and is a better indicator of a patient's suprathreshold deficit and its impact on understanding speech in noise. Additional factors related to aging, spectral resolution, and, to a lesser extent, temporal resolution improved the ability to predict SNR loss measured with the QSIN. For the HINT, a listener's audibility and age were the only two significant factors. For both QSIN and HINT, roughly 25-30% of the variance in individual differences in SNR loss (i.e., the dB difference in SNR between an individual HI listener and a control group of NH listeners at a specified performance level, usually 50% word or sentence recognition) remained unexplained, suggesting the need for additional measures of suprathreshold acuity (e.g., sensitivity to temporal fine structure) or cognitive function (e.g., memory and attention) to further improve the ability to understand individual variability in SNR loss.


Subjects
Auditory Threshold/physiology, Sensorineural Hearing Loss/diagnosis, Sensorineural Hearing Loss/physiopathology, Perceptual Distortion/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Aged, Aged 80 and over, Continuing Medical Education, Female, Humans, Linear Models, Male, Middle Aged, Biological Models, Noise, Persons With Hearing Impairments, Signal-to-Noise Ratio
16.
J Am Acad Audiol ; 24(4): 307-28, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23636211

ABSTRACT

BACKGROUND: Hearing-impaired (HI) individuals with similar ages and audiograms often demonstrate substantial differences in speech-reception performance in noise. Traditional models of speech intelligibility focus primarily on average performance for a given audiogram, failing to account for differences between listeners with similar audiograms. Improved prediction accuracy might be achieved by simulating differences in the distortion that speech may undergo when processed through an impaired ear. Although some attempts to model particular suprathreshold distortions can explain general speech-reception deficits not accounted for by audibility limitations, little has been done to model suprathreshold distortion and predict speech-reception performance for individual HI listeners. Auditory-processing models incorporating individualized measures of auditory distortion, along with audiometric thresholds, could provide a more complete understanding of speech-reception deficits by HI individuals. A computational model capable of predicting individual differences in speech-recognition performance would be a valuable tool in the development and evaluation of hearing-aid signal-processing algorithms for enhancing speech intelligibility. PURPOSE: This study investigated whether biologically inspired models simulating peripheral auditory processing for individual HI listeners produce more accurate predictions of speech-recognition performance than audiogram-based models. RESEARCH DESIGN: Psychophysical data on spectral and temporal acuity were incorporated into individualized auditory-processing models consisting of three stages: a peripheral stage, customized to reflect individual audiograms and spectral and temporal acuity; a cortical stage, which extracts spectral and temporal modulations relevant to speech; and an evaluation stage, which predicts speech-recognition performance by comparing the modulation content of clean and noisy speech. To investigate the impact of different aspects of peripheral processing on speech predictions, individualized details (absolute thresholds, frequency selectivity, spectrotemporal modulation [STM] sensitivity, compression) were incorporated progressively, culminating in a model simulating level-dependent spectral resolution and dynamic-range compression. STUDY SAMPLE: Psychophysical and speech-reception data from 11 HI and six normal-hearing listeners were used to develop the models. DATA COLLECTION AND ANALYSIS: Eleven individualized HI models were constructed and validated against psychophysical measures of threshold, frequency resolution, compression, and STM sensitivity. Speech-intelligibility predictions were compared with measured performance in stationary speech-shaped noise at signal-to-noise ratios (SNRs) of -6, -3, 0, and 3 dB. Prediction accuracy for the individualized HI models was compared to the traditional audibility-based Speech Intelligibility Index (SII). RESULTS: Models incorporating individualized measures of STM sensitivity yielded significantly more accurate within-SNR predictions than the SII. Additional individualized characteristics (frequency selectivity, compression) improved the predictions only marginally. A nonlinear model including individualized level-dependent cochlear-filter bandwidths, dynamic-range compression, and STM sensitivity predicted performance more accurately than the SII but was no more accurate than a simpler linear model. Predictions of speech-recognition performance simultaneously across SNRs and individuals were also significantly better for some of the auditory-processing models than for the SII. CONCLUSIONS: A computational model simulating individualized suprathreshold auditory-processing abilities produced more accurate speech-intelligibility predictions than the audibility-based SII. Most of this advantage was realized by a linear model incorporating audiometric and STM-sensitivity information. Although more consistent with known physiological aspects of auditory processing, modeling level-dependent changes in frequency selectivity and gain did not result in more accurate predictions of speech-reception performance.


Subjects
Auditory Threshold/physiology, Sensorineural Hearing Loss/physiopathology, Hearing/physiology, Biological Models, Perceptual Distortion/physiology, Speech Perception/physiology, Algorithms, Audiometry, Auditory Cortex/physiology, Cochlea/physiology, Cognition/physiology, Female, Sensorineural Hearing Loss/diagnosis, Humans, Linear Models, Male, Noise, Psychoacoustics, Speech Discrimination Tests
18.
J Acoust Soc Am ; 132(3): 1646-54, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22978893

ABSTRACT

This study measured the influence of masker fluctuations on phoneme recognition. The first part of the study compared the benefit of masker modulations for consonant and vowel recognition in normal-hearing (NH) listeners. Recognition scores were measured in steady-state and sinusoidally amplitude-modulated noise maskers (100% modulation depth) at several modulation rates and signal-to-noise ratios. Masker modulation rates were 4, 8, 16, and 32 Hz for the consonant recognition task and 2, 4, 12, and 32 Hz for the vowel recognition task. Vowel recognition scores showed more modulation benefit and a more pronounced effect of masker modulation rate than consonant scores. The modulation benefit for word recognition from other studies was found to be more similar to the benefit for vowel recognition than that for consonant recognition. The second part of the study measured the effect of modulation rate on the benefit of masker modulations for vowel recognition in aided hearing-impaired (HI) listeners. HI listeners achieved as much modulation benefit as NH listeners for slower masker modulation rates (2, 4, and 12 Hz), but showed a reduced benefit for the fast masker modulation rate of 32 Hz.
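
A sinusoidally amplitude-modulated (SAM) noise masker of the kind described above is straightforward to generate: a noise carrier multiplied by a raised sinusoid at the chosen modulation rate and depth. The parameter values below are examples, not the exact stimuli used in the study.

```python
# Minimal sketch: generate SAM noise (100% modulation depth by default).
import numpy as np

def sam_noise(duration_s, fs, mod_rate_hz, mod_depth=1.0, seed=0):
    t = np.arange(int(duration_s * fs)) / fs
    carrier = np.random.default_rng(seed).standard_normal(len(t))
    modulator = 1.0 + mod_depth * np.sin(2 * np.pi * mod_rate_hz * t)
    return carrier * modulator

masker_8hz = sam_noise(duration_s=1.0, fs=16000, mod_rate_hz=8)   # 8-Hz SAM noise
```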


Subjects
Correction of Hearing Impairment, Hearing Aids, Bilateral Hearing Loss/rehabilitation, Perceptual Masking, Persons With Hearing Impairments/rehabilitation, Phonetics, Recognition (Psychology), Speech Acoustics, Speech Perception, Acoustic Stimulation, Adult, Aged, Aged 80 and over, Speech Audiometry, Auditory Threshold, Case-Control Studies, Correction of Hearing Impairment/psychology, Female, Bilateral Hearing Loss/psychology, Humans, Male, Middle Aged, Noise/adverse effects, Persons With Hearing Impairments/psychology, Sound Spectrography, Time Factors
19.
J Acoust Soc Am ; 125(5): 3358-72, 2009 May.
Article in English | MEDLINE | ID: mdl-19425676

ABSTRACT

Speech intelligibility for audio-alone and audiovisual (AV) sentences was estimated as a function of signal-to-noise ratio (SNR) for a female target talker presented in a stationary noise, an interfering male talker, or a speech-modulated noise background, for eight hearing-impaired (HI) and five normal-hearing (NH) listeners. At the 50% keywords-correct performance level, HI listeners showed 7-12 dB less fluctuating-masker benefit (FMB) than NH listeners, consistent with previous results. Both groups showed significantly more FMB under AV than audio-alone conditions. When compared at the same stationary-noise SNR, FMB differences between listener groups and modalities were substantially smaller, suggesting that most of the FMB differences at the 50% performance level may reflect a SNR dependence of the FMB. Still, 1-5 dB of the FMB difference between listener groups remained, indicating a possible role for reduced audibility, limited spectral or temporal resolution, or an inability to use auditory source-segregation cues, in directly limiting the ability to listen in the dips of a fluctuating masker. A modified version of the extended speech-intelligibility index that predicts a larger FMB at less favorable SNRs accounted for most of the FMB differences between listener groups and modalities. Overall, these data suggest that HI listeners retain more of an ability to listen in the dips of a fluctuating masker than previously thought. Instead, the fluctuating-masker difficulties exhibited by HI listeners may derive from the reduced FMB associated with the more favorable SNRs they require to identify a reasonable proportion of the target speech.


Subjects
Sensorineural Hearing Loss/psychology, Perceptual Masking, Speech Intelligibility, Adult, Aged, Aged 80 and over, Auditory Threshold, Female, Humans, Male, Middle Aged, Theoretical Models, Noise, Photic Stimulation, Psychoacoustics, Sound Spectrography, Speech, Task Performance and Analysis
20.
J Am Acad Audiol ; 20(10): 607-20, 2009.
Article in English | MEDLINE | ID: mdl-20503799

ABSTRACT

BACKGROUND: Although the benefits of amplification for persons with impaired hearing are well established, many potential candidates do not obtain and use hearing aids. In some cases, this is because the individual is not convinced that amplification will be of sufficient benefit in those everyday listening situations where he or she is experiencing difficulties. PURPOSE: To describe the development of a naturalistic approach to assessing hearing aid candidacy and motivating hearing aid use based on patient preferences for unamplified and amplified sound samples typical of those encountered in everyday living and to assess the validity of these preference ratings to predict hearing aid candidacy. RESEARCH DESIGN: Prospective experimental study comparing preference ratings for unamplified and amplified sound samples of patients with a clinical recommendation for hearing aid use and patients for whom amplification was not prescribed. STUDY SAMPLE: Forty-eight adults self-referred to the Army Audiology and Speech Center for a hearing evaluation. DATA COLLECTION AND ANALYSIS: Unamplified and amplified sound samples were presented to potential hearing aid candidates using a three-alternative forced-choice paradigm. Participants were free to switch at will among the three processing options (no gain, mild gain, moderate gain) until the preferred option was determined. Following this task, each participant was seen for a diagnostic hearing evaluation by one of eight staff audiologists with no knowledge of the preference data. Patient preferences for the three processing options were used to predict the attending audiologists' recommendations for amplification based on traditional audiometric measures. RESULTS: Hearing aid candidacy was predicted with moderate accuracy from the patients' preferences for amplified sounds typical of those encountered in everyday living, although the predictive validity of the various sound samples varied widely. CONCLUSIONS: Preferences for amplified sounds were generally predictive of hearing aid candidacy. However, the predictive validity of the preference ratings was not sufficient to replace traditional clinical determinations of hearing aid candidacy in individual patients. Because the sound samples are common to patients' everyday listening experiences, they provide a quick and intuitive method of demonstrating the potential benefit of amplification to patients who might otherwise not accept a prescription for hearing aids.


Subjects
Directive Counseling/methods, Hearing Aids, Hearing Loss/psychology, Hearing Loss/rehabilitation, Motivation, Patient Acceptance of Health Care, Activities of Daily Living, Adult, Child, Choice Behavior, Female, Humans, Male, Patient Selection, Prospective Studies, Reproducibility of Results, Task Performance and Analysis