ABSTRACT
Cochlear implants (CIs) can restore hearing function in profoundly deaf individuals. Because CI signal processing degrades the stimulus, implanted individuals with single-sided deafness (SSD) face the specific challenge that the input differs strongly between their two ears. The present study compared normal-hearing (NH) listeners (N = 10) with left- and right-ear implanted SSD CI users (N = 10 left, N = 9 right) to evaluate cortical speech processing between CI and NH ears and to explore side-of-implantation effects. The participants performed a two-deviant oddball task, separately with the left and the right ear. Auditory event-related potentials (ERPs) in response to syllables were compared between proficient and non-proficient CI users, as well as between CI and NH ears. The effect of the side of implantation was analysed at the sensor and the source level. CI proficiency could be distinguished on the basis of the ERP amplitudes of the N1 and the P3b. Moreover, syllable processing via the CI ear, compared to the NH ear, resulted in attenuated and delayed ERPs. In addition, the left-ear implanted SSD CI users showed a stronger functional asymmetry in the auditory cortex than right-ear implanted SSD CI users, regardless of whether the syllables were perceived via the CI or the NH ear. Our findings indicate that speech-discrimination proficiency in SSD CI users can be assessed by means of the N1 and P3b ERPs. The results contribute to a better understanding of rehabilitation success in SSD CI users by showing that cortical speech processing in SSD CI users is affected by CI-related stimulus degradation and by experience-related functional changes in the auditory cortex.
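As an illustration of the kind of ERP component analysis described above, the following Python sketch extracts N1 and P3b peak amplitudes and latencies from baseline-corrected single-electrode epochs of an oddball task. The sampling rate, epoch layout, analysis windows and the synthetic data are assumptions chosen for illustration; this is not the study's actual pipeline.

```python
# Minimal sketch (assumed data layout, not the authors' pipeline): extract N1 and
# P3b peak amplitudes/latencies from epoched EEG recorded in an oddball paradigm.
import numpy as np

FS = 500.0          # sampling rate in Hz (assumption)
T0 = 0.2            # pre-stimulus baseline per epoch, in seconds (assumption)

def peak_in_window(erp, fs, t_pre, t_min, t_max, polarity):
    """Return (amplitude, latency_s) of the signed peak within [t_min, t_max] post-stimulus."""
    i0 = int((t_pre + t_min) * fs)
    i1 = int((t_pre + t_max) * fs)
    window = erp[i0:i1]
    idx = np.argmin(window) if polarity == "neg" else np.argmax(window)
    return window[idx], (i0 + idx) / fs - t_pre

def erp_components(epochs):
    """epochs: array (n_trials, n_samples) for one electrode (e.g. Cz), baseline-corrected."""
    erp = epochs.mean(axis=0)                                          # average ERP
    n1_amp, n1_lat = peak_in_window(erp, FS, T0, 0.08, 0.15, "neg")    # N1: ~80-150 ms
    p3_amp, p3_lat = peak_in_window(erp, FS, T0, 0.30, 0.60, "pos")    # P3b: ~300-600 ms
    return {"N1": (n1_amp, n1_lat), "P3b": (p3_amp, p3_lat)}

# Example with synthetic data: deviant-syllable epochs, CI ear vs. NH ear (hypothetical).
rng = np.random.default_rng(0)
for ear in ("CI", "NH"):
    fake_epochs = rng.normal(0, 1e-6, size=(60, int(1.0 * FS)))  # 60 trials, 1 s epochs
    print(ear, erp_components(fake_epochs))
```

In a group comparison, the resulting N1 and P3b amplitudes and latencies per participant could then be entered into the proficiency and ear contrasts reported above.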
Subject(s)
Auditory Cortex, Cochlear Implantation, Cochlear Implants, Deafness, Hearing Loss, Unilateral, Speech Perception, Cochlear Implantation/methods, Humans, Speech Perception/physiology
ABSTRACT
OBJECTIVE: Hearing with a cochlear implant (CI) is difficult in noisy environments, but noise reduction algorithms, specifically ForwardFocus, can improve speech intelligibility. The current event-related potential (ERP) study examined the electrophysiological correlates of this perceptual improvement. METHODS: Ten bimodal CI users performed a syllable-identification task in auditory and audiovisual conditions, with syllables presented from the front and stationary noise presented from the sides. Brainstorm was used for the spatio-temporal evaluation of ERPs. RESULTS: CI users showed an audiovisual benefit, as reflected by shorter response times and greater activation in temporal and occipital regions at P2 latency. However, in both auditory and audiovisual conditions, background noise hampered speech processing, leading to longer response times and delayed auditory cortex activation at N1 latency. Nevertheless, activating ForwardFocus resulted in shorter response times, reduced listening effort and enhanced superior frontal cortex activation at P2 latency, particularly in audiovisual conditions. CONCLUSIONS: ForwardFocus enhances speech intelligibility in audiovisual speech conditions, potentially by allowing the reallocation of attentional resources to relevant auditory speech cues. SIGNIFICANCE: This study shows for CI users that background noise and ForwardFocus differentially affect spatio-temporal cortical response patterns, in both auditory and audiovisual speech conditions.
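As a minimal illustration of how the reported response time benefit of ForwardFocus might be quantified behaviourally, the sketch below runs a paired comparison of per-subject median response times with the algorithm off versus on. The condition labels, effect size and data are hypothetical, and this is behavioural scoring only, not the Brainstorm-based ERP source analysis used in the study.

```python
# Illustrative sketch (hypothetical data): paired comparison of per-subject median
# response times with ForwardFocus off vs. on in the audiovisual condition.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 10

# Hypothetical per-subject median RTs in seconds.
rt_ff_off = rng.normal(1.10, 0.12, n_subjects)
rt_ff_on = rt_ff_off - rng.normal(0.08, 0.03, n_subjects)   # assumed RT benefit

t, p = stats.ttest_rel(rt_ff_off, rt_ff_on)                  # paired t-test
w, p_w = stats.wilcoxon(rt_ff_off, rt_ff_on)                 # non-parametric check

print(f"mean RT benefit: {np.mean(rt_ff_off - rt_ff_on) * 1000:.0f} ms")
print(f"paired t-test:   t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
print(f"Wilcoxon:        W = {w:.1f}, p = {p_w:.3f}")
```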
Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Humans, Speech Perception/physiology, Evoked Potentials, Noise/adverse effects
ABSTRACT
The outcome of cochlear implantation is typically assessed with speech recognition tests in quiet and in noise. Many cochlear implant (CI) recipients achieve satisfactory speech recognition, especially in quiet situations. However, since cochlear implants provide only limited spectro-temporal cues, the effort associated with understanding speech may be increased. In this respect, measures of listening effort could provide important additional information about the outcome of cochlear implantation. To shed light on this topic and to gain knowledge for clinical application, we compared speech recognition and listening effort in CI recipients and age-matched normal-hearing (NH) listeners while considering potentially influential factors such as cognitive abilities. Importantly, we estimated speech recognition functions for both listener groups and compared listening effort at similar performance levels. To this end, a subjective listening effort test (adaptive scaling, "ACALES") and an objective test (dual-task paradigm) were applied and compared. Regarding speech recognition, CI users needed an approximately 4 dB better signal-to-noise ratio (SNR) to reach the same 50% performance level as NH listeners, and an even 5 dB better SNR to reach 80% speech recognition, revealing shallower psychometric functions in the CI listeners. However, when targeting a fixed speech intelligibility of 50% or 80%, CI users and NH listeners did not differ significantly in terms of listening effort. This applied to both the subjective and the objective estimates. Subjective and objective listening effort outcomes were not correlated with each other, nor with the age or cognitive abilities of the listeners. This study provided no evidence that CI users and NH listeners differ in terms of listening effort, at least when the same performance level is considered. In contrast, both listener groups showed large inter-individual differences in effort as determined with the subjective scaling and the objective dual-task. Potential clinical implications of how to assess listening effort as an outcome measure for hearing rehabilitation are discussed.
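The speech recognition functions mentioned above can be summarised by fitting a psychometric function to proportion-correct scores as a function of SNR and reading off the SNR required for 50% and 80% intelligibility (SRT50, SRT80). The sketch below assumes a logistic form and uses hypothetical data; it is not the study's actual fitting routine.

```python
# Sketch (assumed logistic form, hypothetical data): fit psychometric functions to
# recognition scores vs. SNR and derive the SNRs for 50% and 80% intelligibility.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, srt50, slope):
    """Logistic function: proportion correct as a function of SNR in dB."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - srt50)))

def srt_at(target, srt50, slope):
    """Invert the logistic to get the SNR needed for a target proportion correct."""
    return srt50 + np.log(target / (1.0 - target)) / slope

# Hypothetical scores: tested SNRs (dB) and proportion of words recognised correctly.
snr = np.array([-8, -6, -4, -2, 0, 2, 4], dtype=float)
scores_nh = np.array([0.05, 0.15, 0.45, 0.75, 0.92, 0.98, 1.00])
scores_ci = np.array([0.02, 0.08, 0.20, 0.45, 0.68, 0.85, 0.95])

for label, scores in (("NH", scores_nh), ("CI", scores_ci)):
    (srt50, slope), _ = curve_fit(psychometric, snr, scores, p0=(-2.0, 1.0))
    print(f"{label}: SRT50 = {srt50:+.1f} dB SNR, "
          f"SRT80 = {srt_at(0.8, srt50, slope):+.1f} dB SNR, slope = {slope:.2f}")
```

Comparing the fitted SRT50 and SRT80 values between groups yields the kind of dB differences reported above, while the slope parameter captures how shallow the CI users' psychometric functions are.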
ABSTRACT
This study investigates the influence of temporal regularity on human listeners' ability to detect a repeating noise pattern embedded in statistically identical non-repeating noise. Human listeners were presented with white noise stimuli that either contained a frozen segment of noise that repeated in a temporally regular or irregular manner, or did not contain any repetition at all. Subjects were instructed to respond as soon as they detected any repetition in the stimulus. Pattern detection performance was best when repeated targets occurred in a temporally regular manner, suggesting that temporal regularity plays a facilitative role in pattern detection. A modulation filterbank model could account for these results.
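To illustrate the detection problem, the toy sketch below builds a white noise stimulus containing a frozen segment re-inserted at a regular period and scores repetition by correlating the signal with a delayed copy of itself at the candidate repetition lag. The sampling rate, durations and detector are assumptions; this is a much simpler device than the modulation filterbank model referred to above.

```python
# Toy sketch (not the modulation-filterbank model from the study): detect a frozen
# noise segment repeating at a regular period via correlation at the repetition lag.
import numpy as np

FS = 16000          # sampling rate in Hz (assumption)

def make_stimulus(dur=2.0, seg_dur=0.2, period=0.5, regular=True, rng=None):
    """White noise with a frozen segment re-inserted every `period` seconds."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.normal(0, 1, int(dur * FS))
    frozen = rng.normal(0, 1, int(seg_dur * FS))
    onsets = np.arange(0, dur - seg_dur, period)
    if not regular:                                    # jitter onsets for the irregular case
        onsets = onsets + rng.uniform(0.0, 0.15, onsets.size)
    for t in onsets:
        i = int(t * FS)
        x[i:i + frozen.size] = frozen
    return x

def repetition_score(x, period):
    """Normalized correlation between the signal and itself shifted by one period."""
    lag = int(period * FS)
    a, b = x[:-lag], x[lag:]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

x_reg = make_stimulus(regular=True)
x_none = np.random.default_rng(1).normal(0, 1, x_reg.size)
print("regular repetition :", round(repetition_score(x_reg, 0.5), 3))   # well above 0
print("no repetition      :", round(repetition_score(x_none, 0.5), 3))  # near 0
```

Temporally irregular repetition degrades this lag-locked score, which is one intuition for why regular repetition is easier to detect; the modulation filterbank account in the study formalises this in terms of envelope modulation energy rather than waveform correlation.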