Results 1 - 20 of 67
1.
Ear Hear; 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39288360

ABSTRACT

OBJECTIVES: We compared sound quality and performance for a conventional cochlear-implant (CI) audio processing strategy based on the short-time fast Fourier transform (FFT; Crystalis) and an experimental strategy based on spectral feature extraction (SFE). In the latter, the most salient spectral features (acoustic events) were extracted and mapped onto the CI stimulation electrodes. We hypothesized that (1) SFE would be superior to Crystalis because it can encode acoustic spectral features without the constraints imposed by the FFT bin width, and (2) the potential benefit of SFE would be greater for CI users who have weaker neural cross-channel interactions. DESIGN: To examine the first hypothesis, 6 users of Oticon Medical Digisonic SP CIs were tested in a double-blind design with the SFE and Crystalis strategies on various measures: word recognition in quiet, speech-in-noise reception threshold (SRT), consonant discrimination in quiet, listening effort, melody contour identification (MCI), and subjective sound quality. Word recognition and SRTs were measured on the first and last day of testing (4 to 5 days apart) to assess potential learning and/or acclimatization effects. Other tests were run once between the first and last testing day. Listening effort was assessed by measuring pupil dilation. MCI involved identifying a five-tone contour among five possible contours. Sound quality was assessed subjectively using the multiple stimulus with hidden reference and anchor (MUSHRA) paradigm for sentences, music, and ambient sounds. To examine the second hypothesis, cross-channel interaction was assessed behaviorally using forward masking. RESULTS: Word recognition was similar for the two strategies on the first day of testing and improved for both strategies on the last day of testing, with Crystalis improving significantly more. SRTs were worse with SFE than Crystalis on the first day of testing but became comparable on the last day of testing.
Consonant discrimination scores were higher for Crystalis than for the SFE strategy. MCI scores and listening effort were not substantially different across strategies. Subjective sound quality scores were lower for the SFE than for the Crystalis strategy. The difference in performance with SFE and Crystalis was greater for CI users with higher channel interaction. CONCLUSIONS: CI-user performance was similar with the SFE and Crystalis strategies. Longer acclimatization times may be required to reveal the full potential of the SFE strategy.
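The contrast between the two approaches lends itself to a small sketch. The code below is a toy illustration only, not the Crystalis or SFE implementation: it picks the few most salient spectral peaks of a windowed frame (the "acoustic events") and maps each onto a log-spaced electrode index. Frame length, peak count, electrode count, and frequency range are all hypothetical choices.

```python
import numpy as np

def sfe_frame(frame, fs, n_electrodes=12, n_peaks=4):
    """Toy spectral-feature extraction: keep the n_peaks most salient
    spectral peaks of one frame and map them onto electrode indices."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    # local maxima are the candidate acoustic events
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] > spectrum[i - 1] and spectrum[i] >= spectrum[i + 1]]
    peaks = sorted(peaks, key=lambda i: spectrum[i], reverse=True)[:n_peaks]
    # log-spaced electrode map covering 250 Hz to 8 kHz (hypothetical)
    edges = np.geomspace(250.0, 8000.0, n_electrodes + 1)
    return {int(np.searchsorted(edges, freqs[i]) - 1): float(spectrum[i])
            for i in peaks if edges[0] < freqs[i] < edges[-1]}

fs = 16000
t = np.arange(1024) / fs
frame = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
events = sfe_frame(frame, fs)  # {electrode index: peak magnitude}
```

With this input, a 500-Hz and a 2000-Hz component land on two distinct electrodes, their placement set by the peak locations rather than by a fixed bin-to-electrode assignment.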

2.
J Acoust Soc Am; 151(3): 1741, 2022 03.
Article in English | MEDLINE | ID: mdl-35364964

ABSTRACT

Many aspects of hearing function are negatively affected by background noise. Listeners, however, have some ability to adapt to background noise. For instance, the detection of pure tones and the recognition of isolated words embedded in noise can improve gradually as tones and words are delayed a few hundred milliseconds in the noise. While some evidence suggests that adaptation to noise could be mediated by the medial olivocochlear reflex, adaptation can occur for people who do not have a functional reflex. Since adaptation can facilitate hearing in noise, and hearing in noise is often harder for hearing-impaired than for normal-hearing listeners, it is conceivable that adaptation is impaired with hearing loss. It remains unclear, however, if and to what extent this is the case, or whether impaired adaptation contributes to the greater difficulties experienced by hearing-impaired listeners understanding speech in noise. Here, we review adaptation to noise, the mechanisms potentially contributing to this adaptation, and factors that might reduce the ability to adapt to background noise, including cochlear hearing loss, cochlear synaptopathy, aging, and noise exposure. The review highlights few knowns and many unknowns about adaptation to noise, and thus paves the way for further research on this topic.


Subject(s)
Hearing Loss, Sensorineural; Speech Perception; Adaptation, Physiological; Hearing; Humans; Noise/adverse effects
3.
J Neurosci; 40(34): 6613-6623, 2020 08 19.
Article in English | MEDLINE | ID: mdl-32680938

ABSTRACT

Human hearing adapts to background noise, as evidenced by the fact that listeners recognize more isolated words when the words are presented later rather than earlier in noise. This adaptation likely occurs because the leading noise shifts ("adapts") the dynamic range of auditory neurons, which can improve the neural encoding of speech spectral and temporal cues. Because neural dynamic range adaptation depends on stimulus-level statistics, here we investigated the importance of "statistical" adaptation for improving speech recognition in noisy backgrounds. We compared the recognition of noise-masked words in the presence and in the absence of adapting noise precursors whose level was either constant or changed every 50 ms according to different statistical distributions. Adaptation was measured for 28 listeners (9 men) and was quantified as the recognition improvement in the precursor condition relative to the no-precursor condition. Adaptation was largest for constant-level precursors and did not occur for highly fluctuating precursors, even when the two types of precursors had the same mean level and both activated the medial olivocochlear reflex. Instantaneous amplitude compression of the highly fluctuating precursor produced as much adaptation as the constant-level precursor did without compression. Together, the results suggest that noise adaptation in speech recognition is probably mediated by neural dynamic range adaptation to the most frequent sound level. Further, they suggest that auditory peripheral compression per se, rather than the medial olivocochlear reflex, could facilitate noise adaptation by reducing the level fluctuations in the noise. SIGNIFICANCE STATEMENT Recognizing speech in noise is challenging but can be facilitated by noise adaptation. The neural mechanisms underlying this adaptation remain unclear.
Here, we report some benefits of adaptation for word-in-noise recognition and show that (1) adaptation occurs for stationary but not for highly fluctuating precursors with equal mean level; (2) both stationary and highly fluctuating noises activate the medial olivocochlear reflex; and (3) adaptation occurs even for highly fluctuating precursors when the stimuli are passed through a fast amplitude compressor. These findings suggest that noise adaptation reflects neural dynamic range adaptation to the most frequent noise level and that auditory peripheral compression, rather than the medial olivocochlear reflex, could facilitate noise adaptation.
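The compression result can be illustrated numerically: a memoryless power-law compressor multiplies the dB spread of a level-fluctuating noise by its exponent, flattening exactly the level statistics that dynamic-range adaptation is presumed to track. The segment length mirrors the 50-ms level changes in the stimuli; the exponent and level distribution are illustrative choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000
seg = int(0.050 * fs)  # 50-ms segments, matching the precursor level changes

# Noise whose level jumps every 50 ms (uniform over a 30-dB range, illustrative)
levels_db = rng.uniform(-30.0, 0.0, size=20)
noise = np.concatenate([10 ** (db / 20) * rng.standard_normal(seg)
                        for db in levels_db])

def compress(x, exponent=0.3):
    """Instantaneous (memoryless) amplitude compression."""
    return np.sign(x) * np.abs(x) ** exponent

def level_sd_db(x):
    """Spread, in dB, of the short-term levels across 50-ms frames."""
    rms = np.sqrt(np.mean(x.reshape(-1, seg) ** 2, axis=1))
    return float(np.std(20 * np.log10(rms)))
```

Here `level_sd_db(compress(noise))` comes out at roughly 0.3 times `level_sd_db(noise)`: the compressor turns a highly fluctuating noise into a nearly constant-level one.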


Subject(s)
Adaptation, Physiological; Noise; Speech Perception/physiology; Adult; Auditory Threshold/physiology; Female; Humans; Male; Neurons/physiology; Signal-To-Noise Ratio; Young Adult
4.
Ear Hear; 41(6): 1492-1510, 2020.
Article in English | MEDLINE | ID: mdl-33136626

ABSTRACT

OBJECTIVES: Cochlear implant (CI) users continue to struggle to understand speech in noisy environments with current clinical devices. We have previously shown that this outcome can be improved by using binaural sound processors inspired by the medial olivocochlear (MOC) reflex, which involve dynamic (contralaterally controlled) rather than fixed compressive acoustic-to-electric maps. The present study aimed at investigating the potential additional benefits of using more realistic implementations of MOC processing. DESIGN: Eight users of bilateral CIs and two users of unilateral CIs participated in the study. Speech reception thresholds (SRTs) for sentences in competition with steady-state noise were measured in unilateral and bilateral listening modes. Stimuli were processed through two independently functioning sound processors (one per ear) with fixed compression, the current clinical standard (STD); the originally proposed MOC strategy with fast contralateral control of compression (MOC1); a MOC strategy with slower control of compression (MOC2); and a slower MOC strategy with comparatively greater contralateral inhibition in the lower-frequency than in the higher-frequency channels (MOC3). Performance with the four strategies was compared for multiple simulated spatial configurations of the speech and noise sources. Based on a previously published technical evaluation of these strategies, we hypothesized that SRTs would be overall better (lower) with the MOC3 strategy than with any of the other tested strategies. In addition, we hypothesized that the MOC3 strategy would be advantageous over the STD strategy in listening conditions and spatial configurations where the MOC1 strategy was not. RESULTS: In unilateral listening and when the implant ear had the worse acoustic signal-to-noise ratio, the mean SRT was 4 dB worse for the MOC1 than for the STD strategy (as expected), but it became equal or better for the MOC2 or MOC3 strategies than for the STD strategy.
In bilateral listening, mean SRTs were 1.6 dB better for the MOC3 strategy than for the STD strategy across all spatial configurations tested, including a condition with speech and noise sources colocated at front where the MOC1 strategy was slightly disadvantageous relative to the STD strategy. All strategies produced significantly better SRTs for spatially separated than for colocated speech and noise sources. A statistically significant binaural advantage (i.e., better mean SRTs across spatial configurations and participants in bilateral than in unilateral listening) was found for the MOC2 and MOC3 strategies but not for the STD or MOC1 strategies. CONCLUSIONS: Overall, performance was best with the MOC3 strategy, which maintained the benefits of the originally proposed MOC1 strategy over the STD strategy for spatially separated speech and noise sources and extended those benefits to additional spatial configurations. In addition, the MOC3 strategy provided a significant binaural advantage, which did not occur with the STD or the original MOC1 strategies.


Subject(s)
Cochlear Implantation; Cochlear Implants; Speech Perception; Humans; Reflex; Speech
5.
J Neurosci; 38(17): 4138-4145, 2018 04 25.
Article in English | MEDLINE | ID: mdl-29593051

ABSTRACT

Sensory systems constantly adapt their responses to the current environment. In hearing, adaptation may facilitate communication in noisy settings, a benefit frequently (but controversially) attributed to the medial olivocochlear reflex (MOCR) enhancing the neural representation of speech. Here, we show that human listeners (N = 14; five male) recognize more words presented monaurally in ipsilateral, contralateral, and bilateral noise when they are given some time to adapt to the noise. This finding challenges models and theories that claim that speech intelligibility in noise is invariant over time. In addition, we show that this adaptation to the noise also occurs for words processed to maintain the slow amplitude modulations in speech (the envelope) while disregarding the faster fluctuations (the temporal fine structure). This demonstrates that noise adaptation reflects an enhancement of amplitude modulation speech cues and is unaffected by temporal fine structure cues. Last, we show that cochlear implant users (N = 7; four male) show normal monaural adaptation to ipsilateral noise. Because the electrical stimulation delivered by cochlear implants is independent of the MOCR, this demonstrates that noise adaptation does not require the MOCR. We argue that noise adaptation probably reflects adaptation of the dynamic range of auditory neurons to the noise level statistics. SIGNIFICANCE STATEMENT People find it easier to understand speech in noisy environments when they are given some time to adapt to the noise. This benefit is frequently but controversially attributed to the medial olivocochlear efferent reflex enhancing the representation of speech cues in the auditory nerve. Here, we show that the adaptation to noise reflects an enhancement of the slow fluctuations in amplitude over time that are present in speech. In addition, we show that adaptation to noise for cochlear implant users is not statistically different from that for listeners with normal hearing.
Because the electrical stimulation delivered by cochlear implants is independent of the medial olivocochlear efferent reflex, this demonstrates that adaptation to noise does not require this reflex.
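The envelope-preserving processing mentioned above is commonly implemented as a noise vocoder. Below is a numpy-only sketch with brick-wall FFT filters and assumed channel edges; the study's actual vocoder parameters may well differ.

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (numpy-only Hilbert transform)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1
    h[1:len(x) // 2] = 2
    h[len(x) // 2] = 1
    return np.fft.ifft(X * h)

def bandpass(x, fs, lo, hi):
    """Brick-wall band-pass via FFT; crude but adequate for a sketch."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f >= hi)] = 0
    return np.fft.irfft(X, len(x))

def noise_vocoder(x, fs, edges):
    """Keep the per-channel envelopes, replace the temporal fine
    structure with noise carriers (channel edges are illustrative)."""
    rng = np.random.default_rng(1)
    y = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = np.abs(analytic(bandpass(x, fs, lo, hi)))
        carrier = bandpass(rng.standard_normal(len(x)), fs, lo, hi)
        y += env * carrier
    return y

fs = 16000
t = np.arange(fs) / fs
# a slowly modulated tone standing in for speech
speechlike = np.sin(2 * np.pi * 3 * t) ** 2 * np.sin(2 * np.pi * 1000 * t)
out = noise_vocoder(speechlike, fs, np.geomspace(100, 8000, 9))
```

The output preserves the slow envelope of the input while its waveform is essentially uncorrelated with the original carrier, which is the manipulation the study exploits.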


Subject(s)
Adaptation, Physiological; Cochlear Nucleus/physiology; Olivary Nucleus/physiology; Reflex; Speech Perception; Adult; Cochlear Implants; Cochlear Nucleus/cytology; Female; Humans; Male; Neurons, Efferent/physiology; Noise; Olivary Nucleus/cytology
6.
J Acoust Soc Am; 143(4): 2217, 2018 04.
Article in English | MEDLINE | ID: mdl-29716283

ABSTRACT

It has recently been shown that cochlear implant users could enjoy better speech reception in noise and enhanced spatial unmasking with binaural audio processing inspired by the inhibitory effects of the contralateral medial olivocochlear (MOC) reflex on compression [Lopez-Poveda, Eustaquio-Martin, Stohl, Wolford, Schatzer, and Wilson (2016). Ear Hear. 37, e138-e148]. The perceptual evidence supporting those benefits, however, is limited to a few target-interferer spatial configurations and to a particular implementation of contralateral MOC inhibition. Here, the short-term objective intelligibility index is used to (1) objectively demonstrate potential benefits over many more spatial configurations, and (2) investigate whether the predicted benefits may be enhanced by using more realistic MOC implementations. Results corroborate the advantages and drawbacks of MOC processing indicated by the previously published perceptual tests. The results also suggest that the benefits may be enhanced and the drawbacks overcome by using longer time constants for the activation and deactivation of inhibition and, to a lesser extent, by using comparatively greater inhibition in the lower than in the higher frequency channels. Compared to using two functionally independent processors, the better MOC processor improved the signal-to-noise ratio in the two ears by 1 to 6 dB by enhancing head-shadow effects, and was advantageous for all tested target-interferer spatial configurations.
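The activation/deactivation time constants discussed above can be sketched as a one-pole smoother of the contralateral channel energy whose coefficient switches between an attack and a release value; the smoothed output would then drive the inhibition. The specific time constants and stimulus below are illustrative, not those evaluated in the paper.

```python
import numpy as np

def smooth_energy(env, fs, tau_act=0.002, tau_deact=0.300):
    """One-pole smoother with separate activation/deactivation time
    constants: inhibition builds up quickly but dies away slowly."""
    a_act = np.exp(-1.0 / (tau_act * fs))
    a_deact = np.exp(-1.0 / (tau_deact * fs))
    out = np.zeros_like(env)
    state = 0.0
    for n, e in enumerate(env):
        a = a_act if e > state else a_deact  # fast rise, slow decay
        state = a * state + (1 - a) * e
        out[n] = state
    return out

fs = 1000                      # control-signal rate (illustrative)
env = np.zeros(fs)
env[100:200] = 1.0             # a 100-ms burst of contralateral energy
ctrl = smooth_energy(env, fs)  # would scale the inhibition per channel
```

The control signal follows the burst almost immediately but takes hundreds of milliseconds to release, which is the kind of slower deactivation the paper argues can overcome the drawbacks of fast coupling.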


Subject(s)
Auditory Pathways/physiology; Cochlear Implants/standards; Deafness/rehabilitation; Reflex; Sound; Speech Perception/physiology; Cochlear Nerve/physiology; Humans; Olivary Nucleus/physiology; Perceptual Masking; Signal-To-Noise Ratio
7.
Ear Hear; 37(3): e138-e148, 2016.
Article in English | MEDLINE | ID: mdl-26862711

ABSTRACT

OBJECTIVES: In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably aid speech understanding in noisy environments and are not available to users of current cochlear implants (CIs). The aims of the present study are to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources. DESIGN: Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy, or STD), the two processors in a pair functioned similarly to standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other and the amount of back-end compression in a given frequency channel of each processor in the pair decreased/increased dynamically (so that output levels dropped/increased) with increases/decreases in the output energy from the corresponding frequency channel in the contralateral processor. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech on the left ear with noise in front of the listener, and speech on the left ear with noise on the right ear.
In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors. RESULTS: In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable with the two strategies for colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved with increasing spatial separation between the speech and noise sources regardless of the strategy, but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing speech-noise spatial separation only with the MOC strategy. CONCLUSIONS: The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, both in unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids.
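The MOC strategy's core mechanism can be sketched per frequency channel: a logarithmic back-end compression function whose steepness parameter is reduced as the contralateral channel's output energy grows, so that output levels drop when the opposite ear is strongly stimulated. The logarithmic map is a common textbook form for CI back-ends; the parameter values `c_std` and `k` are illustrative, not the clinical values.

```python
import numpy as np

def ci_map(x, c):
    """Back-end compression of a channel envelope x in [0, 1]; larger c
    means a more compressive map (higher output for weak inputs)."""
    return np.log(1 + c * x) / np.log(1 + c)

def moc_pair(x_left, x_right, c_std=500.0, k=400.0):
    """Toy contralateral coupling: each ear's compression parameter is
    reduced by the opposite ear's channel energy (k is illustrative)."""
    c_left = max(1.0, c_std - k * x_right)
    c_right = max(1.0, c_std - k * x_left)
    return ci_map(x_left, c_left), ci_map(x_right, c_right)

# Speech-dominated channel at the left ear, stronger noise at the right ear:
yl, yr = moc_pair(0.30, 0.80)
yl_std, yr_std = ci_map(0.30, 500.0), ci_map(0.80, 500.0)
```

Relative to the fixed STD map, both outputs drop, and the drop is largest in the ear facing the stronger contralateral energy, which is how the coupling enhances channel-specific interaural level differences.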


Subject(s)
Cochlear Implantation; Cochlear Implants; Deafness/rehabilitation; Reflex; Speech Perception; Female; Humans; Male; Software
8.
Adv Exp Med Biol; 894: 105-114, 2016.
Article in English | MEDLINE | ID: mdl-27080651

ABSTRACT

Our two ears do not function as fixed and independent sound receptors; their functioning is coupled and dynamically adjusted via the contralateral medial olivocochlear efferent reflex (MOCR). The MOCR possibly facilitates speech recognition in noisy environments. Such a role, however, is yet to be demonstrated because selective deactivation of the reflex during natural acoustic listening has not been possible for human subjects up until now. Here, we propose that this and other roles of the MOCR may be elucidated using the unique stimulus controls provided by cochlear implants (CIs). Pairs of sound processors were constructed to mimic or not mimic the effects of the contralateral MOCR with CIs. For the non-mimicking condition (STD strategy), the two processors in a pair functioned independently of each other. When configured to mimic the effects of the MOCR (MOC strategy), however, the two processors communicated with each other and the amount of compression in a given frequency channel of each processor in the pair decreased with increases in the output energy from the contralateral processor. The analysis of output signals from the STD and MOC strategies suggests that in natural binaural listening, the MOCR possibly causes a small reduction of audibility but enhances frequency-specific inter-aural level differences and the segregation of spatially non-overlapping sound sources. The proposed MOC strategy could improve the performance of CI and hearing-aid users.


Subject(s)
Cochlea/physiology; Cochlear Implants; Hearing/physiology; Reflex, Acoustic/physiology; Humans
9.
Trends Hear; 28: 23312165241227818, 2024.
Article in English | MEDLINE | ID: mdl-38291713

ABSTRACT

The past decade has seen a wealth of research dedicated to determining which and how morphological changes in the auditory periphery contribute to people experiencing hearing difficulties in noise despite having clinically normal audiometric thresholds in quiet. Evidence from animal studies suggests that cochlear synaptopathy in the inner ear might lead to auditory nerve deafferentation, resulting in impoverished signal transmission to the brain. Here, we quantify the likely perceptual consequences of auditory deafferentation in humans via a physiologically inspired encoding-decoding model. The encoding stage simulates the processing of an acoustic input stimulus (e.g., speech) at the auditory periphery, while the decoding stage is trained to optimally regenerate the input stimulus from the simulated auditory nerve firing data. This allowed us to quantify the effect of different degrees of auditory deafferentation by measuring the extent to which the decoded signal supported the identification of speech in quiet and in noise. In a series of experiments, speech perception thresholds in quiet and in noise increased (worsened) significantly as a function of the degree of auditory deafferentation for modeled deafferentation greater than 90%. Importantly, this effect was significantly stronger in a noisy than in a quiet background. The encoding-decoding model thus captured the hallmark symptom of degraded speech perception in noise together with normal speech perception in quiet. As such, the model might function as a quantitative guide to evaluating the degree of auditory deafferentation in human listeners.
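A stripped-down version of the deafferentation logic can be simulated directly: encode a slow envelope as Bernoulli spike trains across a population of fibers, decode by averaging, and watch reconstruction fidelity collapse only when most fibers are removed. All counts, firing rates, and the envelope here are hypothetical stand-ins for the actual encoding-decoding model.

```python
import numpy as np

def decode_corr(survival, n_fibers=200, seed=0):
    """Correlation between a stimulus envelope and its reconstruction
    from simulated spike counts after removing a fraction of fibers."""
    rng = np.random.default_rng(seed)
    t = np.arange(2000)
    env = 0.5 + 0.5 * np.sin(2 * np.pi * t / 400)   # slow "speech" envelope
    n = max(1, int(survival * n_fibers))            # surviving fibers
    spikes = rng.random((n, t.size)) < 0.05 * env   # Bernoulli firing
    recon = spikes.mean(axis=0)                     # average as a crude decoder
    return float(np.corrcoef(env, recon)[0, 1])
```

With these numbers, reconstruction stays accurate until only a few fibers remain and degrades sharply at ~95% loss, qualitatively echoing the >90% deafferentation threshold reported above.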


Subject(s)
Hearing Loss; Speech Perception; Animals; Humans; Auditory Threshold/physiology; Noise/adverse effects; Acoustic Stimulation; Auditory Perception/physiology
10.
Hear Res; 441: 108917, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38061268

ABSTRACT

Previous studies have shown that in challenging listening situations, people find it hard to divide their attention equally between two simultaneous talkers and tend to favor one talker over the other. The aim here was to investigate whether talker onset/offset, sex, and location determine the favored talker. Fifteen people with normal hearing were asked to recognize as many words as possible from two sentences uttered by two talkers located at -45° and +45° azimuth, respectively. The sentences were from the same corpus, were time-centered, and had equal sound level. In Conditions 1 and 2, the talkers had different sexes (male at +45°), sentence duration was not controlled for, and sentences were presented at 65 and 35 dB SPL, respectively. Listeners favored the male over the female talker, even more so at 35 dB SPL (62 % vs 43 % word recognition, respectively) than at 65 dB SPL (74 % vs 64 %, respectively). The greater asymmetry in intelligibility at the lower level supports the idea that divided listening is harder and more 'asymmetric' in challenging acoustic scenarios. Listeners continued to favor the male talker when the experiment was repeated with sentences of equal average duration for the two talkers (Condition 3). This suggests that the earlier onset or later offset of male sentences (52 ms on average) was not the reason for the asymmetric intelligibility in Conditions 1 and 2. When the location of the talkers was switched (Condition 4) or the two talkers were the same woman (Condition 5), listeners continued to favor the talker to their right, albeit non-significantly. Altogether, the results confirm that in hard divided-listening situations, listeners tend to favor the talker to their right. This preference is not affected by talker onset/offset delays of less than 52 ms on average. Instead, the preference seems to be modulated by the voice characteristics of the talkers.


Subject(s)
Speech Perception; Voice; Humans; Male; Female; Speech Intelligibility; Language; Acoustics
11.
Hear Res; 451: 109080, 2024 09 15.
Article in English | MEDLINE | ID: mdl-39004016

ABSTRACT

Auditory masking methods originally employed to assess behavioral frequency selectivity have evolved over the years to infer cochlear tuning. Behavioral forward-masking thresholds for spectrally notched noise maskers and a fixed, low-level probe tone provide accurate estimates of cochlear tuning. Here, we use this method to investigate the effect of stimulus duration on human cochlear tuning at 500 Hz and 4 kHz. Probes were 20-ms sinusoids at 10 dB sensation level. Maskers were noises with a spectral notch placed symmetrically and asymmetrically around the probe frequency. For seven participants with normal hearing, masker levels at masking threshold were measured in forward masking for various notch widths and for masker durations of 30 and 400 ms. Measurements were fitted assuming rounded-exponential filter shapes and the power spectrum model of masking, and equivalent rectangular bandwidths (ERBs) were inferred from the fits. At 4 kHz, masker thresholds were higher for the shorter maskers but ERBs were not significantly different for the two masker durations (ERB(30 ms) = 294 Hz vs. ERB(400 ms) = 277 Hz). At 500 Hz, by contrast, notched-noise curves were shallower for the 30-ms than for the 400-ms masker, and ERBs were significantly broader for the shorter masker (ERB(30 ms) = 126 Hz vs. ERB(400 ms) = 55 Hz). We discuss possible factors that may underlie the duration effect at low frequencies and argue that it may not be possible to fully control for those factors. We conclude that tuning estimates are not affected by masker duration at high frequencies but should be measured and interpreted with caution at low frequencies.
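For reference, the symmetric rounded-exponential (roex) filter behind such fits has weighting W(g) = (1 + p*g)*exp(-p*g), where g is the normalized deviation from the probe frequency; its ERB integrates analytically to 4*f0/p. A quick numerical check of that relationship (the p values are simply back-computed from the ERBs quoted above):

```python
import numpy as np

def roex_erb(f0, p, n=200001):
    """Numerically integrate the roex(p) weighting over both filter
    skirts; analytically the result is 4 * f0 / p."""
    g = np.linspace(0.0, 20.0 / p, n)        # normalized deviation |f - f0| / f0
    w = (1.0 + p * g) * np.exp(-p * g)       # roex(p) weighting
    dg = g[1] - g[0]
    integral = dg * (w.sum() - 0.5 * (w[0] + w[-1]))  # trapezoidal rule
    return 2.0 * f0 * integral               # two symmetric skirts

# p chosen so the analytic ERB matches the 4-kHz, 400-ms estimate (~277 Hz)
erb_4k = roex_erb(4000.0, 4 * 4000 / 277)
```

A larger p (sharper skirts) gives a proportionally smaller ERB, which is why the fitted p at 500 Hz must more than double between the 30-ms and 400-ms maskers.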


Subject(s)
Acoustic Stimulation; Auditory Threshold; Cochlea; Noise; Perceptual Masking; Humans; Cochlea/physiology; Adult; Male; Female; Time Factors; Noise/adverse effects; Young Adult
12.
Hear Res; 453: 109111, 2024 Sep 08.
Article in English | MEDLINE | ID: mdl-39276590

ABSTRACT

Cochlear tuning and hence auditory frequency selectivity are thought to change in noisy environments by activation of the medial olivocochlear reflex (MOCR). In humans, auditory frequency selectivity is often assessed using psychoacoustical tuning curves (PTCs), a plot of the level required for pure-tone maskers to just mask a fixed-level pure-tone probe as a function of masker frequency. Sometimes, however, the stimuli used to measure a PTC are long enough that they can activate the MOCR by themselves and thus affect the PTC. Here, PTCs for probe frequencies of 500 Hz and 4 kHz were measured in forward masking using short maskers (30 ms) and probes (10 ms) to minimize the activation of the MOCR by the maskers or the probes. PTCs were also measured in the presence of long (300 ms) ipsilateral, contralateral, and bilateral broadband noise precursors to investigate the effect of the ipsilateral, contralateral, and bilateral MOCR on PTC tuning. Four listeners with normal hearing participated in the experiments. At 500 Hz, ipsilateral and bilateral precursors sharpened the PTCs by decreasing the thresholds for maskers with frequencies at or near the probe frequency with minimal effects on thresholds for maskers remote in frequency from the probe. At 4 kHz, by contrast, ipsilateral and bilateral precursors barely affected thresholds for maskers near the probe frequency but broadened PTCs by reducing thresholds for maskers far from the probe. Contralateral precursors barely affected PTCs. An existing computational model was used to interpret the results. The model suggested that despite the apparent differences, the pattern of results is consistent with the ipsilateral and bilateral MOCR inhibiting the cochlear gain similarly at the two probe frequencies and more strongly than the contralateral MOCR.

13.
Hear Res; 443: 108963, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38308936

ABSTRACT

Exposure to brief, intense sound can produce profound changes in the auditory system, from the internal structure of inner hair cells to reduced synaptic connections between the auditory nerve and the inner hair cells. Moreover, noisy environments can also lead to alterations in the auditory nerve or to processing changes in the auditory midbrain, all without affecting hearing thresholds. This so-called hidden hearing loss (HHL) has been shown in tinnitus patients and has been posited to account for hearing difficulties in noisy environments. However, much of the neuronal research thus far has investigated how HHL affects the response characteristics of individual fibres in the auditory nerve, as opposed to higher stations in the auditory pathway. Models of the human auditory nerve show that it encodes sound stochastically. Therefore, a sufficient reduction in nerve fibres could lower the sampling of the acoustic scene below the minimum rate necessary to fully encode it, thus reducing the efficacy of sound encoding. Here, we examine how HHL affects the responses of neurons in the inferior colliculus of rats to frequency and intensity, and the duration and firing rate of those responses. Finally, we examine how shorter stimuli are encoded less effectively by the auditory midbrain than longer stimuli, and how this could lead to a clinical test for HHL.


Subject(s)
Hearing Loss, Noise-Induced; Inferior Colliculi; Humans; Rats; Animals; Inferior Colliculi/physiology; Noise/adverse effects; Auditory Threshold/physiology; Evoked Potentials, Auditory, Brain Stem/physiology; Cochlea
14.
Trends Hear; 28: 23312165241266322, 2024.
Article in English | MEDLINE | ID: mdl-39267369

ABSTRACT

Noise adaptation is the improvement in auditory function as the signal of interest is delayed in the noise. Here, we investigated whether noise adaptation occurs in spectral, temporal, and spectrotemporal modulation detection as well as in speech recognition. Eighteen normal-hearing adults participated in the experiments. In the modulation detection tasks, the signal was a 200-ms spectrally and/or temporally modulated ripple noise. The spectral modulation rate was two cycles per octave, the temporal modulation rate was 10 Hz, and the spectrotemporal modulations combined these two modulations, which resulted in a downward-moving ripple. A control experiment was performed to determine whether the results generalized to upward-moving ripples. In the speech recognition task, the signal consisted of disyllabic words, either unprocessed or vocoded to maintain only envelope cues. Modulation detection thresholds at 0 dB signal-to-noise ratio and speech reception thresholds were measured in quiet and in white noise (at 60 dB SPL) for noise-signal onset delays of 50 ms (early condition) and 800 ms (late condition). Adaptation was calculated as the threshold difference between the early and late conditions. Adaptation in word recognition was statistically significant for vocoded words (2.1 dB) but not for natural words (0.6 dB). Adaptation was statistically significant in spectral (2.1 dB) and temporal (2.2 dB) modulation detection but not in spectrotemporal modulation detection (downward ripple: 0.0 dB; upward ripple: -0.4 dB). The findings suggest that noise adaptation in speech recognition is unrelated to improvements in the encoding of spectrotemporal modulation cues.
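A ripple stimulus of the kind described (2 cycles/octave, 10 Hz) can be synthesized as a bank of random-phase tones whose levels follow a sinusoid in octave position and time. The tone count, component range, and the sign convention for the drift direction are assumptions of this sketch, not the study's exact synthesis.

```python
import numpy as np

def ripple(fs=16000, dur=0.2, omega=2.0, rate=10.0, m=0.9,
           f_lo=250.0, f_hi=8000.0, n_tones=200, seed=0):
    """Moving spectrotemporal ripple: many random-phase tones whose
    amplitudes follow a sinusoid in (octaves, seconds)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_tones)
    x = np.log2(freqs / f_lo)                  # component position in octaves
    y = np.zeros_like(t)
    for f, xi in zip(freqs, x):
        amp = 1 + m * np.sin(2 * np.pi * (rate * t + omega * xi))
        y += amp * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    return y / n_tones

sig = ripple()  # 200-ms ripple at 2 cyc/oct and 10 Hz
```

Flipping the sign of either `rate` or `omega` reverses the drift direction, giving the upward-moving control stimulus.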


Subject(s)
Acoustic Stimulation; Auditory Threshold; Noise; Perceptual Masking; Recognition, Psychology; Speech Perception; Noise/adverse effects; Adaptation, Physiological/physiology; Cues; Humans; Male; Female; Young Adult; Adult; Speech Reception Threshold Test; Speech Intelligibility; Sound Spectrography
15.
Adv Exp Med Biol ; 787: 47-54, 2013.
Article in English | MEDLINE | ID: mdl-23716208

ABSTRACT

In binaural listening, the two cochleae do not act as independent sound receptors; their functioning is linked via the contralateral medial olivocochlear reflex (MOCR), which can be activated by contralateral sounds. The present study aimed to characterize the effect of a contralateral white noise (CWN) on psychophysical tuning curves (PTCs). PTCs were measured in forward masking for probe frequencies of 500 Hz and 4 kHz, with and without CWN. The sound pressure level of the probe was fixed across conditions. PTCs for different response criteria were measured using various masker-probe time gaps. The CWN had no significant effect on PTCs at 4 kHz. At 500 Hz, by contrast, PTCs measured with CWN appeared broader, particularly for short gaps, and they showed a decrease in masker level that was greater for longer masker-probe time gaps. A computer model of forward masking with efferent control of cochlear gain was used to explain the data. The model accounted for the data on the assumption that the sole effect of the CWN was to reduce the cochlear gain by ∼6.5 dB at 500 Hz for low and moderate levels. It also suggested that the pattern of data at 500 Hz results from the combination of a broad bandwidth of compression and off-frequency listening. Results are discussed in relation to other physiological and psychoacoustical studies on the effect of MOCR activation on cochlear function.


Asunto(s)
Percepción Auditiva/fisiología , Cóclea/fisiología , Simulación por Computador , Modelos Biológicos , Psicoacústica , Estimulación Acústica/métodos , Conducta , Vías Eferentes/fisiología , Lateralidad Funcional/fisiología , Humanos , Enmascaramiento Perceptual/fisiología
16.
Hear Res ; 432: 108743, 2023 05.
Article in English | MEDLINE | ID: mdl-37003080

ABSTRACT

We have recently proposed a binaural sound pre-processing method to attenuate sounds contralateral to each ear and shown that it can improve speech intelligibility for normal-hearing (NH) people in simulated "cocktail party" listening situations (Lopez-Poveda et al., 2022, Hear Res 418:108469). The aim here was to evaluate whether this benefit remains for hearing-impaired listeners when the method is combined with two independently functioning hearing aids, one per ear. Twelve volunteers participated in the experiments; five had bilateral sensorineural hearing loss and seven were NH listeners with simulated bilateral conductive hearing loss. Speech reception thresholds (SRTs) for sentences in competition with a source of steady, speech-shaped noise were measured in unilateral and bilateral listening, and for (target, masker) azimuthal angles of (0°, 0°), (270°, 45°), and (270°, 90°). Stimuli were processed through a pair of software-based multichannel, fast-acting, wide-dynamic-range compressors, with and without binaural pre-processing. For spatially collocated target and masker sources at 0° azimuth, the pre-processing did not affect SRTs. For spatially separated target and masker sources, the pre-processing improved SRTs when listening bilaterally (improvements up to 10.7 dB) or unilaterally with the acoustically better ear (improvements up to 13.9 dB), while it worsened SRTs when listening unilaterally with the acoustically worse ear (decrements of up to 17.0 dB). Results show that binaural pre-processing for contralateral sound attenuation can improve speech-in-noise intelligibility in laboratory tests for bilateral hearing-aid users as well.
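The study processed stimuli through multichannel, fast-acting wide-dynamic-range compressors. As a rough single-channel illustration of the WDRC principle only (all parameter values, the level-detector design, and the function name are assumptions, not the study's implementation):

```python
import numpy as np

def wdrc(x, fs=16000, thresh_db=-40.0, ratio=3.0,
         attack=0.005, release=0.05):
    """Single-channel WDRC sketch: a smoothed level estimate with
    separate attack/release time constants drives gain reduction
    above a compression threshold (dB re full scale)."""
    alpha_a = np.exp(-1.0 / (attack * fs))
    alpha_r = np.exp(-1.0 / (release * fs))
    level = 0.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        mag = abs(s)
        # fast tracking when level rises (attack), slow when it falls
        alpha = alpha_a if mag > level else alpha_r
        level = alpha * level + (1 - alpha) * mag
        lvl_db = 20 * np.log10(max(level, 1e-9))
        over = max(0.0, lvl_db - thresh_db)
        gain_db = -over * (1 - 1 / ratio)  # compress excess above threshold
        y[i] = s * 10 ** (gain_db / 20)
    return y
```

A multichannel version would run one such compressor per filter-bank band and sum the outputs.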


Asunto(s)
Implantes Cocleares , Audífonos , Percepción del Habla , Humanos , Inteligibilidad del Habla , Ruido/efectos adversos , Audición
17.
Hear Res ; 432: 108744, 2023 05.
Article in English | MEDLINE | ID: mdl-37004271

ABSTRACT

Computational models are useful tools to investigate scientific questions that would be complicated to address using an experimental approach. In the context of cochlear implants (CIs), being able to simulate the neural activity evoked by these devices could help in understanding their limitations in providing natural hearing. Here, we present a computational modelling framework to quantify the transmission of information from sound to spikes in the auditory nerve of a CI user. The framework includes a model to simulate the electrical current waveform sensed by each auditory nerve fiber (electrode-neuron interface), followed by a model to simulate the timing at which a nerve fiber spikes in response to a current waveform (auditory nerve fiber model). Information theory is then applied to determine the amount of information transmitted from a suitable reference signal (e.g., the acoustic stimulus) to a simulated population of auditory nerve fibers. As a use-case example, the framework is applied to simulate published data on modulation detection by CI users obtained using direct stimulation via a single electrode. Current spread and the number of fibers were varied independently to illustrate the framework's capabilities. Simulations reasonably matched the experimental data and suggested that the encoded modulation information is proportional to the total neural response. They also suggested that amplitude modulation is well encoded in the auditory nerve for modulation rates up to 1000 Hz, and that the variability in modulation sensitivity across CI users arises partly because different CI users use different references for detecting modulation.
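The information-theoretic step of such a framework ultimately reduces to estimating how much information one signal carries about another. A crude histogram-based mutual-information estimator conveys the idea; the abstract does not specify the authors' estimator, so this function (name, binning, and all) is purely an illustrative assumption.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information I(X;Y) in bits between
    two 1-D signals, as a generic proxy for 'transmitted information'
    from a reference signal to a neural response."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                          # joint probability table
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

In practice, `x` would be the reference (e.g., the stimulus envelope) and `y` a summary of the simulated population response; histogram estimators are biased for small samples, so real analyses use bias-corrected or model-based estimators.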


Asunto(s)
Implantación Coclear , Implantes Cocleares , Estimulación Acústica , Nervio Coclear/fisiología , Simulación por Computador , Estimulación Eléctrica , Potenciales Evocados Auditivos/fisiología
18.
Trends Hear ; 27: 23312165231213191, 2023.
Article in English | MEDLINE | ID: mdl-37956654

ABSTRACT

Older people often show auditory temporal processing deficits and speech-in-noise intelligibility difficulties even when their audiogram is clinically normal. The causes of such problems remain unclear. Some studies have suggested that for people with normal audiograms, age-related hearing impairments may be due to a cognitive decline, while others have suggested that they may be caused by cochlear synaptopathy. Here, we explore an alternative hypothesis, namely that age-related hearing deficits are associated with decreased inhibition. For human adults (N = 30) selected to cover a reasonably wide age range (25-59 years), with normal audiograms and normal cognitive function, we measured speech reception thresholds in noise (SRTNs) for disyllabic words, gap detection thresholds (GDTs), and frequency modulation detection thresholds (FMDTs). We also measured the rate of growth (slope) of auditory brainstem response wave-I amplitude with increasing level as an indirect indicator of cochlear synaptopathy, and the interference inhibition score in the Stroop color and word test (SCWT) as a proxy for inhibition. As expected, performance in the auditory tasks worsened (SRTNs, GDTs, and FMDTs increased), and wave-I slope and SCWT inhibition scores decreased with ageing. Importantly, SRTNs, GDTs, and FMDTs were not related to wave-I slope but worsened with decreasing SCWT inhibition. Furthermore, after partialling out the effect of SCWT inhibition, age was no longer related to SRTNs or GDTs and became less strongly related to FMDTs. Altogether, results suggest that for people with normal audiograms, age-related deficits in auditory temporal processing and speech-in-noise intelligibility are mediated by decreased inhibition rather than cochlear synaptopathy.
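The key analysis here, "partialling out" SCWT inhibition before relating age to the auditory measures, is a partial correlation. A minimal residual-regression implementation (the function name and approach are illustrative; the authors' statistical software is not specified):

```python
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation between x and y after removing the linear
    influence of covariate z from both (residual method)."""
    x, y, z = map(np.asarray, (x, y, z))
    Z = np.column_stack([np.ones_like(z), z])   # intercept + covariate
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residuals of x|z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residuals of y|z
    return np.corrcoef(rx, ry)[0, 1]
```

If two variables correlate only because both depend on the covariate (here, inhibition), the partial correlation collapses toward zero, which is the pattern the abstract reports for age vs. SRTNs and GDTs.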


Asunto(s)
Presbiacusia , Percepción del Habla , Adulto , Humanos , Anciano , Persona de Mediana Edad , Umbral Auditivo/fisiología , Cóclea , Audición , Percepción Auditiva/fisiología , Presbiacusia/diagnóstico , Potenciales Evocados Auditivos del Tronco Encefálico/fisiología , Percepción del Habla/fisiología
19.
Hear Res ; 418: 108469, 2022 05.
Article in English | MEDLINE | ID: mdl-35263696

ABSTRACT

Understanding speech presented in competition with other sounds can be challenging. Here, we reason that in free-field settings, this task can be facilitated by attenuating the sound field contralateral to each ear and propose to achieve this by linear subtraction of the weighted contralateral stimulus. We mathematically justify setting the weight equal to the ratio of ipsilateral to contralateral head-related transfer functions (HRTFs) averaged over an appropriate azimuth range. The algorithm is implemented in the frequency domain and evaluated technically and experimentally for normal-hearing listeners in simulated free-field conditions. Results show that (1) it can substantially improve the signal-to-noise ratio (up to 30 dB) and the short-term objective intelligibility in the ear ipsilateral to the target source, particularly for maskers with speech-like spectra; (2) it can improve speech reception thresholds (SRTs) for sentences in competition with speech-shaped noise by up to 8.5 dB in bilateral listening and 10.0 dB in unilateral listening; (3) for sentences in competition with speech maskers and in bilateral listening, it can improve SRTs by 2 to 5 dB, depending on the number and location of the masker sources; (4) it hardly affects virtual sound-source lateralization; and (5) the improvements and the algorithm's directivity pattern depend on the azimuth range used to calculate the weights. Contralateral HRTF-weighted subtraction may prove valuable for users of binaural hearing devices.
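The core operation described here is frequency-domain subtraction of the weighted contralateral channel. A minimal sketch with a scalar weight; in the published method the weight is the frequency-dependent ratio of ipsilateral to contralateral HRTFs averaged over an azimuth range, so the scalar `w` and the function name are simplifying assumptions for illustration only.

```python
import numpy as np

def contra_subtract(xl, xr, w=0.6):
    """Attenuate the contralateral sound field in each channel by
    subtracting the weighted opposite channel in the frequency domain:
    Yl = Xl - w*Xr and Yr = Xr - w*Xl."""
    Xl, Xr = np.fft.rfft(xl), np.fft.rfft(xr)
    yl = np.fft.irfft(Xl - w * Xr, n=len(xl))
    yr = np.fft.irfft(Xr - w * Xl, n=len(xr))
    return yl, yr
```

Intuition: a source lateral to the right ear reaches the left ear attenuated by roughly the ipsi-to-contra HRTF ratio; subtracting the right channel scaled by that ratio cancels the source's leakage into the left channel while leaving left-side sources largely intact.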


Asunto(s)
Implantación Coclear , Implantes Cocleares , Percepción del Habla , Implantación Coclear/métodos , Ruido/efectos adversos , Habla
20.
Hear Res ; 426: 108621, 2022 12.
Article in English | MEDLINE | ID: mdl-36182814

ABSTRACT

We report a theoretical study aimed at investigating the impact of cochlear synapse loss (synaptopathy) on the encoding of the envelope (ENV) and temporal fine structure (TFS) of sounds by the population of auditory nerve fibers. A computational model was used to simulate auditory-nerve spike trains evoked by sinusoidally amplitude-modulated (AM) tones at 10 Hz with various carrier frequencies and levels. The model included 16 cochlear channels with characteristic frequencies (CFs) from 250 Hz to 8 kHz. Each channel was innervated by 3, 4, and 10 fibers with low (LSR), medium (MSR), and high spontaneous rates (HSR), respectively. For each channel, spike trains were collapsed into three separate 'population' post-stimulus time histograms (PSTHs), one per fiber type. Information theory was applied to reconstruct the stimulus waveform, ENV, and TFS from one or more PSTHs in a mathematically optimal way. The quality of the reconstruction was regarded as an estimate of the information present in the PSTHs used. Various synaptopathy scenarios were simulated by removing fibers of specific types and/or cochlear regions before stimulus reconstruction. We found that the TFS was predominantly encoded by HSR fibers at all stimulus carrier frequencies and levels. The encoding of the ENV was more complex. At lower levels, the ENV was predominantly encoded by HSR fibers with CFs near the stimulus carrier frequency. At higher levels, the ENV was equally well or better encoded by HSR fibers with CFs different from the AM carrier frequency as by LSR fibers with CFs at the carrier frequency. Altogether, findings suggest that a healthy population of HSR fibers (i.e., including fibers with CFs around and remote from the AM carrier frequency) might be sufficient to encode the ENV and TFS over a wide range of stimulus levels. Findings are discussed regarding their relevance for diagnosing synaptopathy using non-invasive ENV- and TFS-based measures.
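The ENV/TFS decomposition central to this abstract is conventionally computed from the analytic signal: the envelope is its magnitude and the fine structure the cosine of its phase. A minimal FFT-based Hilbert sketch (the function name and implementation details are illustrative, not the study's code):

```python
import numpy as np

def env_tfs(x):
    """Split a real signal into temporal envelope (ENV) and temporal
    fine structure (TFS) via the analytic signal (FFT-based Hilbert
    transform: zero negative frequencies, double positive ones)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    a = np.fft.ifft(X * h)                   # analytic signal
    return np.abs(a), np.cos(np.angle(a))    # ENV, TFS
```

For a sinusoidally AM tone, ENV recovers the modulator and TFS the unit-amplitude carrier, which is why such stimuli are convenient probes of envelope versus fine-structure coding.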


Asunto(s)
Humanos , Nervio Coclear/fisiología , Cóclea/fisiología , Sonido , Estimulación Acústica