Results 1 - 20 of 64
1.
J Acoust Soc Am ; 151(3): 1741, 2022 03.
Article in English | MEDLINE | ID: mdl-35364964

ABSTRACT

Many aspects of hearing function are negatively affected by background noise. Listeners, however, have some ability to adapt to background noise. For instance, the detection of pure tones and the recognition of isolated words embedded in noise can improve gradually as tones and words are delayed a few hundred milliseconds in the noise. While some evidence suggests that adaptation to noise could be mediated by the medial olivocochlear reflex, adaptation can occur for people who do not have a functional reflex. Since adaptation can facilitate hearing in noise, and hearing in noise is often harder for hearing-impaired than for normal-hearing listeners, it is conceivable that adaptation is impaired with hearing loss. It remains unclear, however, if and to what extent this is the case, or whether impaired adaptation contributes to the greater difficulties experienced by hearing-impaired listeners understanding speech in noise. Here, we review adaptation to noise, the mechanisms potentially contributing to this adaptation, and factors that might reduce the ability to adapt to background noise, including cochlear hearing loss, cochlear synaptopathy, aging, and noise exposure. The review highlights few knowns and many unknowns about adaptation to noise, and thus paves the way for further research on this topic.


Subject(s)
Hearing Loss, Sensorineural; Speech Perception; Adaptation, Physiological; Hearing; Humans; Noise/adverse effects
2.
J Neurosci ; 40(34): 6613-6623, 2020 08 19.
Article in English | MEDLINE | ID: mdl-32680938

ABSTRACT

Human hearing adapts to background noise, as evidenced by the fact that listeners recognize more isolated words when words are presented later rather than earlier in noise. This adaptation likely occurs because the leading noise shifts ("adapts") the dynamic range of auditory neurons, which can improve the neural encoding of speech spectral and temporal cues. Because neural dynamic range adaptation depends on stimulus-level statistics, here we investigated the importance of "statistical" adaptation for improving speech recognition in noisy backgrounds. We compared the recognition of noise-masked words in the presence and in the absence of adapting noise precursors whose level was either constant or was changing every 50 ms according to different statistical distributions. Adaptation was measured for 28 listeners (9 men) and was quantified as the recognition improvement in the precursor relative to the no-precursor condition. Adaptation was largest for constant-level precursors and did not occur for highly fluctuating precursors, even when the two types of precursors had the same mean level and both activated the medial olivocochlear reflex. Instantaneous amplitude compression of the highly fluctuating precursor produced as much adaptation as the constant-level precursor did without compression. Together, results suggest that noise adaptation in speech recognition is probably mediated by neural dynamic range adaptation to the most frequent sound level. Further, they suggest that auditory peripheral compression per se, rather than the medial olivocochlear reflex, could facilitate noise adaptation by reducing the level fluctuations in the noise. SIGNIFICANCE STATEMENT: Recognizing speech in noise is challenging but can be facilitated by noise adaptation. The neural mechanisms underlying this adaptation remain unclear.
Here, we report some benefits of adaptation for word-in-noise recognition and show that (1) adaptation occurs for stationary but not for highly fluctuating precursors with equal mean level; (2) both stationary and highly fluctuating noises activate the medial olivocochlear reflex; and (3) adaptation occurs even for highly fluctuating precursors when the stimuli are passed through a fast amplitude compressor. These findings suggest that noise adaptation reflects neural dynamic range adaptation to the most frequent noise level and that auditory peripheral compression, rather than the medial olivocochlear reflex, could facilitate noise adaptation.
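The role attributed to compression above can be illustrated with a short sketch: applying instantaneous amplitude compression to a noise whose level changes every 50 ms shrinks its level fluctuations roughly in proportion to the compression exponent. The sampling rate, level distribution, and exponent below are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                      # sampling rate in Hz (assumed)
seg = int(0.050 * fs)           # 50-ms segments, as in the precursors above
n_seg = 40

# Noise whose level changes every 50 ms (level distribution is illustrative)
levels_db = rng.uniform(-30.0, 0.0, n_seg)
x = np.concatenate([10.0 ** (db / 20.0) * rng.standard_normal(seg)
                    for db in levels_db])

# Instantaneous amplitude compression (exponent is an assumed value)
c = 0.3
y = np.sign(x) * np.abs(x) ** c

def seg_levels_db(sig):
    """RMS level (dB) of each 50-ms segment."""
    s = sig.reshape(n_seg, seg)
    return 20.0 * np.log10(np.sqrt(np.mean(s ** 2, axis=1)))

spread_in = np.std(seg_levels_db(x))    # level fluctuation before compression
spread_out = np.std(seg_levels_db(y))   # ... and after
print(f"level spread: {spread_in:.1f} dB before, {spread_out:.1f} dB after")
```

Because a gain change of g dB in amplitude becomes roughly c·g dB after raising the amplitude to the power c, the segment-to-segment level spread shrinks by about the factor c, which is how compression can make a fluctuating precursor behave like a constant-level one.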


Subject(s)
Adaptation, Physiological; Noise; Speech Perception/physiology; Adult; Auditory Threshold/physiology; Female; Humans; Male; Neurons/physiology; Signal-To-Noise Ratio; Young Adult
3.
Ear Hear ; 41(6): 1492-1510, 2020.
Article in English | MEDLINE | ID: mdl-33136626

ABSTRACT

OBJECTIVES: Cochlear implant (CI) users continue to struggle to understand speech in noisy environments with current clinical devices. We have previously shown that this outcome can be improved by using binaural sound processors inspired by the medial olivocochlear (MOC) reflex, which involve dynamic (contralaterally controlled) rather than fixed compressive acoustic-to-electric maps. The present study aimed at investigating the potential additional benefits of using more realistic implementations of MOC processing. DESIGN: Eight users of bilateral CIs and two users of unilateral CIs participated in the study. Speech reception thresholds (SRTs) for sentences in competition with steady-state noise were measured in unilateral and bilateral listening modes. Stimuli were processed through two independently functioning sound processors (one per ear) with fixed compression, the current clinical standard (STD); the originally proposed MOC strategy with fast contralateral control of compression (MOC1); a MOC strategy with slower control of compression (MOC2); and a slower MOC strategy with comparatively greater contralateral inhibition in the lower-frequency than in the higher-frequency channels (MOC3). Performance with the four strategies was compared for multiple simulated spatial configurations of the speech and noise sources. Based on a previously published technical evaluation of these strategies, we hypothesized that SRTs would be overall better (lower) with the MOC3 strategy than with any of the other tested strategies. In addition, we hypothesized that the MOC3 strategy would be advantageous over the STD strategy in listening conditions and spatial configurations where the MOC1 strategy was not. RESULTS: In unilateral listening and when the implant ear had the worse acoustic signal-to-noise ratio, the mean SRT was 4 dB worse for the MOC1 than for the STD strategy (as expected), but it became equal or better for the MOC2 or MOC3 strategies than for the STD strategy.
In bilateral listening, mean SRTs were 1.6 dB better for the MOC3 strategy than for the STD strategy across all spatial configurations tested, including a condition with speech and noise sources colocated at front where the MOC1 strategy was slightly disadvantageous relative to the STD strategy. All strategies produced significantly better SRTs for spatially separated than for colocated speech and noise sources. A statistically significant binaural advantage (i.e., better mean SRTs across spatial configurations and participants in bilateral than in unilateral listening) was found for the MOC2 and MOC3 strategies but not for the STD or MOC1 strategies. CONCLUSIONS: Overall, performance was best with the MOC3 strategy, which maintained the benefits of the originally proposed MOC1 strategy over the STD strategy for spatially separated speech and noise sources and extended those benefits to additional spatial configurations. In addition, the MOC3 strategy provided a significant binaural advantage, which did not occur with the STD or the original MOC1 strategies.


Subject(s)
Cochlear Implantation; Cochlear Implants; Speech Perception; Humans; Reflex; Speech
4.
J Neurosci ; 38(17): 4138-4145, 2018 04 25.
Article in English | MEDLINE | ID: mdl-29593051

ABSTRACT

Sensory systems constantly adapt their responses to the current environment. In hearing, adaptation may facilitate communication in noisy settings, a benefit frequently (but controversially) attributed to the medial olivocochlear reflex (MOCR) enhancing the neural representation of speech. Here, we show that human listeners (N = 14; five male) recognize more words presented monaurally in ipsilateral, contralateral, and bilateral noise when they are given some time to adapt to the noise. This finding challenges models and theories that claim that speech intelligibility in noise is invariant over time. In addition, we show that this adaptation to the noise occurs also for words processed to maintain the slow amplitude modulations in speech (the envelope) while disregarding the faster fluctuations (the temporal fine structure). This demonstrates that noise adaptation reflects an enhancement of amplitude modulation speech cues and is unaffected by temporal fine structure cues. Last, we show that cochlear implant users (N = 7; four male) show normal monaural adaptation to ipsilateral noise. Because the electrical stimulation delivered by cochlear implants is independent of the MOCR, this demonstrates that noise adaptation does not require the MOCR. We argue that noise adaptation probably reflects adaptation of the dynamic range of auditory neurons to the noise level statistics. SIGNIFICANCE STATEMENT: People find it easier to understand speech in noisy environments when they are given some time to adapt to the noise. This benefit is frequently but controversially attributed to the medial olivocochlear efferent reflex enhancing the representation of speech cues in the auditory nerve. Here, we show that the adaptation to noise reflects an enhancement of the slow fluctuations in amplitude over time that are present in speech. In addition, we show that adaptation to noise for cochlear implant users is not statistically different from that for listeners with normal hearing.
Because the electrical stimulation delivered by cochlear implants is independent of the medial olivocochlear efferent reflex, this demonstrates that adaptation to noise does not require this reflex.


Subject(s)
Adaptation, Physiological; Cochlear Nucleus/physiology; Olivary Nucleus/physiology; Reflex; Speech Perception; Adult; Cochlear Implants; Cochlear Nucleus/cytology; Female; Humans; Male; Efferent Neurons/physiology; Noise; Olivary Nucleus/cytology
5.
J Acoust Soc Am ; 143(4): 2217, 2018 04.
Article in English | MEDLINE | ID: mdl-29716283

ABSTRACT

It has been recently shown that cochlear implant users could enjoy better speech reception in noise and enhanced spatial unmasking with binaural audio processing inspired by the inhibitory effects of the contralateral medial olivocochlear (MOC) reflex on compression [Lopez-Poveda, Eustaquio-Martin, Stohl, Wolford, Schatzer, and Wilson (2016). Ear Hear. 37, e138-e148]. The perceptual evidence supporting those benefits, however, is limited to a few target-interferer spatial configurations and to a particular implementation of contralateral MOC inhibition. Here, the short-term objective intelligibility index is used to (1) objectively demonstrate potential benefits over many more spatial configurations, and (2) investigate if the predicted benefits may be enhanced by using more realistic MOC implementations. Results corroborate the advantages and drawbacks of MOC processing indicated by the previously published perceptual tests. The results also suggest that the benefits may be enhanced and the drawbacks overcome by using longer time constants for the activation and deactivation of inhibition and, to a lesser extent, by using a comparatively greater inhibition in the lower than in the higher frequency channels. Compared to using two functionally independent processors, the better MOC processor improved the signal-to-noise ratio in the two ears between 1 and 6 decibels by enhancing head-shadow effects, and was advantageous for all tested target-interferer spatial configurations.


Subject(s)
Auditory Pathways/physiology; Cochlear Implants/standards; Deafness/rehabilitation; Reflex; Sound; Speech Perception/physiology; Cochlear Nerve/physiology; Humans; Olivary Nucleus/physiology; Perceptual Masking; Signal-To-Noise Ratio
6.
Ear Hear ; 37(3): e138-48, 2016.
Article in English | MEDLINE | ID: mdl-26862711

ABSTRACT

OBJECTIVES: In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably help listeners understand speech in noisy environments and are not available to the users of current cochlear implants (CIs). The aims of the present study are to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources. DESIGN: Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy or STD), the two processors in a pair functioned similarly to standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other and the amount of back-end compression in a given frequency channel of each processor in the pair decreased/increased dynamically (so that output levels dropped/increased) with increases/decreases in the output energy from the corresponding frequency channel in the contralateral processor. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech on the left ear with noise in front of the listener, and speech on the left ear with noise on the right ear.
In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors. RESULTS: In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable with the two strategies for colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved with increasing the spatial separation between the speech and noise sources regardless of the strategy but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing the speech-noise spatial separation only with the MOC strategy. CONCLUSIONS: The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, both in unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids.
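The contralaterally controlled compression described in the DESIGN can be sketched as a toy, single-channel model: each processor compresses its channel envelope, and its compression parameter is reduced as the output of the corresponding contralateral channel grows. The back-end function and the parameter values (c0, alpha) are illustrative assumptions, not the published implementation.

```python
import numpy as np

def back_end(env, c):
    """Logarithmic back-end compression of a channel envelope (0..1)."""
    return np.log(1.0 + c * env) / np.log(1.0 + c)

def moc_pair(env_left, env_right, c0=500.0, alpha=2.0):
    """One frequency channel of two coupled processors: the compression
    parameter on each side is reduced as the contralateral output grows,
    so output levels drop on the side receiving the weaker input.
    c0 and alpha are illustrative, not the published parameter values."""
    out_l, out_r = back_end(env_left, c0), back_end(env_right, c0)
    c_left = c0 / (1.0 + alpha * out_r)    # contralateral inhibition
    c_right = c0 / (1.0 + alpha * out_l)
    return back_end(env_left, c_left), back_end(env_right, c_right)

# A louder left-ear channel inhibits the right ear more than vice versa,
# enhancing the interaural level difference relative to fixed compression.
l_moc, r_moc = moc_pair(0.5, 0.1)
l_std, r_std = back_end(0.5, 500.0), back_end(0.1, 500.0)
print(f"ILD fixed: {l_std - r_std:.3f}, MOC-coupled: {l_moc - r_moc:.3f}")
```

In this toy version both outputs drop relative to fixed compression, but the weaker side drops more, which is the mechanism by which the MOC strategy enhances interaural level differences for spatially separated sources.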


Subject(s)
Cochlear Implantation; Cochlear Implants; Deafness/rehabilitation; Reflex; Speech Perception; Female; Humans; Male; Software
7.
Adv Exp Med Biol ; 894: 105-114, 2016.
Article in English | MEDLINE | ID: mdl-27080651

ABSTRACT

Our two ears do not function as fixed and independent sound receptors; their functioning is coupled and dynamically adjusted via the contralateral medial olivocochlear efferent reflex (MOCR). The MOCR possibly facilitates speech recognition in noisy environments. Such a role, however, is yet to be demonstrated because selective deactivation of the reflex during natural acoustic listening has not been possible for human subjects up until now. Here, we propose that this and other roles of the MOCR may be elucidated using the unique stimulus controls provided by cochlear implants (CIs). Pairs of sound processors were constructed to mimic or not mimic the effects of the contralateral MOCR with CIs. For the non-mimicking condition (STD strategy), the two processors in a pair functioned independently of each other. When configured to mimic the effects of the MOCR (MOC strategy), however, the two processors communicated with each other and the amount of compression in a given frequency channel of each processor in the pair decreased with increases in the output energy from the contralateral processor. The analysis of output signals from the STD and MOC strategies suggests that in natural binaural listening, the MOCR possibly causes a small reduction of audibility but enhances frequency-specific inter-aural level differences and the segregation of spatially non-overlapping sound sources. The proposed MOC strategy could improve the performance of CI and hearing-aid users.


Subject(s)
Cochlea/physiology; Cochlear Implants; Hearing/physiology; Reflex, Acoustic/physiology; Humans
8.
Trends Hear ; 28: 23312165241227818, 2024.
Article in English | MEDLINE | ID: mdl-38291713

ABSTRACT

The past decade has seen a wealth of research dedicated to determining which and how morphological changes in the auditory periphery contribute to people experiencing hearing difficulties in noise despite having clinically normal audiometric thresholds in quiet. Evidence from animal studies suggests that cochlear synaptopathy in the inner ear might lead to auditory nerve deafferentation, resulting in impoverished signal transmission to the brain. Here, we quantify the likely perceptual consequences of auditory deafferentation in humans via a physiologically inspired encoding-decoding model. The encoding stage simulates the processing of an acoustic input stimulus (e.g., speech) at the auditory periphery, while the decoding stage is trained to optimally regenerate the input stimulus from the simulated auditory nerve firing data. This allowed us to quantify the effect of different degrees of auditory deafferentation by measuring the extent to which the decoded signal supported the identification of speech in quiet and in noise. In a series of experiments, speech perception thresholds in quiet and in noise increased (worsened) significantly as a function of the degree of auditory deafferentation for modeled deafferentation greater than 90%. Importantly, this effect was significantly stronger in a noisy than in a quiet background. The encoding-decoding model thus captured the hallmark symptom of degraded speech perception in noise together with normal speech perception in quiet. As such, the model might function as a quantitative guide to evaluating the degree of auditory deafferentation in human listeners.
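A minimal sketch of the deafferentation idea: if each surviving fiber fires as a Poisson process driven by the stimulus envelope, a decoder that correlates the summed spike counts with the envelope degrades as fibers are removed. The fiber counts, firing rates, and the correlation "decoder" below are illustrative stand-ins for the model's actual encoding-decoding stages.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1000)                                           # 1-ms bins over 1 s
envelope = 0.5 * (1.0 + np.sin(2 * np.pi * 4 * t / 1000.0))   # 4-Hz toy envelope

def decode_quality(n_fibers, peak_rate=100.0):
    """Correlate the summed spike counts of a Poisson fiber population
    with the stimulus envelope -- a crude stand-in for the decoder."""
    rates = envelope * peak_rate / 1000.0          # expected spikes per 1-ms bin
    spikes = rng.poisson(rates, size=(n_fibers, t.size)).sum(axis=0)
    return np.corrcoef(spikes, envelope)[0, 1]

q_full = decode_quality(1000)   # intact population (hypothetical fiber count)
q_deaff = decode_quality(50)    # 95% deafferentation
print(f"decoding correlation: intact {q_full:.2f}, deafferented {q_deaff:.2f}")
```

The summed count's signal-to-noise ratio grows with the number of fibers (Poisson noise averages out), so reconstruction quality falls steeply only once the population is small, consistent with the >90% deafferentation threshold reported above.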


Subject(s)
Hearing Loss; Speech Perception; Animals; Humans; Auditory Threshold/physiology; Noise/adverse effects; Acoustic Stimulation; Auditory Perception/physiology
9.
Hear Res ; 441: 108917, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38061268

ABSTRACT

Previous studies have shown that in challenging listening situations, people find it hard to equally divide their attention between two simultaneous talkers and tend to favor one talker over the other. The aim here was to investigate whether talker onset/offset, sex and location determine the favored talker. Fifteen people with normal hearing were asked to recognize as many words as possible from two sentences uttered by two talkers located at -45° and +45° azimuth, respectively. The sentences were from the same corpus, were time-centered and had equal sound level. In Conditions 1 and 2, the talkers had different sexes (male at +45°), sentence duration was not controlled for, and sentences were presented at 65 and 35 dB SPL, respectively. Listeners favored the male over the female talker, even more so at 35 dB SPL (62 % vs 43 % word recognition, respectively) than at 65 dB SPL (74 % vs 64 %, respectively). The greater asymmetry in intelligibility at the lower level supports that divided listening is harder and more 'asymmetric' in challenging acoustic scenarios. Listeners continued to favor the male talker when the experiment was repeated with sentences of equal average duration for the two talkers (Condition 3). This suggests that the earlier onset or later offset of male sentences (52 ms on average) was not the reason for the asymmetric intelligibility in Conditions 1 or 2. When the location of the talkers was switched (Condition 4) or the two talkers were the same woman (Condition 5), listeners continued to favor the talker to their right, albeit non-significantly. Altogether, results confirm that in hard divided listening situations, listeners tend to favor the talker to their right. This preference is not affected by talker onset/offset delays less than 52 ms on average. Instead, the preference seems to be modulated by the voice characteristics of the talkers.


Subject(s)
Speech Perception; Voice; Humans; Male; Female; Speech Intelligibility; Language; Acoustics
10.
Hear Res ; 451: 109080, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-39004016

ABSTRACT

Auditory masking methods originally employed to assess behavioral frequency selectivity have evolved over the years to infer cochlear tuning. Behavioral forward masking thresholds for spectrally notched noise maskers and a fixed, low-level probe tone provide accurate estimates of cochlear tuning. Here, we use this method to investigate the effect of stimulus duration on human cochlear tuning at 500 Hz and 4 kHz. Probes were 20-ms sinusoids at 10 dB sensation level. Maskers were noises with a spectral notch symmetrically and asymmetrically placed around the probe frequency. For seven participants with normal hearing, masker levels at masking threshold were measured in forward masking for various notch widths and for masker durations of 30 and 400 ms. Measurements were fitted assuming rounded exponential filter shapes and the power spectrum model of masking, and equivalent rectangular bandwidths (ERBs) were inferred from the fits. At 4 kHz, masker thresholds were higher for the shorter maskers but ERBs were not significantly different for the two masker durations (ERB = 294 Hz for the 30-ms vs. 277 Hz for the 400-ms masker). At 500 Hz, by contrast, notched-noise curves were shallower for the 30-ms than the 400-ms masker, and ERBs were significantly broader for the shorter masker (ERB = 126 Hz for the 30-ms vs. 55 Hz for the 400-ms masker). We discuss possible factors that may underlie the duration effect at low frequencies and argue that it may not be possible to fully control for those factors. We conclude that tuning estimates are not affected by masker duration at high frequencies but should be measured and interpreted with caution at low frequencies.
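The tuning analysis above rests on the rounded-exponential (roex) filter and the power spectrum model. A minimal sketch, assuming the common symmetric one-parameter roex shape W(g) = (1 + p·g)·exp(-p·g), recovers the relation ERB = 4·f0/p by numerical integration; the 55-Hz value below is taken from the 400-ms, 500-Hz result reported above.

```python
import numpy as np

def roex(g, p):
    """Rounded-exponential filter weighting W(g) = (1 + p*g) * exp(-p*g),
    where g is the deviation from the probe frequency normalized by it."""
    return (1.0 + p * g) * np.exp(-p * g)

def erb_hz(f0, p, g_max=1.0, n=200001):
    """Equivalent rectangular bandwidth of a symmetric roex filter by
    numerical integration; analytically ERB = 4*f0/p since the one-sided
    integral of (1 + p*g)*exp(-p*g) is 2/p."""
    g = np.linspace(0.0, g_max, n)
    dg = g[1] - g[0]
    return 2.0 * f0 * np.sum(roex(g, p)) * dg

# p chosen so the analytic ERB matches the 55-Hz estimate reported above
f0 = 500.0
p = 4.0 * f0 / 55.0
print(f"ERB at {f0:.0f} Hz: {erb_hz(f0, p):.1f} Hz")
```

In the actual fits, separate slope parameters are typically allowed on the lower and upper skirts of the notch; the symmetric one-parameter form here is only the simplest instance of the model.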

11.
Hear Res ; 443: 108963, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38308936

ABSTRACT

Exposure to brief, intense sound can produce profound changes in the auditory system, from the internal structure of inner hair cells to reduced synaptic connections between the auditory nerves and the inner hair cells. Moreover, noisy environments can also lead to alterations in the auditory nerve or to processing changes in the auditory midbrain, all without affecting hearing thresholds. This so-called hidden hearing loss (HHL) has been shown in tinnitus patients and has been posited to account for hearing difficulties in noisy environments. However, much of the neuronal research thus far has investigated how HHL affects the response characteristics of individual fibres in the auditory nerve, as opposed to higher stations in the auditory pathway. Human models show that the auditory nerve encodes sound stochastically. Therefore, a sufficient reduction in nerve fibres could result in lowering the sampling of the acoustic scene below the minimum rate necessary to fully encode the scene, thus reducing the efficacy of sound encoding. Here, we examine how HHL affects the responses to frequency and intensity of neurons in the inferior colliculus of rats, and the duration and firing rate of those responses. Finally, we examine how shorter stimuli are encoded less effectively by the auditory midbrain than longer stimuli, and how this could lead to a clinical test for HHL.


Subject(s)
Hearing Loss, Noise-Induced; Inferior Colliculi; Humans; Rats; Animals; Inferior Colliculi/physiology; Noise/adverse effects; Auditory Threshold/physiology; Evoked Potentials, Auditory, Brain Stem/physiology; Cochlea
12.
Adv Exp Med Biol ; 787: 47-54, 2013.
Article in English | MEDLINE | ID: mdl-23716208

ABSTRACT

In binaural listening, the two cochleae do not act as independent sound receptors; their functioning is linked via the contralateral medial olivo-cochlear reflex (MOCR), which can be activated by contralateral sounds. The present study aimed at characterizing the effect of a contralateral white noise (CWN) on psychophysical tuning curves (PTCs). PTCs were measured in forward masking for probe frequencies of 500 Hz and 4 kHz, with and without CWN. The sound pressure level of the probe was fixed across conditions. PTCs for different response criteria were measured by using various masker-probe time gaps. The CWN had no significant effects on PTCs at 4 kHz. At 500 Hz, by contrast, PTCs measured with CWN appeared broader, particularly for short gaps, and they showed a decrease in the masker level. This decrease was greater the longer the masker-probe time gap. A computer model of forward masking with efferent control of cochlear gain was used to explain the data. The model accounted for the data based on the assumption that the sole effect of the CWN was to reduce the cochlear gain by ∼6.5 dB at 500 Hz for low and moderate levels. It also suggested that the pattern of data at 500 Hz is the result of combined broad bandwidth of compression and off-frequency listening. Results are discussed in relation to other physiological and psychoacoustical studies on the effect of activation of the MOCR on cochlear function.


Subject(s)
Auditory Perception/physiology; Cochlea/physiology; Computer Simulation; Models, Biological; Psychoacoustics; Acoustic Stimulation/methods; Behavior; Efferent Pathways/physiology; Functional Laterality/physiology; Humans; Perceptual Masking/physiology
13.
Hear Res ; 432: 108743, 2023 05.
Article in English | MEDLINE | ID: mdl-37003080

ABSTRACT

We have recently proposed a binaural sound pre-processing method to attenuate sounds contralateral to each ear and shown that it can improve speech intelligibility for normal-hearing (NH) people in simulated "cocktail party" listening situations (Lopez-Poveda et al., 2022, Hear Res 418:108,469). The aim here was to evaluate if this benefit remains for hearing-impaired listeners when the method is combined with two independently functioning hearing aids, one per ear. Twelve volunteers participated in the experiments; five of them had bilateral sensorineural hearing loss and seven were NH listeners with simulated bilateral conductive hearing loss. Speech reception thresholds (SRTs) for sentences in competition with a source of steady, speech-shaped noise were measured in unilateral and bilateral listening, and for (target, masker) azimuthal angles of (0°, 0°), (270°, 45°), and (270°, 90°). Stimuli were processed through a pair of software-based multichannel, fast-acting, wide dynamic range compressors, with and without binaural pre-processing. For spatially collocated target and masker sources at 0° azimuth, the pre-processing did not affect SRTs. For spatially separated target and masker sources, the pre-processing improved SRTs when listening bilaterally (improvements up to 10.7 dB) or unilaterally with the acoustically better ear (improvements up to 13.9 dB), while it worsened SRTs when listening unilaterally with the acoustically worse ear (decrements of up to 17.0 dB). Results show that binaural pre-processing for contralateral sound attenuation can improve speech-in-noise intelligibility in laboratory tests also for bilateral hearing-aid users.


Subject(s)
Cochlear Implants; Hearing Aids; Speech Perception; Humans; Speech Intelligibility; Noise/adverse effects; Hearing
14.
Hear Res ; 432: 108744, 2023 05.
Article in English | MEDLINE | ID: mdl-37004271

ABSTRACT

Computational models are useful tools to investigate scientific questions that would be complicated to address using an experimental approach. In the context of cochlear implants (CIs), being able to simulate the neural activity evoked by these devices could help in understanding their limitations to provide natural hearing. Here, we present a computational modelling framework to quantify the transmission of information from sound to spikes in the auditory nerve of a CI user. The framework includes a model to simulate the electrical current waveform sensed by each auditory nerve fiber (electrode-neuron interface), followed by a model to simulate the timing at which a nerve fiber spikes in response to a current waveform (auditory nerve fiber model). Information theory is then applied to determine the amount of information transmitted from a suitable reference signal (e.g., the acoustic stimulus) to a simulated population of auditory nerve fibers. As a use case example, the framework is applied to simulate published data on modulation detection by CI users obtained using direct stimulation via a single electrode. Current spread as well as the number of fibers were varied independently to illustrate the framework capabilities. Simulations reasonably matched experimental data and suggested that the encoded modulation information is proportional to the total neural response. They also suggested that amplitude modulation is well encoded in the auditory nerve for modulation rates up to 1000 Hz and that the variability in modulation sensitivity across CI users is partly because different CI users use different references for detecting modulation.


Subject(s)
Cochlear Implantation; Cochlear Implants; Acoustic Stimulation; Cochlear Nerve/physiology; Computer Simulation; Electric Stimulation; Evoked Potentials, Auditory/physiology
15.
Trends Hear ; 27: 23312165231213191, 2023.
Article in English | MEDLINE | ID: mdl-37956654

ABSTRACT

Older people often show auditory temporal processing deficits and speech-in-noise intelligibility difficulties even when their audiogram is clinically normal. The causes of such problems remain unclear. Some studies have suggested that for people with normal audiograms, age-related hearing impairments may be due to a cognitive decline, while others have suggested that they may be caused by cochlear synaptopathy. Here, we explore an alternative hypothesis, namely that age-related hearing deficits are associated with decreased inhibition. For human adults (N = 30) selected to cover a reasonably wide age range (25-59 years), with normal audiograms and normal cognitive function, we measured speech reception thresholds in noise (SRTNs) for disyllabic words, gap detection thresholds (GDTs), and frequency modulation detection thresholds (FMDTs). We also measured the rate of growth (slope) of auditory brainstem response wave-I amplitude with increasing level as an indirect indicator of cochlear synaptopathy, and the interference inhibition score in the Stroop color and word test (SCWT) as a proxy for inhibition. As expected, performance in the auditory tasks worsened (SRTNs, GDTs, and FMDTs increased), and wave-I slope and SCWT inhibition scores decreased with ageing. Importantly, SRTNs, GDTs, and FMDTs were not related to wave-I slope but worsened with decreasing SCWT inhibition. Furthermore, after partialling out the effect of SCWT inhibition, age was no longer related to SRTNs or GDTs and became less strongly related to FMDTs. Altogether, results suggest that for people with normal audiograms, age-related deficits in auditory temporal processing and speech-in-noise intelligibility are mediated by decreased inhibition rather than cochlear synaptopathy.
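The "partialling out" step above is a partial-correlation analysis. A minimal sketch, with synthetic data constructed so that the age effect is entirely mediated by inhibition (the variable names and noise levels are illustrative, not the study's data):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after partialling out z: correlate the
    residuals of linearly regressing x on z and y on z."""
    def residuals(v):
        slope, intercept = np.polyfit(z, v, 1)
        return v - (slope * z + intercept)
    return np.corrcoef(residuals(x), residuals(y))[0, 1]

# Synthetic mediation: inhibition declines with age, and speech reception
# thresholds in noise (SRTNs) worsen only through the loss of inhibition.
rng = np.random.default_rng(3)
age = rng.uniform(25.0, 59.0, 500)
inhibition = -age + rng.normal(0.0, 3.0, 500)
srtn = -inhibition + rng.normal(0.0, 3.0, 500)

r_raw = np.corrcoef(age, srtn)[0, 1]
r_partial = partial_corr(age, srtn, inhibition)
print(f"age vs SRTN: raw r = {r_raw:.2f}, partialled r = {r_partial:.2f}")
```

When mediation is complete, the raw age-SRTN correlation is strong but vanishes once the mediator is partialled out, which is the pattern the study reports for SRTNs and GDTs.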


Subject(s)
Presbycusis, Speech Perception, Adult, Humans, Aged, Middle Aged, Auditory Threshold/physiology, Cochlea, Hearing, Auditory Perception/physiology, Presbycusis/diagnosis, Evoked Potentials, Auditory, Brain Stem/physiology, Speech Perception/physiology
16.
Hear Res ; 418: 108469, 2022 05.
Article in English | MEDLINE | ID: mdl-35263696

ABSTRACT

Understanding speech presented in competition with other sounds can be challenging. Here, we reason that in free-field settings, this task can be facilitated by attenuating the sound field contralateral to each ear and propose to achieve this by linear subtraction of the weighted contralateral stimulus. We mathematically justify setting the weight equal to the ratio of ipsilateral to contralateral head-related transfer functions (HRTFs) averaged over an appropriate azimuth range. The algorithm is implemented in the frequency domain and evaluated technically and experimentally for normal-hearing listeners in simulated free-field conditions. Results show that (1) it can substantially improve the signal-to-noise ratio (up to 30 dB) and the short-term objective intelligibility in the ear ipsilateral to the target source, particularly for maskers with speech-like spectra; (2) it can improve speech reception thresholds (SRTs) for sentences in competition with speech-shaped noise by up to 8.5 dB in bilateral listening and 10.0 dB in unilateral listening; (3) for sentences in competition with speech maskers and in bilateral listening, it can improve SRTs by 2 to 5 dB, depending on the number and location of the masker sources; (4) it hardly affects virtual sound-source lateralization; and (5) the improvements and the algorithm's directivity pattern depend on the azimuth range used to calculate the weights. Contralateral HRTF-weighted subtraction may prove valuable for users of binaural hearing devices.
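The core of the weighted-subtraction idea can be sketched with a toy head model. The snippet below uses frequency-flat interaural gains (1.0 ipsilateral, 0.4 contralateral) instead of real HRTFs, so the weight reduces to a single constant; in the paper the weight is a per-frequency-bin ratio averaged over an azimuth range. All gains, angles, and names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
target = rng.normal(size=n)    # target talker on the listener's right
masker = rng.normal(size=n)    # masker on the listener's left

# Toy frequency-flat "HRTF" gains: ipsilateral path 1.0, contralateral 0.4
# (head shadow). Real HRTFs vary with frequency and azimuth.
h_ip, h_co = 1.0, 0.4
right = h_ip * target + h_co * masker   # ear ipsilateral to the target
left = h_co * target + h_ip * masker    # ear contralateral to the target

# Weighted subtraction in the frequency domain: R' = R - W * L,
# with W the HRTF ratio for the contralateral (masker) side.
W = h_co / h_ip

def clean(ipsi, contra):
    return np.fft.irfft(np.fft.rfft(ipsi) - W * np.fft.rfft(contra), n)

def snr_db(sig, noi):
    return 10 * np.log10(np.sum(sig ** 2) / (np.sum(noi ** 2) + 1e-20))

# Track target and masker components separately through the processing.
snr_before = snr_db(h_ip * target, h_co * masker)
snr_after = snr_db(clean(h_ip * target, h_co * target),
                   clean(h_co * masker, h_ip * masker))
```

In this idealized single-masker case the masker cancels exactly and the target is only mildly attenuated, so the SNR improvement is very large; with real HRTFs, estimation error, and multiple maskers the gains are the more modest values the abstract reports.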


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Cochlear Implantation/methods, Noise/adverse effects, Speech
17.
Hear Res ; 426: 108621, 2022 12.
Article in English | MEDLINE | ID: mdl-36182814

ABSTRACT

We report a theoretical study aimed at investigating the impact of cochlear synapse loss (synaptopathy) on the encoding of the envelope (ENV) and temporal fine structure (TFS) of sounds by the population of auditory nerve fibers. A computational model was used to simulate auditory-nerve spike trains evoked by sinusoidally amplitude-modulated (AM) tones at 10 Hz with various carrier frequencies and levels. The model included 16 cochlear channels with characteristic frequencies (CFs) from 250 Hz to 8 kHz. Each channel was innervated by 3, 4 and 10 fibers with low (LSR), medium (MSR), and high spontaneous rates (HSR), respectively. For each channel, spike trains were collapsed into three separate 'population' post-stimulus time histograms (PSTHs), one per fiber type. Information theory was applied to reconstruct the stimulus waveform, ENV, and TFS from one or more PSTHs in a mathematically optimal way. The quality of the reconstruction was regarded as an estimate of the information present in the PSTHs used. Various synaptopathy scenarios were simulated by removing fibers of specific types and/or cochlear regions before stimulus reconstruction. We found that the TFS was predominantly encoded by HSR fibers at all stimulus carrier frequencies and levels. The encoding of the ENV was more complex. At lower levels, the ENV was predominantly encoded by HSR fibers with CFs near the stimulus carrier frequency. At higher levels, the ENV was equally well or better encoded by HSR fibers with CFs different from the AM carrier frequency as by LSR fibers with CFs at the carrier frequency. Altogether, findings suggest that a healthy population of HSR fibers (i.e., including fibers with CFs around and remote from the AM carrier frequency) might be sufficient to encode the ENV and TFS over a wide range of stimulus levels. Findings are discussed regarding their relevance for diagnosing synaptopathy using non-invasive ENV- and TFS-based measures.
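The step of collapsing spike trains into population PSTHs that still carry the stimulus envelope can be sketched with toy inhomogeneous-Poisson fibers. The fiber counts (10 HSR, 3 LSR) follow the abstract, but the rates, gains, and bin width below are invented for illustration and are not the parameters of the actual auditory-nerve model.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000                          # 1-ms PSTH bins
t = np.arange(0, 0.5, 1 / fs)      # 500-ms AM stimulus
fm = 10.0                          # 10-Hz modulation, as in the study
env = 0.5 * (1 + np.sin(2 * np.pi * fm * t))   # stimulus envelope (ENV)

def population_psth(n_fibers, spont, gain):
    """Sum spike counts across fibers of one type into one population PSTH."""
    rate = (spont + gain * env) / fs           # expected spikes per bin, per fiber
    spikes = rng.poisson(lam=rate, size=(n_fibers, rate.size))
    return spikes.sum(axis=0)                  # collapse across fibers

psth_hsr = population_psth(n_fibers=10, spont=60.0, gain=150.0)   # high spont. rate
psth_lsr = population_psth(n_fibers=3, spont=0.1, gain=80.0)      # low spont. rate

# The 10-Hz ENV is recoverable from the population PSTH; a simple check is
# the correlation between the PSTH and the true envelope.
r = np.corrcoef(psth_hsr, env)[0, 1]
```

The paper goes further and reconstructs the waveform, ENV, and TFS optimally (via information theory) from one or more such PSTHs; the correlation here is just a minimal stand-in for that reconstruction quality.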


Asunto(s)
Humans, Cochlear Nerve/physiology, Cochlea/physiology, Sound, Acoustic Stimulation
18.
Hear Res ; 416: 108444, 2022 03 15.
Article in English | MEDLINE | ID: mdl-35078133

ABSTRACT

Verbal communication in social environments often requires dividing attention between two or more simultaneous talkers. The ability to do this, however, may be diminished when the listener has limited access to acoustic cues or those cues are degraded, as is the case for hearing-impaired listeners or users of cochlear implants or hearing aids. The aim of the present study was to investigate the ability of normal-hearing (NH) listeners to divide their attention and recognize speech from two simultaneous talkers in simulated free-field listening conditions, with and without reduced acoustic cues. Participants (N = 11 or 12 depending on the experiment) were asked to recognize and repeat as many words as possible from two simultaneous, time-centered sentences uttered by a male and a female talker. In Experiment 1, the female and male talkers were located at −15° and +15°, −45° and +45°, or −90° and +90° azimuth, respectively. Speech was natural or processed through a noise vocoder and was presented at a comfortable loudness level (∼65 dB SPL). In Experiment 2, the female and male talkers were located at −45° and +45° azimuth, respectively. Speech was natural but was presented at a lower level (35 dB SPL) to reduce audibility. In Experiment 3, speech was vocoded and presented at a comfortable loudness level (∼65 dB SPL), but the location of the talkers was switched relative to Experiment 1 (i.e., the male and female talkers were at −45° and +45°, respectively) to reveal possible interactions of talker sex and location. Overall, listeners recognized more natural words at a comfortable loudness level (76%) than vocoded words at a similar level (39%) or natural words at a lower level (43%). This indicates that recognition was more difficult for the latter two stimuli.
On the other hand, listeners recognized roughly the same proportion of words (76%) from the two talkers when speech was natural and comfortable in loudness, but a greater proportion of words from the male than from the female talker when speech was vocoded (50% vs 27%, respectively) or was natural but lower in level (55% vs 32%, respectively). This asymmetry occurred, and was of similar size, for all three spatial configurations. These results suggest that divided listening becomes asymmetric when speech cues are reduced. They also suggest that listeners preferentially recognized the male talker, who was located on the right side of the head. Switching the talkers' locations produced similar recognition for the two talkers for vocoded speech, suggesting an interaction between the talkers' locations and their speech characteristics. For natural speech at a comfortable loudness level, listeners can divide their attention almost equally between two simultaneous talkers. When speech cues are limited (as is the case for vocoded speech or for speech at a low sensation level), by contrast, the ability to divide attention equally between talkers is diminished and listeners favor one of the talkers based on their location, sex, and/or speech characteristics. Findings are discussed in the context of limited cognitive capacity affecting divided listening in difficult listening situations.
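A minimal noise vocoder of the kind used here to degrade speech cues can be sketched as follows: split the signal into log-spaced bands, extract each band's temporal envelope, and use it to modulate band-limited noise. Filter shapes, band count, and the envelope smoother below are illustrative assumptions, not the study's exact vocoder settings.

```python
import numpy as np

def noise_vocode(x, fs, n_bands=8, fmin=100.0, fmax=7000.0, seed=0):
    """Toy noise vocoder: per band, envelope (rectify + moving average)
    times band-limited noise. Keeps ENV, discards fine structure."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(fmin, fmax, n_bands + 1)    # log-spaced band edges
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    X = np.fft.rfft(x)
    N = np.fft.rfft(rng.normal(size=len(x)))         # broadband noise carrier
    win = int(0.01 * fs)                             # ~10-ms envelope smoother
    kernel = np.ones(win) / win
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)          # brick-wall band filter
        band = np.fft.irfft(X * mask, len(x))
        env = np.convolve(np.abs(band), kernel, mode="same")
        carrier = np.fft.irfft(N * mask, len(x))
        out += env * carrier
    return out

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 500 * t) * (t < 0.25)   # 500-Hz tone burst in the first half
y = noise_vocode(x, fs)
# The output preserves the temporal envelope (energy concentrated in the
# first half) while the fine structure is replaced by noise.
```

This is why vocoded speech remains partly intelligible (the envelope survives) while cues such as voice pitch, which listeners may use to separate the male and female talkers, are degraded.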


Subject(s)
Cochlear Implants, Speech Perception, Acoustic Stimulation, Acoustics, Cues, Female, Humans, Male, Speech
19.
iScience ; 24(6): 102658, 2021 Jun 25.
Article in English | MEDLINE | ID: mdl-34151241

ABSTRACT

Central gain compensation for reduced auditory nerve output has been hypothesized as a mechanism for tinnitus with a normal audiogram. Here, we investigate if gain compensation occurs with aging. For 94 people (aged 12-68 years, 64 women, 7 with tinnitus) with normal or close-to-normal audiograms, the amplitude of wave I of the auditory brainstem response decreased with increasing age but was not correlated with wave V amplitude after accounting for age-related subclinical hearing loss and cochlear damage, a result indicative of age-related gain compensation. The correlations between age and wave I/III or III/V amplitude ratios suggested that compensation occurs at the wave III generator site. For each of the seven participants with non-pulsatile tinnitus, the amplitudes of waves I and V and the wave I/V amplitude ratio were well within the confidence limits of the non-tinnitus participants. We conclude that increased central gain occurs with aging and is not specific to tinnitus.
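The final check, whether each tinnitus participant falls within the confidence limits of the non-tinnitus group, can be sketched with a simple mean ± 1.96 SD band on the wave I/V amplitude ratio. All amplitude values below are synthetic and purely illustrative; the study additionally accounted for age and subclinical cochlear damage, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic ABR wave amplitudes (µV) for the 87 non-tinnitus participants.
wave_i_ctrl = rng.normal(0.30, 0.08, size=87)
wave_v_ctrl = rng.normal(0.50, 0.10, size=87)
ratio_ctrl = wave_i_ctrl / wave_v_ctrl

# Synthetic wave I/V ratios for the 7 tinnitus participants.
ratio_tin = np.array([0.55, 0.62, 0.58, 0.60, 0.57, 0.63, 0.59])

# 95% confidence band of the non-tinnitus distribution (mean +/- 1.96 SD).
lo = ratio_ctrl.mean() - 1.96 * ratio_ctrl.std(ddof=1)
hi = ratio_ctrl.mean() + 1.96 * ratio_ctrl.std(ddof=1)
within = (ratio_tin >= lo) & (ratio_tin <= hi)
```

If every tinnitus ratio lies inside the band, as in the study's data, there is no evidence that tinnitus participants differ from the ageing-related pattern of the rest of the group.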

20.
Front Neurosci ; 15: 640127, 2021.
Article in English | MEDLINE | ID: mdl-33664649

ABSTRACT

The roles of the medial olivocochlear reflex (MOCR) in human hearing have been widely investigated but remain controversial. We reason that this may be because the effects of MOCR activation on cochlear mechanical responses can be assessed only indirectly in healthy humans, and the different methods used to assess those effects possibly yield different and/or unreliable estimates. One aim of this study was to investigate the correlation between three methods often employed to assess the strength of MOCR activation by contralateral acoustic stimulation (CAS). We measured tone detection thresholds (N = 28), click-evoked otoacoustic emission (CEOAE) input/output (I/O) curves (N = 18), and distortion-product otoacoustic emission (DPOAE) I/O curves (N = 18) for various test frequencies in the presence and the absence of CAS (broadband noise of 60 dB SPL). As expected, CAS worsened tone detection thresholds, suppressed CEOAEs and DPOAEs, and horizontally shifted CEOAE and DPOAE I/O curves to higher levels. However, the CAS effect on tone detection thresholds was not correlated with the horizontal shift of CEOAE or DPOAE I/O curves, and the CAS-induced CEOAE suppression was not correlated with DPOAE suppression. Only the horizontal shifts of CEOAE and DPOAE I/O functions were correlated with each other at 1.5, 2, and 3 kHz. A second aim was to investigate which of the methods is more reliable. The test-retest variability of the CAS effect was high overall but smallest for tone detection thresholds and CEOAEs, suggesting that their use should be prioritized over the use of DPOAEs. Many factors not related to the MOCR, including the limited parametric space studied, the low resolution of the I/O curves, and the reduced number of observations due to data exclusion, likely contributed to the weak correlations and the large test-retest variability noted.
These findings can help us understand the inconsistencies among past studies and improve our understanding of the functional significance of the MOCR.
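The horizontal shift of an I/O curve induced by CAS can be estimated by finding the level offset that best aligns the with-CAS curve with the no-CAS curve. The sketch below does this by grid search over a toy compressive I/O function sampled in coarse 5-dB steps (one of the resolution limitations the abstract mentions); the function, noise level, and true shift are illustrative assumptions, not the study's data or fitting procedure.

```python
import numpy as np

def io(levels_db):
    """Toy compressive OAE input/output function (dB out vs dB in)."""
    return 10 * np.log10(1 + 10 ** (0.03 * levels_db))

levels = np.arange(30.0, 81.0, 5.0)   # coarse 5-dB steps, low I/O resolution
true_shift = 6.0                      # CAS shifts the curve to higher levels
oae_no_cas = io(levels)
rng = np.random.default_rng(4)
oae_cas = io(levels - true_shift) + 0.2 * rng.normal(size=levels.size)

# Grid-search the horizontal shift that best aligns the two curves,
# restricting the fit to levels where the shifted abscissa stays in range.
fit = levels >= 45
candidates = np.arange(0.0, 15.1, 0.1)
errs = [np.mean((np.interp(levels[fit] - s, levels, oae_no_cas)
                 - oae_cas[fit]) ** 2) for s in candidates]
est = candidates[int(np.argmin(errs))]
```

With finer level steps and lower measurement noise the estimate tightens, which is one concrete way the "low resolution of the I/O curves" could inflate the test-retest variability of this measure.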
