Results 1 - 20 of 23,282
1.
Physiol Rev ; 103(2): 1025-1058, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36049112

ABSTRACT

Adaptation is an essential feature of auditory neurons, which reduces their responses to unchanging and recurring sounds and allows their response properties to be matched to the constantly changing statistics of sounds that reach the ears. As a consequence, processing in the auditory system highlights novel or unpredictable sounds and produces an efficient representation of the vast range of sounds that animals can perceive by continually adjusting the sensitivity and, to a lesser extent, the tuning properties of neurons to the most commonly encountered stimulus values. Together with attentional modulation, adaptation to sound statistics also helps to generate neural representations of sound that are tolerant to background noise and therefore plays a vital role in auditory scene analysis. In this review, we consider the diverse forms of adaptation that are found in the auditory system in terms of the processing levels at which they arise, the underlying neural mechanisms, and their impact on neural coding and perception. We also ask what the dynamics of adaptation, which can occur over multiple timescales, reveal about the statistical properties of the environment. Finally, we examine how adaptation to sound statistics is influenced by learning and experience and changes as a result of aging and hearing loss.


Subject(s)
Auditory Cortex , Animals , Acoustic Stimulation , Auditory Cortex/physiology , Auditory Perception/physiology , Noise , Adaptation, Physiological/physiology
2.
Proc Natl Acad Sci U S A ; 121(15): e2314763121, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38557194

ABSTRACT

Although sudden sensorineural hearing loss (SSNHL) is a serious condition, there are currently no approved drugs for its treatment. Nevertheless, there is a growing understanding that the cochlear pathologies that underlie SSNHL include apoptotic death of sensory outer hair cells (OHCs) as well as loss of ribbon synapses connecting sensory inner hair cells (IHCs) and neurites of the auditory nerve, designated synaptopathy. Noise-induced hearing loss (NIHL) is a common subtype of SSNHL and is widely used to model hearing loss preclinically. Here, we demonstrate that a single application of a small pyridoindole molecule (AC102) into the middle ear restored auditory function almost to prenoise levels in a guinea pig model of NIHL. AC102 prevented noise-triggered loss of OHCs and reduced IHC synaptopathy, suggesting a role of AC102 in reconnecting auditory neurons to their sensory target cells. Notably, AC102 exerted its therapeutic properties over a wide frequency range. Such strong improvements in hearing have not previously been demonstrated for other therapeutic agents. In vitro experiments in a neuronal damage model revealed that AC102 protected cells from apoptosis and promoted neurite growth. These effects may be explained by increased production of adenosine triphosphate, indicating improved mitochondrial function, and reduced levels of reactive oxygen species, which prevents the apoptotic processes responsible for OHC death. This action profile of AC102 may account for the observed hearing recovery in in vivo models.


Subject(s)
Hearing Loss, Noise-Induced , Hearing Loss, Sensorineural , Guinea Pigs , Animals , Hearing , Cochlea , Noise/adverse effects , Hair Cells, Auditory, Outer/physiology , Auditory Threshold
3.
Proc Natl Acad Sci U S A ; 121(8): e2310561121, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38354264

ABSTRACT

Exposure to loud noise triggers sensory organ damage and degeneration that, in turn, leads to hearing loss. Despite the troublesome impact of noise-induced hearing loss (NIHL) in individuals and societies, treatment strategies that protect and restore hearing are few and insufficient. As such, identification and mechanistic understanding of the signaling pathways involved in NIHL are required. Biological zinc is mostly bound to proteins, where it plays major structural or catalytic roles; however, there is also a pool of unbound, mobile (labile) zinc. Labile zinc is mostly found in vesicles in secretory tissues, where it is released and plays a critical signaling role. In the brain, labile zinc fine-tunes neurotransmission and sensory processing. However, injury-induced dysregulation of labile zinc signaling contributes to neurodegeneration. Here, we tested whether zinc dysregulation occurs and contributes to NIHL in mice. We found that ZnT3, the vesicular zinc transporter responsible for loading zinc into vesicles, is expressed in cochlear hair cells and the spiral limbus, with labile zinc also present in the same areas. Soon after noise trauma, ZnT3 and zinc levels are significantly increased, and their subcellular localization is vastly altered. Disruption of zinc signaling, either via ZnT3 deletion or pharmacological zinc chelation, mitigated NIHL, as evidenced by enhanced auditory brainstem responses, distortion product otoacoustic emissions, and number of hair cell synapses. These data reveal that noise-induced zinc dysregulation is associated with cochlear dysfunction and recovery after NIHL, and point to zinc chelation as a potential treatment for mitigating NIHL.


Subject(s)
Hearing Loss, Noise-Induced , Mice , Animals , Hearing Loss, Noise-Induced/drug therapy , Zinc , Cochlea , Noise/adverse effects , Hearing , Evoked Potentials, Auditory, Brain Stem/physiology , Auditory Threshold
4.
Development ; 150(24)2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38032004

ABSTRACT

During development, cells are subject to stochastic fluctuations in their positions (i.e. cell-level noise) that can potentially lead to morphological noise (i.e. stochastic differences between morphologies that are expected to be equal, e.g. the right and left sides of bilateral organisms). In this study, we explore new and existing hypotheses on buffering mechanisms against cell-level noise. Many of these hypotheses focus on how the boundaries between territories of gene expression remain regular and well defined, despite cell-level noise and division. We study these hypotheses and how irregular territory boundaries lead to morphological noise. To determine the consistency of the different hypotheses, we use a general computational model of development: EmbryoMaker. EmbryoMaker can implement arbitrary gene networks regulating basic cell behaviors (contraction, adhesion, etc.), signaling and tissue biomechanics. We found that buffering mechanisms based on the orientation of cell divisions cannot lead to regular boundaries but that other buffering mechanisms can (homotypic adhesion, planar contraction, non-dividing boundaries, constant signaling and majority rule hypotheses). We also explore the effects of the shape and size of the territories on morphological noise.
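The majority-rule hypothesis named in this abstract can be illustrated with a toy model (not EmbryoMaker itself): cells in a two-territory binary grid adopt the identity held by the majority of their 3x3 neighborhood, which tends to smooth out a boundary roughened by cell-level noise. The grid size, noise level, and roughness measure below are illustrative assumptions, not values from the study.

```python
import numpy as np

def majority_rule_step(grid):
    """One synchronous update: each cell adopts the majority identity
    (0 or 1) within its 3x3 neighborhood; ties keep the current identity."""
    h, w = grid.shape
    out = grid.copy()
    for i in range(h):
        for j in range(w):
            nb = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            ones = int(nb.sum())
            zeros = nb.size - ones
            if ones > zeros:
                out[i, j] = 1
            elif zeros > ones:
                out[i, j] = 0
    return out

def boundary_roughness(grid):
    """Count of horizontally adjacent cell pairs with differing identity,
    a crude proxy for how irregular the territory boundary is."""
    return int((grid[:, :-1] != grid[:, 1:]).sum())

rng = np.random.default_rng(0)
grid = np.zeros((20, 20), dtype=int)
grid[:, 10:] = 1                               # two territories, boundary at column 10
flip = (rng.random(grid.shape) < 0.15).astype(int)
grid[:, 8:12] ^= flip[:, 8:12]                 # cell-level noise confined to the boundary zone

rough_before = boundary_roughness(grid)
for _ in range(5):
    grid = majority_rule_step(grid)
rough_after = boundary_roughness(grid)
print(rough_before, rough_after)
```

A perfectly straight boundary on a 20-row grid has roughness 20 (one transition per row); the majority rule pulls the noisy boundary back toward that floor.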


Subject(s)
Gene Regulatory Networks , Signal Transduction , Cell Division , Noise , Biomechanical Phenomena , Stochastic Processes
5.
PLoS Biol ; 21(12): e3002410, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38064502

ABSTRACT

Perception is known to cycle through periods of enhanced and reduced sensitivity to external information. Here, we asked whether such slow fluctuations arise as a noise-related epiphenomenon of limited processing capacity or, alternatively, represent a structured mechanism of perceptual inference. Using 2 large-scale datasets, we found that humans and mice alternate between externally and internally oriented modes of sensory analysis. During external mode, perception aligns more closely with the external sensory information, whereas internal mode is characterized by enhanced biases toward perceptual history. Computational modeling indicated that dynamic changes in mode are enabled by 2 interlinked factors: (i) the integration of subsequent inputs over time and (ii) slow antiphase oscillations in the impact of external sensory information versus internal predictions that are provided by perceptual history. We propose that between-mode fluctuations generate unambiguous error signals that enable optimal inference in volatile environments.


Subject(s)
Noise , Sensation , Humans , Animals , Mice , Perception
6.
PLoS Biol ; 21(12): e3002366, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38091351

ABSTRACT

Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: Middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.


Subject(s)
Auditory Cortex , Neural Networks, Computer , Brain , Hearing , Auditory Perception/physiology , Noise , Auditory Cortex/physiology
7.
Nature ; 587(7835): 605-609, 2020 11.
Article in English | MEDLINE | ID: mdl-33177710

ABSTRACT

Expansion of anthropogenic noise and night lighting across our planet [1,2] is of increasing conservation concern [3-6]. Despite growing knowledge of physiological and behavioural responses to these stimuli from single-species and local-scale studies, whether these pollutants affect fitness is less clear, as is how and why species vary in their sensitivity to these anthropic stressors. Here we leverage a large citizen science dataset paired with high-resolution noise and light data from across the contiguous United States to assess how these stimuli affect reproductive success in 142 bird species. We find responses to both sensory pollutants linked to the functional traits and habitat affiliations of species. For example, overall nest success was negatively correlated with noise among birds in closed environments. Species-specific changes in reproductive timing and hatching success in response to noise exposure were explained by vocalization frequency, nesting location and diet. Additionally, increased light-gathering ability of species' eyes was associated with stronger advancements in reproductive timing in response to light exposure, potentially creating phenological mismatches [7]. Unexpectedly, better light-gathering ability was linked to reduced clutch failure and increased overall nest success in response to light exposure, raising important questions about how responses to sensory pollutants counteract or exacerbate responses to other aspects of global change, such as climate warming. These findings demonstrate that anthropogenic noise and light can substantially affect breeding bird phenology and fitness, and underscore the need to consider sensory pollutants alongside traditional dimensions of the environment that typically inform biodiversity conservation.


Subject(s)
Birds/physiology , Lighting/adverse effects , Noise/adverse effects , Reproduction/radiation effects , Animals , Birds/classification , Citizen Science , Clutch Size/radiation effects , Confined Spaces , Datasets as Topic , Diet/veterinary , Ecosystem , Female , Geographic Mapping , Male , Nesting Behavior/physiology , Nesting Behavior/radiation effects , Ocular Physiological Phenomena/radiation effects , Reproduction/physiology , Species Specificity , United States , Vocalization, Animal/radiation effects
8.
Proc Natl Acad Sci U S A ; 120(29): e2301463120, 2023 07 18.
Article in English | MEDLINE | ID: mdl-37428927

ABSTRACT

Auditory perception is traditionally conceived as the perception of sounds: a friend's voice, a clap of thunder, a minor chord. However, daily life also seems to present us with experiences characterized by the absence of sound: a moment of silence, a gap between thunderclaps, the hush after a musical performance. In these cases, do we positively hear silence? Or do we just fail to hear, and merely judge or infer that it is silent? This longstanding question remains controversial in both the philosophy and science of perception, with prominent theories holding that sounds are the only objects of auditory experience and thus that our encounter with silence is cognitive, not perceptual. However, this debate has largely remained theoretical, without a key empirical test. Here, we introduce an empirical approach to this theoretical dispute, presenting experimental evidence that silence can be genuinely perceived (not just cognitively inferred). We ask whether silences can "substitute" for sounds in event-based auditory illusions: empirical signatures of auditory event representation in which auditory events distort perceived duration. Seven experiments introduce three "silence illusions" (the one-silence-is-more illusion, silence-based warping, and the oddball-silence illusion), each adapted from a prominent perceptual illusion previously thought to arise only from sounds. Subjects were immersed in ambient noise interrupted by silences structurally identical to the sounds in the original illusions. In all cases, silences elicited temporal distortions perfectly analogous to the illusions produced by sounds. Our results suggest that silence is truly heard, not merely inferred, introducing a general approach for studying the perception of absence.


Subject(s)
Illusions , Humans , Noise , Sound , Auditory Perception , Hearing , Acoustic Stimulation/methods
9.
Proc Natl Acad Sci U S A ; 120(49): e2309166120, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38032934

ABSTRACT

Neural speech tracking has advanced our understanding of how our brains rapidly map an acoustic speech signal onto linguistic representations and ultimately meaning. It remains unclear, however, how speech intelligibility is related to the corresponding neural responses. Many studies addressing this question vary the level of intelligibility by manipulating the acoustic waveform, but this makes it difficult to cleanly disentangle the effects of intelligibility from underlying acoustical confounds. Here, using magnetoencephalography recordings, we study neural measures of speech intelligibility by manipulating intelligibility while keeping the acoustics strictly unchanged. Acoustically identical degraded speech stimuli (three-band noise-vocoded, ~20 s duration) are presented twice, but the second presentation is preceded by the original (nondegraded) version of the speech. This intermediate priming, which generates a "pop-out" percept, substantially improves the intelligibility of the second degraded speech passage. We investigate how intelligibility and acoustical structure affect acoustic and linguistic neural representations using multivariate temporal response functions (mTRFs). As expected, behavioral results confirm that perceived speech clarity is improved by priming. mTRF analysis reveals that auditory (speech envelope and envelope onset) neural representations are not affected by priming but only by the acoustics of the stimuli (bottom-up driven). Critically, our findings suggest that segmentation of sounds into words emerges with better speech intelligibility, and most strongly at the later (~400 ms latency) word processing stage, in prefrontal cortex, in line with engagement of top-down mechanisms associated with priming. Taken together, our results show that word representations may provide some objective measures of speech comprehension.
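The mTRF approach described here is, at its core, regularized lagged regression from stimulus features (e.g., the speech envelope) to the neural response. A minimal single-feature sketch, using a synthetic known kernel rather than real MEG data; the sampling rate, lag range, and ridge parameter are illustrative assumptions:

```python
import numpy as np

def lagged_design(stim, lags):
    """Design matrix whose columns are time-shifted copies of the stimulus,
    one column per (non-negative) lag in samples."""
    n = stim.size
    X = np.zeros((n, len(lags)))
    for k, lag in enumerate(lags):
        X[lag:, k] = stim[:n - lag]
    return X

def fit_trf(stim, resp, lags, ridge=1.0):
    """Ridge-regularized least squares, the standard TRF estimator:
    w = (X'X + aI)^-1 X'y."""
    X = lagged_design(stim, lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ resp)

rng = np.random.default_rng(1)
fs = 100                                   # Hz, an assumed toy sampling rate
stim = rng.standard_normal(20 * fs)        # stand-in for a speech envelope
lags = np.arange(30)                       # 0-290 ms of response lags
true_trf = np.exp(-lags / 8.0) * np.sin(lags / 3.0)   # arbitrary known kernel
resp = lagged_design(stim, lags) @ true_trf + 0.5 * rng.standard_normal(stim.size)

est = fit_trf(stim, resp, lags)
corr = float(np.corrcoef(est, true_trf)[0, 1])
print(round(corr, 3))
```

With enough data relative to the noise, the estimated kernel closely recovers the true one; real analyses cross-validate the ridge parameter and use multiple stimulus features at once.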


Subject(s)
Speech Intelligibility , Speech Perception , Speech Intelligibility/physiology , Acoustic Stimulation/methods , Speech/physiology , Noise , Acoustics , Magnetoencephalography/methods , Speech Perception/physiology
10.
J Neurosci ; 44(3)2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38050093

ABSTRACT

Human visual performance for basic visual dimensions (e.g., contrast sensitivity and acuity) peaks at the fovea and decreases with eccentricity. The eccentricity effect is related to the larger visual cortical surface area corresponding to the fovea, but it is unknown if differential feature tuning contributes to this eccentricity effect. Here, we investigated two system-level computations underlying the eccentricity effect: featural representation (tuning) and internal noise. Observers (both sexes) detected a Gabor embedded in filtered white noise which appeared at the fovea or one of four perifoveal locations. We used psychophysical reverse correlation to estimate the weights assigned by the visual system to a range of orientations and spatial frequencies (SFs) in noisy stimuli, which are conventionally interpreted as perceptual sensitivity to the corresponding features. We found higher sensitivity to task-relevant orientations and SFs at the fovea than at the perifovea, and no difference in selectivity for either orientation or SF. Concurrently, we measured response consistency using a double-pass method, which allowed us to infer the level of internal noise by implementing a noisy observer model. We found lower internal noise at the fovea than at the perifovea. Finally, individual variability in contrast sensitivity correlated with sensitivity to and selectivity for task-relevant features as well as with internal noise. Moreover, the behavioral eccentricity effect mainly reflects the foveal advantage in orientation sensitivity compared with other computations. These findings suggest that the eccentricity effect stems from a better representation of task-relevant features and lower internal noise at the fovea than at the perifovea.
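The double-pass logic used to infer internal noise can be sketched with a simulated noisy observer: external noise is part of the stimulus and thus frozen across the two passes, while internal noise is redrawn on each pass, so response consistency falls as internal noise grows. All parameter values below are illustrative assumptions, not those of the study.

```python
import numpy as np

def double_pass_consistency(internal_sd, n_trials=20000, signal=0.5,
                            external_sd=1.0, seed=2):
    """Yes/no noisy observer seeing each noisy stimulus twice. The external
    noise sample is identical across the two passes (it is part of the
    stimulus); internal noise is drawn fresh on each pass. Returns the
    proportion of trials with the same response on both passes."""
    rng = np.random.default_rng(seed)
    evidence = signal + rng.standard_normal(n_trials) * external_sd
    r1 = evidence + rng.standard_normal(n_trials) * internal_sd > 0
    r2 = evidence + rng.standard_normal(n_trials) * internal_sd > 0
    return float((r1 == r2).mean())

c_low_noise = double_pass_consistency(internal_sd=0.2)   # little internal noise
c_high_noise = double_pass_consistency(internal_sd=2.0)  # internal noise dominates
print(round(c_low_noise, 3), round(c_high_noise, 3))
```

Fitting the observed consistency (and accuracy) of a real observer against such simulations is what lets the internal-to-external noise ratio be estimated.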


Subject(s)
Contrast Sensitivity , Visual Cortex , Male , Female , Humans , Orientation/physiology , Visual Cortex/physiology , Fovea Centralis/physiology , Noise
11.
Cereb Cortex ; 34(3)2024 03 01.
Article in English | MEDLINE | ID: mdl-38494418

ABSTRACT

Listeners can use prior knowledge to predict the content of noisy speech signals, enhancing perception. However, this process can also elicit misperceptions. For the first time, we employed a prime-probe paradigm and transcranial magnetic stimulation to investigate causal roles for the left and right posterior superior temporal gyri (pSTG) in the perception and misperception of degraded speech. Listeners were presented with spectrotemporally degraded probe sentences preceded by a clear prime. To produce misperceptions, we created partially mismatched pseudo-sentence probes via homophonic nonword transformations (e.g. The little girl was excited to lose her first tooth → Tha fittle girmn wam expited du roos har derst cooth). Compared to a control site (vertex), inhibitory stimulation of the left pSTG selectively disrupted priming of real but not pseudo-sentences. Conversely, inhibitory stimulation of the right pSTG enhanced priming of misperceptions with pseudo-sentences, but did not influence perception of real sentences. These results indicate qualitatively different causal roles for the left and right pSTG in perceiving degraded speech, supporting bilateral models that propose engagement of the right pSTG in sublexical processing.


Subject(s)
Language , Speech , Humans , Female , Speech/physiology , Temporal Lobe , Transcranial Magnetic Stimulation , Noise
12.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38715408

ABSTRACT

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses could serve as an indicator of aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task, which contained a quiet condition and 4 noisy conditions with different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decline with age under the 4 noisy conditions, but not under the quiet condition. Brain activation in the SNR = 10 dB listening condition successfully predicted individual age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activation of regions supporting sensory-motor mapping of sound, especially in noisy conditions, may be a more sensitive measure for age prediction than external behavioral measures.
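Constructing noisy conditions like these amounts to scaling the masker so that the speech-to-noise power ratio hits a target value in dB. A minimal sketch, with a sine tone standing in for speech; the signal choice and sampling rate are illustrative assumptions:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) equals `snr_db`,
    then return the speech + scaled-noise mixture."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10.0 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

fs = 16000
rng = np.random.default_rng(3)
speech = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)  # 1 s tone standing in for speech
noise = rng.standard_normal(fs)

for snr in (10, 5, 0, -5):
    mix = mix_at_snr(speech, noise, snr)
    achieved = 10 * np.log10(np.mean(speech ** 2) / np.mean((mix - speech) ** 2))
    print(snr, round(achieved, 2))
```

Because the noise is rescaled exactly to the target power, the achieved SNR matches the requested value to floating-point precision.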


Subject(s)
Aging , Brain , Comprehension , Noise , Spectroscopy, Near-Infrared , Speech Perception , Humans , Adult , Speech Perception/physiology , Male , Female , Spectroscopy, Near-Infrared/methods , Middle Aged , Young Adult , Aged , Comprehension/physiology , Brain/physiology , Brain/diagnostic imaging , Aging/physiology , Brain Mapping/methods , Acoustic Stimulation/methods
13.
Proc Natl Acad Sci U S A ; 119(13): e2114932119, 2022 03 29.
Article in English | MEDLINE | ID: mdl-35312354

ABSTRACT

Significance: Acoustic signals travel efficiently in the marine environment, allowing soniferous predators and prey to eavesdrop on each other. Our results with four cetacean species indicate that they use acoustic information to assess predation risk and have evolved mechanisms to reduce predation risk by ceasing foraging. Species that more readily gave up foraging in response to predatory sounds of killer whales also decreased foraging more during 1- to 4-kHz sonar exposures, indicating that species exhibiting costly antipredator responses also have stronger behavioral reactions to anthropogenic noise. This advance in our understanding of the drivers of disturbance helps us to predict what species and habitats are likely to be most severely impacted by underwater noise pollution in oceans undergoing increasing anthropogenic activities.


Subject(s)
Noise , Whale, Killer , Animals , Fear , Predatory Behavior , Sound
14.
Proc Natl Acad Sci U S A ; 119(30): e2117809119, 2022 07 26.
Article in English | MEDLINE | ID: mdl-35858414

ABSTRACT

Animal communication is central to many animal societies, and effective signal transmission is crucial for individuals to survive and reproduce successfully. One environmental factor that exerts selection pressure on acoustic signals is ambient noise. To maintain signal efficiency, species can adjust signals through phenotypic plasticity or microevolutionary response to natural selection. One of these signal adjustments is the increase in signal amplitude, called the Lombard effect, which has been frequently found in birds and mammals. However, the evolutionary origin of the Lombard effect is largely unresolved. Using a phylogenetically controlled meta-analysis, we show that the Lombard effect is also present in fish and amphibians, and contradictory results in the literature can be explained by differences in signal-to-noise ratios among studies. Our analysis also demonstrates that subcortical processes are sufficient to elicit the Lombard effect and that amplitude adjustments do not require vocal learning. We conclude that the Lombard effect is a widespread mechanism based on phenotypic plasticity in vertebrates for coping with changes in ambient noise levels.


Subject(s)
Biological Evolution , Noise , Vocalization, Animal , Acoustics , Animals , Mammals , Vertebrates/classification , Vocalization, Animal/physiology
15.
Proc Natl Acad Sci U S A ; 119(37): e2118163119, 2022 09 13.
Article in English | MEDLINE | ID: mdl-36067307

ABSTRACT

Neurons can use different aspects of their spiking to simultaneously represent (multiplex) different features of a stimulus. For example, some pyramidal neurons in primary somatosensory cortex (S1) use the rate and timing of their spikes to, respectively, encode the intensity and frequency of vibrotactile stimuli. Doing so has several requirements. Because they fire at low rates, pyramidal neurons cannot entrain 1:1 with high-frequency (100 to 600 Hz) inputs and, instead, must skip (i.e., not respond to) some stimulus cycles. The proportion of skipped cycles must vary inversely with stimulus intensity for firing rate to encode stimulus intensity. Spikes must phase-lock to the stimulus for spike times (intervals) to encode stimulus frequency, but, in addition, skipping must occur irregularly to avoid aliasing. Using simulations and in vitro experiments in which mouse S1 pyramidal neurons were stimulated with inputs emulating those induced by vibrotactile stimuli, we show that fewer cycles are skipped as stimulus intensity increases, as required for rate coding, and that intrinsic or synaptic noise can induce irregular skipping without disrupting phase locking, as required for temporal coding. This occurs because noise can modulate the reliability without disrupting the precision of spikes evoked by small-amplitude, fast-onset signals. Specifically, in the fluctuation-driven regime associated with sparse spiking, rate and temporal coding are both paradoxically improved by the strong synaptic noise characteristic of the intact cortex. Our results demonstrate that multiplexed coding by S1 pyramidal neurons is not only feasible under in vivo conditions, but that background synaptic noise is actually beneficial.
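The multiplexing scheme this abstract describes can be caricatured in a few lines: each stimulus cycle elicits a spike with an intensity-dependent probability (so irregular cycle skipping carries a rate code), while emitted spikes remain phase-locked to the cycle up to small jitter (so inter-spike intervals carry a temporal code). This toy model ignores the biophysics studied in the paper; the frequency, duration, jitter, and spike probabilities are illustrative assumptions.

```python
import numpy as np

def simulate_multiplexed(intensity, freq=300.0, dur=2.0, jitter_sd=1e-4, seed=4):
    """Each stimulus cycle elicits a spike with probability `intensity`
    (irregular skipping -> firing rate encodes intensity); emitted spikes
    stay phase-locked to their cycle up to small timing jitter
    (inter-spike intervals encode stimulus frequency)."""
    rng = np.random.default_rng(seed)
    cycle_times = np.arange(0.0, dur, 1.0 / freq)
    fired = rng.random(cycle_times.size) < intensity
    spikes = cycle_times[fired] + rng.standard_normal(int(fired.sum())) * jitter_sd
    return np.sort(spikes)

period = 1.0 / 300.0
weak = simulate_multiplexed(0.1)    # skips ~90% of cycles
strong = simulate_multiplexed(0.4)  # skips ~60% of cycles

rate_weak = weak.size / 2.0         # spikes per second over the 2 s stimulus
rate_strong = strong.size / 2.0

# Temporal code: inter-spike intervals sit near integer multiples of the
# stimulus period even though the skipping itself is irregular.
isi = np.diff(strong)
phase_error = np.abs(isi / period - np.round(isi / period))
print(round(rate_weak, 1), round(rate_strong, 1), float(phase_error.max()))
```

Higher intensity raises the firing rate without degrading phase locking, which is the essence of the rate/timing multiplex.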


Subject(s)
Noise , Pyramidal Cells , Somatosensory Cortex , Touch , Action Potentials/physiology , Animals , Mice , Pyramidal Cells/physiology , Reproducibility of Results , Somatosensory Cortex/physiology , Touch/physiology , Vibration
16.
J Neurosci ; 43(26): 4856-4866, 2023 06 28.
Article in English | MEDLINE | ID: mdl-37127361

ABSTRACT

Listening in noisy environments requires effort (the active engagement of attention and other cognitive abilities) as well as increased arousal. The ability to separately quantify the contribution of these components is key to understanding the dynamics of effort and how it may change across listening situations and in certain populations. We concurrently measured two types of ocular data in young participants (both sexes): pupil dilation (PD; thought to index arousal aspects of effort) and microsaccades (MS; hypothesized to reflect automatic visual exploratory sampling), while they performed a speech-in-noise task under high- (HL) and low- (LL) listening load conditions. Sentences were manipulated so that the behaviorally relevant information (keywords) appeared at the end (Experiment 1) or beginning (Experiment 2) of the sentence, resulting in different temporal demands on focused attention. In line with previous reports, PD effects were associated with increased dilation under load. We observed a sustained difference between HL and LL conditions, consistent with increased phasic and tonic arousal. Importantly, we show that MS rate was also modulated by listening load. This was manifested as a reduced MS rate in HL relative to LL. Critically, in contrast to the sustained difference seen for PD, MS effects were localized in time, specifically during periods when demands on auditory attention were greatest. These results demonstrate that auditory selective attention interfaces with the mechanisms controlling MS generation, establishing MS as an informative measure, complementary to PD, with which to quantify the temporal dynamics of auditory attentional processing under effortful listening conditions.

SIGNIFICANCE STATEMENT: Listening effort, reflecting the "cognitive bandwidth" deployed to effectively process sound in adverse environments, contributes critically to listening success. Understanding listening effort and the processes involved in its allocation is a major challenge in auditory neuroscience. Here, we demonstrate that microsaccade rate can be used to index a specific subcomponent of listening effort, the allocation of instantaneous auditory attention, that is distinct from the modulation of arousal indexed by pupil dilation (currently the dominant measure of listening effort). These results reveal the push-pull process through which auditory attention interfaces with the (visual) attention network that controls microsaccades, establishing microsaccades as a powerful tool for measuring auditory attention and its deficits.


Subject(s)
Pupil , Speech Perception , Male , Female , Humans , Auditory Perception , Noise , Arousal
17.
J Neurosci ; 43(25): 4642-4649, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37221095

ABSTRACT

Auditory experience plays a critical role in hearing development. Developmental auditory deprivation because of otitis media, a common childhood disease, produces long-standing changes in the central auditory system, even after the middle ear pathology is resolved. The effects of sound deprivation because of otitis media have been mostly studied in the ascending auditory system but remain to be examined in the descending pathway that runs from the auditory cortex to the cochlea via the brainstem. Alterations in the efferent neural system could be important because the descending olivocochlear pathway influences the neural representation of transient sounds in noise in the afferent auditory system and is thought to be involved in auditory learning. Here, we show that the inhibitory strength of the medial olivocochlear efferents is weaker in children with a documented history of otitis media relative to controls; both boys and girls were included in the study. In addition, children with otitis media history required a higher signal-to-noise ratio on a sentence-in-noise recognition task than controls to achieve the same criterion performance level. Poorer speech-in-noise recognition, a hallmark of impaired central auditory processing, was related to efferent inhibition, and could not be attributed to the middle ear or cochlear mechanics.

SIGNIFICANCE STATEMENT: Otitis media is the second most common reason children go to the doctor. Previously, degraded auditory experience because of otitis media has been associated with reorganized ascending neural pathways, even after middle ear pathology resolved. Here, we show that altered afferent auditory input because of otitis media during childhood is also associated with long-lasting reduced descending neural pathway function and poorer speech-in-noise recognition. These novel efferent findings may be important for the detection and treatment of childhood otitis media.


Asunto(s)
Audición , Otitis Media , Masculino , Femenino , Niño , Humanos , Retroalimentación , Ruido , Percepción Auditiva , Cóclea/fisiología , Vías Eferentes/fisiología
18.
J Neurosci ; 43(32): 5856-5869, 2023 08 09.
Artículo en Inglés | MEDLINE | ID: mdl-37491313

RESUMEN

Hearing impairment affects many older adults but is often diagnosed decades after speech comprehension in noisy situations has become effortful. Accurate assessment of listening effort may thus help diagnose hearing impairment earlier. However, pupillometry, the most widely used approach to assessing listening effort, has limitations that hinder its use in practice. The current study explores a novel way to assess listening effort through eye movements. Building on cognitive and neurophysiological work, we examine the hypothesis that eye movements decrease when speech listening becomes challenging. In three experiments with human participants of both sexes, we demonstrate, consistent with this hypothesis, that fixation duration increases and spatial gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening across different visual scenes (free viewing, object tracking) and speech materials (simple sentences, naturalistic stories). In contrast, pupillometry was less sensitive to speech masking during story listening, suggesting that pupillometric measures may not be as effective for the assessment of listening effort in naturalistic speech-listening paradigms. Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in the brain regions that support the regulation of eye movements, such as the frontal eye field and superior colliculus, is modulated when listening is effortful.

SIGNIFICANCE STATEMENT Assessment of listening effort is critical for early diagnosis of age-related hearing loss. Pupillometry is the most widely used approach but has several disadvantages. The current study explores a novel way to assess listening effort through eye movements. We examine the hypothesis that eye movements decrease when speech listening becomes effortful. We demonstrate, consistent with this hypothesis, that fixation duration increases and gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening across different visual scenes (free viewing, object tracking) and speech materials (sentences, naturalistic stories). Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in brain regions that support the regulation of eye movements is modulated when listening is effortful.


Asunto(s)
Percepción del Habla , Habla , Masculino , Femenino , Humanos , Anciano , Movimientos Oculares , Percepción del Habla/fisiología , Percepción Auditiva , Ruido , Inteligibilidad del Habla
19.
J Neurosci ; 43(17): 3107-3119, 2023 04 26.
Artículo en Inglés | MEDLINE | ID: mdl-36931709

RESUMEN

Both passive tactile stimulation and motor actions result in dynamic changes in beta band (15-30 Hz) oscillations over somatosensory cortex. Similar to the alpha band (8-12 Hz) power decrease in the visual system, beta band power also decreases following stimulation of the somatosensory system. This relative suppression of α and β oscillations is generally interpreted as an increase in cortical excitability. Here, next to traditional single-pulse stimuli, we used random-intensity continuous tactile stimulation (white noise) of the right index finger, which enabled us to uncover an impulse response function of the somatosensory system. Contrary to previous findings, we demonstrate a burst-like initial increase rather than a decrease of beta activity following white noise stimulation (human participants, N = 18, 8 female). These β bursts lasted, on average, for three cycles, and their frequency was correlated with the resonant frequency of somatosensory cortex, as measured by a multifrequency steady-state somatosensory evoked potential paradigm. Furthermore, beta band bursts shared spectro-temporal characteristics with evoked and resting-state β oscillations. Together, our findings not only reveal a novel oscillatory signature of somatosensory processing that mimics the previously reported visual impulse response functions, but also point to a common oscillatory generator underlying spontaneous β bursts in the absence of tactile stimulation and phase-locked β bursts following stimulation, the frequency of which is determined by the resonance properties of the somatosensory system.

SIGNIFICANCE STATEMENT The investigation of the transient nature of oscillations has gained great popularity in recent years. Findings of bursting activity, rather than sustained oscillations, in the beta band have provided important insights into its role in movement planning, working memory, inhibition, and reactivation of neural ensembles. In this study, we show that the somatosensory system also responds to tactile stimulation with ∼3-cycle oscillatory beta band bursts, whose spectro-temporal characteristics are shared with evoked and resting-state beta band oscillatory signatures of the somatosensory system. As similar bursts have been observed in the visual domain, these oscillatory signatures might reflect an important supramodal mechanism in sensory processing.


Asunto(s)
Ritmo beta , Tacto , Humanos , Femenino , Tacto/fisiología , Ritmo beta/fisiología , Ruido , Corteza Somatosensorial/fisiología
20.
J Neurosci ; 43(25): 4580-4597, 2023 06 21.
Artículo en Inglés | MEDLINE | ID: mdl-37147134

RESUMEN

Exposure to combinations of environmental toxins is growing in prevalence, and understanding their interactions is therefore of increasing societal importance. Here, we examined the mechanisms by which two environmental toxins, polychlorinated biphenyls (PCBs) and high-amplitude acoustic noise, interact to produce dysfunction in central auditory processing. PCBs are well established to impose negative developmental impacts on hearing. However, it is not known whether developmental exposure to this ototoxin alters sensitivity to other ototoxic exposures later in life. Here, male mice were exposed to PCBs in utero and, later as adults, were exposed to 45 min of high-intensity noise. We then examined the impacts of the two exposures on hearing and the organization of the auditory midbrain using two-photon imaging and analysis of the expression of mediators of oxidative stress. We observed that developmental exposure to PCBs blocked hearing recovery from acoustic trauma. In vivo two-photon imaging of the inferior colliculus (IC) revealed that this lack of recovery was associated with disruption of tonotopic organization and reduction of inhibition in the auditory midbrain. In addition, expression analysis in the inferior colliculus revealed that reduced GABAergic inhibition was more prominent in animals with a lower capacity to mitigate oxidative stress. These data suggest that combined PCB and noise exposure act nonlinearly to damage hearing and that this damage is associated with synaptic reorganization and a reduced capacity to limit oxidative stress. In addition, this work provides a new paradigm by which to understand nonlinear interactions between combinations of environmental toxins.

SIGNIFICANCE STATEMENT Exposure to common environmental toxins is a large and growing problem in the population. This work provides a new mechanistic understanding of how prenatal and postnatal developmental changes induced by polychlorinated biphenyls (PCBs) could negatively impact the resilience of the brain to noise-induced hearing loss (NIHL) later in adulthood. The use of state-of-the-art tools, including in vivo multiphoton microscopy of the midbrain, helped identify long-term central changes in the auditory system after peripheral hearing damage induced by such environmental toxins. In addition, the novel combination of methods employed in this study will lead to additional advances in our understanding of mechanisms of central hearing loss in other contexts.


Asunto(s)
Pérdida Auditiva Provocada por Ruido , Colículos Inferiores , Bifenilos Policlorados , Femenino , Embarazo , Masculino , Ratones , Animales , Colículos Inferiores/fisiología , Bifenilos Policlorados/toxicidad , Ruido/efectos adversos , Audición , Estimulación Acústica/métodos