Results 1 - 10 of 10
1.
Cereb Cortex ; 30(3): 1603-1622, 2020 03 14.
Article in English | MEDLINE | ID: mdl-31667491

ABSTRACT

The mouse auditory cortex (ACtx) contains two core fields, primary auditory cortex (A1) and anterior auditory field (AAF), arranged in a mirror-reversal tonotopic gradient. The best frequency (BF) organization and naming scheme for additional higher-order fields remain a matter of debate, as does the correspondence between smoothly varying global tonotopy and heterogeneity in local cellular tuning. Here, we performed chronic widefield and two-photon calcium imaging from the ACtx of awake Thy1-GCaMP6s reporter mice. Data-driven parcellation of widefield maps identified five fields, including a previously unidentified area at the ventral posterior extreme of the ACtx (VPAF) and a tonotopically organized suprarhinal auditory field (SRAF) that extended laterally as far as ectorhinal cortex. Widefield maps were stable over time, with single-pixel BFs fluctuating by less than 0.5 octaves throughout a 1-month imaging period. After accounting for neuropil signal and frequency tuning strength, BF organization in neighboring layer 2/3 neurons was intermediate between the heterogeneous "salt-and-pepper" organization and the highly precise local organization that have each been described in prior studies. Multiscale imaging data suggest there is no ultrasonic field or secondary auditory cortex in the mouse. Instead, VPAF and a dorsal posterior (DP) field emerged as the strongest candidates for higher-order auditory areas.
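The widefield analysis above reduces to assigning each pixel a best frequency (the tone that evoked its largest response) and tracking its stability across sessions in octaves. A minimal sketch of that logic, not the authors' pipeline; the array shapes and the octave-based drift measure are assumptions drawn from the abstract:

```python
import numpy as np

def best_frequency_map(responses, freqs_khz):
    """Assign each pixel the tone frequency that evoked its largest response.

    responses: (n_freqs, height, width) evoked response amplitudes per pixel.
    freqs_khz: the n_freqs presented tone frequencies, in kHz.
    """
    responses = np.asarray(responses, dtype=float)
    bf_idx = np.argmax(responses, axis=0)            # index of the peak response
    return np.asarray(freqs_khz, dtype=float)[bf_idx]

def bf_drift_octaves(bf_a, bf_b):
    """Absolute per-pixel BF change between two imaging sessions, in octaves."""
    return np.abs(np.log2(np.asarray(bf_b, float) / np.asarray(bf_a, float)))
```

Under this convention, map stability of the kind reported would correspond to per-pixel drift values staying below 0.5 octaves across sessions.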


Subject(s)
Auditory Cortex/physiology, Auditory Pathways/physiology, Sound, Acoustic Stimulation/methods, Animals, Auditory Cortex/pathology, Brain/physiology, Brain Mapping/methods, Female, Male, Mice, Neurons/physiology
2.
J Acoust Soc Am ; 145(1): 440, 2019 01.
Article in English | MEDLINE | ID: mdl-30710924

ABSTRACT

The ability to identify the words spoken by one talker masked by two or four competing talkers was tested in young-adult listeners with sensorineural hearing loss (SNHL). In a reference/baseline condition, masking speech was colocated with target speech, target and masker talkers were female, and the masker was intelligible. Three comparison conditions included replacing female masker talkers with males, time-reversal of masker speech, and spatial separation of sources. All three variables produced significant release from masking. To emulate energetic masking (EM), stimuli were subjected to ideal time-frequency segregation, retaining only the time-frequency units where target energy exceeded masker energy. Subjects were then tested with these resynthesized "glimpsed" stimuli. For either two or four maskers, thresholds varied by only about 3 dB across conditions, suggesting that EM was roughly equal. Compared to normal-hearing listeners from an earlier study [Kidd, Mason, Swaminathan, Roverud, Clayton, and Best, J. Acoust. Soc. Am. 140, 132-144 (2016)], SNHL listeners demonstrated both greater energetic and informational masking, as well as higher glimpsed thresholds. Individual differences were correlated across masking-release conditions, suggesting that listeners could be categorized according to their general ability to solve the task. Overall, both peripheral and central factors appear to contribute to the higher thresholds for SNHL listeners.
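The ideal time-frequency segregation step keeps only the "glimpses" where target energy exceeds masker energy and discards everything else. A minimal sketch of that masking rule, not the published processing chain; the 0 dB local criterion and the array-based spectrogram representation are assumptions:

```python
import numpy as np

def itfs_glimpses(target_tf, masker_tf, criterion_db=0.0):
    """Keep only time-frequency units where the target dominates the masker.

    target_tf, masker_tf: same-shape arrays of T-F energies for the target
    and the summed maskers. Units whose local target-to-masker ratio meets
    `criterion_db` are retained as "glimpses"; all other units are zeroed.
    """
    target_tf = np.asarray(target_tf, dtype=float)
    masker_tf = np.asarray(masker_tf, dtype=float)
    snr_db = 10.0 * np.log10(np.maximum(target_tf, 1e-12) /
                             np.maximum(masker_tf, 1e-12))
    mask = snr_db >= criterion_db
    # Resynthesis input: the mixture survives only in target-dominated units.
    glimpsed = np.where(mask, target_tf + masker_tf, 0.0)
    return mask, glimpsed
```

Because the mask is computed from the true (known) target and masker signals, it is "ideal": any residual difficulty with the glimpsed stimuli can then be attributed to energetic rather than informational masking.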


Subject(s)
Hearing Loss, Sensorineural/physiopathology, Speech Perception, Adolescent, Adult, Auditory Threshold, Female, Hearing Loss, Sensorineural/psychology, Humans, Male, Perceptual Masking
4.
J Acoust Soc Am ; 140(1): 132, 2016 07.
Article in English | MEDLINE | ID: mdl-27475139

ABSTRACT

Identification of target speech was studied under masked conditions consisting of two or four independent speech maskers. In the reference conditions, the maskers were colocated with the target, the masker talkers were the same sex as the target, and the masker speech was intelligible. The comparison conditions, intended to provide release from masking, included different-sex target and masker talkers, time-reversal of the masker speech, and spatial separation of the maskers from the target. Significant release from masking was found for all comparison conditions. To determine whether these reductions in masking could be attributed to differences in energetic masking, ideal time-frequency segregation (ITFS) processing was applied to remove the time-frequency units where the masker energy dominated the target energy. The remaining target-dominated "glimpses" were reassembled as the stimulus. Speech reception thresholds measured using these resynthesized ITFS-processed stimuli were the same for the reference and comparison conditions, supporting the conclusion that the amount of energetic masking across conditions was the same. These results indicated that the large release from masking found under all comparison conditions was due primarily to a reduction in informational masking. Furthermore, the large individual differences observed were generally correlated across the three masking-release conditions.


Subject(s)
Perceptual Masking, Speech Intelligibility/physiology, Speech Perception/physiology, Adult, Auditory Threshold/physiology, Female, Hearing, Humans, Male, Sex Factors, Speech, Time Factors, Young Adult
5.
Curr Biol ; 34(8): 1605-1620.e5, 2024 04 22.
Article in English | MEDLINE | ID: mdl-38492568

ABSTRACT

Sound elicits rapid movements of muscles in the face, ears, and eyes that protect the body from injury and trigger brain-wide internal state changes. Here, we performed quantitative facial videography from mice resting atop a piezoelectric force plate and observed that broadband sounds elicited rapid and stereotyped facial twitches. Facial motion energy (FME) adjacent to the whisker array was 30 dB more sensitive than the acoustic startle reflex and offered greater inter-trial and inter-animal reliability than sound-evoked pupil dilations or movement of other facial and body regions. FME tracked the low-frequency envelope of broadband sounds, providing a means to study behavioral discrimination of complex auditory stimuli, such as speech phonemes in noise. Approximately 25% of layer 5-6 units in the auditory cortex (ACtx) exhibited firing rate changes during facial movements. However, FME facilitation during ACtx photoinhibition indicated that sound-evoked facial movements were mediated by a midbrain pathway and modulated by descending corticofugal input. FME and auditory brainstem response (ABR) thresholds were closely aligned after noise-induced sensorineural hearing loss, yet FME growth slopes were disproportionately steep at spared frequencies, reflecting a central plasticity that matched commensurate changes in ABR wave 4. Sound-evoked facial movements were also hypersensitive in Ptchd1 knockout mice, highlighting the use of FME for identifying sensory hyper-reactivity phenotypes after adult-onset hyperacusis and inherited deficiencies in autism risk genes. These findings present a sensitive and integrative measure of hearing while also highlighting that even low-intensity broadband sounds can elicit a complex mixture of auditory, motor, and reafferent somatosensory neural activity.
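Facial motion energy of this kind is typically computed as frame-to-frame change in pixel intensity, optionally restricted to a region of interest such as the area adjacent to the whisker array. A hypothetical sketch; the authors' exact FME definition is not given here, and the ROI and summation choices are assumptions:

```python
import numpy as np

def facial_motion_energy(frames, roi=None):
    """Motion energy as summed absolute frame-to-frame intensity change.

    frames: (n_frames, height, width) array of video pixel intensities.
    roi: optional boolean mask (height, width), e.g. the whisker pad region.
    Returns a motion-energy trace of length n_frames - 1.
    """
    frames = np.asarray(frames, dtype=float)
    diff = np.abs(np.diff(frames, axis=0))   # per-pixel change between frames
    if roi is not None:
        return diff[:, roi].sum(axis=1)      # restrict to the ROI pixels
    return diff.sum(axis=(1, 2))
```

Sound-evoked FME would then be read out as the trace deflection following stimulus onset, which can be thresholded or fit with a growth function across sound levels.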


Subject(s)
Hearing, Animals, Mice, Male, Hearing/physiology, Sound, Acoustic Stimulation, Female, Auditory Cortex/physiology, Mice, Inbred C57BL, Movement, Evoked Potentials, Auditory, Brain Stem
6.
bioRxiv ; 2024 May 30.
Article in English | MEDLINE | ID: mdl-38853938

ABSTRACT

Parvalbumin-expressing inhibitory neurons (PVNs) stabilize cortical network activity, generate gamma rhythms, and regulate experience-dependent plasticity. Here, we observed that activation or inactivation of PVNs functioned like a volume knob in the mouse auditory cortex (ACtx), turning neural and behavioral classification of sound level up or down over a 20 dB range. PVN loudness adjustments were "sticky", such that a single bout of 40 Hz PVN stimulation sustainably suppressed ACtx sound responsiveness, potentiated feedforward inhibition, and behaviorally desensitized mice to loudness. Sensory hypersensitivity is a cardinal feature of autism, aging, and peripheral neuropathy, prompting us to ask whether PVN stimulation can persistently desensitize mice with ACtx hyperactivity, PVN hypofunction, and loudness hypersensitivity triggered by cochlear sensorineural damage. We found that a single 16-minute session of 40 Hz PVN stimulation restored normal loudness perception for one week, showing that perceptual deficits triggered by irreversible peripheral injuries can be reversed through targeted cortical circuit interventions.

7.
Elife ; 11, 2022 09 16.
Article in English | MEDLINE | ID: mdl-36111669

ABSTRACT

Neurons in sensory cortex exhibit a remarkable capacity to maintain stable firing rates despite large fluctuations in afferent activity levels. However, sudden peripheral deafferentation in adulthood can trigger an excessive, non-homeostatic cortical compensatory response that may underlie perceptual disorders including sensory hypersensitivity, phantom limb pain, and tinnitus. Here, we show that mice with noise-induced damage of the high-frequency cochlear base were behaviorally hypersensitive to spared mid-frequency tones and to direct optogenetic stimulation of auditory thalamocortical neurons. Chronic two-photon calcium imaging from ACtx pyramidal neurons (PyrNs) revealed an initial stage of spatially diffuse hyperactivity, hyper-correlation, and auditory hyperresponsivity that consolidated around deafferented map regions three or more days after acoustic trauma. Deafferented PyrN ensembles also displayed hypersensitive decoding of spared mid-frequency tones that mirrored behavioral hypersensitivity, suggesting that non-homeostatic regulation of cortical sound intensity coding following sensorineural loss may be an underlying source of auditory hypersensitivity. Excess cortical response gain after acoustic trauma was expressed heterogeneously among individual PyrNs, yet 40% of this variability could be accounted for by each cell's baseline response properties prior to acoustic trauma. PyrNs with initially high spontaneous activity and gradual monotonic intensity growth functions were more likely to exhibit non-homeostatic excess gain after acoustic trauma. This suggests that while cortical gain changes are triggered by reduced bottom-up afferent input, their subsequent stabilization is also shaped by their local circuit milieu, where indicators of reduced inhibition can presage pathological hyperactivity following sensorineural hearing loss.
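The "monotonic intensity growth functions" mentioned above are often summarized with a monotonicity index, for example the response at the highest sound level relative to the peak response across levels. A small illustrative sketch; the paper's exact metric is an assumption:

```python
import numpy as np

def monotonicity_index(rates):
    """Ratio of the response at the highest level to the peak response.

    rates: firing rates across increasing sound levels. Values near 1
    indicate a monotonically growing rate-level function; lower values
    indicate a non-monotonic (peaked) function.
    """
    rates = np.asarray(rates, dtype=float)
    return rates[-1] / np.max(rates)
```

Under this kind of index, the cells described as having "gradual monotonic intensity growth functions" would score near 1 at baseline, prior to acoustic trauma.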


Subject(s)
Auditory Cortex, Hearing Loss, Noise-Induced, Tinnitus, Acoustic Stimulation, Animals, Calcium, Cochlea, Mice, Noise
8.
Front Neurosci ; 15: 666627, 2021.
Article in English | MEDLINE | ID: mdl-34305516

ABSTRACT

The massive network of descending corticofugal projections has long been recognized by anatomists, but its functional contributions to sound processing and auditory-guided behaviors remain a mystery. Most efforts to characterize the auditory corticofugal system have been inductive, wherein function is inferred from a few studies employing a wide range of methods to manipulate various limbs of the descending system in a variety of species and preparations. An alternative approach, which we focus on here, is to first establish auditory-guided behaviors that reflect the contribution of top-down influences on auditory perception. To this end, we postulate that auditory corticofugal systems may contribute to active listening behaviors in which the timing of bottom-up sound cues can be predicted from top-down signals arising from cross-modal cues, temporal integration, or self-initiated movements. Here, we describe a behavioral framework for investigating how auditory perceptual performance is enhanced when subjects can anticipate the timing of upcoming target sounds. Our first paradigm, studied in both human subjects and mice, reports species-specific differences in visually cued expectation of sound onset in a signal-in-noise detection task. A second paradigm, performed in mice, reveals the benefits of temporal regularity as a perceptual grouping cue when detecting repeating target tones in complex background noise. A final behavioral approach demonstrates significant improvements in frequency discrimination threshold and perceptual sensitivity when auditory targets are presented at a predictable temporal interval following motor self-initiation of the trial.
Collectively, these three behavioral approaches identify paradigms for studying top-down influences on sound perception that are amenable to head-fixed preparations in genetically tractable animals, where it is possible to monitor and manipulate particular nodes of the descending auditory pathway with unparalleled precision.
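Perceptual sensitivity in detection tasks like these is commonly quantified as d', the difference between z-transformed hit and false-alarm rates. A generic sketch, not tied to the specific paradigms above; the half-trial correction for extreme rates is one common convention:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n_trials=None):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Rates of exactly 0 or 1 are nudged inward by half a trial when
    n_trials is given, since z is undefined at the extremes.
    """
    z = NormalDist().inv_cdf
    if n_trials is not None:
        hit_rate = min(max(hit_rate, 0.5 / n_trials), 1 - 0.5 / n_trials)
        fa_rate = min(max(fa_rate, 0.5 / n_trials), 1 - 0.5 / n_trials)
    return z(hit_rate) - z(fa_rate)
```

A cueing benefit of the kind described would show up as a higher d' (or lower detection threshold) on trials where the timing of the target is predictable.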

9.
Curr Biol ; 31(2): 310-321.e5, 2021 01 25.
Article in English | MEDLINE | ID: mdl-33157020

ABSTRACT

Corticothalamic (CT) neurons comprise the largest component of the descending sensory corticofugal pathway, but their contributions to brain function and behavior remain an unsolved mystery. To address the hypothesis that layer 6 (L6) CTs may be activated by extra-sensory inputs prior to anticipated sounds, we performed optogenetically targeted single-unit recordings and two-photon imaging of Ntsr1-Cre+ L6 CT neurons in the primary auditory cortex (A1) while mice were engaged in an active listening task. We found that L6 CTs and other L6 units began spiking hundreds of milliseconds prior to orofacial movements linked to sound presentation and reward, but not to other movements such as locomotion, which were not linked to an explicit behavioral task. Rabies tracing of monosynaptic inputs to A1 L6 CT neurons revealed a narrow strip of cholinergic and non-cholinergic projection neurons in the external globus pallidus, suggesting a potential source of motor-related input. These findings identify new pathways and local circuits for motor modulation of sound processing and suggest a new role for CT neurons in active sensing.
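The claim that units "began spiking hundreds of milliseconds prior to orofacial movements" implies aligning spike times to movement onsets and measuring firing in a pre-onset window. A hypothetical sketch of that alignment; the window length and the rate comparison are assumptions, not the paper's analysis:

```python
import numpy as np

def pre_onset_rate(spike_times, onset_times, window=0.3):
    """Mean firing rate in the `window` seconds before each movement onset.

    spike_times: spike times in seconds; onset_times: movement onsets (s).
    A pre-onset rate above the session-wide mean rate would suggest
    activity that anticipates, rather than follows, the movement.
    """
    spike_times = np.asarray(spike_times, dtype=float)
    counts = [np.sum((spike_times >= t - window) & (spike_times < t))
              for t in onset_times]
    return float(np.mean(counts)) / window   # spikes per second
```

In practice this per-unit statistic would be compared against a shuffled or baseline distribution to decide whether the pre-movement elevation is significant.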


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Movement/physiology, Thalamus/physiology, Acoustic Stimulation, Animals, Auditory Cortex/cytology, Globus Pallidus/physiology, Intravital Microscopy, Male, Mice, Neural Pathways/physiology, Neurons/physiology, Optical Imaging, Reward, Stereotaxic Techniques, Thalamus/cytology
10.
PLoS One ; 11(7): e0157638, 2016.
Article in English | MEDLINE | ID: mdl-27384330

ABSTRACT

The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail party"-like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibitory control, and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking (MOT) task, in which observers were required to track target dots (n = 1-5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory than non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task.
Overall, these findings confirm the relationship between musicianship and cognitive factors, including domain-general selective attention and working memory, in solving the "cocktail party problem".
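A multiple regression with musicianship and MOT performance as predictors of spatial-hearing performance can be sketched with ordinary least squares. The data below are synthetic and purely illustrative; the study's stepwise procedure and actual scores are not reproduced here:

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares with an intercept: returns [b0, b1, b2, ...]."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta

# Synthetic, purely illustrative data: column 0 = musicianship (0/1),
# column 1 = MOT score; the outcome follows an exact linear rule.
predictors = [[0, 1], [1, 2], [0, 3], [1, 4]]
outcome = [2.5, 6.0, 3.5, 7.0]   # 2 + 3*musicianship + 0.5*MOT
```

A stepwise variant would add predictors one at a time and retain each only if it significantly improved the fit, which is how both musicianship and MOT score could end up in the final model.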


Subject(s)
Attention, Executive Function, Music, Acoustic Stimulation, Adolescent, Adult, Cognition/physiology, Discrimination Learning, Female, Hearing, Humans, Male, Memory, Short-Term/physiology, Perceptual Masking/physiology, Speech Perception/physiology, Task Performance and Analysis, Young Adult