ABSTRACT
Williams-Beuren syndrome (WBS) is a rare disorder caused by hemizygous microdeletion of ~27 contiguous genes. Despite neurodevelopmental and cognitive deficits, individuals with WBS have spared or enhanced musical and auditory abilities, potentially offering insight into the genetic basis of auditory perception. Here, we report that mouse models of WBS have innately enhanced frequency-discrimination acuity and improved frequency coding in the auditory cortex (ACx). Chemogenetic rescue showed that frequency-discrimination hyperacuity is caused by hyperexcitable interneurons in the ACx. Haploinsufficiency of one WBS gene, Gtf2ird1, replicated WBS phenotypes by downregulating the neuropeptide receptor VIPR1. VIPR1 is reduced in the ACx of individuals with WBS and in cerebral organoids derived from human induced pluripotent stem cells carrying the WBS microdeletion. Vipr1 deletion or overexpression in ACx interneurons mimicked or reversed, respectively, the cellular and behavioral phenotypes of WBS mice. Thus, the Gtf2ird1-Vipr1 mechanism in ACx interneurons may underlie the superior auditory acuity in WBS.
Subjects
Auditory Cortex/physiology , Williams Syndrome/physiopathology , Animals , Auditory Cortex/cytology , Disease Models, Animal , Humans , Induced Pluripotent Stem Cells , Interneurons/cytology , Interneurons/physiology , Mice , Phenotype , Trans-Activators/genetics , Williams Syndrome/genetics
ABSTRACT
Speech perception is thought to rely on a cortical feedforward serial transformation of acoustic into linguistic representations. Using intracranial recordings across the entire human auditory cortex, electrocortical stimulation, and surgical ablation, we show that cortical processing across areas is not consistent with a serial hierarchical organization. Instead, response latency and receptive field analyses demonstrate parallel and distinct information processing in the primary and nonprimary auditory cortices. This functional dissociation was also observed with direct stimulation: stimulating the primary auditory cortex evoked auditory hallucinations but did not distort or interfere with speech perception, whereas stimulation of nonprimary cortex in the superior temporal gyrus had the opposite effect. Ablation of the primary auditory cortex did not affect speech perception. These results establish a distributed functional organization of parallel information processing throughout the human auditory cortex and demonstrate an essential, independent role for nonprimary auditory cortex in speech processing.
Subjects
Auditory Cortex/physiology , Speech/physiology , Audiometry, Pure-Tone , Electrodes , Electronic Data Processing , Humans , Phonetics , Pitch Perception , Reaction Time/physiology , Temporal Lobe/physiology
ABSTRACT
Auditory processing in mammals begins in the peripheral inner ear and extends to the auditory cortex. Sound is transduced from mechanical stimuli into electrochemical signals by hair cells, which relay auditory information via the primary auditory neurons to the cochlear nuclei. Information is subsequently processed in the superior olivary complex, lateral lemniscus, and inferior colliculus before being relayed to the auditory cortex via the medial geniculate body of the thalamus. Recent advances have provided valuable insights into the development and functioning of auditory structures, complementing our understanding of the physiological mechanisms underlying auditory processing. This comprehensive review explores the genetic mechanisms required for auditory system development from the peripheral cochlea to the auditory cortex. We highlight transcription factors and other genes with key recurring and interacting roles in guiding auditory system development and organization. Understanding these gene regulatory networks holds promise for developing novel therapeutic strategies for hearing disorders, benefiting millions globally.
Subjects
Auditory Pathways , Hearing , Animals , Hearing/physiology , Auditory Pathways/physiology , Humans , Brain/metabolism , Brain/growth & development , Auditory Cortex/metabolism , Auditory Cortex/physiology
ABSTRACT
Adaptation is an essential feature of auditory neurons, which reduces their responses to unchanging and recurring sounds and allows their response properties to be matched to the constantly changing statistics of sounds that reach the ears. As a consequence, processing in the auditory system highlights novel or unpredictable sounds and produces an efficient representation of the vast range of sounds that animals can perceive by continually adjusting the sensitivity and, to a lesser extent, the tuning properties of neurons to the most commonly encountered stimulus values. Together with attentional modulation, adaptation to sound statistics also helps to generate neural representations of sound that are tolerant to background noise and therefore plays a vital role in auditory scene analysis. In this review, we consider the diverse forms of adaptation that are found in the auditory system in terms of the processing levels at which they arise, the underlying neural mechanisms, and their impact on neural coding and perception. We also ask what the dynamics of adaptation, which can occur over multiple timescales, reveal about the statistical properties of the environment. Finally, we examine how adaptation to sound statistics is influenced by learning and experience and changes as a result of aging and hearing loss.
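As an illustration of the gain-adjustment idea discussed above, here is a minimal toy model (not taken from the review) in which a neuron's gain is divisively normalized by a running estimate of the stimulus variance; the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def adaptive_gain_response(stimulus, tau=200, base_gain=1.0):
    """Toy rate model whose gain adapts to the recent stimulus variance.

    `stimulus` is a 1-D array of sound levels (arbitrary units); `tau` is the
    adaptation time constant in samples. All names and values are illustrative.
    """
    rate = np.zeros_like(stimulus, dtype=float)
    running_var = 1.0          # running estimate of stimulus variance
    alpha = 1.0 / tau
    for t, s in enumerate(stimulus):
        running_var = (1 - alpha) * running_var + alpha * s ** 2
        gain = base_gain / np.sqrt(running_var + 1e-9)   # divisive normalization
        rate[t] = max(gain * s, 0.0)                     # rectified response
    return rate

# A switch from low- to high-variance sound: responses transiently grow,
# then re-scale as the gain adapts to the new statistics.
rng = np.random.default_rng(0)
stim = np.concatenate([rng.normal(0, 1, 2000), rng.normal(0, 4, 2000)])
resp = adaptive_gain_response(stim)
print(resp[:2000].mean(), resp[2000:2200].mean(), resp[-2000:].mean())
```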
Subjects
Auditory Cortex , Animals , Acoustic Stimulation , Auditory Cortex/physiology , Auditory Perception/physiology , Noise , Adaptation, Physiological/physiology
ABSTRACT
Understanding the neural basis of speech perception requires that we study the human brain both at the scale of the fundamental computational unit of neurons and in their organization across the depth of cortex. Here we used high-density Neuropixels arrays [1-3] to record from 685 neurons across cortical layers at nine sites in a high-level auditory region that is critical for speech, the superior temporal gyrus [4,5], while participants listened to spoken sentences. Single neurons encoded a wide range of speech sound cues, including features of consonants and vowels, relative vocal pitch, onsets, amplitude envelope and sequence statistics. Each cross-laminar recording site exhibited dominant tuning to a primary speech feature while also containing a substantial proportion of neurons tuned to other features, yielding heterogeneous selectivity. Spatially, neurons at similar cortical depths tended to encode similar speech features. Activity across all cortical layers was predictive of high-frequency field potentials (electrocorticography), providing a neuronal origin for macroelectrode recordings from the cortical surface. Together, these results establish single-neuron tuning across the cortical laminae as an important dimension of speech encoding in human superior temporal gyrus.
Subjects
Auditory Cortex , Neurons , Speech Perception , Temporal Lobe , Humans , Acoustic Stimulation , Auditory Cortex/cytology , Auditory Cortex/physiology , Neurons/physiology , Phonetics , Speech , Speech Perception/physiology , Temporal Lobe/cytology , Temporal Lobe/physiology , Cues (Psychology) , Electrodes
ABSTRACT
Cochlear implants (CIs) are neuroprosthetic devices that can provide hearing to deaf people [1]. Despite the benefits offered by CIs, the time taken for hearing to be restored and perceptual accuracy after long-term CI use remain highly variable [2,3]. CI use is believed to require neuroplasticity in the central auditory system, and differential engagement of neuroplastic mechanisms might contribute to the variability in outcomes [4-7]. Despite extensive studies on how CIs activate the auditory system [4,8-12], the understanding of CI-related neuroplasticity remains limited. One potent factor enabling plasticity is the neuromodulator noradrenaline from the brainstem locus coeruleus (LC). Here we examine behavioural responses and neural activity in the LC and auditory cortex of deafened rats fitted with multi-channel CIs. The rats were trained on a reward-based auditory task and showed considerable individual differences in learning rates and maximum performance. LC photometry predicted when CI subjects began responding to sounds, as well as their longer-term perceptual accuracy. Optogenetic LC stimulation produced faster learning and higher long-term accuracy. Auditory cortical responses to CI stimulation reflected behavioural performance, with enhanced responses to rewarded stimuli and decreased distinction between unrewarded stimuli. Adequate engagement of central neuromodulatory systems is thus a potential clinically relevant target for optimizing neuroprosthetic device use.
Subjects
Cochlear Implants , Deafness , Locus Coeruleus , Animals , Rats , Cochlear Implantation , Deafness/physiopathology , Deafness/therapy , Hearing/physiology , Learning/physiology , Locus Coeruleus/cytology , Locus Coeruleus/physiology , Neuronal Plasticity , Norepinephrine/metabolism , Auditory Cortex/cytology , Auditory Cortex/physiology , Auditory Cortex/physiopathology , Neurons/physiology , Reward , Optogenetics , Photometry
ABSTRACT
Speech recognition crucially relies on slow temporal modulations (<16 Hz) in speech. Recent studies, however, have demonstrated that long-delay echoes, which are common during online conferencing, can eliminate crucial temporal modulations in speech without affecting speech intelligibility. Here, we investigated the underlying neural mechanisms. MEG experiments demonstrated that cortical activity can effectively track the temporal modulations eliminated by an echo, which cannot be fully explained by basic neural adaptation mechanisms. Furthermore, cortical responses to echoic speech were better explained by a model that segregates speech from its echo than by a model that encodes echoic speech as a whole. The speech segregation effect was observed even when attention was diverted but disappeared when segregation cues, i.e., the speech fine structure, were removed. These results strongly suggest that, through mechanisms such as stream segregation, the auditory system can build an echo-insensitive representation of the speech envelope, which can support reliable speech recognition.
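To make the model comparison concrete, the sketch below fits two simple temporal-response-function (TRF) encoding models to a synthetic MEG trace with cross-validated ridge regression: one that encodes the echoic mixture as a whole and one that encodes segregated speech and echo streams. The signals, lag range, and regularization are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def lagged_design(envelopes, max_lag=40):
    """Stack time-lagged copies of each envelope column (samples x n_env*max_lag)."""
    cols = [np.roll(env, lag) for env in envelopes.T for lag in range(max_lag)]
    X = np.stack(cols, axis=1)
    X[:max_lag] = 0  # discard wrap-around samples introduced by np.roll
    return X

# Hypothetical inputs: one MEG sensor and two candidate stimulus descriptions.
rng = np.random.default_rng(1)
T = 5000
direct = np.abs(rng.normal(size=T))            # envelope of the dry (echo-free) speech
echoic = direct + 0.8 * np.roll(direct, 25)    # envelope of speech plus a long-delay echo
meg = np.convolve(direct, np.hanning(20), mode="same") + rng.normal(0, 0.5, T)

X_whole = lagged_design(echoic[:, None])                      # echoic mixture as one stream
X_segregated = lagged_design(np.c_[direct, echoic - direct])  # segregated speech + echo streams
for name, X in [("whole-mixture model", X_whole), ("segregation model", X_segregated)]:
    r2 = cross_val_score(Ridge(alpha=10.0), X, meg, cv=5).mean()
    print(name, "cross-validated R^2:", round(r2, 3))
```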
Subjects
Auditory Cortex , Speech Perception , Humans , Speech Perception/physiology , Speech Intelligibility/physiology , Brain , Auditory Cortex/physiology , Attention , Acoustic Stimulation
ABSTRACT
Selective attention-related top-down modulation plays a significant role in separating relevant speech from irrelevant background speech when vocal attributes separating concurrent speakers are small and continuously evolving. Electrophysiological studies have shown that such top-down modulation enhances neural tracking of attended speech. Yet, the specific cortical regions involved remain unclear due to the limited spatial resolution of most electrophysiological techniques. To overcome such limitations, we collected both electroencephalography (EEG) (high temporal resolution) and functional magnetic resonance imaging (fMRI) (high spatial resolution) data while human participants selectively attended to speakers in audiovisual scenes containing overlapping cocktail party speech. To utilise the advantages of the respective techniques, we analysed neural tracking of speech using the EEG data and performed representational dissimilarity-based EEG-fMRI fusion. We observed that attention enhanced neural tracking and modulated EEG correlates throughout the latencies studied. Further, attention-related enhancement of neural tracking fluctuated in predictable temporal profiles. We discuss how such temporal dynamics could arise from a combination of interactions between attention and prediction as well as plastic properties of the auditory cortex. EEG-fMRI fusion revealed attention-related iterative feedforward-feedback loops between hierarchically organised nodes of the ventral auditory object-related processing stream. Our findings support models where attention facilitates dynamic neural changes in the auditory cortex, ultimately aiding discrimination of relevant sounds from irrelevant ones while conserving neural resources.
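A minimal sketch of the representational-dissimilarity-based fusion logic, assuming placeholder data rather than the study's recordings: build a condition-by-condition RDM from an fMRI ROI and from each EEG time point, then correlate the two over time.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condition x feature patterns -> condensed representational dissimilarity matrix."""
    return pdist(patterns, metric="correlation")

# Hypothetical data: 12 attention conditions, EEG patterns over time, fMRI ROI patterns.
rng = np.random.default_rng(2)
n_cond, n_chan, n_time, n_vox = 12, 64, 100, 500
eeg = rng.normal(size=(n_cond, n_chan, n_time))
fmri_roi = rng.normal(size=(n_cond, n_vox))

roi_rdm = rdm(fmri_roi)
fusion = np.array([spearmanr(rdm(eeg[:, :, t]), roi_rdm)[0] for t in range(n_time)])
print("peak EEG-fMRI correspondence at time sample", fusion.argmax())
```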
Subjects
Auditory Cortex , Speech Perception , Humans , Speech Perception/physiology , Speech , Feedback , Electroencephalography/methods , Auditory Cortex/physiology , Acoustic Stimulation/methods
ABSTRACT
Music can evoke pleasurable and rewarding experiences. Past studies that examined task-related brain activity revealed individual differences in musical reward sensitivity traits and linked them to interactions between the auditory and reward systems. However, state-dependent fluctuations in spontaneous neural activity in relation to music-driven rewarding experiences have not been studied. Here, we used functional MRI to examine whether the coupling of auditory-reward networks during a silent period immediately before music listening can predict the degree of musical rewarding experience of human participants (N = 49). We used machine learning models and showed that the functional connectivity between auditory and reward networks, but not others, could robustly predict subjective, physiological, and neurobiological aspects of the strong musical reward of chills. Specifically, the right auditory cortex-striatum/orbitofrontal connections predicted the reported duration of chills and the activation level of nucleus accumbens and insula, whereas the auditory-amygdala connection was associated with psychophysiological arousal. Furthermore, the predictive model derived from the first sample of individuals was generalized in an independent dataset using different music samples. The generalization was successful only for state-like, pre-listening functional connectivity but not for stable, intrinsic functional connectivity. The current study reveals the critical role of sensory-reward connectivity in pre-task brain state in modulating subsequent rewarding experience.
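The prediction step can be sketched as connectome-style regression: vectorize each subject's pre-listening functional connectivity and predict the behavioral chills measure with cross-validated ridge. The subject count matches the abstract, but the data, ROI count, and model choices below are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict
from scipy.stats import pearsonr

def connectivity_features(timeseries):
    """Upper-triangle correlations between ROI time courses for one subject."""
    corr = np.corrcoef(timeseries)             # ROIs x ROIs
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Hypothetical dataset: 49 subjects, 10 auditory/reward ROIs, 200 pre-listening volumes.
rng = np.random.default_rng(3)
n_sub, n_roi, n_vol = 49, 10, 200
X = np.array([connectivity_features(rng.normal(size=(n_roi, n_vol)))
              for _ in range(n_sub)])
chills_duration = rng.normal(size=n_sub)       # placeholder behavioral measure

pred = cross_val_predict(RidgeCV(alphas=np.logspace(-2, 3, 20)), X, chills_duration, cv=10)
r, p = pearsonr(pred, chills_duration)
print(f"cross-validated prediction r = {r:.2f} (p = {p:.2f})")
```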
Subjects
Auditory Perception , Magnetic Resonance Imaging , Music , Pleasure , Reward , Humans , Music/psychology , Male , Female , Pleasure/physiology , Adult , Auditory Perception/physiology , Young Adult , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging , Brain Mapping/methods , Brain/physiology , Brain/diagnostic imaging , Acoustic Stimulation , Nerve Net/physiology , Nerve Net/diagnostic imaging , Machine Learning
ABSTRACT
Stimulus-specific adaptation is a hallmark of sensory processing in which a repeated stimulus results in diminished successive neuronal responses, but a deviant stimulus will still elicit robust responses from the same neurons. Recent work has established that synaptically released zinc is an endogenous mechanism that shapes neuronal responses to sounds in the auditory cortex. Here, to understand the contributions of synaptic zinc to deviance detection in specific neurons, we performed wide-field and 2-photon calcium imaging of multiple classes of cortical neurons. We find that intratelencephalic (IT) neurons in both layers 2/3 and 5, as well as corticocollicular neurons in layer 5, all demonstrate deviance detection; however, we find a specific enhancement of deviance detection in corticocollicular neurons that arises from ZnT3-dependent synaptic zinc in layer 2/3 IT neurons. Genetic deletion of ZnT3 from layer 2/3 IT neurons removes the enhancing effects of synaptic zinc on corticocollicular neuron deviance detection and results in poorer acuity in detecting deviant sounds by behaving mice.
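A common way to quantify deviance detection, sketched below with synthetic calcium responses, is an oddball contrast index of the form (deviant - standard) / (deviant + standard) computed per neuron; this is an illustrative metric and not necessarily the exact index used in the study.

```python
import numpy as np

def deviance_index(resp_deviant, resp_standard):
    """SSA-style contrast: (dev - std) / (dev + std), computed per neuron.

    Inputs are trial-averaged response amplitudes (e.g., dF/F) for the same
    tone presented as a rare deviant versus a repeated standard.
    """
    resp_deviant = np.asarray(resp_deviant, dtype=float)
    resp_standard = np.asarray(resp_standard, dtype=float)
    return (resp_deviant - resp_standard) / (resp_deviant + resp_standard + 1e-12)

# Hypothetical responses from two populations of imaged neurons.
rng = np.random.default_rng(4)
it_neurons = deviance_index(rng.gamma(3, 1, 200), rng.gamma(2.5, 1, 200))
cc_neurons = deviance_index(rng.gamma(4, 1, 200), rng.gamma(2.5, 1, 200))
print("IT median index:", round(np.median(it_neurons), 2),
      "| corticocollicular median index:", round(np.median(cc_neurons), 2))
```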
Subjects
Auditory Cortex , Neurons , Synapses , Zinc , Animals , Zinc/metabolism , Auditory Cortex/metabolism , Auditory Cortex/physiology , Mice , Synapses/metabolism , Synapses/physiology , Neurons/metabolism , Neurons/physiology , Cation Transport Proteins/metabolism , Cation Transport Proteins/genetics , Acoustic Stimulation , Mice, Knockout , Auditory Perception/physiology , Mice, Inbred C57BL , Male
ABSTRACT
Predictive coding is a fundamental function of the cortex. The predictive routing model proposes a neurophysiological implementation for predictive coding: predictions are fed back from deep-layer cortex via alpha/beta (8 to 30 Hz) oscillations, and they inhibit the gamma (40 to 100 Hz) oscillations and spiking that feed sensory inputs forward. Unpredicted inputs arrive in circuits unprepared by alpha/beta, resulting in enhanced gamma and spiking. To test the predictive routing model and its role in consciousness, we collected intracranial recordings from macaque monkeys during passive presentation of auditory oddballs before and after propofol-mediated loss of consciousness (LOC). In line with the predictive routing model, alpha/beta oscillations in the awake state served to inhibit the processing of predictable stimuli. Propofol-mediated LOC eliminated alpha/beta modulation by a predictable stimulus in the sensory cortex and alpha/beta coherence between sensory and frontal areas. As a result, oddball stimuli evoked enhanced gamma power, late-period (>200 ms from stimulus onset) spiking, and superficial-layer sinks in the sensory cortex. LOC also resulted in diminished decodability of pattern-level prediction error signals in the higher-order cortex. Therefore, the auditory cortex was in a disinhibited state during propofol-mediated LOC. However, despite these enhanced feedforward responses in the auditory cortex, there was a loss of differential spiking to oddballs in the higher-order cortex. This may be a consequence of a loss of within-area and interareal spike-field coupling in the alpha/beta and gamma frequency bands. These results provide strong constraints for current theories of consciousness.
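The band-specific contrasts described above reduce, at their core, to estimating alpha/beta and gamma power from field potentials; a minimal sketch using Welch's method on a synthetic LFP trace follows. The sampling rate, band edges, and signal are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import welch

def band_power(lfp, fs, lo, hi):
    """Average Welch power of `lfp` (1-D, sampled at `fs` Hz) between lo and hi Hz."""
    freqs, psd = welch(lfp, fs=fs, nperseg=fs)   # 1-s windows
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

# Synthetic single-channel LFP: a 20-Hz beta rhythm plus broadband noise.
fs, dur = 1000, 10
t = np.arange(fs * dur) / fs
rng = np.random.default_rng(5)
lfp = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.normal(size=t.size)

print("alpha/beta (8-30 Hz) power:", round(band_power(lfp, fs, 8, 30), 3))
print("gamma (40-100 Hz) power:  ", round(band_power(lfp, fs, 40, 100), 3))
```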
Subjects
Propofol , Unconsciousness , Propofol/pharmacology , Animals , Unconsciousness/chemically induced , Unconsciousness/physiopathology , Macaca mulatta , Consciousness/drug effects , Consciousness/physiology , Auditory Cortex/drug effects , Auditory Cortex/physiology , Male , Anesthetics, Intravenous/pharmacology , Models, Neurological , Neurons/drug effects , Neurons/physiology , Acoustic Stimulation
ABSTRACT
Echolocating bats are among the most social and vocal of all mammals. These animals are ideal subjects for functional MRI (fMRI) studies of auditory social communication given their relatively hypertrophic limbic and auditory neural structures and their reduced ability to hear MRI gradient noise. Yet, no resting-state networks relevant to social cognition (e.g., default mode-like networks or DMLNs) have been identified in bats since there are few, if any, fMRI studies in the chiropteran order. Here, we acquired fMRI data at 7 Tesla from nine lightly anesthetized pale spear-nosed bats (Phyllostomus discolor). We applied independent components analysis (ICA) to reveal resting-state networks and measured neural activity elicited by noise ripples (on: 10 ms; off: 10 ms) that span this species' ultrasonic hearing range (20 to 130 kHz). Resting-state networks pervaded auditory, parietal, and occipital cortices, along with the hippocampus, cerebellum, basal ganglia, and auditory brainstem. Two midline networks formed an apparent DMLN. Additionally, we found four predominantly auditory/parietal cortical networks, of which two were left-lateralized and two right-lateralized. Regions within four auditory/parietal cortical networks are known to respond to social calls. Along with the auditory brainstem, regions within these four cortical networks responded to ultrasonic noise ripples. Iterative analyses revealed consistent, significant functional connectivity between the left, but not right, auditory/parietal cortical networks and DMLN nodes, especially the anterior-most cingulate cortex. Thus, a resting-state network implicated in social cognition displays more distributed functional connectivity across left, relative to right, hemispheric cortical substrates of audition and communication in this highly social and vocal species.
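A minimal sketch of the ICA step, assuming parcellated time series and scikit-learn's FastICA rather than the toolchain actually used: decompose resting-state data into components, each described by a time course and a map of parcel weights, with strongly weighted parcels outlining a candidate network.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical input: parcellated resting-state time series from one bat,
# 300 volumes x 400 parcels (the real data are voxel-wise 7T fMRI).
rng = np.random.default_rng(6)
bold = rng.normal(size=(300, 400))

# Decompose into 20 independent components: a time course and a parcel-weight map each.
ica = FastICA(n_components=20, whiten="unit-variance", random_state=0, max_iter=1000)
time_courses = ica.fit_transform(bold)   # 300 x 20 component time courses
component_maps = ica.components_         # 20 x 400 parcel weights

top_parcels = np.argsort(np.abs(component_maps[0]))[::-1][:10]
print("parcels loading most strongly on component 0:", top_parcels)
```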
Subjects
Auditory Cortex , Chiroptera , Echolocation , Magnetic Resonance Imaging , Animals , Chiroptera/physiology , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging , Echolocation/physiology , Default Mode Network/physiology , Default Mode Network/diagnostic imaging , Male , Female , Nerve Net/physiology , Nerve Net/diagnostic imaging
ABSTRACT
How the cerebral cortex encodes auditory features of biologically important sounds, including speech and music, is one of the most important questions in auditory neuroscience. The pursuit to understand related neural coding mechanisms in the mammalian auditory cortex can be traced back several decades to the early exploration of the cerebral cortex. Significant progress in this field has been made in the past two decades with new technical and conceptual advances. This article reviews the progress and challenges in this area of research.
Subjects
Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Brain Mapping , Animals , Hearing , Humans , Music , Speech
ABSTRACT
Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: Middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.
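The kind of stage-wise evaluation described above can be sketched as cross-validated ridge regression from a model stage's activations to voxel responses, scored by held-out prediction accuracy. The activations, voxel responses, and stage names below are placeholders, not the evaluated models or the published dataset.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def stage_prediction_accuracy(activations, voxel_resp, n_splits=5):
    """Median cross-validated correlation between predicted and measured voxel responses.

    `activations`: stimuli x units for one model stage; `voxel_resp`: stimuli x voxels.
    """
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    preds = np.zeros_like(voxel_resp)
    for train, test in kf.split(activations):
        model = RidgeCV(alphas=np.logspace(-2, 5, 15))
        model.fit(activations[train], voxel_resp[train])
        preds[test] = model.predict(activations[test])
    r = [np.corrcoef(preds[:, v], voxel_resp[:, v])[0, 1] for v in range(voxel_resp.shape[1])]
    return np.median(r)

# Placeholder activations for a middle and a deep stage, plus fMRI responses to 165 sounds.
rng = np.random.default_rng(7)
voxels = rng.normal(size=(165, 300))
for stage_name, units in [("middle", 512), ("deep", 1024)]:
    acts = rng.normal(size=(165, units))
    print(stage_name, "stage prediction accuracy:", round(stage_prediction_accuracy(acts, voxels), 3))
```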
Subjects
Auditory Cortex , Neural Networks, Computer , Brain , Hearing , Auditory Perception/physiology , Noise , Auditory Cortex/physiology
ABSTRACT
Music is core to human experience, yet the precise neural dynamics underlying music perception remain unknown. We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain. We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), identified a new STG subregion tuned to musical rhythm, and defined an anterior-posterior STG organization exhibiting sustained and onset responses to musical elements. Our findings show the feasibility of applying predictive modeling to short datasets acquired in single patients, paving the way for adding musical elements to brain-computer interface (BCI) applications.
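Stimulus reconstruction of this kind is typically set up as a linear decoding problem: regress each spectrogram band of the song onto time-lagged neural features. The sketch below uses synthetic high-gamma data and hypothetical dimensions, not the patients' recordings or the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def add_lags(neural, max_lag=30):
    """Time-lagged copies of each electrode's trace (samples x electrodes*lags)."""
    lagged = [np.roll(neural, lag, axis=0) for lag in range(max_lag)]
    X = np.concatenate(lagged, axis=1)
    X[:max_lag] = 0  # discard wrap-around samples
    return X

# Hypothetical data: 6000 samples of high-gamma activity (64 electrodes) and a
# 32-band auditory spectrogram of the song, linearly related plus noise.
rng = np.random.default_rng(8)
high_gamma = rng.normal(size=(6000, 64))
spectrogram = high_gamma[:, :32] @ rng.normal(size=(32, 32)) * 0.1 + rng.normal(size=(6000, 32))

X = add_lags(high_gamma)
X_tr, X_te, y_tr, y_te = train_test_split(X, spectrogram, test_size=0.2, shuffle=False)
decoder = Ridge(alpha=100.0).fit(X_tr, y_tr)
recon = decoder.predict(X_te)
r = np.mean([np.corrcoef(recon[:, b], y_te[:, b])[0, 1] for b in range(32)])
print("mean reconstruction correlation across spectrogram bands:", round(r, 2))
```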
Subjects
Auditory Cortex , Music , Humans , Auditory Cortex/physiology , Brain Mapping , Auditory Perception/physiology , Temporal Lobe/physiology , Acoustic Stimulation
ABSTRACT
Substantial progress in the field of neuroscience has been made from anaesthetized preparations. Ketamine is one of the most used drugs in electrophysiology studies, but how ketamine affects neuronal responses is poorly understood. Here, we used in vivo electrophysiology and computational modelling to study how the auditory cortex of bats responds to vocalisations under anaesthesia and in wakefulness. In wakefulness, acoustic context increases neuronal discrimination of natural sounds. Neuron models predicted that ketamine affects the contextual discrimination of sounds regardless of the type of context heard by the animals (echolocation or communication sounds). However, empirical evidence showed that the predicted effect of ketamine occurs only if the acoustic context consists of low-pitched sounds (e.g., communication calls in bats). Using the empirical data, we updated the naïve models to show that differential effects of ketamine on cortical responses can be mediated by unbalanced changes in the firing rate of feedforward inputs to cortex, and changes in the depression of thalamo-cortical synaptic receptors. Combined, our findings obtained in vivo and in silico reveal the effects and mechanisms by which ketamine affects cortical responses to vocalisations.
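The proposed mechanism involving depression of thalamo-cortical synapses can be illustrated with a standard Tsodyks-Markram-style short-term depression model; the parameters below are illustrative assumptions, not values fitted in the study.

```python
import numpy as np

def depressing_synapse(spike_times, U=0.5, tau_rec=0.4):
    """Tsodyks-Markram-style depression: synaptic efficacy at each presynaptic spike.

    U: release probability; tau_rec: recovery time constant in seconds.
    """
    efficacies, resources, last_t = [], 1.0, 0.0
    for t in spike_times:
        resources = 1.0 - (1.0 - resources) * np.exp(-(t - last_t) / tau_rec)  # recovery
        efficacies.append(U * resources)   # transmitter released by this spike
        resources -= U * resources         # depletion
        last_t = t
    return np.array(efficacies)

# A 20-Hz thalamic input train: efficacy depresses across spikes, and a lower
# release probability (one hypothesized ketamine-like change) depresses less.
train = np.arange(0.0, 1.0, 0.05)
print("control  :", np.round(depressing_synapse(train, U=0.5), 2))
print("reduced U:", np.round(depressing_synapse(train, U=0.2), 2))
```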
Subjects
Anesthesia , Auditory Cortex , Chiroptera , Ketamine , Animals , Auditory Cortex/physiology , Acoustic Stimulation , Ketamine/pharmacology , Chiroptera/physiology , Neurons/physiology , Auditory Perception/physiology
ABSTRACT
Infant cries evoke powerful responses in parents [1-4]. Whether parental animals are intrinsically sensitive to neonatal vocalizations, or instead learn about vocal cues for parenting responses, is unclear. In mice, pup-naive virgin females do not recognize the meaning of pup distress calls, but retrieve isolated pups to the nest after having been co-housed with a mother and litter [5-9]. Distress calls are variable, and require co-caring virgin mice to generalize across calls for reliable retrieval [10,11]. Here we show that the onset of maternal behaviour in mice results from interactions between intrinsic mechanisms and experience-dependent plasticity in the auditory cortex. In maternal females, calls with inter-syllable intervals (ISIs) from 75 to 375 milliseconds elicited pup retrieval, and cortical responses were generalized across these ISIs. By contrast, naive virgins were neuronally and behaviourally sensitized to the most common ('prototypical') ISIs. Inhibitory and excitatory neural responses were initially mismatched in the cortex of naive mice, with untuned inhibition and overly narrow excitation. During co-housing experiments, excitatory responses broadened to represent a wider range of ISIs, whereas inhibitory tuning sharpened to form a perceptual boundary. We presented synthetic calls during co-housing and observed that neurobehavioural responses adjusted to match these statistics, a process that required cortical activity and the hypothalamic oxytocin system. Neuroplastic mechanisms therefore build on an intrinsic sensitivity in the mouse auditory cortex, and enable rapid plasticity for reliable parenting behaviour.
Subjects
Auditory Cortex/physiology , Maternal Behavior/physiology , Neuronal Plasticity/physiology , Acoustic Stimulation , Animals , Auditory Cortex/cytology , Excitatory Postsynaptic Potentials , Female , Housing, Animal , Maternal Behavior/psychology , Mice , Neural Inhibition/physiology , Oxytocin/metabolism , Synapses/metabolism , Time Factors , Vocalization, Animal
ABSTRACT
Coordinated functioning of the two cortical hemispheres is crucial for perception. The human auditory cortex (ACx) shows functional lateralization, with the left hemisphere specialized for processing speech, whereas the right analyzes spectral content. In mice, virgin females demonstrate a left-hemisphere response bias to pup vocalizations that strengthens with motherhood. However, how this lateralized function is established is unclear. We developed a widefield imaging microscope to simultaneously image both hemispheres of mice, allowing us to bilaterally monitor functional responses. We found that global ACx topography is symmetrical and stereotyped. In both male and virgin female mice, the secondary auditory cortex (A2) in the left hemisphere shows larger responses than the right to high-frequency tones and adult vocalizations; however, only virgin female mice show a left-hemisphere bias in A2 in response to adult pain calls. These results indicate hemispheric bias with both sex-independent and sex-dependent aspects. Analyzing cross-hemispheric functional correlations showed that asymmetries exist in the strength of correlations between DM-AAF and A2-AAF, whereas other ACx areas showed smaller differences. We found that A2 showed lower cross-hemisphere correlation than other cortical areas, consistent with the lateralized functional activation of A2. Cross-hemispheric activity correlations are lower in deaf, otoferlin knockout (OTOF-/-) mice, indicating that the development of functional cross-hemispheric connections is experience dependent. Together, our results reveal that ACx is topographically symmetric at the macroscopic scale but that higher-order A2 shows sex-dependent and sex-independent lateralized responses due to asymmetric intercortical functional connections. Moreover, our results suggest that sensory experience is required to establish functional cross-hemispheric connectivity.
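The cross-hemispheric analysis amounts, at its simplest, to correlating homotopic area time courses; a minimal sketch with synthetic widefield traces follows (area names are reused from the abstract, while the data and noise levels are assumptions).

```python
import numpy as np

def homotopic_correlation(left_traces, right_traces):
    """Pearson correlation between matching left/right area time courses.

    Both inputs are areas x time arrays in the same area order.
    """
    return np.array([np.corrcoef(l, r)[0, 1] for l, r in zip(left_traces, right_traces)])

# Synthetic widefield time courses for three areas; in this toy example A2 shares
# less signal across hemispheres than A1 or AAF.
rng = np.random.default_rng(9)
shared = rng.normal(size=(3, 3000))
noise = np.array([0.5, 0.5, 2.0])[:, None]     # per-area independent noise level
left = shared + rng.normal(size=shared.shape) * noise
right = shared + rng.normal(size=shared.shape) * noise
for area, r in zip(["A1", "AAF", "A2"], homotopic_correlation(left, right)):
    print(area, "cross-hemisphere correlation:", round(r, 2))
```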
Subjects
Auditory Cortex , Adult , Male , Humans , Female , Animals , Mice , Auditory Cortex/physiology , Calcium , Functional Laterality/physiology , Brain Mapping , Microscopy , Auditory Perception/physiology , Membrane Proteins
ABSTRACT
The process by which sensory evidence contributes to perceptual choices requires an understanding of its transformation into decision variables. Here, we address this issue by evaluating the neural representation of acoustic information in the auditory cortex-recipient parietal cortex while gerbils either performed a two-alternative forced-choice auditory discrimination task or passively listened to identical acoustic stimuli. During task engagement, stimulus identity decoding performance from simultaneously recorded parietal neurons significantly correlated with psychometric sensitivity. In contrast, decoding performance during passive listening was significantly reduced. Principal component and geometric analyses revealed the emergence of low-dimensional encoding of linearly separable manifolds with respect to stimulus identity and decision, but only during task engagement. These findings confirm that the parietal cortex mediates a transition of acoustic representations into decision-related variables. Finally, using a clustering analysis, we identified three functionally distinct subpopulations of neurons that each encoded task-relevant information during separate temporal segments of a trial. Taken together, our findings demonstrate how parietal cortex neurons integrate and transform encoded auditory information to guide sound-driven perceptual decisions.
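A minimal sketch of the population analyses described above, using synthetic placeholder data rather than the gerbil recordings: cross-validated decoding of stimulus identity from trial-by-neuron spike counts, plus a PCA projection to inspect low-dimensional structure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical trials: 200 trials x 60 neurons of spike counts, two stimulus classes
# (e.g., the two response alternatives of the 2-AFC task), with a weak class signal.
rng = np.random.default_rng(10)
labels = rng.integers(0, 2, 200)
signal = np.outer(labels - 0.5, rng.normal(size=60))   # class-dependent pattern
counts = rng.poisson(5, size=(200, 60)) + 2.0 * signal

# Cross-validated decoding of stimulus identity from population activity.
acc = cross_val_score(LogisticRegression(max_iter=1000), counts, labels, cv=10).mean()
print("decoding accuracy:", round(acc, 2))

# Low-dimensional geometry: project trials onto the first two principal components.
pcs = PCA(n_components=2).fit_transform(counts - counts.mean(0))
print("class separation along PC1:",
      round(abs(pcs[labels == 1, 0].mean() - pcs[labels == 0, 0].mean()), 2))
```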
Subjects
Auditory Cortex , Parietal Lobe , Animals , Parietal Lobe/physiology , Auditory Perception/physiology , Auditory Cortex/physiology , Acoustic Stimulation , Acoustics , Gerbillinae
ABSTRACT
The key assumption of the predictive coding framework is that internal representations are used to generate predictions of what the sensory input will look like in the immediate future. These predictions are tested against the actual input by so-called prediction error units, which encode the residuals of the predictions. What happens to prediction errors, however, if predictions drawn by different stages of the sensory hierarchy contradict each other? To answer this question, we conducted two fMRI experiments while female and male human participants listened to sequences of sounds: pure tones in the first experiment and frequency-modulated sweeps in the second. In both experiments, we used repetition to induce predictions based on stimulus statistics (stats-informed predictions) and abstract rules disclosed in the task instructions to induce an orthogonal set of (task-informed) predictions. We tested three alternative scenarios: neural responses in the auditory sensory pathway encode prediction error with respect to (1) the stats-informed predictions, (2) the task-informed predictions, or (3) a combination of both. Results showed that neural populations in all recorded regions (bilateral inferior colliculus, medial geniculate body, and primary and secondary auditory cortices) encode prediction error with respect to a combination of the two orthogonal sets of predictions. The findings suggest that predictive coding exploits the non-linear architecture of the auditory pathway for the transmission of predictions. Such non-linear transmission of predictions might be crucial for the predictive coding of complex auditory signals like speech. Significance Statement: Sensory systems exploit our subjective expectations to make sense of an overwhelming influx of sensory signals. It is still unclear how expectations at each stage of the processing pipeline are used to predict the representations at the other stages. The current view is that this transmission is hierarchical and linear. Here we measured fMRI responses in the auditory cortex, sensory thalamus, and midbrain while we induced two sets of mutually inconsistent expectations about the sensory input, each putatively encoded at a different stage. We show that responses at all stages are concurrently shaped by both sets of expectations. The results challenge the hypothesis that expectations are transmitted linearly and provide a normative explanation for the non-linear physiology of the corticofugal sensory system.
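The three-scenario comparison can be sketched as a regression model comparison: fit a voxel's responses with a stats-informed prediction-error regressor, a task-informed regressor, or both, and compare cross-validated fits. The regressors and responses below are placeholders, not the study's data or exact analysis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Placeholder trial-wise prediction-error regressors and one voxel's responses.
rng = np.random.default_rng(11)
n_trials = 400
pe_stats = rng.normal(size=n_trials)   # surprise w.r.t. stimulus statistics (repetition)
pe_task = rng.normal(size=n_trials)    # surprise w.r.t. the task-instructed abstract rule
voxel = 0.6 * pe_stats + 0.6 * pe_task + rng.normal(size=n_trials)  # toy "combined" voxel

models = {
    "stats only": pe_stats[:, None],
    "task only": pe_task[:, None],
    "combined": np.c_[pe_stats, pe_task],
}
for name, X in models.items():
    r2 = cross_val_score(LinearRegression(), X, voxel, cv=10).mean()
    print(f"{name:10s} cross-validated R^2 = {r2:.2f}")
```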