Results 1 - 20 of 32,846
1.
Cell ; 177(2): 256-271.e22, 2019 04 04.
Article in English | MEDLINE | ID: mdl-30879788

ABSTRACT

We previously reported that inducing gamma oscillations with a non-invasive light flicker (gamma entrainment using sensory stimulus or GENUS) impacted pathology in the visual cortex of Alzheimer's disease mouse models. Here, we designed auditory tone stimulation that drove gamma frequency neural activity in auditory cortex (AC) and hippocampal CA1. Seven days of auditory GENUS improved spatial and recognition memory and reduced amyloid in AC and hippocampus of 5XFAD mice. Changes in activation responses were evident in microglia, astrocytes, and vasculature. Auditory GENUS also reduced phosphorylated tau in the P301S tauopathy model. Furthermore, combined auditory and visual GENUS, but not either alone, produced microglial-clustering responses, and decreased amyloid in medial prefrontal cortex. Whole brain analysis using SHIELD revealed widespread reduction of amyloid plaques throughout neocortex after multi-sensory GENUS. Thus, GENUS can be achieved through multiple sensory modalities with wide-ranging effects across multiple brain areas to improve cognitive function.
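For illustration, the kind of gamma-frequency auditory stimulus described above can be approximated as a train of short tone pips presented at 40 Hz. The sketch below is a minimal example; the pip carrier frequency, pip duration, and overall length are assumed values for illustration, not the parameters used in the study.

```python
# Minimal sketch: build a gamma-frequency (40 Hz) auditory stimulus of the kind
# used for auditory GENUS. Pip frequency, pip duration, and total duration are
# illustrative assumptions, not the exact values from the study.
import numpy as np
from scipy.io import wavfile

FS = 48_000          # sample rate (Hz); divides evenly by 40
GAMMA_HZ = 40        # presentation rate of the pips
PIP_FREQ = 10_000    # carrier frequency of each tone pip (Hz) -- assumed
PIP_DUR = 0.001      # each pip lasts 1 ms -- assumed
TOTAL_DUR = 10.0     # total stimulus length in seconds -- assumed

def make_genus_train(fs=FS, gamma_hz=GAMMA_HZ, pip_freq=PIP_FREQ,
                     pip_dur=PIP_DUR, total_dur=TOTAL_DUR):
    """Return a mono waveform with one short tone pip every 1/gamma_hz seconds."""
    n_total = int(fs * total_dur)
    signal = np.zeros(n_total)
    t_pip = np.arange(int(fs * pip_dur)) / fs
    pip = np.sin(2 * np.pi * pip_freq * t_pip)   # one tone pip
    period = int(fs / gamma_hz)                  # samples between pip onsets
    for onset in range(0, n_total - len(pip), period):
        signal[onset:onset + len(pip)] += pip
    return signal / np.max(np.abs(signal))       # normalise to +/- 1

if __name__ == "__main__":
    wav = make_genus_train()
    wavfile.write("genus_40hz.wav", FS, (wav * 32767).astype(np.int16))
```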


Subject(s)
Acoustic Stimulation/methods , Alzheimer Disease/therapy , Cognition/physiology , Alzheimer Disease/pathology , Amyloid/metabolism , Amyloid beta-Peptides/metabolism , Animals , Auditory Perception/physiology , Brain/metabolism , Disease Models, Animal , Gamma Rhythm/physiology , Hippocampus/metabolism , Male , Mice , Mice, Inbred C57BL , Microglia/metabolism , Plaque, Amyloid/metabolism
2.
Physiol Rev ; 103(2): 1025-1058, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36049112

ABSTRACT

Adaptation is an essential feature of auditory neurons, which reduces their responses to unchanging and recurring sounds and allows their response properties to be matched to the constantly changing statistics of sounds that reach the ears. As a consequence, processing in the auditory system highlights novel or unpredictable sounds and produces an efficient representation of the vast range of sounds that animals can perceive by continually adjusting the sensitivity and, to a lesser extent, the tuning properties of neurons to the most commonly encountered stimulus values. Together with attentional modulation, adaptation to sound statistics also helps to generate neural representations of sound that are tolerant to background noise and therefore plays a vital role in auditory scene analysis. In this review, we consider the diverse forms of adaptation that are found in the auditory system in terms of the processing levels at which they arise, the underlying neural mechanisms, and their impact on neural coding and perception. We also ask what the dynamics of adaptation, which can occur over multiple timescales, reveal about the statistical properties of the environment. Finally, we examine how adaptation to sound statistics is influenced by learning and experience and changes as a result of aging and hearing loss.
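As a concrete illustration of adaptation to sound statistics, the toy model below lets a unit's gain track a running estimate of recent input variance, so responses are rescaled when stimulus contrast changes. The stimulus, time constant, and divisive form are assumptions made for illustration, not a model taken from the review.

```python
# Toy sketch of gain adaptation to stimulus statistics: a unit's gain is divisively
# normalised by a running estimate of recent sound-level variance. Time constants
# and the stimulus are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 1_000                                   # samples per second
t = np.arange(0, 20, 1 / fs)                 # 20 s of "sound level" input

# Stimulus whose variance steps up halfway through.
level = rng.normal(0.0, 1.0, t.size)
level[t >= 10] *= 4.0                        # high-contrast epoch

tau = 0.5                                    # adaptation time constant (s) -- assumed
alpha = 1.0 / (tau * fs)
var_est = np.empty_like(level)
running = 1.0
for i, x in enumerate(level):
    running += alpha * (x ** 2 - running)    # leaky estimate of recent variance
    var_est[i] = running

gain = 1.0 / np.sqrt(var_est + 1e-6)         # divisive normalisation by recent SD
response = gain * level                      # adapted "neural" response

# After adaptation, response variance is similar in the low- and high-contrast epochs.
print("response SD, low contrast :", response[t < 10].std().round(2))
print("response SD, high contrast:", response[t >= 10].std().round(2))
```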


Subject(s)
Auditory Cortex , Animals , Acoustic Stimulation , Auditory Cortex/physiology , Auditory Perception/physiology , Noise , Adaptation, Physiological/physiology
3.
Annu Rev Neurosci ; 44: 449-473, 2021 07 08.
Article in English | MEDLINE | ID: mdl-33882258

ABSTRACT

Adaptive behavior in a complex, dynamic, and multisensory world poses some of the most fundamental computational challenges for the brain, notably inference, decision-making, learning, binding, and attention. We first discuss how the brain integrates sensory signals from the same source to support perceptual inference and decision-making by weighting them according to their momentary sensory uncertainties. We then show how observers solve the binding or causal inference problem: deciding whether signals come from common causes and should hence be integrated or else be treated independently. Next, we describe the multifarious interplay between multisensory processing and attention. We argue that attentional mechanisms are crucial to compute approximate solutions to the binding problem in naturalistic environments when complex time-varying signals arise from myriad causes. Finally, we review how the brain dynamically adapts multisensory processing to a changing world across multiple timescales.
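The uncertainty-weighted integration described above is commonly formalised as maximum-likelihood cue combination, in which each cue is weighted by its inverse variance. The sketch below shows that textbook rule with illustrative numbers; it is a standard formulation rather than a model specific to this review.

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue integration:
# each cue is weighted by its inverse variance. Numbers are illustrative.
import numpy as np

def integrate(cue_means, cue_vars):
    """Combine independent Gaussian cues by inverse-variance weighting.

    Returns the integrated estimate and its variance.
    """
    cue_means = np.asarray(cue_means, dtype=float)
    cue_vars = np.asarray(cue_vars, dtype=float)
    weights = (1.0 / cue_vars) / np.sum(1.0 / cue_vars)
    fused_mean = np.sum(weights * cue_means)
    fused_var = 1.0 / np.sum(1.0 / cue_vars)   # never larger than the best single cue
    return fused_mean, fused_var

# Example: an auditory and a visual estimate of the same source's azimuth (degrees).
audio_deg, audio_var = 10.0, 16.0              # noisier auditory estimate
visual_deg, visual_var = 4.0, 4.0              # more reliable visual estimate
mean, var = integrate([audio_deg, visual_deg], [audio_var, visual_var])
print(f"integrated estimate: {mean:.1f} deg, variance: {var:.1f}")
# The fused estimate (5.2 deg here) lies closer to the more reliable visual cue.
```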


Subject(s)
Attention , Auditory Perception , Brain , Learning , Visual Perception
4.
Nat Rev Neurosci ; 24(11): 711-722, 2023 11.
Article in English | MEDLINE | ID: mdl-37783820

ABSTRACT

Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song, and we outline the implications for our understanding of auditory processing. We review the literature regarding the neural and physiological mechanisms of song production and perception and show that this provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.


Subject(s)
Auditory Cortex , Humans , Auditory Perception/physiology , Speech/physiology , Brain/physiology , Acoustic Stimulation
5.
Annu Rev Neurosci ; 42: 47-65, 2019 07 08.
Article in English | MEDLINE | ID: mdl-30699049

ABSTRACT

The modern cochlear implant (CI) is the most successful neural prosthesis developed to date. CIs provide hearing to the profoundly hearing impaired and allow the acquisition of spoken language in children born deaf. Results from studies enabled by the CI have provided new insights into (a) minimal representations at the periphery for speech reception, (b) brain mechanisms for decoding speech presented in quiet and in acoustically adverse conditions, (c) the developmental neuroscience of language and hearing, and (d) the mechanisms and time courses of intramodal and cross-modal plasticity. Additionally, the results have underscored the interconnectedness of brain functions and the importance of top-down processes in perception and learning. The findings are described in this review with emphasis on the developing brain and the acquisition of hearing and spoken language.


Subject(s)
Auditory Perception/physiology , Cochlear Implants , Critical Period, Psychological , Language Development , Animals , Auditory Perceptual Disorders/etiology , Brain/growth & development , Cochlear Implantation , Comprehension , Cues , Deafness/congenital , Deafness/physiopathology , Deafness/psychology , Deafness/surgery , Equipment Design , Humans , Language Development Disorders/etiology , Language Development Disorders/prevention & control , Learning/physiology , Neuronal Plasticity , Photic Stimulation
6.
Nat Rev Neurosci ; 23(5): 287-305, 2022 05.
Article in English | MEDLINE | ID: mdl-35352057

ABSTRACT

Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.


Subject(s)
Music , Auditory Perception , Brain , Emotions , Humans , Learning , Music/psychology
7.
PLoS Biol ; 22(5): e3002631, 2024 May.
Article in English | MEDLINE | ID: mdl-38805517

ABSTRACT

Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech and what basic acoustic information is required to distinguish between them remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music and speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (that can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: If AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially noise-synthesized ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tend to be judged as speech, lower as music. Interestingly, this principle is consistently used by all listeners for speech judgments, but only by musically sophisticated listeners for music. In addition, signals with more regular AM are judged as music over speech, and this feature is more critical for music judgment, regardless of musical sophistication. The data suggest that the auditory system can rely on a low-level acoustic property as basic as AM to distinguish music from speech, a simple principle that provokes both neurophysiological and evolutionary experiments and speculations.
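A rough sketch of the central measurement follows: estimating a sound's peak AM frequency from its amplitude envelope. The envelope extraction (Hilbert transform) and spectral estimate (Welch periodogram) are assumed choices for illustration and do not reproduce the authors' exact pipeline.

```python
# Rough sketch of estimating a sound's peak amplitude-modulation (AM) frequency:
# extract the amplitude envelope with the Hilbert transform, then find the peak of
# its low-frequency spectrum. These analysis choices are assumptions, not the
# authors' exact pipeline.
import numpy as np
from scipy.signal import hilbert, welch

def peak_am_frequency(waveform, fs, fmin=0.5, fmax=32.0):
    """Return the frequency (Hz) of the largest AM spectral peak in [fmin, fmax]."""
    envelope = np.abs(hilbert(waveform))           # amplitude envelope
    envelope -= envelope.mean()                    # remove DC before spectral analysis
    freqs, power = welch(envelope, fs=fs, nperseg=min(len(envelope), 4 * fs))
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(power[band])]

if __name__ == "__main__":
    fs = 16_000
    t = np.arange(0, 4.0, 1 / fs)
    # Synthetic "speech-like" signal: a noise carrier modulated at ~5 Hz.
    carrier = np.random.default_rng(1).normal(size=t.size)
    signal = (1 + np.sin(2 * np.pi * 5.0 * t)) * carrier
    print("peak AM frequency:", round(float(peak_am_frequency(signal, fs)), 1), "Hz")
```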


Subject(s)
Acoustic Stimulation , Auditory Perception , Music , Speech Perception , Humans , Male , Female , Adult , Auditory Perception/physiology , Acoustic Stimulation/methods , Speech Perception/physiology , Young Adult , Speech/physiology , Adolescent
8.
PLoS Biol ; 22(2): e3002494, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38319934

ABSTRACT

Effective interactions with the environment rely on the integration of multisensory signals: Our brains must efficiently combine signals that share a common source, and segregate those that do not. Healthy ageing can change or impair this process. This functional magnetic resonance imaging study assessed the neural mechanisms underlying age differences in the integration of auditory and visual spatial cues. Participants were presented with synchronous audiovisual signals at various degrees of spatial disparity and indicated their perceived sound location. Behaviourally, older adults were able to maintain localisation accuracy. At the neural level, they integrated auditory and visual cues into spatial representations along dorsal auditory and visual processing pathways similarly to their younger counterparts but showed greater activations in a widespread system of frontal, temporal, and parietal areas. According to multivariate Bayesian decoding, these areas encoded critical stimulus information beyond that which was encoded in the brain areas commonly activated by both groups. Surprisingly, however, the boost in information provided by these areas with age-related activation increases was comparable across the 2 age groups. This dissociation (comparable information encoded in brain activation patterns across the 2 age groups, but age-related increases in regional blood-oxygen-level-dependent responses) contradicts the widespread notion that older adults recruit new regions as a compensatory mechanism to encode task-relevant information. Instead, our findings suggest that activation increases in older adults reflect nonspecific or modulatory mechanisms related to less efficient or slower processing, or greater demands on attentional resources.


Subject(s)
Brain Mapping , Visual Perception , Humans , Aged , Bayes Theorem , Visual Perception/physiology , Brain/physiology , Attention/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Photic Stimulation/methods , Magnetic Resonance Imaging
9.
PLoS Biol ; 22(8): e3002732, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39133721

ABSTRACT

Music can evoke pleasurable and rewarding experiences. Past studies that examined task-related brain activity revealed individual differences in musical reward sensitivity traits and linked them to interactions between the auditory and reward systems. However, state-dependent fluctuations in spontaneous neural activity in relation to music-driven rewarding experiences have not been studied. Here, we used functional MRI to examine whether the coupling of auditory-reward networks during a silent period immediately before music listening can predict the degree of musical rewarding experience of human participants (N = 49). We used machine learning models and showed that the functional connectivity between auditory and reward networks, but not others, could robustly predict subjective, physiological, and neurobiological aspects of the strong musical reward of chills. Specifically, the right auditory cortex-striatum/orbitofrontal connections predicted the reported duration of chills and the activation level of nucleus accumbens and insula, whereas the auditory-amygdala connection was associated with psychophysiological arousal. Furthermore, the predictive model derived from the first sample of individuals was generalized in an independent dataset using different music samples. The generalization was successful only for state-like, pre-listening functional connectivity but not for stable, intrinsic functional connectivity. The current study reveals the critical role of sensory-reward connectivity in pre-task brain state in modulating subsequent rewarding experience.
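The prediction logic here can be sketched as cross-validated regression from pre-listening connectivity features to a per-subject chills measure. The example below uses simulated data, and the feature layout and ridge-regression model are assumptions for illustration rather than the authors' actual pipeline.

```python
# Schematic sketch: cross-validated regression from pre-listening auditory-reward
# functional connectivity features to a per-subject measure of musical chills.
# Data are simulated; the model choice (ridge regression) is an assumption.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict, KFold

rng = np.random.default_rng(0)
n_subjects, n_connections = 49, 20            # e.g., auditory-to-reward ROI pairs

# Simulated connectivity matrix (subjects x connections) and chills duration (s).
connectivity = rng.normal(size=(n_subjects, n_connections))
true_weights = rng.normal(size=n_connections)
chills_duration = connectivity @ true_weights + rng.normal(scale=2.0, size=n_subjects)

model = Ridge(alpha=1.0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
predicted = cross_val_predict(model, connectivity, chills_duration, cv=cv)

r = np.corrcoef(predicted, chills_duration)[0, 1]
print(f"cross-validated prediction r = {r:.2f}")
# A positive out-of-sample r indicates that pre-listening connectivity carries
# information about the subsequent rewarding experience.
```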


Subject(s)
Auditory Perception , Magnetic Resonance Imaging , Music , Pleasure , Reward , Humans , Music/psychology , Male , Female , Pleasure/physiology , Adult , Auditory Perception/physiology , Young Adult , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging , Brain Mapping/methods , Brain/physiology , Brain/diagnostic imaging , Acoustic Stimulation , Nerve Net/physiology , Nerve Net/diagnostic imaging , Machine Learning
10.
PLoS Biol ; 22(6): e3002665, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38935589

ABSTRACT

Loss of synapses between spiral ganglion neurons and inner hair cells (IHC synaptopathy) leads to an auditory neuropathy called hidden hearing loss (HHL) characterized by normal auditory thresholds but reduced amplitude of sound-evoked auditory potentials. It has been proposed that synaptopathy and HHL result in poor performance in challenging hearing tasks despite a normal audiogram. However, this has only been tested in animals after exposure to noise or ototoxic drugs, which can cause deficits beyond synaptopathy. Furthermore, the impact of supernumerary synapses on auditory processing has not been evaluated. Here, we studied mice in which IHC synapse counts were increased or decreased by altering neurotrophin 3 (Ntf3) expression in IHC supporting cells. As we previously showed, postnatal Ntf3 knockdown or overexpression reduces or increases, respectively, IHC synapse density and suprathreshold amplitude of sound-evoked auditory potentials without changing cochlear thresholds. We now show that IHC synapse density does not influence the magnitude of the acoustic startle reflex or its prepulse inhibition. In contrast, gap-prepulse inhibition, a behavioral test for auditory temporal processing, is reduced or enhanced according to Ntf3 expression levels. These results indicate that IHC synaptopathy causes temporal processing deficits predicted in HHL. Furthermore, the improvement in temporal acuity achieved by increasing Ntf3 expression and synapse density suggests a therapeutic strategy for improving hearing in noise for individuals with synaptopathy of various etiologies.


Subject(s)
Hair Cells, Auditory, Inner , Neurotrophin 3 , Synapses , Animals , Hair Cells, Auditory, Inner/metabolism , Hair Cells, Auditory, Inner/pathology , Synapses/metabolism , Synapses/physiology , Neurotrophin 3/metabolism , Neurotrophin 3/genetics , Mice , Auditory Threshold , Evoked Potentials, Auditory/physiology , Reflex, Startle/physiology , Auditory Perception/physiology , Spiral Ganglion/metabolism , Female , Male , Hearing Loss, Hidden
11.
Cell ; 151(1): 41-55, 2012 Sep 28.
Article in English | MEDLINE | ID: mdl-23021214

ABSTRACT

Natural sensory input shapes both structure and function of developing neurons, but how early experience-driven morphological and physiological plasticity are interrelated remains unclear. Using rapid time-lapse two-photon calcium imaging of network activity and single-neuron growth within the unanesthetized developing brain, we demonstrate that visual stimulation induces coordinated changes to neuronal responses and dendritogenesis. Further, we identify the transcription factor MEF2A/2D as a major regulator of neuronal response to plasticity-inducing stimuli directing both structural and functional changes. Unpatterned sensory stimuli that change plasticity thresholds induce rapid degradation of MEF2A/2D through a classical apoptotic pathway requiring NMDA receptors and caspases-9 and -3/7. Knockdown of MEF2A/2D alone is sufficient to induce a metaplastic shift in threshold of both functional and morphological plasticity. These findings demonstrate how sensory experience acting through altered levels of the transcription factor MEF2 fine-tunes the plasticity thresholds of brain neurons during neural circuit formation.


Subject(s)
Brain/embryology , Myogenic Regulatory Factors/metabolism , Neuronal Plasticity , Transcription Factors/metabolism , Xenopus Proteins/metabolism , Xenopus laevis/embryology , Animals , Auditory Perception , Brain/cytology , Caspases/metabolism , MEF2 Transcription Factors , Neurons/metabolism , Receptors, N-Methyl-D-Aspartate/metabolism , Sound , Visual Perception
12.
Proc Natl Acad Sci U S A ; 121(26): e2318361121, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38889147

ABSTRACT

When listeners hear a voice, they rapidly form a complex first impression of who the person behind that voice might be. We characterize how these multivariate first impressions from voices emerge over time across different levels of abstraction using electroencephalography and representational similarity analysis. We find that for eight perceived physical (gender, age, and health), trait (attractiveness, dominance, and trustworthiness), and social characteristics (educatedness and professionalism), representations emerge early (~80 ms after stimulus onset), with voice acoustics contributing to those representations between ~100 ms and 400 ms. While impressions of person characteristics are highly correlated, we can find evidence for highly abstracted, independent representations of individual person characteristics. These abstracted representations emerge gradually over time. That is, representations of physical characteristics (age, gender) arise early (from ~120 ms), while representations of some trait and social characteristics emerge later (~360 ms onward). The findings align with recent theoretical models and shed light on the computations underpinning person perception from voices.
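The time-resolved analysis can be sketched as representational similarity analysis (RSA): at each time point, a neural representational dissimilarity matrix (RDM) computed from voice-evoked EEG patterns is correlated with a model RDM for one perceived characteristic. The example below uses simulated data, and the distance and correlation choices are assumptions rather than the authors' exact settings.

```python
# Bare-bones sketch of time-resolved RSA: correlate a neural RDM (pairwise distances
# between voice-evoked EEG patterns) with a model RDM for one characteristic (age)
# at every time point. Data are simulated; metric choices are assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_voices, n_channels, n_times = 30, 64, 100   # stimuli x EEG channels x time points

# Model RDM: pairwise difference in (simulated) rated age of each voice.
rated_age = rng.uniform(20, 70, n_voices)
model_rdm = pdist(rated_age[:, None], metric="euclidean")

# Simulated EEG patterns that start encoding age from time point 40 onward.
eeg = rng.normal(size=(n_voices, n_channels, n_times))
eeg[:, 0, 40:] += rated_age[:, None] / 10.0

rsa_timecourse = np.empty(n_times)
for tp in range(n_times):
    neural_rdm = pdist(eeg[:, :, tp], metric="correlation")
    rsa_timecourse[tp], _ = spearmanr(neural_rdm, model_rdm)

print("mean RSA before onset :", rsa_timecourse[:40].mean().round(3))
print("mean RSA after onset  :", rsa_timecourse[40:].mean().round(3))
```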


Subject(s)
Auditory Perception , Brain , Electroencephalography , Voice , Humans , Male , Female , Voice/physiology , Adult , Brain/physiology , Auditory Perception/physiology , Young Adult , Social Perception
13.
Proc Natl Acad Sci U S A ; 121(5): e2308859121, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38271338

ABSTRACT

Emotions, bodily sensations and movement are integral parts of musical experiences. Yet, it remains unknown i) whether emotional connotations and structural features of music elicit discrete bodily sensations and ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.


Subject(s)
Music , Humans , Music/psychology , Sensation , Cross-Cultural Comparison , Acoustics , Emotions , Auditory Perception
14.
Proc Natl Acad Sci U S A ; 121(30): e2320378121, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39008675

ABSTRACT

The neuroscientific examination of music processing in audio-visual contexts offers a valuable framework to assess how auditory information influences the emotional encoding of visual information. Using fMRI during naturalistic film viewing, we investigated the neural mechanisms underlying the effect of music on valence inferences during mental state attribution. Thirty-eight participants watched the same short film accompanied by systematically controlled consonant or dissonant music. Subjects were instructed to think about the main character's intentions. The results revealed that increasing levels of dissonance led to more negatively valenced inferences, displaying the profound emotional impact of musical dissonance. Crucially, at the neuroscientific level and despite music being the sole manipulation, dissonance evoked the response of the primary visual cortex (V1). Functional/effective connectivity analysis showed a stronger coupling between the auditory ventral stream (AVS) and V1 in response to tonal dissonance and demonstrated the modulation of early visual processing via top-down feedback inputs from the AVS to V1. These V1 signal changes indicate the influence of high-level contextual representations associated with tonal dissonance on early visual cortices, serving to facilitate the emotional interpretation of visual information. Our results highlight the significance of employing systematically controlled music, which can isolate emotional valence from the arousal dimension, to elucidate the brain's sound-to-meaning interface and its distributive crossmodal effects on early visual encoding during naturalistic film viewing.


Subject(s)
Auditory Perception , Emotions , Magnetic Resonance Imaging , Music , Visual Perception , Humans , Music/psychology , Female , Male , Adult , Visual Perception/physiology , Auditory Perception/physiology , Emotions/physiology , Young Adult , Brain Mapping , Acoustic Stimulation , Visual Cortex/physiology , Visual Cortex/diagnostic imaging , Primary Visual Cortex/physiology , Photic Stimulation/methods
15.
Proc Natl Acad Sci U S A ; 121(25): e2405588121, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38861607

ABSTRACT

Many animals can extract useful information from the vocalizations of other species. Neuroimaging studies have evidenced areas sensitive to conspecific vocalizations in the cerebral cortex of primates, but how these areas process heterospecific vocalizations remains unclear. Using fMRI-guided electrophysiology, we recorded the spiking activity of individual neurons in the anterior temporal voice patches of two macaques while they listened to complex sounds including vocalizations from several species. In addition to cells selective for conspecific macaque vocalizations, we identified an unsuspected subpopulation of neurons with strong selectivity for human voice, not merely explained by spectral or temporal structure of the sounds. The auditory representational geometry implemented by these neurons was strongly related to that measured in the human voice areas with neuroimaging and only weakly to low-level acoustical structure. These findings provide new insights into the neural mechanisms involved in auditory expertise and the evolution of communication systems in primates.


Subject(s)
Auditory Perception , Magnetic Resonance Imaging , Neurons , Vocalization, Animal , Voice , Animals , Humans , Neurons/physiology , Voice/physiology , Magnetic Resonance Imaging/methods , Vocalization, Animal/physiology , Auditory Perception/physiology , Male , Macaca mulatta , Brain/physiology , Acoustic Stimulation , Brain Mapping/methods
16.
Proc Natl Acad Sci U S A ; 121(10): e2316306121, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38408255

ABSTRACT

Music is powerful in conveying emotions and triggering affective brain mechanisms. Affective brain responses in previous studies were however rather inconsistent, potentially because of the non-adaptive nature of recorded music used so far. Live music instead can be dynamic and adaptive and is often modulated in response to audience feedback to maximize emotional responses in listeners. Here, we introduce a closed-loop neurofeedback setup for studying emotional responses to live music. This setup linked live performances by musicians to neural processing in listeners, with listeners' amygdala activity displayed to musicians in real time. Brain activity was measured using functional MRI, and especially amygdala activity was quantified in real time for the neurofeedback signal. Live pleasant and unpleasant piano music performed in response to amygdala neurofeedback from listeners was acoustically very different from comparable recorded music and elicited significantly higher and more consistent amygdala activity. Higher activity was also found in a broader neural network for emotion processing during live compared to recorded music. This finding included observations of the predominance of aversive coding in the ventral striatum while listening to unpleasant music, and involvement of the thalamic pulvinar nucleus, presumably for regulating attentional and cortical flow mechanisms. Live music also stimulated a dense functional neural network with the amygdala as a central node influencing other brain systems. Finally, only live music showed a strong and positive coupling between features of the musical performance and brain activity in listeners, pointing to real-time and dynamic entrainment processes.


Subject(s)
Music , Music/psychology , Brain/physiology , Emotions/physiology , Amygdala/physiology , Affect , Magnetic Resonance Imaging , Auditory Perception/physiology
17.
Proc Natl Acad Sci U S A ; 121(24): e2311570121, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38830095

ABSTRACT

Even a transient period of hearing loss during the developmental critical period can induce long-lasting deficits in temporal and spectral perception. These perceptual deficits correlate with speech perception in humans. In gerbils, these hearing loss-induced perceptual deficits are correlated with a reduction of both ionotropic GABAA and metabotropic GABAB receptor-mediated synaptic inhibition in auditory cortex, but most research on critical period plasticity has focused on GABAA receptors. Therefore, we developed viral vectors to express proteins that would upregulate gerbil postsynaptic inhibitory receptor subunits (GABAA, Gabra1; GABAB, Gabbr1b) in pyramidal neurons, and an enzyme that mediates GABA synthesis (GAD65) presynaptically in parvalbumin-expressing interneurons. A transient period of developmental hearing loss during the auditory critical period significantly impaired perceptual performance on two auditory tasks: amplitude modulation depth detection and spectral modulation depth detection. We then tested the capacity of each vector to restore perceptual performance on these auditory tasks. While both GABA receptor vectors increased the amplitude of cortical inhibitory postsynaptic potentials, only viral expression of postsynaptic GABAB receptors improved perceptual thresholds to control levels. Similarly, presynaptic GAD65 expression improved perceptual performance on spectral modulation detection. These findings suggest that recovering performance on auditory perceptual tasks depends on GABAB receptor-dependent transmission at the auditory cortex parvalbumin to pyramidal synapse and point to potential therapeutic targets for developmental sensory disorders.


Subject(s)
Auditory Cortex , Gerbillinae , Hearing Loss , Animals , Auditory Cortex/metabolism , Auditory Cortex/physiopathology , Hearing Loss/genetics , Hearing Loss/physiopathology , Receptors, GABA-B/metabolism , Receptors, GABA-B/genetics , Glutamate Decarboxylase/metabolism , Glutamate Decarboxylase/genetics , Receptors, GABA-A/metabolism , Receptors, GABA-A/genetics , Parvalbumins/metabolism , Parvalbumins/genetics , Auditory Perception/physiology , Pyramidal Cells/metabolism , Pyramidal Cells/physiology , Genetic Vectors/genetics
18.
Annu Rev Neurosci ; 41: 527-552, 2018 07 08.
Article in English | MEDLINE | ID: mdl-29986161

ABSTRACT

How the cerebral cortex encodes auditory features of biologically important sounds, including speech and music, is one of the most important questions in auditory neuroscience. The pursuit to understand related neural coding mechanisms in the mammalian auditory cortex can be traced back several decades to the early exploration of the cerebral cortex. Significant progress in this field has been made in the past two decades with new technical and conceptual advances. This article reviews the progress and challenges in this area of research.


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Brain Mapping , Animals , Hearing , Humans , Music , Speech
19.
PLoS Biol ; 21(8): e3002239, 2023 08.
Article in English | MEDLINE | ID: mdl-37651504

ABSTRACT

Understanding central auditory processing critically depends on defining underlying auditory cortical networks and their relationship to the rest of the brain. We addressed these questions using resting state functional connectivity derived from human intracranial electroencephalography. Mapping recording sites into a low-dimensional space where proximity represents functional similarity revealed a hierarchical organization. At a fine scale, a group of auditory cortical regions excluded several higher-order auditory areas and segregated maximally from the prefrontal cortex. On a mesoscale, the proximity of limbic structures to the auditory cortex suggested a limbic stream that parallels the classically described ventral and dorsal auditory processing streams. Identities of global hubs in anterior temporal and cingulate cortex depended on frequency band, consistent with diverse roles in semantic and cognitive processing. On a macroscale, observed hemispheric asymmetries were not specific for speech and language networks. This approach can be applied to multivariate brain data with respect to development, behavior, and disorders.
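The mapping idea can be sketched as follows: compute a functional-connectivity matrix between recording sites from resting-state signals, convert it to a dissimilarity matrix, and embed the sites in a low-dimensional space where proximity reflects functional similarity. The example below uses simulated data; simple correlation connectivity and classical multidimensional scaling are assumed choices, not the authors' exact method.

```python
# Toy sketch: site-by-site connectivity from resting-state signals, converted to a
# dissimilarity and embedded so that functionally similar sites land close together.
# Data are simulated; connectivity measure and embedding are assumptions.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_sites, n_samples = 40, 5_000

# Simulate two functional "groups" of sites sharing a common slow signal.
shared_a = rng.normal(size=n_samples)
shared_b = rng.normal(size=n_samples)
signals = rng.normal(size=(n_sites, n_samples))
signals[:20] += 0.8 * shared_a                 # group A
signals[20:] += 0.8 * shared_b                 # group B

connectivity = np.corrcoef(signals)            # site-by-site correlation matrix
dissimilarity = 1.0 - connectivity             # proximity = functional similarity

embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(dissimilarity)

# Sites in the same functional group should land near each other in the embedding.
print("group A centroid:", coords[:20].mean(axis=0).round(2))
print("group B centroid:", coords[20:].mean(axis=0).round(2))
```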


Subject(s)
Auditory Cortex , Humans , Auditory Perception , Brain , Electrocorticography , Electrophysiology
20.
PLoS Biol ; 21(12): e3002366, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38091351

ABSTRACT

Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: Middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.
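Model-brain correspondence of this kind is typically quantified by regressing voxel responses on a model stage's activations and scoring held-out prediction accuracy. The sketch below uses simulated data; cross-validated ridge regression and per-voxel correlation scoring are common choices assumed here, not necessarily the paper's exact procedure.

```python
# Simplified sketch of evaluating model-brain correspondence: regress voxel responses
# on one model stage's activations across sounds (ridge regression, cross-validated),
# then score each voxel by the correlation between predicted and held-out responses.
# Data are simulated; the regression/scoring choices are assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sounds, n_features, n_voxels = 165, 256, 50

stage_activations = rng.normal(size=(n_sounds, n_features))        # one model stage
mapping = rng.normal(scale=0.2, size=(n_features, n_voxels))
voxel_responses = stage_activations @ mapping + rng.normal(size=(n_sounds, n_voxels))

cv = KFold(n_splits=5, shuffle=True, random_state=0)
predicted = np.zeros_like(voxel_responses)
for train, test in cv.split(stage_activations):
    model = Ridge(alpha=10.0)
    model.fit(stage_activations[train], voxel_responses[train])
    predicted[test] = model.predict(stage_activations[test])

# Per-voxel prediction accuracy: correlation between measured and predicted responses.
scores = [np.corrcoef(voxel_responses[:, v], predicted[:, v])[0, 1]
          for v in range(n_voxels)]
print("median voxel prediction r:", round(float(np.median(scores)), 2))
```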


Subject(s)
Auditory Cortex , Neural Networks, Computer , Brain , Hearing , Auditory Perception/physiology , Noise , Auditory Cortex/physiology