Results 1 - 20 of 7,581
1.
Cereb Cortex ; 34(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39087881

ABSTRACT

Perception integrates both sensory inputs and internal models of the environment. In the auditory domain, predictions play a critical role because of the temporal nature of sounds. However, the precise contribution of cortical and subcortical structures to these processes, and their interaction, remains unclear. It is also unclear whether these brain interactions are specific to abstract rules or whether they also underlie the predictive coding of local features. We used high-field 7T functional magnetic resonance imaging to investigate interactions between cortical and subcortical areas during auditory predictive processing. Volunteers listened to tone sequences in an oddball paradigm in which the predictability of the deviant was manipulated. Perturbations in periodicity were also introduced to test the specificity of the response. Results indicate that both cortical and subcortical auditory structures encode high-order predictive dynamics, with the effect of predictability being strongest in the auditory cortex. These predictive dynamics were best explained by modeling a top-down information flow, in contrast to unpredicted responses. No error signals were observed in response to deviations of periodicity, suggesting that these responses are specific to abstract rule violations. Our results support the idea that the high-order predictive dynamics observed in subcortical areas propagate from the auditory cortex.


Subject(s)
Acoustic Stimulation , Auditory Cortex , Auditory Perception , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Male , Female , Adult , Auditory Perception/physiology , Young Adult , Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging , Brain Mapping/methods
2.
PLoS Biol ; 22(8): e3002732, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39133721

ABSTRACT

Music can evoke pleasurable and rewarding experiences. Past studies that examined task-related brain activity revealed individual differences in musical reward sensitivity traits and linked them to interactions between the auditory and reward systems. However, state-dependent fluctuations in spontaneous neural activity in relation to music-driven rewarding experiences have not been studied. Here, we used functional MRI to examine whether the coupling of auditory-reward networks during a silent period immediately before music listening can predict the degree of musical rewarding experience of human participants (N = 49). We used machine learning models and showed that the functional connectivity between auditory and reward networks, but not others, could robustly predict subjective, physiological, and neurobiological aspects of the strong musical reward of chills. Specifically, the right auditory cortex-striatum/orbitofrontal connections predicted the reported duration of chills and the activation level of nucleus accumbens and insula, whereas the auditory-amygdala connection was associated with psychophysiological arousal. Furthermore, the predictive model derived from the first sample of individuals was generalized in an independent dataset using different music samples. The generalization was successful only for state-like, pre-listening functional connectivity but not for stable, intrinsic functional connectivity. The current study reveals the critical role of sensory-reward connectivity in pre-task brain state in modulating subsequent rewarding experience.
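As a rough illustration of the prediction approach described above, the sketch below fits a cross-validated regression from pre-listening connectivity features to a behavioral reward measure. The feature matrix, target variable, and ridge estimator are placeholders chosen for illustration; the study's actual machine learning pipeline is not specified in the abstract.

```python
# Minimal sketch (assumptions): X holds pre-listening auditory-reward
# functional-connectivity values (n_subjects x n_connections) and y the
# reported chills duration per subject. All data here are placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict, KFold
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_connections = 49, 6
X = rng.standard_normal((n_subjects, n_connections))   # pre-listening FC features
y = rng.standard_normal(n_subjects)                    # e.g., chills duration (placeholder)

model = Ridge(alpha=1.0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=cv)         # out-of-sample predictions
r, p = pearsonr(y, y_pred)                             # prediction accuracy
print(f"cross-validated r = {r:.2f}, p = {p:.3f}")
```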


Subject(s)
Auditory Perception , Magnetic Resonance Imaging , Music , Pleasure , Reward , Humans , Music/psychology , Male , Female , Pleasure/physiology , Adult , Auditory Perception/physiology , Young Adult , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging , Brain Mapping/methods , Brain/physiology , Brain/diagnostic imaging , Acoustic Stimulation , Nerve Net/physiology , Nerve Net/diagnostic imaging , Machine Learning
3.
Cereb Cortex ; 34(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39128941

ABSTRACT

High-frequency (>60 Hz) neuroelectric signals likely have functional roles distinct from low-frequency (<30 Hz) signals. While high-gamma activity (HGA; >60 Hz) does not simply equate to neuronal spiking, the two are highly correlated and encode similar information. HGA is typically considered broadband and poorly phase-locked to sensory stimuli, and is therefore usually analyzed after transformation into absolute amplitude or spectral power. However, those analyses discard signal polarity, compromising the interpretation of neuroelectric events that are essentially dipolar. In the spectrotemporal profiles of field potentials in auditory cortex, we show high-frequency spectral peaks that are not phase-locked to sound onset and that follow the broadband peak of phase-locked onset responses. Isolating the signal components comprising the high-frequency peaks reveals narrow-band high-frequency oscillatory events whose instantaneous frequency drops rapidly from >150 Hz to 60 Hz, which may underlie the broadband high-frequency spectral peaks in previous reports. The laminar amplitude distributions of the isolated activity had two peak positions, while the laminar phase patterns showed a counterphase relationship between those peaks, indicating the formation of dipoles. Our findings suggest that non-phase-locked HGA arises in part from oscillatory or recurring activity of supragranular-layer neuronal ensembles in auditory cortex.


Subject(s)
Acoustic Stimulation , Auditory Cortex , Evoked Potentials, Auditory , Animals , Auditory Cortex/physiology , Acoustic Stimulation/methods , Evoked Potentials, Auditory/physiology , Male , Electroencephalography , Macaca mulatta , Gamma Rhythm/physiology
4.
Trends Hear ; 28: 23312165241258056, 2024.
Article in English | MEDLINE | ID: mdl-39053892

ABSTRACT

This study investigated the morphology of the functional near-infrared spectroscopy (fNIRS) response to speech sounds measured from 16 sleeping infants and how it changes with repeated stimulus presentation. We observed a positive peak followed by a wide negative trough, with the latter being most evident in early epochs. We argue that the overall response morphology captures the effects of two simultaneous, but independent, response mechanisms that are both activated at stimulus onset: one being the obligatory response to a sound stimulus by the auditory system, and the other being a neural suppression effect induced by the arousal system. Because the two effects behave differently across repeated epochs, it is possible to mathematically separate them and use fNIRS to study factors that affect the development and activation of the arousal system in infants. The results also imply that standard fNIRS analysis techniques need to be adjusted to take into account the possibility that multiple brain systems are activated simultaneously and that the response to a stimulus is not necessarily stationary.
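The abstract does not detail how the two response mechanisms are separated, but the idea of disentangling two simultaneously evoked components can be illustrated generically: if each epoch is modeled as a weighted sum of two fixed component shapes, per-epoch weights can be recovered by least squares and tracked across repetitions. Everything below (component shapes, sampling rate, decay of the arousal component) is a hypothetical sketch, not the authors' method.

```python
# Generic least-squares separation of two fixed-shape components; all shapes
# and parameters are illustrative assumptions, not the paper's model.
import numpy as np

fs = 10
t = np.arange(0, 20, 1 / fs)
obligatory = np.exp(-((t - 6) ** 2) / 8)          # placeholder positive peak
arousal = -np.exp(-((t - 12) ** 2) / 30)          # placeholder wide negative trough
D = np.column_stack([obligatory, arousal])        # design matrix of component shapes

rng = np.random.default_rng(8)
n_epochs = 12
true_w = np.column_stack([np.ones(n_epochs), np.linspace(1, 0.2, n_epochs)])
epochs = true_w @ D.T + 0.1 * rng.standard_normal((n_epochs, t.size))

weights, *_ = np.linalg.lstsq(D, epochs.T, rcond=None)   # (2, n_epochs) per-epoch amplitudes
print(np.round(weights.T, 2))                            # arousal weight decays across epochs
```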


Subject(s)
Acoustic Stimulation , Arousal , Sleep , Spectroscopy, Near-Infrared , Humans , Spectroscopy, Near-Infrared/methods , Acoustic Stimulation/methods , Infant , Sleep/physiology , Female , Male , Arousal/physiology , Speech Perception/physiology , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging , Auditory Pathways/physiology , Brain Mapping/methods , Time Factors , Age Factors , Oxyhemoglobins/metabolism
5.
Nat Commun ; 15(1): 6023, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39019848

ABSTRACT

Neuronal responses during behavior are diverse, ranging from highly reliable 'classical' responses to irregular 'non-classically responsive' firing. While a continuum of response properties is observed across neural systems, little is known about the synaptic origins and contributions of diverse responses to network function, perception, and behavior. To capture the heterogeneous responses measured from auditory cortex of rodents performing a frequency recognition task, we use a novel task-performing spiking recurrent neural network incorporating spike-timing-dependent plasticity. Reliable and irregular units contribute differentially to task performance via output and recurrent connections, respectively. Excitatory plasticity shifts the response distribution while inhibition constrains its diversity. Together both improve task performance with full network engagement. The same local patterns of synaptic inputs predict spiking response properties of network units and auditory cortical neurons from in vivo whole-cell recordings during behavior. Thus, diverse neural responses contribute to network function and emerge from synaptic plasticity rules.
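For readers unfamiliar with spike-timing-dependent plasticity (STDP), the pairwise rule commonly used in such spiking recurrent networks can be sketched in a few lines. The amplitudes and time constants below are generic textbook values, not the parameters of this paper's model.

```python
# Pairwise STDP sketch: potentiate when the presynaptic spike precedes the
# postsynaptic spike, depress otherwise. Parameters are illustrative only.
import numpy as np

A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> potentiation
        return A_plus * np.exp(-dt / tau_plus)
    else:        # post before (or at) pre -> depression
        return -A_minus * np.exp(dt / tau_minus)

# Example: a pre-spike at 10 ms followed by a post-spike at 15 ms strengthens the synapse
print(stdp_dw(10.0, 15.0))
```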


Subject(s)
Action Potentials , Auditory Cortex , Neuronal Plasticity , Neurons , Synapses , Animals , Neuronal Plasticity/physiology , Auditory Cortex/physiology , Auditory Cortex/cytology , Neurons/physiology , Action Potentials/physiology , Synapses/physiology , Rats , Nerve Net/physiology , Models, Neurological , Task Performance and Analysis
6.
Cereb Cortex ; 34(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39051660

ABSTRACT

What is the function of auditory hemispheric asymmetry? We propose that the identification of sound sources relies on the asymmetric processing of two complementary and perceptually relevant acoustic invariants: actions and objects. In a large dataset of environmental sounds, we observed that temporal and spectral modulations display only weak covariation. We then synthesized auditory stimuli by simulating various actions (frictions) occurring on different objects (solid surfaces). Behaviorally, discrimination of actions relies on temporal modulations, while discrimination of objects relies on spectral modulations. Functional magnetic resonance imaging data showed that actions and objects are decoded in the left and right hemispheres, respectively, in bilateral superior temporal and left inferior frontal regions. This asymmetry reflects a generic differential processing-through differential neural sensitivity to temporal and spectral modulations present in environmental sounds-that supports the efficient categorization of actions and objects. These results support an ecologically valid framework of the functional role of auditory brain asymmetry.


Subject(s)
Acoustic Stimulation , Auditory Perception , Functional Laterality , Magnetic Resonance Imaging , Humans , Male , Female , Magnetic Resonance Imaging/methods , Functional Laterality/physiology , Adult , Acoustic Stimulation/methods , Auditory Perception/physiology , Young Adult , Brain Mapping/methods , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging
7.
Curr Biol ; 34(15): 3354-3366.e6, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-38996534

ABSTRACT

Sensory perception is dynamic, quickly adapting to sudden shifts in environmental or behavioral context. Although decades of work have established that these dynamics are mediated by rapid fluctuations in sensory cortical activity, we have a limited understanding of the brain regions and pathways that orchestrate these changes. Neurons in the orbitofrontal cortex (OFC) encode contextual information, and recent data suggest that some of these signals are transmitted to sensory cortices. Whether and how these signals shape sensory encoding and perceptual sensitivity remain uncertain. Here, we asked whether the OFC mediates context-dependent changes in auditory cortical sensitivity and sound perception by monitoring and manipulating OFC activity in freely moving Mongolian gerbils of both sexes under two behavioral contexts: passive sound exposure and engagement in an amplitude modulation (AM) detection task. We found that the majority of OFC neurons, including the specific subset that innervates the auditory cortex, were strongly modulated by task engagement. Pharmacological inactivation of the OFC prevented rapid context-dependent changes in auditory cortical firing and significantly impaired behavioral AM detection. Our findings suggest that contextual information from the OFC mediates rapid plasticity in the auditory cortex and facilitates the perception of behaviorally relevant sounds.


Subject(s)
Auditory Cortex , Auditory Perception , Gerbillinae , Prefrontal Cortex , Animals , Gerbillinae/physiology , Auditory Perception/physiology , Auditory Cortex/physiology , Male , Prefrontal Cortex/physiology , Female , Acoustic Stimulation , Neurons/physiology
8.
Curr Biol ; 34(15): 3405-3415.e5, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39032492

ABSTRACT

A major challenge in neuroscience is to understand how neural representations of sensory information are transformed by the network of ascending and descending connections in each sensory system. By recording from neurons at several levels of the auditory pathway, we show that much of the nonlinear encoding of complex sounds in auditory cortex can be explained by transformations in the midbrain and thalamus. Modeling cortical neurons in terms of their inputs across these subcortical populations enables their responses to be predicted with unprecedented accuracy. By contrast, subcortical responses cannot be predicted from descending cortical inputs, indicating that ascending transformations are irreversible, resulting in increasingly lossy, higher-order representations across the auditory pathway. Rather, auditory cortex selectively modulates the nonlinear aspects of thalamic auditory responses and the functional coupling between subcortical neurons without affecting the linear encoding of sound. These findings reveal the fundamental role of subcortical transformations in shaping cortical responses.


Subject(s)
Auditory Cortex , Thalamus , Auditory Cortex/physiology , Animals , Thalamus/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Sound , Acoustic Stimulation , Models, Neurological , Mesencephalon/physiology , Neurons/physiology
9.
Sci Rep ; 14(1): 16799, 2024 07 22.
Article in English | MEDLINE | ID: mdl-39039107

ABSTRACT

The auditory steady state response (ASSR) arises when periodic sounds evoke stable responses in auditory networks that reflect the acoustic characteristics of the stimuli, such as the amplitude of the sound envelope. Larger for some stimulus rates than others, the ASSR in the human electroencephalogram (EEG) is notably maximal for sounds modulated in amplitude at 40 Hz. To investigate the local circuit underpinnings of the large ASSR to 40 Hz amplitude-modulated (AM) sounds, we acquired skull EEG and local field potential (LFP) recordings from primary auditory cortex (A1) in the rat during the presentation of 20, 30, 40, 50, and 80 Hz AM tones. 40 Hz AM tones elicited the largest ASSR from the EEG acquired above auditory cortex and the LFP acquired from each cortical layer in A1. The large ASSR in the EEG to 40 Hz AM tones was not due to larger instantaneous amplitude of the signals or to greater phase alignment of the LFP across the cortical layers. Instead, it resulted from decreased latency variability (or enhanced temporal consistency) of the 40 Hz response. Statistical models indicate the EEG signal was best predicted by LFPs in either the most superficial or deep cortical layers, suggesting deep layer coordinators of the ASSR. Overall, our results indicate that the recruitment of non-uniform but more temporally consistent responses across A1 layers underlie the larger ASSR to amplitude-modulated tones at 40 Hz.
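The "decreased latency variability (or enhanced temporal consistency)" highlighted above is commonly quantified with inter-trial phase coherence at the modulation rate. The sketch below computes that measure on synthetic 40 Hz responses; the data, jitter level, and sampling rate are placeholders for illustration only.

```python
# Minimal sketch (assumptions): `epochs` is a hypothetical (n_trials, n_samples)
# array of EEG/LFP responses to 40 Hz AM tones sampled at `fs` Hz. Inter-trial
# phase coherence (ITC) at 40 Hz indexes temporal consistency across trials.
import numpy as np

fs = 1000.0
n_trials, n_samples = 100, 1000
rng = np.random.default_rng(1)
t = np.arange(n_samples) / fs
jitter = rng.normal(0, 0.002, n_trials)[:, None]   # trial-to-trial latency jitter (s)
epochs = np.sin(2 * np.pi * 40 * (t + jitter)) + rng.standard_normal((n_trials, n_samples))

freqs = np.fft.rfftfreq(n_samples, 1 / fs)
spectra = np.fft.rfft(epochs, axis=1)
idx = np.argmin(np.abs(freqs - 40.0))               # frequency bin closest to 40 Hz
phases = spectra[:, idx] / np.abs(spectra[:, idx])  # unit phasors per trial
itc = np.abs(phases.mean())                         # 1 = perfectly consistent phase
print(f"ITC at 40 Hz: {itc:.2f}")
```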


Subject(s)
Acoustic Stimulation , Auditory Cortex , Electroencephalography , Evoked Potentials, Auditory , Auditory Cortex/physiology , Electroencephalography/methods , Evoked Potentials, Auditory/physiology , Rats , Animals , Male , Auditory Perception/physiology , Humans
10.
Proc Natl Acad Sci U S A ; 121(27): e2306029121, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38913894

ABSTRACT

Echolocating bats are among the most social and vocal of all mammals. These animals are ideal subjects for functional MRI (fMRI) studies of auditory social communication given their relatively hypertrophic limbic and auditory neural structures and their reduced ability to hear MRI gradient noise. Yet, no resting-state networks relevant to social cognition (e.g., default mode-like networks or DMLNs) have been identified in bats since there are few, if any, fMRI studies in the chiropteran order. Here, we acquired fMRI data at 7 Tesla from nine lightly anesthetized pale spear-nosed bats (Phyllostomus discolor). We applied independent components analysis (ICA) to reveal resting-state networks and measured neural activity elicited by noise ripples (on: 10 ms; off: 10 ms) that span this species' ultrasonic hearing range (20 to 130 kHz). Resting-state networks pervaded auditory, parietal, and occipital cortices, along with the hippocampus, cerebellum, basal ganglia, and auditory brainstem. Two midline networks formed an apparent DMLN. Additionally, we found four predominantly auditory/parietal cortical networks, of which two were left-lateralized and two right-lateralized. Regions within four auditory/parietal cortical networks are known to respond to social calls. Along with the auditory brainstem, regions within these four cortical networks responded to ultrasonic noise ripples. Iterative analyses revealed consistent, significant functional connectivity between the left, but not right, auditory/parietal cortical networks and DMLN nodes, especially the anterior-most cingulate cortex. Thus, a resting-state network implicated in social cognition displays more distributed functional connectivity across left, relative to right, hemispheric cortical substrates of audition and communication in this highly social and vocal species.
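As a schematic of the independent components analysis step mentioned above, the sketch below applies ICA to a placeholder time-by-voxel matrix. Real resting-state pipelines (e.g., group ICA on preprocessed bat fMRI data) differ in many details; the dimensions, component count, and orientation of the decomposition here are illustrative assumptions.

```python
# Schematic ICA decomposition of resting-state fMRI data (placeholder values).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
bold = rng.standard_normal((200, 5000))        # placeholder time x voxel matrix

ica = FastICA(n_components=20, random_state=0, max_iter=500)
timecourses = ica.fit_transform(bold)          # (n_timepoints, n_components) component time series
spatial_maps = ica.mixing_                     # (n_voxels, n_components): one map per component
print(timecourses.shape, spatial_maps.shape)
```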


Subject(s)
Auditory Cortex , Chiroptera , Echolocation , Magnetic Resonance Imaging , Animals , Chiroptera/physiology , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging , Echolocation/physiology , Default Mode Network/physiology , Default Mode Network/diagnostic imaging , Male , Female , Nerve Net/physiology , Nerve Net/diagnostic imaging
11.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38897817

ABSTRACT

Recent work suggests that the adult human brain is highly adaptable in its sensory processing. In this context, it has also been suggested that structural "blueprints" may fundamentally constrain neuroplastic change, e.g. in response to sensory deprivation. Here, we trained 12 blind participants and 14 sighted participants in echolocation over a 10-week period, and used MRI in a pre-post design to measure functional and structural brain changes. We found that blind participants and sighted participants together showed a training-induced increase in activation in left and right V1 in response to echoes, a finding difficult to reconcile with the view that sensory cortex is strictly organized by modality. Further, blind participants and sighted participants showed a training-induced increase in activation in right A1 in response to sounds per se (i.e. not echo-specific), and this was accompanied by an increase in gray matter density in right A1 in blind participants and in adjacent acoustic areas in sighted participants. The similarity in functional results between sighted participants and blind participants is consistent with the idea that reorganization may be governed by similar principles in the two groups, yet our structural analyses also showed differences between the groups, suggesting that a more nuanced view may be required.


Subject(s)
Auditory Cortex , Blindness , Magnetic Resonance Imaging , Visual Cortex , Humans , Blindness/physiopathology , Blindness/diagnostic imaging , Male , Adult , Female , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Auditory Cortex/physiopathology , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Young Adult , Neuronal Plasticity/physiology , Acoustic Stimulation , Brain Mapping , Middle Aged , Auditory Perception/physiology , Echolocation/physiology
12.
Cortex ; 177: 321-329, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38908362

ABSTRACT

A wealth of behavioral evidence indicates that sounds of increasing intensity (i.e., sounds that appear to be looming towards the listener) are processed with greater attentional and physiological resources than receding sounds. However, the neurophysiological mechanism responsible for such cognitive amplification remains elusive. Here, we show that the large differences seen between cortical responses to looming and receding sounds are in fact almost entirely explained by nonlinear encoding at the level of the auditory periphery. We collected electroencephalography (EEG) data during an oddball paradigm to elicit mismatch negativity (MMN) and other event-related potentials (ERPs) in response to deviant stimuli with both dynamic (looming and receding) and constant-level (flat) differences from the standard, in the same participants. We then combined a computational model of the auditory periphery with generative EEG methods (temporal response functions, TRFs) to model single-participant ERP responses to flat deviants, and used these models to predict the effect of the same mechanism on looming and receding stimuli. The flat model explained 45% of the variance of the looming response and 33% of the receding response. This provides striking evidence that difference-wave responses to looming and receding sounds result from the same cortical mechanism that generates responses to constant-level deviants: all such differences are the sole consequence of their particular physical morphology being amplified and integrated by peripheral auditory mechanisms. Thus, not all effects seen cortically arise from top-down modulation by high-level decision variables; some can instead be computed early and efficiently by feed-forward peripheral mechanisms that evolved precisely to spare subsequent networks the need to implement them.
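Temporal response functions of the kind referenced above are typically estimated as a ridge-regularized regression from a time-lagged stimulus representation to the EEG. The sketch below shows that generic estimation step on synthetic data; the stimulus feature, lag range, and regularization value are assumptions, not the paper's settings.

```python
# Minimal TRF sketch (assumptions): `stim` is a hypothetical 1-D stimulus
# feature (e.g., the output of a peripheral auditory model) and `eeg` a
# single-channel response at the same sampling rate.
import numpy as np

fs = 128
rng = np.random.default_rng(3)
stim = rng.standard_normal(fs * 60)                 # placeholder 60 s stimulus feature
true_trf = np.hanning(26)                           # unknown in practice
eeg = np.convolve(stim, true_trf, mode="full")[: stim.size]
eeg += rng.standard_normal(stim.size)               # add measurement noise

lags = np.arange(0, int(0.4 * fs))                  # roughly 0-400 ms of lags
X = np.column_stack([np.roll(stim, lag) for lag in lags])
X[: lags.max(), :] = 0                              # discard wrap-around rows from np.roll
lam = 1e2                                           # ridge regularization strength
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
pred = X @ trf                                      # predicted EEG, used to score the model
print(np.corrcoef(pred, eeg)[0, 1])
```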


Subject(s)
Acoustic Stimulation , Auditory Cortex , Auditory Perception , Electroencephalography , Evoked Potentials, Auditory , Humans , Female , Male , Auditory Perception/physiology , Adult , Evoked Potentials, Auditory/physiology , Young Adult , Auditory Cortex/physiology , Attention/physiology
13.
Commun Biol ; 7(1): 711, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38862808

ABSTRACT

Deepfakes are viral ingredients of digital environments, and they can trick human cognition into misperceiving the fake as real. Here, we test the neurocognitive sensitivity of 25 participants to accept or reject person identities as recreated in audio deepfakes. We generate high-quality voice identity clones from natural speakers by using advanced deepfake technologies. During an identity matching task, participants show intermediate performance with deepfake voices, indicating levels of deception and resistance to deepfake identity spoofing. On the brain level, univariate and multivariate analyses consistently reveal a central cortico-striatal network that decoded the vocal acoustic pattern and deepfake-level (auditory cortex), as well as natural speaker identities (nucleus accumbens), which are valued for their social relevance. This network is embedded in a broader neural identity and object recognition network. Humans can thus be partly tricked by deepfakes, but the neurocognitive mechanisms identified during deepfake processing open windows for strengthening human resilience to fake information.


Subject(s)
Speech Perception , Humans , Male , Female , Adult , Young Adult , Speech Perception/physiology , Nerve Net/physiology , Auditory Cortex/physiology , Voice/physiology , Corpus Striatum/physiology
14.
J Neural Eng ; 21(4)2024 Jul 16.
Article in English | MEDLINE | ID: mdl-38936398

ABSTRACT

Objective: Measures of functional connectivity (FC) can elucidate which cortical regions work together to complete a variety of behavioral tasks. This study's primary objective was to expand a previously published model of measuring FC to include multiple subjects and several regions of interest. While FC has been more extensively investigated in vision and other sensorimotor tasks, it is not as well understood in audition. The secondary objective of this study was to investigate how auditory regions are functionally connected to other cortical regions when attention is directed to distinct auditory stimuli. Approach: This study implements a linear dynamic system (LDS) to measure the structured time-lagged dependence across several cortical regions in order to estimate their FC during a dual-stream auditory attention task. Results: The model's output shows consistent functionally connected regions across different listening conditions, indicative of an auditory attention network that engages regardless of endogenous switching of attention or of which auditory cue is attended. Significance: The LDS implemented in this study uses a multivariate autoregression to infer FC across cortical regions during an auditory attention task. The study shows how a first-order autoregressive function can reliably measure functional connectivity from M/EEG data. Additionally, it shows how auditory regions engage with the supramodal attention network outlined in the visual attention literature.
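A first-order multivariate autoregression of the kind underlying the LDS can be written compactly: each time sample of the region-by-time data is regressed on the previous sample, and the resulting coefficient matrix captures time-lagged dependence between regions. The sketch below shows a least-squares version on placeholder data; it omits the state-space and multi-subject machinery of the actual model.

```python
# Minimal VAR(1) sketch (assumptions): `Y` is a hypothetical (n_regions,
# n_samples) array of source-localized M/EEG time series. Fit
# Y[t] = A @ Y[t-1] + noise; off-diagonal entries of A index directed,
# time-lagged coupling between regions.
import numpy as np

rng = np.random.default_rng(4)
n_regions, n_samples = 8, 5000
Y = rng.standard_normal((n_regions, n_samples))   # placeholder region time series

Y_past, Y_now = Y[:, :-1], Y[:, 1:]
A = Y_now @ np.linalg.pinv(Y_past)                # least-squares solution of Y_now ~ A @ Y_past
np.fill_diagonal(A, 0.0)                          # keep only cross-region coupling
print(np.round(A, 2))
```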


Subject(s)
Attention , Electroencephalography , Humans , Electroencephalography/methods , Male , Female , Attention/physiology , Adult , Acoustic Stimulation/methods , Young Adult , Linear Models , Auditory Perception/physiology , Auditory Cortex/physiology , Magnetoencephalography/methods , Nerve Net/physiology
15.
Neuroimage ; 297: 120675, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38885886

ABSTRACT

The synchronization between the speech envelope and neural activity in auditory regions, referred to as cortical tracking of speech (CTS), plays a key role in speech processing. The method selected for extracting the envelope is a crucial step in CTS measurement, and the absence of a consensus on best practices among the various methods can influence analysis outcomes and interpretation. Here, we systematically compare five standard envelope extraction methods (the absolute value of the Hilbert transform (absHilbert), gammatone filterbanks, a heuristic approach, the Bark scale, and vocalic energy), analyzing their impact on CTS. We present performance metrics for each method based on recordings of brain activity from participants listening to speech in clear and noisy conditions, utilizing intracranial EEG, MEG, and EEG data. As expected, we observed significant CTS in temporal brain regions below 10 Hz across all datasets, regardless of the extraction method. In general, the gammatone filterbanks approach consistently demonstrated superior performance compared to the other methods. Results from our study can help scientists in the field make informed decisions about the optimal analysis for extracting CTS, advancing the understanding of the neuronal mechanisms implicated in CTS.
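For concreteness, the sketch below shows the simplest of the five methods compared above, the absolute value of the Hilbert transform, applied to a placeholder waveform; the sampling rate, low-pass cutoff, and signal are assumptions, and the gammatone-filterbank method that performed best would additionally require a dedicated filterbank implementation.

```python
# absHilbert envelope extraction sketch (placeholder signal and parameters).
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 16000
rng = np.random.default_rng(5)
speech = rng.standard_normal(fs * 2)                  # placeholder 2 s waveform

envelope = np.abs(hilbert(speech))                    # absolute value of the Hilbert transform
b, a = butter(4, 8 / (fs / 2), btype="low")           # keep slow (<8 Hz) envelope fluctuations
envelope_lp = filtfilt(b, a, envelope)                # low-passed envelope used for CTS analyses
```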


Subject(s)
Electroencephalography , Magnetoencephalography , Speech Perception , Humans , Speech Perception/physiology , Magnetoencephalography/methods , Electroencephalography/methods , Female , Adult , Male , Speech/physiology , Young Adult , Auditory Cortex/physiology , Electrocorticography/methods
16.
Brain Res ; 1841: 149091, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-38897535

ABSTRACT

Auditory neural networks in the brain naturally entrain to rhythmic stimuli. Such synchronization is an accessible index of local network performance as captured by EEG. Across species, click trains delivered at ~40 Hz show strong entrainment, with primary auditory cortex (Actx) being a principal source. Imaging studies have revealed additional cortical sources, but it is unclear if they are functionally distinct. Since auditory processing evolves hierarchically, we hypothesized that local synchrony would differ between primary and association cortices. In female SD rats (N = 12), we recorded 40 Hz click train-elicited gamma oscillations using epidural electrodes situated at two distinct sites, one above the prefrontal cortex (PFC) and another above the Actx, after dosing with saline (1 ml/kg, sc) or the NMDA antagonist MK801 (0.025, 0.05 or 0.1 mg/kg), in a blocked crossover design. Post-saline, both regions showed a strong 40 Hz auditory steady state response (ASSR). The latencies of the N1 response were ~16 ms (Actx) and ~34 ms (PFC). Narrow-band (38-42 Hz) gamma oscillations appeared rapidly (<40 ms from stimulus onset) at Actx but in a more delayed fashion (~200 ms) at PFC. MK801 augmented gamma synchrony at Actx while dose-dependently disrupting it at the PFC. Event-related gamma (but not beta) coherence, an index of long-distance connectivity, was disrupted by MK801. In conclusion, local network gamma synchrony in a higher-order association cortex behaves differently from that of the primary auditory cortex. We discuss these findings in the context of evolving sound processing across the cortical hierarchy.
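Event-related coherence of the kind reported above can be approximated with a standard magnitude-squared coherence estimate restricted to the 38-42 Hz band. The sketch below does this on synthetic Actx/PFC signals; the signals, sampling rate, and window length are illustrative assumptions rather than the study's analysis parameters.

```python
# Minimal coherence sketch (assumptions): `actx` and `pfc` are hypothetical
# single-channel epidural recordings during 40 Hz stimulation, sampled at `fs` Hz.
import numpy as np
from scipy.signal import coherence

fs = 1000
rng = np.random.default_rng(6)
t = np.arange(0, 10, 1 / fs)
drive = np.sin(2 * np.pi * 40 * t)
actx = drive + 0.5 * rng.standard_normal(t.size)      # placeholder Actx signal
pfc = 0.6 * drive + 0.8 * rng.standard_normal(t.size) # placeholder PFC signal

f, cxy = coherence(actx, pfc, fs=fs, nperseg=1024)    # magnitude-squared coherence
band = (f >= 38) & (f <= 42)
print(f"38-42 Hz coherence: {cxy[band].mean():.2f}")
```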


Subject(s)
Acoustic Stimulation , Auditory Cortex , Dizocilpine Maleate , Evoked Potentials, Auditory , Gamma Rhythm , Prefrontal Cortex , Rats, Sprague-Dawley , Animals , Prefrontal Cortex/physiology , Prefrontal Cortex/drug effects , Auditory Cortex/physiology , Auditory Cortex/drug effects , Female , Dizocilpine Maleate/pharmacology , Gamma Rhythm/drug effects , Gamma Rhythm/physiology , Acoustic Stimulation/methods , Evoked Potentials, Auditory/drug effects , Evoked Potentials, Auditory/physiology , Rats , Excitatory Amino Acid Antagonists/pharmacology , Auditory Perception/physiology , Auditory Perception/drug effects , Electroencephalography/methods
17.
Cell Rep ; 43(7): 114396, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-38923464

ABSTRACT

During behavior, the motor cortex sends copies of motor-related signals to sensory cortices. Here, we combine closed-loop behavior with large-scale physiology, projection-pattern-specific recordings, and circuit perturbations to show that neurons in mouse secondary motor cortex (M2) encode sensation and are influenced by expectation. When a movement unexpectedly produces a sound, M2 becomes dominated by sound-evoked activity. Sound responses in M2 are inherited partially from the auditory cortex and are routed back to the auditory cortex, providing a path for the reciprocal exchange of sensory-motor information during behavior. When the acoustic consequences of a movement become predictable, M2 responses to self-generated sounds are selectively gated off. These changes in single-cell responses are reflected in population dynamics, which are influenced by both sensation and expectation. Together, these findings reveal the embedding of sensory and expectation signals in motor cortical activity.


Subject(s)
Motor Cortex , Animals , Motor Cortex/physiology , Mice , Auditory Cortex/physiology , Acoustic Stimulation , Sensation/physiology , Male , Mice, Inbred C57BL , Neurons/physiology , Female
18.
Behav Brain Funct ; 20(1): 17, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38943215

ABSTRACT

BACKGROUND: Left-handedness is a condition that reverses the typical left cerebral dominance of motor control to an atypical right dominance. The impact of this distinct control - and its associated neuroanatomical peculiarities - on other cognitive functions such as music processing or playing a musical instrument remains unexplored. Previous studies in right-handed populations have linked musicianship to a larger volume in the (right) auditory cortex and a larger volume in the (right) arcuate fasciculus. RESULTS: In our study, we reveal that left-handed musicians (n = 55), in comparison to left-handed non-musicians (n = 75), exhibit a larger gray matter volume in both the left and right Heschl's gyrus, critical for auditory processing. They also present a higher number of streamlines across the anterior segment of the right arcuate fasciculus. Importantly, atypical hemispheric lateralization of speech (notably prevalent among left-handers) was associated with a rightward asymmetry of the arcuate fasciculus, in contrast to the leftward asymmetry exhibited by typically lateralized individuals. CONCLUSIONS: These findings suggest that left-handed musicians share similar neuroanatomical characteristics with their right-handed counterparts. However, atypical lateralization of speech might potentiate the right audiomotor pathway, which has been associated with musicianship and better musical skills. This may help explain why musicians are more prevalent among left-handers and shed light on their cognitive advantages.


Subject(s)
Functional Laterality , Music , Humans , Male , Functional Laterality/physiology , Female , Adult , Young Adult , Auditory Cortex/anatomy & histology , Auditory Cortex/physiology , Magnetic Resonance Imaging , Gray Matter/anatomy & histology , Gray Matter/diagnostic imaging , Auditory Perception/physiology , Brain/anatomy & histology , Brain/physiology
19.
J Neurophysiol ; 132(1): 45-53, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38810366

ABSTRACT

Psilocybin is a serotonergic psychedelic believed to have therapeutic potential for neuropsychiatric conditions. Despite well-documented prevalence of perceptual alterations, hallucinations, and synesthesia associated with psychedelic experiences, little is known about how psilocybin affects sensory cortex or alters the activity of neurons in awake animals. To investigate, we conducted two-photon imaging experiments in auditory cortex of awake mice and collected video of free-roaming mouse behavior, both at baseline and during psilocybin treatment. In comparison with pre-dose neural activity, a 2 mg/kg ip dose of psilocybin initially increased the amplitude of neural responses to sound. Thirty minutes post-dose, behavioral activity and neural response amplitudes decreased, yet functional connectivity increased. In contrast, control mice given intraperitoneal saline injections showed no significant changes in either neural or behavioral activity across conditions. Notably, neuronal stimulus selectivity remained stable during psilocybin treatment, for both tonotopic cortical maps and single-cell pure-tone frequency tuning curves. Our results mirror similar findings regarding the effects of serotonergic psychedelics in visual cortex and suggest that psilocybin modulates the balance of intrinsic versus stimulus-driven influences on neural activity in auditory cortex. NEW & NOTEWORTHY: Recent studies have shown promising therapeutic potential for psychedelics in treating neuropsychiatric conditions. Musical experience during psilocybin-assisted therapy is predictive of treatment outcome, yet little is known about how psilocybin affects auditory processing. Here, we conducted two-photon imaging experiments in auditory cortex of awake mice that received a dose of psilocybin. Our results suggest that psilocybin modulates the roles of intrinsic neural activity versus stimulus-driven influences on auditory perception.
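The tuning-curve stability summarized above can be quantified very simply, for example by correlating trial-averaged pure-tone tuning curves obtained before and after dosing. The sketch below illustrates that comparison on synthetic responses; the frequencies, trial counts, and noise level are placeholders, not the study's data.

```python
# Minimal tuning-curve stability sketch (assumptions): `pre` and `post` are
# hypothetical (n_frequencies, n_trials) arrays of single-cell response
# amplitudes to pure tones before and after dosing.
import numpy as np

rng = np.random.default_rng(7)
freqs_khz = np.array([4, 8, 16, 32, 64])
tuning = np.exp(-((np.log2(freqs_khz) - np.log2(16)) ** 2))   # placeholder 16 kHz preference
pre = tuning[:, None] + 0.2 * rng.standard_normal((5, 20))
post = tuning[:, None] + 0.2 * rng.standard_normal((5, 20))

pre_curve, post_curve = pre.mean(axis=1), post.mean(axis=1)   # trial-averaged tuning curves
stability = np.corrcoef(pre_curve, post_curve)[0, 1]          # ~1 means tuning is unchanged
print(f"tuning-curve correlation pre vs. post: {stability:.2f}")
```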


Subject(s)
Auditory Cortex , Hallucinogens , Psilocybin , Animals , Auditory Cortex/drug effects , Auditory Cortex/physiology , Mice , Psilocybin/pharmacology , Psilocybin/administration & dosage , Hallucinogens/pharmacology , Hallucinogens/administration & dosage , Male , Mice, Inbred C57BL , Neurons/drug effects , Neurons/physiology , Auditory Perception/drug effects , Auditory Perception/physiology , Acoustic Stimulation
20.
Nat Commun ; 15(1): 4313, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773109

ABSTRACT

Our brain is constantly extracting, predicting, and recognising key spatiotemporal features of the physical world in order to survive. While neural processing of visuospatial patterns has been extensively studied, the hierarchical brain mechanisms underlying conscious recognition of auditory sequences and the associated prediction errors remain elusive. Using magnetoencephalography (MEG), we describe the brain functioning of 83 participants during recognition of previously memorised musical sequences and systematic variations. The results show feedforward connections originating from auditory cortices, and extending to the hippocampus, anterior cingulate gyrus, and medial cingulate gyrus. Simultaneously, we observe backward connections operating in the opposite direction. Throughout the sequences, the hippocampus and cingulate gyrus maintain the same hierarchical level, except for the final tone, where the cingulate gyrus assumes the top position within the hierarchy. The evoked responses of memorised sequences and variations engage the same hierarchical brain network but systematically differ in terms of temporal dynamics, strength, and polarity. Furthermore, induced-response analysis shows that alpha and beta power is stronger for the variations, while gamma power is enhanced for the memorised sequences. This study expands on the predictive coding theory by providing quantitative evidence of hierarchical brain mechanisms during conscious memory and predictive processing of auditory sequences.


Subject(s)
Auditory Cortex , Auditory Pathways , Gyrus Cinguli , Hippocampus , Memory , Humans , Music , Magnetoencephalography , Multivariate Analysis , Pattern Recognition, Physiological , Auditory Cortex/physiology , Gyrus Cinguli/physiology , Hippocampus/physiology , Prefrontal Cortex/physiology , Evoked Potentials, Auditory , Male , Female , Adult , Middle Aged , Auditory Perception