Results 1 - 20 of 164
1.
bioRxiv ; 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39314420

ABSTRACT

Intrinsic uncertainty in the reward environment requires the brain to run multiple models simultaneously to predict outcomes based on preceding cues or actions, commonly referred to as stimulus- and action-based learning. Ultimately, the brain must also adopt appropriate choice behavior using the reliability of these models. Here, we combined multiple experimental and computational approaches to quantify concurrent learning in monkeys performing tasks with different levels of uncertainty about the model of the environment. By comparing behavior in control monkeys and monkeys with bilateral lesions to the amygdala or ventral striatum, we found evidence for a dynamic, competitive interaction between stimulus-based and action-based learning, and for a distinct role of the amygdala. Specifically, we demonstrate that the amygdala adjusts the initial balance between the two learning systems, thereby altering the interaction between arbitration and learning that shapes the time course of both learning and choice behaviors. This novel role of the amygdala can account for existing contradictory observations and provides testable predictions for future studies into circuit-level mechanisms of flexible learning and choice under uncertainty.
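
The arbitration scheme summarized above can be made concrete with a minimal sketch (not the authors' actual model): two Rescorla-Wagner learners, one attaching value to stimuli and one to actions, are combined with a weight that drifts toward the system with the smaller running prediction error. The learning rate, softmax temperature, reward probabilities, and the reliability and weight updates are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha = 0.2        # hypothetical learning rate shared by both systems
beta = 4.0         # hypothetical softmax inverse temperature
w = 0.5            # initial arbitration weight toward the stimulus-based system
                   # (the quantity the abstract suggests the amygdala adjusts)

v_stim = np.zeros(2)            # stimulus-based values (option identity A/B)
v_act = np.zeros(2)             # action-based values (left/right press)
pe_stim, pe_act = 1.0, 1.0      # running unsigned prediction errors (lower = more reliable)
p_reward = np.array([0.8, 0.2]) # here reward depends on stimulus identity, not the action

for t in range(300):
    sides = rng.permutation(2)              # stimuli randomly assigned to left/right
    q = w * v_stim + (1 - w) * v_act[sides] # combined value of each presented option
    p_choice = np.exp(beta * q) / np.exp(beta * q).sum()
    choice = rng.choice(2, p=p_choice)      # chosen stimulus
    action = sides[choice]                  # motor action actually made
    r = float(rng.random() < p_reward[choice])

    # both systems learn from the same outcome
    d_stim = r - v_stim[choice]
    d_act = r - v_act[action]
    v_stim[choice] += alpha * d_stim
    v_act[action] += alpha * d_act

    # reliability tracked as a leaky average of unsigned prediction errors;
    # the arbitration weight drifts toward the more reliable system
    pe_stim += 0.1 * (abs(d_stim) - pe_stim)
    pe_act += 0.1 * (abs(d_act) - pe_act)
    w += 0.05 * (pe_act / (pe_stim + pe_act) - w)

print(f"final arbitration weight toward stimulus-based learning: {w:.2f}")
```

Because rewards here follow the stimulus, the stimulus-based learner accrues smaller prediction errors and the weight shifts toward it; changing the initial weight w mimics the kind of initial-balance adjustment attributed to the amygdala.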

2.
bioRxiv ; 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39091835

ABSTRACT

In recent years, we and others have identified a number of enhancers that, when incorporated into rAAV vectors, can restrict transgene expression to particular neuronal populations. Yet viral tools to access and manipulate fine neuronal subtypes are still limited. Here, we performed a systematic analysis of single-cell genomic data to identify enhancer candidates for each of the cortical interneuron subtypes. We established a set of enhancer-AAV tools that are highly specific for distinct cortical interneuron populations and for striatal cholinergic neurons. Paired with different effectors, these enhancers can be used to label specific neuronal subtypes (fluorescent proteins), observe their activity (GCaMP), and manipulate them (opto- or chemogenetics). We also validated our enhancer-AAV tools across species. Thus, we provide the field with a powerful set of tools to study neural circuits and functions and to develop precise and targeted therapies.

3.
Res Sq ; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39149491

ABSTRACT

Cholinergic projection neurons of the nucleus basalis and substantia innominata (NBM/SI) densely innervate the basolateral amygdala (BLA) and have been shown to contribute to the encoding of fundamental and life-threatening experiences. Given the vital importance of these circuits in the acquisition and retention of memories that are essential for survival in a changing environment, it is not surprising that the basic anatomical organization of the NBM/SI is well conserved across animal classes as diverse as teleost and mammal. What is not known is the extent to which the physiology and morphology of NBM/SI neurons have also been conserved. To address this issue, we made patch-clamp recordings from NBM/SI neurons in ex vivo slices of two widely divergent mammalian species, mouse and rhesus macaque, focusing our efforts on cholinergic neurons that project to the BLA. We then reconstructed most of these recorded neurons post hoc to characterize neuronal morphology. We found that rhesus macaque BLA-projecting cholinergic neurons were both more intrinsically excitable and less morphologically compact than their mouse homologs. Combining measurements of 18 physiological features and 13 morphological features, we illustrate the extent of the separation. Although macaque and mouse neurons both exhibited considerable within-group diversity and overlapped with each other on multiple individual metrics, a combined morpho-electric analysis demonstrates that they form two distinct neuronal classes. Given the shared purpose of the circuits in which these neurons participate, this finding raises questions about (and offers constraints on) how these distinct classes result in similar behavior.
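
For readers unfamiliar with combined morpho-electric analyses, the sketch below shows one generic pipeline (standardize the features, reduce dimensionality, cluster) that could separate species in a joint 18 + 13 feature space. The feature matrix is a random placeholder and the specific steps are not necessarily those used in the study.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Placeholder data: n cells x (18 physiological + 13 morphological) features.
# Real values would come from the patch-clamp recordings and reconstructions.
n_mouse, n_macaque = 40, 35
X = np.vstack([
    rng.normal(0.0, 1.0, size=(n_mouse, 31)),
    rng.normal(0.8, 1.0, size=(n_macaque, 31)),  # shifted to mimic a species difference
])
species = np.array([0] * n_mouse + [1] * n_macaque)

Z = StandardScaler().fit_transform(X)        # put all features on a common scale
pcs = PCA(n_components=5).fit_transform(Z)   # combined morpho-electric space
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)

# Agreement between the unsupervised clusters and the species labels
print("adjusted Rand index:", adjusted_rand_score(species, clusters))
```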

4.
Sci Rep ; 14(1): 12985, 2024 Jun 6.
Article in English | MEDLINE | ID: mdl-38839828

ABSTRACT

One third of people with psychosis become antipsychotic treatment-resistant, and the underlying mechanisms remain unclear. We investigated whether altered cognitive control function is a factor underlying the development of treatment resistance. We studied 50 people with early psychosis at a baseline visit (mean < 2 years illness duration) and a follow-up visit (1 year later), when 35 were categorized as treatment-responsive and 15 as treatment-resistant. Participants completed an emotion-yoked reward learning task that requires cognitive control whilst undergoing fMRI and MR spectroscopy to measure glutamate levels in the anterior cingulate cortex (ACC). Changes in cognitive-control-related activity (in prefrontal cortex and ACC) over time were compared between treatment-resistant and treatment-responsive groups and related to glutamate. Compared to treatment-responsive participants, treatment-resistant participants showed blunted activity in the right amygdala (decision phase) and left pallidum (feedback phase) at baseline, which increased over time and was accompanied by a decrease in medial prefrontal cortex (mPFC) activity (feedback phase) over time. Treatment-responsive participants showed a negative relationship between mPFC activity and glutamate levels at follow-up; no such relationship existed in treatment-resistant participants. Reduced activity in the right amygdala and left pallidum at baseline was predictive of treatment resistance at follow-up (67% sensitivity, 94% specificity). The findings suggest that deterioration over time in mPFC function, a key cognitive control region needed to compensate for an initial dysfunction within a social-emotional network, is a factor underlying the development of treatment resistance in early psychosis. The uncoupling between glutamate and cognitive-control-related mPFC function requires further investigation and may present a future target for interventions.


Subject(s)
Cognition , Magnetic Resonance Imaging , Prefrontal Cortex , Psychotic Disorders , Humans , Prefrontal Cortex/metabolism , Prefrontal Cortex/physiopathology , Prefrontal Cortex/diagnostic imaging , Male , Female , Psychotic Disorders/metabolism , Psychotic Disorders/drug therapy , Psychotic Disorders/physiopathology , Adult , Young Adult , Glutamic Acid/metabolism , Antipsychotic Agents/therapeutic use , Antipsychotic Agents/pharmacology , Gyrus Cinguli/metabolism , Gyrus Cinguli/diagnostic imaging , Gyrus Cinguli/physiopathology
5.
J Neurosci ; 44(24), 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38670805

ABSTRACT

Reinforcement learning is a theoretical framework that describes how agents learn to select options that maximize rewards and minimize punishments over time. We often make choices, however, to obtain symbolic reinforcers (e.g., money, points) that are later exchanged for primary reinforcers (e.g., food, drink). Although symbolic reinforcers are ubiquitous in our daily lives and widely used in laboratory tasks because they can be motivating, the mechanisms by which they become motivating are less well understood. In the present study, we examined how monkeys learn to make choices that maximize fluid rewards through reinforcement with tokens. The question addressed here is how the value of a state, which is a function of multiple task features (e.g., the current number of accumulated tokens, choice options, task epoch, trials since the last delivery of primary reinforcer, etc.), affects motivation. We constructed a Markov decision process model that computes the value of task states given task features, which we then correlated with the motivational state of the animal. Fixation times, choice reaction times, and abort frequency were all significantly related to the values of task states during the tokens task (n = 5 monkeys, three males and two females). Furthermore, the model makes predictions for how neural responses could change on a moment-by-moment basis relative to changes in the state value. Together, this task and model allow us to capture learning and behavior related to symbolic reinforcement.
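
As a rough illustration of how value can be assigned to task states defined by token counts, the sketch below runs value iteration on a toy Markov chain over token counts (no choices modeled). The exchange threshold, token probability, discount factor, and fluid value are hypothetical, and the state space is far simpler than the one described in the abstract (which also includes choice options, task epoch, and trials since the last primary reinforcer).

```python
import numpy as np

N_TOKENS = 6     # hypothetical exchange threshold: 6 tokens -> fluid reward
P_GAIN = 0.75    # hypothetical probability that a trial yields a token
GAMMA = 0.9      # discount factor
FLUID = 10.0     # value assigned to the primary (fluid) reward

# States are current token counts 0..N_TOKENS-1; reaching N_TOKENS pays out and resets.
V = np.zeros(N_TOKENS)
for _ in range(500):                      # value iteration to convergence
    V_new = np.empty_like(V)
    for s in range(N_TOKENS):
        gain_next = s + 1
        if gain_next == N_TOKENS:         # exchange: collect fluid, reset to 0 tokens
            gain_value = FLUID + GAMMA * V[0]
        else:
            gain_value = GAMMA * V[gain_next]
        lose_value = GAMMA * V[s]         # no token this trial
        V_new[s] = P_GAIN * gain_value + (1 - P_GAIN) * lose_value
    V = V_new

# State value rises with token count; this is the kind of quantity the authors
# relate to fixation times, reaction times, and abort rates.
print(np.round(V, 2))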


Subject(s)
Choice Behavior , Macaca mulatta , Motivation , Reinforcement, Psychology , Reward , Animals , Motivation/physiology , Male , Choice Behavior/physiology , Reaction Time/physiology , Markov Chains , Female
6.
bioRxiv ; 2024 May 07.
Article in English | MEDLINE | ID: mdl-38617219

ABSTRACT

Reinforcement learning (RL), particularly in primates, is often driven by symbolic outcomes. However, it is usually studied with primary reinforcers. To examine the neural mechanisms underlying learning from symbolic outcomes, we trained monkeys on a task in which they learned to choose options that led to gains of tokens and avoid choosing options that led to losses of tokens. We then recorded simultaneously from the orbitofrontal cortex (OFC), ventral striatum (VS), amygdala (AMY), and the mediodorsal thalamus (MDt). We found that the OFC played a dominant role in coding token outcomes and token prediction errors. The other areas contributed complementary functions with the VS coding appetitive outcomes and the AMY coding the salience of outcomes. The MDt coded actions and relayed information about tokens between the OFC and VS. Thus, OFC leads the process of symbolic reinforcement learning in the ventral frontostriatal circuitry.
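
What a "token prediction error" might look like computationally can be illustrated with a minimal delta-rule sketch; the option labels, learning rate, and outcome values are hypothetical, and the authors' actual model may differ.

```python
# Hypothetical token prediction error: the expected token outcome of the chosen
# option is learned with a simple delta rule, and the prediction error is the
# difference between the tokens actually gained/lost and that expectation.
alpha = 0.1                                   # hypothetical learning rate
expected = {"A": 0.0, "B": 0.0, "C": 0.0}     # expected token outcome per option

def token_prediction_error(choice: str, token_outcome: int) -> float:
    """token_outcome is, e.g., +2, +1, 0, or -1 tokens on this trial."""
    delta = token_outcome - expected[choice]
    expected[choice] += alpha * delta
    return delta

for choice, outcome in [("A", 2), ("A", 2), ("B", -1), ("A", 0), ("B", -1)]:
    print(choice, outcome, round(token_prediction_error(choice, outcome), 3))
```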

7.
bioRxiv ; 2024 Jan 21.
Article in English | MEDLINE | ID: mdl-38313283

ABSTRACT

Opioid receptors within the CNS regulate pain sensation and mood and are key targets for drugs of abuse. Within the adult rodent hippocampus (HPC), µ-opioid receptor agonists suppress inhibitory parvalbumin-expressing interneurons (PV-INs), thus disinhibiting the circuit. However, it is uncertain if this disinhibitory motif is conserved in other cortical regions, species, or across development. We observed that PV-IN mediated inhibition is robustly suppressed by opioids in HPC but not neocortex in mice and nonhuman primates, with spontaneous inhibitory tone in resected human tissue also following a consistent dichotomy. This hippocampal disinhibitory motif was established in early development when immature PV-INs and opioids already influence primordial network rhythmogenesis. Acute opioid-mediated modulation was partially occluded with morphine pretreatment, with implications for the effects of opioids on hippocampal network activity during circuit maturation as well as learning and memory. Together, these findings demonstrate that PV-INs exhibit a divergence in opioid sensitivity across brain regions that is remarkably conserved across evolution and highlights the underappreciated role of opioids acting through immature PV-INs in shaping hippocampal development.

8.
J Neurosci ; 44(5), 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38296647

ABSTRACT

Deciding whether to forego immediate rewards or explore new opportunities is a key component of flexible behavior and is critical for the survival of the species. Although previous studies have shown that different cortical and subcortical areas, including the amygdala and ventral striatum (VS), are implicated in representing the immediate (exploitative) and future (explorative) value of choices, the effect of the motor system used to make choices has not been examined. Here, we tested male rhesus macaques with amygdala or VS lesions on two versions of a three-arm bandit task where choices were registered with either a saccade or an arm movement. In both tasks we presented the monkeys with explore-exploit tradeoffs by periodically replacing familiar options with novel options that had unknown reward probabilities. We found that monkeys explored more with saccades but showed better learning with arm movements. VS lesions caused the monkeys to be more explorative with arm movements and less explorative with saccades, although this may have been due to an overall decrease in performance. VS lesions affected the monkeys' ability to learn novel stimulus-reward associations in both tasks, while after amygdala lesions this effect was stronger when choices were made with saccades. Further, on average, VS and amygdala lesions reduced the monkeys' ability to choose better options only when choices were made with a saccade. These results show that learning reward value associations to manage explore-exploit behaviors is motor system dependent and they further define the contributions of amygdala and VS to reinforcement learning.
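
One generic way to model explore-exploit behavior in a bandit with periodic novel-option substitution (offered only as an illustrative sketch, not the analysis used in the paper) is Q-learning with optimistic initialization of novel options, so that newly introduced stimuli are sampled until their value estimates settle.

```python
import numpy as np

rng = np.random.default_rng(7)

alpha, beta = 0.3, 3.0     # hypothetical learning rate and inverse temperature
NOVEL_INIT = 0.7           # optimistic initial value drives exploration of new options

q = np.full(3, NOVEL_INIT)                     # three-arm bandit
p_reward = rng.uniform(0.2, 0.8, size=3)       # unknown reward probabilities

for t in range(400):
    if t > 0 and t % 80 == 0:                  # periodically replace a familiar option
        arm = rng.integers(3)
        p_reward[arm] = rng.uniform(0.2, 0.8)  # novel option, unknown probability
        q[arm] = NOVEL_INIT                    # reset its value estimate optimistically

    p_choice = np.exp(beta * q) / np.exp(beta * q).sum()
    choice = rng.choice(3, p=p_choice)
    reward = float(rng.random() < p_reward[choice])
    q[choice] += alpha * (reward - q[choice])  # standard delta-rule update

print("final value estimates:", np.round(q, 2))
print("true probabilities:   ", np.round(p_reward, 2))
```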


Subject(s)
Choice Behavior , Ventral Striatum , Animals , Male , Macaca mulatta , Reinforcement, Psychology , Amygdala , Reward
9.
bioRxiv ; 2023 Sep 17.
Article in English | MEDLINE | ID: mdl-37886489

ABSTRACT

Decisions are made with different degrees of consistency, and this consistency can be linked to the confidence that the best choice has been made. Theoretical work suggests that attractor dynamics in networks can account for choice consistency, but how this is implemented in the brain remains unclear. Here, we provide evidence that the energy landscape around attractor basins in population neural activity in prefrontal cortex reflects choice consistency. We trained two rhesus monkeys to make accept/reject decisions based on pretrained visual cues that signaled reward offers with different magnitudes and delays-to-reward. Monkeys made consistent decisions for very good and very bad offers, but decisions were less consistent for intermediate offers. Analysis of neural data showed that the attractor basins around patterns of activity reflecting decisions had steeper landscapes for offers that led to consistent decisions. Therefore, we provide neural evidence that energy landscapes predict decision consistency, which reflects decision confidence.
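
One simple way to picture an "energy landscape" over population activity (a generic device; the study's actual estimator may differ) is to project activity onto a decision axis and take the negative log of the occupancy density, so that frequently visited, attractor-like states form deep basins.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Placeholder "population activity projected onto a decision axis": two attractor
# states (accept / reject) appear as two clusters of projected activity.
projected = np.concatenate([
    rng.normal(-1.0, 0.25, size=2000),   # reject-like states
    rng.normal(+1.0, 0.25, size=2000),   # accept-like states
])

grid = np.linspace(-2.5, 2.5, 200)
density = gaussian_kde(projected)(grid)
energy = -np.log(density + 1e-12)        # deep wells = frequently occupied states

left = grid < 0
min_left = grid[left][np.argmin(energy[left])]
min_right = grid[~left][np.argmin(energy[~left])]
print("approximate basin minima on the decision axis:", round(min_left, 2), round(min_right, 2))
```

Steeper walls around a basin in such a landscape would correspond to more consistent decisions in the sense used in the abstract.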

10.
J Neurosci ; 43(50): 8723-8732, 2023 Dec 13.
Article in English | MEDLINE | ID: mdl-37848282

ABSTRACT

Adolescence is an important developmental period, during which substantial changes occur in brain function and behavior. Several aspects of executive function, including response inhibition, improve during this period. Correspondingly, structural imaging studies have documented consistent decreases in cortical and subcortical gray matter volume, and postmortem histologic studies have found substantial (∼40%) decreases in excitatory synapses in prefrontal cortex. Recent computational modeling work suggests that this change in synaptic density underlies improvements in task performance. These models also predict changes in neural dynamics related to the depth of attractor basins, where deeper basins can underlie better task performance. In this study, we analyzed task-related neural dynamics in a large cohort of longitudinally followed subjects (male and female) spanning early to late adolescence. We found that age correlated positively with behavioral performance in the Eriksen flanker task. Older subjects were also characterized by deeper attractor basins around task-related evoked EEG potentials during specific cognitive operations. Thus, consistent with computational models examining the effects of excitatory synaptic pruning, older adolescents showed stronger attractor dynamics during task performance. SIGNIFICANCE STATEMENT There are well-documented changes in brain and behavior during adolescent development. However, there are few mechanistic theories that link changes in the brain to changes in behavior. Here, we tested a hypothesis, put forward on the basis of computational modeling, that pruning of excitatory synapses in cortex during adolescence changes neural dynamics. We found, consistent with the hypothesis, that variability around event-related potentials shows faster decay dynamics in older adolescent subjects. The faster decay dynamics are consistent with the hypothesis that synaptic pruning during adolescent development leads to stronger attractor basins in task-related neural activity.
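
The "faster decay dynamics" referred to in the significance statement can be quantified, for example, by fitting an exponential time constant to across-trial variability following the evoked response. The sketch below does this on simulated placeholder data; the sampling rate, trial counts, and true time constant are assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Placeholder single-channel EEG: 200 trials x 500 ms at 1 kHz, where trial-to-trial
# variability around the evoked potential decays back to baseline after the response.
t = np.arange(500) / 1000.0
true_tau = 0.08                                   # 80 ms decay constant (hypothetical)
variability_profile = 1.0 + 2.0 * np.exp(-t / true_tau)
trials = rng.normal(0.0, 1.0, size=(200, 500)) * variability_profile

sd_t = trials.std(axis=0)                         # across-trial variability at each time point

def exp_decay(time, a, tau, c):
    return a * np.exp(-time / tau) + c

params, _ = curve_fit(exp_decay, t, sd_t, p0=(2.0, 0.05, 1.0))
print(f"fitted decay constant: {params[1]*1000:.0f} ms (faster decay = deeper basin)")
```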


Subject(s)
Adolescent Development , Brain , Adolescent , Humans , Male , Female , Aged , Brain/physiology , Prefrontal Cortex , Executive Function , Gray Matter
11.
Nat Neurosci ; 26(11): 1970-1980, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37798412

ABSTRACT

Decisions are made with different degrees of consistency, and this consistency can be linked to the confidence that the best choice has been made. Theoretical work suggests that attractor dynamics in networks can account for choice consistency, but how this is implemented in the brain remains unclear. Here we provide evidence that the energy landscape around attractor basins in population neural activity in the prefrontal cortex reflects choice consistency. We trained two rhesus monkeys to make accept/reject decisions based on pretrained visual cues that signaled reward offers with different magnitudes and delays to reward. Monkeys made consistent decisions for very good and very bad offers, but decisions were less consistent for intermediate offers. Analysis of neural data showed that the attractor basins around patterns of activity reflecting decisions had steeper landscapes for offers that led to consistent decisions. Therefore, we provide neural evidence that energy landscapes predict decision consistency, which reflects decision confidence.


Subject(s)
Choice Behavior , Decision Making , Animals , Prefrontal Cortex , Brain , Macaca mulatta , Reward
12.
bioRxiv ; 2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37873311

ABSTRACT

Reinforcement learning (RL) is a theoretical framework that describes how agents learn to select options that maximize rewards and minimize punishments over time. We often make choices, however, to obtain symbolic reinforcers (e.g., money, points) that can later be exchanged for primary reinforcers (e.g., food, drink). Although symbolic reinforcers are motivating, little is understood about the neural or computational mechanisms underlying the motivation to earn them. In the present study, we examined how monkeys learn to make choices that maximize fluid rewards through reinforcement with tokens. The question addressed here is how the value of a state, which is a function of multiple task features (e.g., current number of accumulated tokens, choice options, task epoch, trials since last delivery of primary reinforcer, etc.), affects motivation. We constructed a Markov decision process model that computes the value of task states given task features to capture the motivational state of the animal. Fixation times, choice reaction times, and abort frequency were all significantly related to the values of task states during the tokens task (n = 5 monkeys). Furthermore, the model makes predictions for how neural responses could change on a moment-by-moment basis relative to changes in state value. Together, this task and model allow us to capture learning and behavior related to symbolic reinforcement.

13.
Neuron ; 111(23): 3802-3818.e5, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-37776852

ABSTRACT

Various specialized structural/functional properties are considered essential for contextual memory encoding by hippocampal mossy fiber (MF) synapses. Although investigated to exquisite detail in model organisms, synapses, including MFs, have undergone minimal functional interrogation in humans. To determine the translational relevance of rodent findings, we evaluated MF properties within human tissue resected to treat epilepsy. Human MFs exhibit remarkably similar hallmark features to rodents, including AMPA receptor-dominated synapses with small contributions from NMDA and kainate receptors, large dynamic range with strong frequency facilitation, NMDA receptor-independent presynaptic long-term potentiation, and strong cyclic AMP (cAMP) sensitivity of release. Array tomography confirmed the evolutionary conservation of MF ultrastructure. The astonishing congruence of rodent and human MF core features argues that the basic MF properties delineated in animal models remain critical to human MF function. Finally, a selective deficit in GABAergic inhibitory tone onto human MF postsynaptic targets suggests that unrestrained detonator excitatory drive contributes to epileptic circuit hyperexcitability.


Subject(s)
Mossy Fibers, Hippocampal , Synapses , Animals , Humans , Mossy Fibers, Hippocampal/physiology , Synapses/physiology , Long-Term Potentiation/physiology , Signal Transduction
14.
Curr Res Neurobiol ; 4: 100091, 2023.
Article in English | MEDLINE | ID: mdl-37397810

ABSTRACT

Genetically encoded synthetic receptors, such as chemogenetic and optogenetic proteins, are powerful tools for functional brain studies in animals. In the primate brain, with its comparatively large, intricate anatomical structures, it can be challenging to express transgenes, such as the hM4Di chemogenetic receptor, in a defined anatomical structure with high penetrance. Here, we compare parameters for lentivirus vector injections in the rhesus monkey amygdala. We find that four injections of 20 µl, infused at 0.5 µl/min, can achieve neuronal hM4Di expression in 50-100% of neurons within a 60 mm³ volume, without observable damage from overexpression. Increasing the number of hM4Di_CFP lentivirus injections to up to 12 sites per hemisphere resulted in 30-40% neuronal coverage of the overall amygdala volume, with coverage reaching 60% in some subnuclei. Manganese chloride was mixed with the lentivirus and used as an MRI marker to verify targeting accuracy and correct unsuccessful injections in these experiments. In a separate monkey, we visualized viral expression of the hM4Di receptor protein in the amygdala in vivo using positron emission tomography. Together, these data show efficient and verifiable expression of a chemogenetic receptor in the old-world monkey amygdala.
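
For orientation, the injection parameters quoted above imply the following simple numbers (arithmetic only, using values taken from the abstract):

```python
volume_per_site_ul = 20.0     # per-injection volume, from the abstract
rate_ul_per_min = 0.5         # infusion rate, from the abstract
sites = 4                     # injections associated with the 60 mm^3 result

print("minutes per injection:", volume_per_site_ul / rate_ul_per_min)  # 40.0 min
print("total volume (ul):", sites * volume_per_site_ul)                # 80.0 ul
```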

15.
Behav Neurosci ; 137(4): 268-280, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37141014

ABSTRACT

The ventral striatum (VS) and amygdala are two structures often implicated as essential for learning. The literature addressing the contribution of these areas to learning, however, is not entirely consistent. We propose that these inconsistencies are due to learning environments and the effect they have on motivation. To differentiate aspects of learning from environmental factors that affect motivation, we ran a series of experiments with varying task factors. We compared monkeys (Macaca mulatta) with VS lesions, amygdala lesions, and unoperated controls on reinforcement learning (RL) tasks that involve learning from both gains and losses as well as from deterministic and stochastic schedules of reinforcement. We found that, for all three groups, performance varied by experiment. All three groups modulated their behavior in the same directions, to varying degrees, across the three experiments. This behavioral modulation is why we find deficits in some experiments but not others. The amount of effort animals exhibited differed depending on the learning environment. Our results suggest that the VS is important for the amount of effort animals will give in rich deterministic and relatively leaner stochastic learning environments. We also showed that monkeys with amygdala lesions can learn stimulus-based RL in stochastic environments and in environments with losses and conditioned reinforcers. These results show that learning environments shape motivation and that the VS is essential for distinct aspects of motivated behavior. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Motivation , Ventral Striatum , Animals , Reinforcement, Psychology , Amygdala , Choice Behavior , Macaca mulatta , Reward
16.
Hear Res ; 433: 108768, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37075536

ABSTRACT

The auditory system transforms auditory stimuli from the external environment into perceptual auditory objects. Recent studies have focused on the contribution of the auditory cortex to this transformation. Other studies have yielded important insights into the contributions of neural activity in the auditory cortex to cognition and decision-making. However, despite this important work, the relationship between auditory-cortex activity and behavior/perception has not been fully elucidated. Two of the more important gaps in our understanding are (1) the specific and differential contributions of different fields of the auditory cortex to auditory perception and behavior and (2) the way networks of auditory neurons impact and facilitate auditory information processing. Here, we focus on recent work from non-human-primate models of hearing, review work related to these gaps, and put forth challenges to further our understanding of how single-unit activity and network activity in different cortical fields contribute to behavior and perception.


Subject(s)
Auditory Cortex , Animals , Auditory Cortex/physiology , Auditory Perception/physiology , Primates , Hearing Tests , Neurons/physiology , Acoustic Stimulation
17.
Cogn Dev ; 66, 2023.
Article in English | MEDLINE | ID: mdl-37033205

ABSTRACT

Previous research showed that uncertain, stochastic feedback drastically reduces children's performance. Here, 145 children aged 7 to 11 years learned sets of sequences of four left-right button presses, each press followed by a red/green signal. Feedback was randomly false on 15% of trials. After each false-feedback trial, children received a verbal debrief stating that the feedback was either (1) a mistake or (2) a lie, or (3) they received a reassuring comment on the 85% of correct trials. The control group received no verbal debrief. In the stochastic condition, children reflected more on previous trials than with 100% correct feedback. Verbal debriefs helped children overcome the performance deterioration seen over the first two repetitions. The mistake debrief was the most helpful comment, as it allowed the false feedback to be discarded. Lie debriefs yielded the most reflection on previous experience. Reassurance comments were not quite as effective.

18.
PLoS Comput Biol ; 19(1): e1010873, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36716320

ABSTRACT

Choice impulsivity is characterized by the choice of immediate, smaller reward options over future, larger reward options, and is often thought to be associated with negative life outcomes. However, some environments make future rewards more uncertain, and in these environments impulsive choices can be beneficial. Here we examined the conditions under which impulsive vs. non-impulsive decision strategies would be advantageous. We used Markov Decision Processes (MDPs) to model three common decision-making tasks: Temporal Discounting, Information Sampling, and an Explore-Exploit task. We manipulated environmental variables to create circumstances where future outcomes were relatively uncertain. We then manipulated the discount factor of an MDP agent, which affects the value of immediate versus future rewards, to model impulsive and non-impulsive behavior. This allowed us to examine the performance of impulsive and non-impulsive agents in more or less predictable environments. In Temporal Discounting, we manipulated the transition probability to delayed rewards and found that the agent with the lower discount factor (i.e. the impulsive agent) collected more average reward than the agent with a higher discount factor (the non-impulsive agent) by selecting immediate reward options when the probability of receiving the future reward was low. In the Information Sampling task, we manipulated the amount of information obtained with each sample. When sampling led to small information gains, the impulsive MDP agent collected more average reward than the non-impulsive agent. Third, in the Explore-Exploit task, we manipulated the substitution rate for novel options. When the substitution rate was high, the impulsive agent again performed better than the non-impulsive agent, as it explored the novel options less and instead exploited options with known reward values. The results of these analyses show that impulsivity can be advantageous in environments that are unexpectedly uncertain.
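
The Temporal Discounting result can be reproduced in miniature: when the transition to the delayed reward is unreliable, a low-discount-factor ("impulsive") agent that takes the immediate option earns a higher reward rate than a high-discount-factor agent that waits. The rewards, delay, and transition probabilities below are illustrative choices, not the parameters used in the paper, and reward per time step is used here as a stand-in for the paper's average reward.

```python
R_SMALL, R_LARGE, DELAY = 1.0, 8.0, 4   # illustrative rewards and delay (not the paper's)

def agent_choice(gamma, p_transition):
    """MDP-style valuation: the delayed reward is discounted by gamma and by the
    probability of surviving each of the DELAY transitions toward it."""
    v_immediate = R_SMALL
    v_delayed = (gamma * p_transition) ** DELAY * R_LARGE
    return "delayed" if v_delayed > v_immediate else "immediate"

def reward_rate(choice, p_transition):
    """Average reward per time step actually earned by that policy."""
    if choice == "immediate":
        return R_SMALL / 1.0
    return R_LARGE * p_transition ** DELAY / DELAY

for p in (1.0, 0.7):              # certain vs. uncertain future reward
    for gamma in (0.5, 0.95):     # impulsive vs. non-impulsive agent
        c = agent_choice(gamma, p)
        print(f"p={p}, gamma={gamma}: chooses {c}, reward rate = {reward_rate(c, p):.2f}")
```

With these illustrative numbers, the high-gamma agent does best when the future is reliable (p = 1.0) but keeps waiting for an unlikely payoff when it is not (p = 0.7), while the low-gamma agent's immediate choices then yield the higher rate, mirroring the abstract's conclusion.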


Subject(s)
Impulsive Behavior , Reward , Uncertainty , Probability , Markov Chains , Choice Behavior
19.
Brain Struct Funct ; 228(2): 393-411, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36271258

ABSTRACT

The primate forebrain is a complex structure. Thousands of connections have been identified between cortical areas, and between cortical and subcortical areas. Previous work, however, has suggested that a number of principles can be used to reduce this complexity. Here, we integrate four principles that have been put forth previously: a nested model of neocortical connectivity, gradients of connectivity between frontal cortical areas and the striatum and thalamus, shared patterns of subcortical connectivity between connected posterior and frontal cortical areas, and topographic organization of cortical-striatal-pallidal-thalamocortical circuits. We integrate these principles into a single model that accounts for a substantial amount of connectivity in the forebrain. We then suggest that studies in evolution and development can account for these four principles, by assuming that the ancestral vertebrate pallium was dominated by medial (hippocampal) and ventrolateral (pyriform) areas, with at most a small dorsal pallium. The small dorsal pallium expanded massively in the lineage leading to primates. During this expansion, topological adjacency relationships were maintained between pallial and sub-pallial areas. This maintained topology led to the connectivity gradients seen between cortex, striatum, pallidum, and thalamus.


Subject(s)
Prosencephalon , Thalamus , Animals , Primates , Frontal Lobe , Vertebrates , Neural Pathways
20.
Neuron ; 110(18): 2949-2960.e4, 2022 Sep 21.
Article in English | MEDLINE | ID: mdl-35931070

ABSTRACT

Transmission from striatal cholinergic interneurons (CINs) controls dopamine release through nicotinic acetylcholine receptors (nAChRs) on dopaminergic axons. Anatomical studies suggest that cholinergic terminals signal predominantly through non-synaptic volume transmission. However, the influence of cholinergic transmission on electrical signaling in axons remains unclear. We examined axo-axonal transmission from CINs onto dopaminergic axons using perforated-patch recordings, which revealed rapid spontaneous EPSPs with properties characteristic of fast synapses. Pharmacology showed that axonal EPSPs (axEPSPs) were mediated primarily by high-affinity α6-containing receptors. Remarkably, axEPSPs triggered spontaneous action potentials, suggesting that these axons perform integration to convert synaptic input into spiking, a function associated with somatodendritic compartments. We investigated the cross-species validity of cholinergic axo-axonal transmission by recording dopaminergic axons in macaque putamen and found similar axEPSPs. Thus, we reveal that synaptic-like neurotransmission underlies cholinergic signaling onto dopaminergic axons, supporting the idea that striatal dopamine release can occur independently of somatic firing to provide distinct signaling.


Subject(s)
Dopamine , Receptors, Nicotinic , Axons/metabolism , Cholinergic Agents , Cholinergic Fibers/metabolism , Corpus Striatum/physiology , Dopamine/physiology , Interneurons/metabolism , Receptors, Nicotinic/metabolism , Synaptic Transmission/physiology