Results 1 - 13 of 13
1.
Curr Biol ; 34(4): 841-854.e4, 2024 02 26.
Article in English | MEDLINE | ID: mdl-38325376

ABSTRACT

Sequential neural dynamics encoded by time cells play a crucial role in hippocampal function. However, the role of hippocampal sequential neural dynamics in associative learning is an open question. We used two-photon Ca2+ imaging of dorsal CA1 (dCA1) neurons in the stratum pyramidale (SP) in head-fixed mice performing a go-no go associative learning task to investigate how odor valence is temporally encoded in this area of the brain. We found that SP cells responded differentially to the rewarded or unrewarded odor. The stimuli were decoded accurately from the activity of the neuronal ensemble, and accuracy increased substantially as the animal learned to differentiate the stimuli. Decoding the stimulus from individual SP cells responding differentially revealed that decision-making took place at discrete times after stimulus presentation. Lick prediction decoded from the ensemble activity of cells in dCA1 correlated linearly with lick behavior. Our findings indicate that sequential activity of SP cells in dCA1 constitutes a temporal memory map used for decision-making in associative learning. VIDEO ABSTRACT.
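
A minimal sketch of the kind of ensemble decoding described above, using synthetic trial-by-neuron activity and an off-the-shelf classifier; this stands in for, and is much simpler than, the authors' imaging and analysis pipeline:

```python
# Sketch: decode odor valence (rewarded vs. unrewarded) from population activity.
# Synthetic stand-in for trial-averaged dCA1 Ca2+ responses; not the authors' data or code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_cells = 200, 80
labels = rng.integers(0, 2, n_trials)                 # 1 = rewarded odor, 0 = unrewarded
tuning = rng.normal(0, 1, n_cells) * (rng.random(n_cells) < 0.3)   # ~30% of cells respond differentially
activity = rng.normal(0, 1, (n_trials, n_cells)) + np.outer(labels - 0.5, tuning)

decoder = LogisticRegression(max_iter=1000)
acc = cross_val_score(decoder, activity, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")   # well above the 0.5 chance level
```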


Subjects
CA1 Region, Hippocampal; Hippocampus; Mice; Animals; CA1 Region, Hippocampal/physiology; Neurons/physiology; Learning; Conditioning, Classical
2.
Elife ; 12, 2023 03 07.
Article in English | MEDLINE | ID: mdl-36881019

ABSTRACT

The ability to associate sensory stimuli with abstract classes is critical for survival. How are these associations implemented in brain circuits? And what governs how neural activity evolves during abstract knowledge acquisition? To investigate these questions, we consider a circuit model that learns to map sensory input to abstract classes via gradient-descent synaptic plasticity. We focus on typical neuroscience tasks (simple, and context-dependent, categorization), and study how both synaptic connectivity and neural activity evolve during learning. To make contact with the current generation of experiments, we analyze activity via standard measures such as selectivity, correlations, and tuning symmetry. We find that the model is able to recapitulate experimental observations, including seemingly disparate ones. We determine how, in the model, the behaviour of these measures depends on details of the circuit and the task. These dependencies make experimentally testable predictions about the circuitry supporting abstract knowledge acquisition in the brain.
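
As an illustration of the kind of setup described (not the paper's circuit model), a small two-layer network trained by gradient-descent plasticity on a context-dependent categorization task, whose hidden-unit activity could then be analyzed with selectivity-style measures:

```python
# Sketch: a small network trained with gradient descent on a context-dependent
# categorization task (illustrative only; parameters and task are assumptions).
import numpy as np

rng = np.random.default_rng(1)
n = 1000
s = rng.uniform(-1, 1, n)                 # sensory feature
ctx = rng.choice([-1.0, 1.0], n)          # context cue
X = np.column_stack([s, ctx])
y = (s * ctx > 0).astype(float)           # class boundary flips with context

W1 = rng.normal(0, 0.5, (2, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.5, (20, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(5000):
    h = np.tanh(X @ W1 + b1)                              # hidden layer
    p = (1 / (1 + np.exp(-(h @ W2 + b2)))).ravel()        # class probability
    err = (p - y)[:, None] / n                            # gradient of mean cross-entropy wrt logits
    dW2 = h.T @ err; db2 = err.sum(0)
    dh = err @ W2.T * (1 - h**2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")   # should end up well above the 0.5 chance level
# Selectivity of a hidden unit could then be measured, e.g., as the difference in its
# mean activation between the two classes, mirroring the measures analyzed in the paper.
```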


Subjects
Learning; Neurosciences; Brain; Knowledge; Neuronal Plasticity
3.
Neural Comput ; 35(2): 105-155, 2023 01 20.
Article in English | MEDLINE | ID: mdl-36543330

ABSTRACT

The binding operation is fundamental to many cognitive processes, such as cognitive map formation, relational reasoning, and language comprehension. In these processes, two different modalities, such as location and objects, events and their contextual cues, and words and their roles, need to be bound together, but little is known about the underlying neural mechanisms. Previous work has introduced a binding model based on quadratic functions of bound pairs, followed by vector summation of multiple pairs. Based on this framework, we address the following questions: Which classes of quadratic matrices are optimal for decoding relational structures? And what is the resultant accuracy? We introduce a new class of binding matrices based on a matrix representation of octonion algebra, an eight-dimensional extension of complex numbers. We show that these matrices enable a more accurate unbinding than previously known methods when a small number of pairs are present. Moreover, numerical optimization of a binding operator converges to this octonion binding. We also show, however, that when there are a large number of bound pairs, a random quadratic binding performs as well as the octonion and previously proposed binding methods. This study thus provides new insight into potential neural mechanisms of binding operations in the brain.
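
A sketch of the general framework referenced above: bind each pair with a quadratic (outer-product) operation, sum the results, and unbind by a linear readout. Plain outer-product binding is used here for concreteness; the paper's octonion-based binding matrices are a different, more accurate construction.

```python
# Sketch: quadratic (outer-product) binding of key-value pairs, vector summation,
# and approximate unbinding with cross-talk from the other stored pairs.
import numpy as np

rng = np.random.default_rng(2)
d, n_pairs = 256, 5
keys = rng.normal(0, 1 / np.sqrt(d), (n_pairs, d))
values = rng.normal(0, 1 / np.sqrt(d), (n_pairs, d))

# Bind each pair quadratically and sum: M = sum_i keys_i values_i^T
M = sum(np.outer(k, v) for k, v in zip(keys, values))

# Unbind the value associated with keys[0]; the non-queried pairs add cross-talk noise.
estimate = keys[0] @ M / (keys[0] @ keys[0])

# Identify the retrieved value by cosine similarity against all stored values.
sims = values @ estimate / (np.linalg.norm(values, axis=1) * np.linalg.norm(estimate))
print("retrieved index:", int(np.argmax(sims)), "similarity:", round(float(sims.max()), 2))
```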


Subjects
Brain; Language
4.
Proc Natl Acad Sci U S A ; 119(11): e2100600119, 2022 03 15.
Article in English | MEDLINE | ID: mdl-35263217

ABSTRACT

Significance: In this work, we explore the hypothesis that biological neural networks optimize their architecture, through evolution, for learning. We study early olfactory circuits of mammals and insects, which have relatively similar structure but a huge diversity in size. We approximate these circuits as three-layer networks and estimate, analytically, the scaling of the optimal hidden-layer size with input-layer size. We find that both longevity and information in the genome constrain the hidden-layer size, so a range of allometric scalings is possible. However, the experimentally observed allometric scalings in mammals and insects are consistent with biologically plausible values. This analysis should pave the way for a deeper understanding of both biological and artificial networks.


Subjects
Insects; Learning; Mammals; Models, Neurological; Olfactory Pathways; Animals; Biological Evolution; Cell Count; Learning/physiology; Mushroom Bodies/cytology; Neural Networks, Computer; Neurons/cytology; Olfactory Pathways/cytology; Olfactory Pathways/growth & development; Piriform Cortex/cytology
5.
Front Cell Neurosci ; 14: 613635, 2020.
Article in English | MEDLINE | ID: mdl-33362477

ABSTRACT

Signal processing of odor inputs to the olfactory bulb (OB) is shaped by top-down modulation, but how this modulation alters neural rhythms in response to changes in stimulus intensity is not understood. Here we asked whether the representation of a high vs. low intensity odorant in the OB by oscillatory neural activity changed as the animal learned to discriminate odorant concentration ranges in a go-no go task. We trained mice to discriminate between high vs. low concentration odorants by learning to lick to the rewarded group (low or high). We recorded the local field potential (LFP) in the OB of these mice and calculated the theta-referenced beta or gamma oscillation power (theta phase-referenced power, or tPRP). We found that as the mouse learned to differentiate odorant concentrations, tPRP diverged between trials for the rewarded vs. the unrewarded concentration range. For the proficient animal, linear discriminant analysis was able to predict the rewarded odorant group, and the performance of this classifier correlated with the percent correct behavior in the odor concentration discrimination task. Interestingly, the behavioral response and decoding accuracy were asymmetric as a function of concentration when the rewarded stimulus was shifted between the high and low odorant concentration ranges. A model for decision making motivated by the statistics of OB activity that uses a single threshold in a logarithmic concentration scale displays this asymmetry. Taken together with previous studies on the intensity criteria for decisions on odorant concentrations, our finding suggests that OB oscillatory events facilitate decision making to classify concentrations using a single intensity criterion.
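
A rough sketch of computing a theta phase-referenced gamma power measure from an LFP trace, using a synthetic signal and a deliberately simplified version of the tPRP idea; per-trial values of this kind would then be the features passed to a linear discriminant classifier:

```python
# Sketch: theta phase-referenced gamma power from a (synthetic) LFP trace.
# Simplified relative to the tPRP analysis in the paper; band limits are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 8 * t)
gamma = 0.3 * (1 + theta) * np.sin(2 * np.pi * 70 * t)   # gamma amplitude locked to theta peaks
lfp = theta + gamma + 0.2 * np.random.default_rng(3).normal(size=t.size)

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

theta_phase = np.angle(hilbert(bandpass(lfp, 6, 12, fs)))
gamma_power = np.abs(hilbert(bandpass(lfp, 60, 80, fs))) ** 2

tprp = gamma_power[np.abs(theta_phase) < 0.3].mean()      # gamma power near theta peak phase
baseline = gamma_power.mean()
print(f"theta-peak-referenced gamma power: {tprp:.3f} vs. overall mean {baseline:.3f}")
# One such value per trial, for rewarded and unrewarded odorant concentration ranges,
# is the kind of feature that linear discriminant analysis would classify.
```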

6.
Nat Commun ; 11(1): 3845, 2020 07 31.
Article in English | MEDLINE | ID: mdl-32737295

ABSTRACT

Many experimental studies suggest that animals can rapidly learn to identify odors and predict the rewards associated with them. However, the underlying plasticity mechanism remains elusive. In particular, it is not clear how olfactory circuits achieve rapid, data-efficient learning with local synaptic plasticity. Here, we formulate olfactory learning as a Bayesian optimization process, then map the learning rules into a computational model of the mammalian olfactory circuit. The model is capable of odor identification from a small number of observations, while reproducing cellular plasticity commonly observed during development. We extend the framework to reward-based learning, and show that the circuit is able to rapidly learn odor-reward association with a plausible neural architecture. These results deepen our theoretical understanding of unsupervised learning in the mammalian brain.
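
The reward-learning component can be illustrated, very loosely, by an online Bayesian (Beta-Bernoulli) update of each odor's reward probability. This is a toy stand-in for the idea of rapid, sample-efficient Bayesian learning of odor-reward associations, not the paper's circuit-level formulation:

```python
# Toy sketch: online Bayesian (Beta-Bernoulli) learning of odor-reward associations.
# A loose stand-in for the reward-based extension described; not the paper's model.
import numpy as np

rng = np.random.default_rng(4)
true_reward_prob = {"odor_A": 0.9, "odor_B": 0.1}
alpha = {k: 1.0 for k in true_reward_prob}   # Beta prior pseudo-counts (rewarded)
beta = {k: 1.0 for k in true_reward_prob}    # Beta prior pseudo-counts (unrewarded)

for trial in range(50):
    odor = rng.choice(list(true_reward_prob))
    rewarded = rng.random() < true_reward_prob[odor]
    if rewarded:
        alpha[odor] += 1
    else:
        beta[odor] += 1

for odor in true_reward_prob:
    est = alpha[odor] / (alpha[odor] + beta[odor])   # posterior mean reward probability
    print(f"{odor}: estimated reward probability {est:.2f}")
```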


Subjects
Conditioning, Classical/physiology; Nerve Net; Neuronal Plasticity/physiology; Olfactory Pathways/physiology; Olfactory Perception/physiology; Smell/physiology; Animals; Bayes Theorem; Computer Simulation; Mammals; Neurons/cytology; Neurons/physiology; Odorants/analysis; Olfactory Bulb/physiology; Reward
7.
PLoS Comput Biol ; 14(10): e1006400, 2018 10.
Article in English | MEDLINE | ID: mdl-30296262

ABSTRACT

Chunking is the process by which frequently repeated segments of temporal inputs are concatenated into single units that are easy to process. Such a process is fundamental to time-series analysis in biological and artificial information processing systems. The brain efficiently acquires chunks from various information streams in an unsupervised manner; however, the underlying mechanisms of this process remain elusive. A widely adopted statistical method for chunking consists of predicting frequently repeated contiguous elements in an input sequence based on unequal transition probabilities over sequence elements. However, recent experimental findings suggest that the brain is unlikely to adopt this method, as human subjects can chunk sequences with uniform transition probabilities. In this study, we propose a novel conceptual framework to overcome this limitation. In this framework, neural networks learn to predict dynamical response patterns to sequence input rather than to directly learn transition patterns. Using a mutually supervising pair of reservoir computing modules, we demonstrate how this mechanism works in chunking sequences of letters or visual images with variable regularity and complexity. In addition, we demonstrate that background noise plays a crucial role in correctly learning chunks in this model. In particular, the model can successfully chunk sequences that conventional statistical approaches fail to chunk due to uniform transition probabilities. Finally, the neural responses of the model exhibit an interesting similarity to those of the basal ganglia observed after motor habit formation.
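
A minimal single-reservoir (echo state network) sketch of next-symbol prediction on a chunked sequence: prediction error is low inside a chunk and rises at chunk boundaries, which is the kind of signal the paper's dual-module architecture exploits. The mutually supervising pair of reservoirs itself is not reproduced here, and all parameters are illustrative.

```python
# Sketch: an echo state network predicting the next symbol of a chunked sequence.
# Prediction error dips inside chunks and peaks at chunk onsets.
import numpy as np

rng = np.random.default_rng(5)
chunks = ["abc", "de"]
alphabet = sorted(set("".join(chunks)))
idx = {c: i for i, c in enumerate(alphabet)}
seq = "".join(rng.choice(chunks) for _ in range(800))
X = np.eye(len(alphabet))[[idx[c] for c in seq]]          # one-hot encoded sequence

N = 200
W_in = rng.uniform(-0.5, 0.5, (N, len(alphabet)))
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))           # spectral radius 0.9

states = np.zeros((len(seq), N))
x = np.zeros(N)
for t in range(len(seq)):
    x = np.tanh(W @ x + W_in @ X[t])
    states[t] = x

# Ridge-regression readout trained to predict the next symbol from the current state.
S, Y = states[:-1], X[1:]
W_out = np.linalg.solve(S.T @ S + 1e-1 * np.eye(N), S.T @ Y)
err = np.linalg.norm(S @ W_out - Y, axis=1)               # per-step prediction error

within = [e for e, c in zip(err, seq[1:]) if c not in "ad"]   # target is inside a chunk
onsets = [e for e, c in zip(err, seq[1:]) if c in "ad"]       # target starts a new chunk
print(f"mean error within chunks: {np.mean(within):.2f}, at chunk onsets: {np.mean(onsets):.2f}")
```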


Subjects
Models, Neurological; Unsupervised Machine Learning; Brain/physiology; Computational Biology; Humans; Learning/physiology; Neural Networks, Computer
8.
Proc Natl Acad Sci U S A ; 115(29): E6871-E6879, 2018 07 17.
Article in English | MEDLINE | ID: mdl-29967182

ABSTRACT

Recent experimental studies suggest that, in cortical microcircuits of the mammalian brain, the majority of neuron-to-neuron connections are realized by multiple synapses. However, it is not known whether such redundant synaptic connections provide any functional benefit. Here, we show that redundant synaptic connections enable near-optimal learning in cooperation with synaptic rewiring. By constructing a simple dendritic neuron model, we demonstrate that, with multisynaptic connections, synaptic plasticity approximates a sample-based Bayesian filtering algorithm known as particle filtering, and wiring plasticity implements its resampling process. Extending the proposed framework to a detailed single-neuron model of perceptual learning in the primary visual cortex, we show that the model accounts for many experimental observations. In particular, the proposed model reproduces the dendritic position dependence of spike-timing-dependent plasticity and the functional synaptic organization on the dendritic tree based on the stimulus selectivity of presynaptic neurons. Our study provides a conceptual framework for synaptic plasticity and rewiring.
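
For reference, a generic bootstrap particle filter with resampling, which is the sample-based Bayesian filtering scheme the paper maps onto multisynaptic plasticity and rewiring. The tracked quantity and noise levels here are arbitrary; the dendritic neuron model itself is not reproduced.

```python
# Sketch: a generic bootstrap particle filter tracking a slowly drifting scalar.
# Resampling is the step the paper associates with synaptic rewiring.
import numpy as np

rng = np.random.default_rng(6)
T, n_particles = 200, 500
drift_sd, obs_sd = 0.05, 0.5

# Latent state (e.g., a slowly drifting "true weight") and noisy observations of it.
w_true = np.cumsum(rng.normal(0, drift_sd, T))
obs = w_true + rng.normal(0, obs_sd, T)

particles = rng.normal(0, 1, n_particles)
estimates = np.empty(T)
for t in range(T):
    particles += rng.normal(0, drift_sd, n_particles)              # predict (diffusion)
    logw = -0.5 * ((obs[t] - particles) / obs_sd) ** 2             # Gaussian likelihood
    weights = np.exp(logw - logw.max()); weights /= weights.sum()
    estimates[t] = weights @ particles                             # posterior mean
    particles = rng.choice(particles, size=n_particles, p=weights) # resample

print(f"RMSE of particle-filter estimate: {np.sqrt(np.mean((estimates - w_true) ** 2)):.3f}")
```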


Subjects
Models, Neurological; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology; Animals; Humans; Neurons/cytology
9.
J Neurosci ; 37(50): 12106-12122, 2017 12 13.
Article in English | MEDLINE | ID: mdl-29089443

ABSTRACT

The balance between excitatory and inhibitory inputs is a key feature of cortical dynamics. Such a balance is arguably preserved in dendritic branches, yet its underlying mechanism and functional roles remain unknown. In this study, we developed computational models of heterosynaptic spike-timing-dependent plasticity (STDP) to show that the excitatory/inhibitory balance in dendritic branches is robustly achieved through heterosynaptic interactions between excitatory and inhibitory synapses. The model reproduces key features of experimental heterosynaptic STDP well, and provides analytical insights. Furthermore, heterosynaptic STDP explains how the maturation of inhibitory neurons modulates the selectivity of excitatory neurons for binocular matching during critical period plasticity. The model also provides an alternative explanation for the potential mechanism underlying the somatic detailed balance that is commonly associated with inhibitory STDP. Our results propose heterosynaptic STDP as a critical factor in synaptic organization and the resultant dendritic computation. SIGNIFICANCE STATEMENT: Recent experimental studies reveal that relative differences in spike timings experienced among neighboring glutamatergic and GABAergic synapses on a dendritic branch significantly influence changes in the efficiency of these synapses. This heterosynaptic form of spike-timing-dependent plasticity (STDP) is potentially important for shaping the synaptic organization and computation of neurons, but its functional role remains elusive. Through computational modeling in the parameter regime where previous experimental results are well reproduced, we show that heterosynaptic plasticity serves to finely balance excitatory and inhibitory inputs on the dendrite. Our results suggest a principle of GABA-driven neural circuit formation.
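
For concreteness, the standard pairwise STDP window that models of this kind build on; the parameter values are illustrative, and the heterosynaptic interaction terms specific to the paper are not included.

```python
# Sketch: a standard pairwise STDP window, the building block such models extend
# with heterosynaptic interaction terms (those terms are not included here).
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),      # pre before post: potentiation
                    -a_minus * np.exp(dt / tau_minus))    # post before pre: depression

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms  ->  dw = {float(stdp_dw(dt)):+.4f}")
```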


Subjects
Action Potentials/physiology; CA1 Region, Hippocampal/physiology; Calcium Signaling/physiology; Computer Simulation; Corpus Striatum/physiology; Dendrites/physiology; Models, Neurological; Neuronal Plasticity/physiology; Synapses/physiology; Animals; CA1 Region, Hippocampal/cytology; Corpus Striatum/cytology; Learning/physiology; Mice; Rats; Synapses/classification; Time Factors; gamma-Aminobutyric Acid/physiology
10.
Front Neural Circuits ; 10: 41, 2016.
Article in English | MEDLINE | ID: mdl-27303271

ABSTRACT

In the adult mammalian cortex, a small fraction of spines are created and eliminated every day, and the resultant synaptic connection structure is highly nonrandom, even in local circuits. However, it remains unknown whether a particular synaptic connection structure is functionally advantageous in local circuits, and why creation and elimination of synaptic connections are necessary in addition to rich synaptic weight plasticity. To answer these questions, we studied an inference task model through theoretical and numerical analyses. We demonstrate that a robustly beneficial network structure naturally emerges by combining Hebbian-type synaptic weight plasticity and wiring plasticity. Especially in a sparsely connected network, wiring plasticity achieves reliable computation by enabling efficient information transmission. Furthermore, the proposed rule reproduces the experimentally observed correlation between spine dynamics and task performance.
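
A toy sketch combining Hebbian weight plasticity with wiring plasticity (prune weak synapses, create new ones at random), assuming a single postsynaptic unit and a group of correlated inputs. This is illustrative only and much simpler than the paper's inference-task model and proposed rule.

```python
# Toy sketch: Hebbian weight plasticity (Oja's rule) combined with wiring plasticity.
# All parameters and the task setup are assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(7)
n_inputs, n_active = 100, 20          # inputs 0..19 share a common latent signal
mask = rng.random(n_inputs) < 0.3     # sparse connectivity: which synapses exist
w = rng.uniform(0, 0.1, n_inputs) * mask
lr, prune_thresh = 0.01, 0.01

for step in range(5000):
    latent = rng.normal()
    x = rng.normal(0, 0.3, n_inputs)
    x[:n_active] += latent                       # correlated group of inputs
    y = w @ x
    w += lr * y * (x - y * w) * mask             # Oja's rule on existing synapses
    if step % 500 == 499:                        # occasional rewiring
        weak = mask & (np.abs(w) < prune_thresh)
        mask &= ~weak                            # eliminate weak synapses
        new = (rng.random(n_inputs) < weak.mean() * 0.5) & ~mask
        mask |= new                              # create new synapses elsewhere
        w[new] = rng.uniform(0, 0.1, new.sum())
        w *= mask

print(f"fraction of signal inputs with a synapse: {mask[:n_active].mean():.2f} "
      f"(vs. {mask[n_active:].mean():.2f} for noise inputs)")
```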


Subjects
Models, Neurological; Nerve Net/physiology; Neurogenesis/physiology; Neuronal Plasticity/physiology; Synapses/physiology; Animals; Humans
11.
PLoS Comput Biol ; 11(4): e1004227, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25910189

ABSTRACT

The brain can learn and detect mixed input signals masked by various types of noise, and spike-timing-dependent plasticity (STDP) is the candidate synaptic level mechanism. Because sensory inputs typically have spike correlation, and local circuits have dense feedback connections, input spikes cause the propagation of spike correlation in lateral circuits; however, it is largely unknown how this secondary correlation generated by lateral circuits influences learning processes through STDP, or whether it is beneficial to achieve efficient spike-based learning from uncertain stimuli. To explore the answers to these questions, we construct models of feedforward networks with lateral inhibitory circuits and study how propagated correlation influences STDP learning, and what kind of learning algorithm such circuits achieve. We derive analytical conditions at which neurons detect minor signals with STDP, and show that depending on the origin of the noise, different correlation timescales are useful for learning. In particular, we show that non-precise spike correlation is beneficial for learning in the presence of cross-talk noise. We also show that by considering excitatory and inhibitory STDP at lateral connections, the circuit can acquire a lateral structure optimal for signal detection. In addition, we demonstrate that the model performs blind source separation in a manner similar to the sequential sampling approximation of the Bayesian independent component analysis algorithm. Our results provide a basic understanding of STDP learning in feedback circuits by integrating analyses from both dynamical systems and information theory.
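
The blind-source-separation aspect can be illustrated with a standard ICA decomposition of linearly mixed signals. This shows the computational problem (recovering sources up to sign and permutation), not the spiking STDP mechanism analyzed in the paper; the sources and mixing matrix below are arbitrary.

```python
# Sketch: the blind source separation problem, solved here with standard FastICA.
# Illustrates the computation, not the spiking STDP circuit studied in the paper.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(8)
t = np.linspace(0, 8, 2000)
sources = np.column_stack([np.sin(2 * np.pi * 1.0 * t),             # two independent sources
                           np.sign(np.sin(2 * np.pi * 0.3 * t))])
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.0]])
mixed = sources @ mixing.T + 0.05 * rng.normal(size=(t.size, 2))     # observed mixtures

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)

# Each recovered component should correlate strongly with one original source.
corr = np.corrcoef(np.hstack([sources, recovered]).T)[:2, 2:]
print(np.round(np.abs(corr), 2))
```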


Subjects
Action Potentials/physiology; Learning/physiology; Models, Neurological; Nerve Net/physiology; Neural Inhibition/physiology; Synaptic Transmission/physiology; Animals; Computer Simulation; Feedback, Physiological/physiology; Humans; Models, Statistical; Statistics as Topic
12.
PLoS One ; 9(7): e101535, 2014.
Article in English | MEDLINE | ID: mdl-25007209

ABSTRACT

Various hippocampal and neocortical synapses of the mammalian brain show both short-term plasticity and long-term plasticity, which are considered to underlie learning and memory by the brain. According to Hebb's postulate, synaptic plasticity encodes memory traces of past experiences into cell assemblies in cortical circuits. However, it remains unclear how the various forms of long-term and short-term synaptic plasticity cooperatively create and reorganize such cell assemblies. Here, we investigate the mechanism by which the three forms of synaptic plasticity known in cortical circuits, i.e., spike-timing-dependent plasticity (STDP), short-term depression (STD) and homeostatic plasticity, cooperatively generate, retain and reorganize cell assemblies in a recurrent neuronal network model. We show that multiple cell assemblies generated by external stimuli can survive noisy spontaneous network activity for an adequate range of the strength of STD. Furthermore, our model predicts that a symmetric temporal window of STDP, such as observed in dopaminergic modulation of hippocampal neurons, is crucial for the retention and integration of multiple cell assemblies. These results may have implications for the understanding of cortical memory processes.
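
One of the three plasticity forms discussed, short-term depression, is commonly described with a Tsodyks-Markram-style resource model; a minimal version with illustrative parameters (one ingredient, not the paper's full recurrent network) is sketched below.

```python
# Sketch: short-term depression (STD) of synaptic efficacy, Tsodyks-Markram style.
# Parameters are illustrative; this is one ingredient of such models, not the full network.
import numpy as np

tau_rec = 200.0        # recovery time constant (ms)
U = 0.5                # fraction of resources used per spike
dt = 1.0               # time step (ms)
spike_times = np.arange(50, 500, 50)   # a 20 Hz presynaptic spike train

x = 1.0                # available synaptic resources
efficacies = []
for t in np.arange(0, 600, dt):
    x += dt * (1.0 - x) / tau_rec          # resources recover between spikes
    if np.any(np.isclose(t, spike_times)):
        efficacies.append(U * x)           # efficacy of this spike's transmission
        x -= U * x                         # resources depleted by the spike

print("relative efficacy per spike:", np.round(np.array(efficacies) / efficacies[0], 2))
# Successive spikes are transmitted with progressively weaker efficacy (depression).
```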


Subjects
Long-Term Potentiation; Neuronal Plasticity; Action Potentials; Algorithms; Dopaminergic Neurons/physiology; Hippocampus/physiology; Humans; Models, Neurological
13.
Front Comput Neurosci ; 6: 102, 2012.
Article in English | MEDLINE | ID: mdl-23403536

ABSTRACT

The postsynaptic potentials of pyramidal neurons have a non-Gaussian amplitude distribution with a heavy tail in both hippocampus and neocortex. Such distributions of synaptic weights were recently shown to generate spontaneous internal noise optimal for spike propagation in recurrent cortical circuits. However, whether this internal noise generation by heavy-tailed weight distributions is possible for and beneficial to other computational functions remains unknown. To clarify this point, we construct an associative memory (AM) network model of spiking neurons that stores multiple memory patterns in a connection matrix with a lognormal weight distribution. In AM networks, non-retrieved memory patterns generate a cross-talk noise that severely disturbs memory recall. We demonstrate that neurons encoding a retrieved memory pattern and those encoding non-retrieved memory patterns have different subthreshold membrane-potential distributions in our model. Consequently, the probability of responding to inputs at strong synapses increases for the encoding neurons, whereas it decreases for the non-encoding neurons. Our results imply that heavy-tailed distributions of connection weights can generate noise useful for AM recall.
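
As background for the cross-talk issue described, a textbook Hopfield-style binary associative memory storing several patterns: every non-retrieved pattern contributes cross-talk to the recall input, which is the noise source the paper argues lognormal weights can turn to advantage. The spiking, lognormal-weight model itself is not reproduced here.

```python
# Sketch: a textbook Hopfield-style associative memory, shown only to illustrate
# cross-talk from non-retrieved patterns; not the paper's spiking, lognormal-weight model.
import numpy as np

rng = np.random.default_rng(9)
n_neurons, n_patterns = 500, 10
patterns = rng.choice([-1, 1], (n_patterns, n_neurons))

# Hebbian connection matrix; every stored pattern contributes to every weight,
# so recalling one pattern is perturbed by cross-talk from the others.
W = (patterns.T @ patterns) / n_neurons
np.fill_diagonal(W, 0)

cue = patterns[0].copy()
cue[: n_neurons // 5] *= -1                  # corrupt 20% of the cue
state = cue
for _ in range(10):                          # synchronous recall iterations
    state = np.sign(W @ state)

overlap = (state @ patterns[0]) / n_neurons
print(f"overlap with the retrieved pattern after recall: {overlap:.2f}")
```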
