Results 1 - 6 of 6
1.
PLoS Comput Biol; 20(6): e1012047, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38865345

ABSTRACT

A fundamental function of cortical circuits is the integration of information from different sources to form a reliable basis for behavior. While animals behave as if they optimally integrate information according to Bayesian probability theory, the implementation of the required computations in the biological substrate remains unclear. We propose a novel Bayesian view of the dynamics of conductance-based neurons and synapses, which suggests that they are naturally equipped to perform optimal information integration. In our approach, apical dendrites represent prior expectations over somatic potentials, while basal dendrites represent likelihoods of somatic potentials. These are parametrized by local quantities: the effective reversal potentials and membrane conductances. We formally demonstrate that, under these assumptions, the somatic compartment naturally computes the corresponding posterior. We derive a gradient-based plasticity rule that allows neurons to learn desired target distributions and to weight synaptic inputs by their relative reliabilities. Our theory explains various experimental findings at the system and single-cell levels related to multisensory integration, which we illustrate with simulations. Furthermore, we make experimentally testable predictions on Bayesian dendritic integration and synaptic plasticity.
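
As a rough illustration of the precision weighting at the core of this proposal, the sketch below (illustrative Python, not the authors' code; the function name and numerical values are invented) combines an apical "prior" and a basal "likelihood", each described by an effective reversal potential and a conductance, into a somatic "posterior" via the standard Gaussian product rule, which is formally identical to the conductance-weighted steady-state potential of a passive compartment.

```python
# Illustrative sketch (not the authors' code): Gaussian cue combination mapped
# onto a two-compartment conductance picture, assuming
#   apical input -> prior      N(E_api, 1/g_api)
#   basal input  -> likelihood N(E_bas, 1/g_bas)
# with conductances g playing the role of precisions and effective reversal
# potentials E the role of means.

def posterior_from_dendrites(E_api, g_api, E_bas, g_bas):
    """Combine an apical 'prior' and a basal 'likelihood' into a somatic 'posterior'.

    For Gaussians, the posterior precision is the sum of precisions and the
    posterior mean is the precision-weighted mean -- formally identical to the
    steady-state somatic potential of a passive compartment driven by two
    conductance-based inputs with reversal potentials E and conductances g.
    """
    g_soma = g_api + g_bas                              # posterior precision ~ total conductance
    V_soma = (g_api * E_api + g_bas * E_bas) / g_soma   # posterior mean ~ weighted reversal potentials
    return V_soma, 1.0 / g_soma                         # mean and variance of the posterior

# Example: a reliable basal (sensory) input pulls the estimate toward itself.
V, var = posterior_from_dendrites(E_api=-65.0, g_api=2.0,   # broad prior (mV, arbitrary conductance units)
                                  E_bas=-55.0, g_bas=8.0)   # sharp likelihood
print(f"somatic estimate: {V:.1f} mV, variance: {var:.3f}")
```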


Subjects
Bayes Theorem; Dendrites; Models, Neurological; Neuronal Plasticity; Synapses; Dendrites/physiology; Animals; Neuronal Plasticity/physiology; Synapses/physiology; Computer Simulation; Cues; Computational Biology; Neurons/physiology; Action Potentials/physiology
2.
Proc Natl Acad Sci U S A; 120(32): e2300558120, 2023 Aug 8.
Article in English | MEDLINE | ID: mdl-37523562

ABSTRACT

While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers higher in the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-methyl-D-aspartate (NMDA) spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that locally predicting whether representations originate from different inputs or from different contextual modulations of the same input results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.
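
The gating principle can be illustrated without any biophysics. The toy sketch below (an abstraction, not the paper's NMDA-spike model; the weights, branch labels, and binary context encoding are assumptions) keeps the feedforward weights fixed and lets a context signal multiplicatively gate two dendritic "branches", which already suffices to compute XOR, a linearly nonseparable function.

```python
# Minimal sketch of the gating principle only (not the paper's biophysical
# NMDA-spike model): feedforward weights onto two dendritic "branches" stay
# fixed, while a contextual signal multiplicatively gates which branch
# contributes to the somatic output. With gating, the unit computes XOR,
# which no single linear readout of the same fixed weights could do.

import itertools

# Fixed feedforward weight and bias per branch (assumed values): output = w*x1 + b.
W_branch = {"A": (+1.0, 0.0), "B": (-1.0, 1.0)}

def neuron(x1, context):
    """Context in {0, 1} gates branch A (context=0) or branch B (context=1)."""
    gate = {"A": 1 - context, "B": context}
    out = 0.0
    for branch, (w, b) in W_branch.items():
        out += gate[branch] * (w * x1 + b)        # multiplicative dendritic gating
    return out

for x1, ctx in itertools.product([0, 1], repeat=2):
    print(f"x1={x1} context={ctx} -> output={neuron(x1, ctx):.0f}  (XOR={x1 ^ ctx})")
```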


Subjects
Models, Neurological; N-Methylaspartate; Learning/physiology; Neurons/physiology; Perception
3.
J Neurosci; 40(46): 8799-8815, 2020 Nov 11.
Article in English | MEDLINE | ID: mdl-33046549

ABSTRACT

Signal propagation in the dendrites of many neurons, including cortical pyramidal neurons in sensory cortex, is characterized by strong attenuation toward the soma. In contrast, using dual whole-cell recordings from the apical dendrite and soma of layer 5 (L5) pyramidal neurons in the anterior cingulate cortex (ACC) of adult male mice, we found good coupling, particularly of slow subthreshold potentials such as NMDA spikes or trains of EPSPs, from dendrite to soma. Only the fastest EPSPs in the ACC were reduced to a similar degree as in primary somatosensory cortex (S1), revealing differential low-pass filtering capabilities. Furthermore, L5 pyramidal neurons in the ACC did not exhibit dendritic Ca2+ spikes as prominently as those found in the apical dendrite of S1 pyramidal neurons. Fitting the experimental data to a NEURON model revealed that the specific distribution of I_leak, I_ir, I_m, and I_h was sufficient to explain the electrotonic dendritic structure: a leaky distal dendritic compartment with correspondingly low input resistance and a compact perisomatic region, which decouples distal tuft branches from each other while efficiently connecting them to the soma. Our results give a biophysically plausible explanation of how a class of prefrontal cortical pyramidal neurons achieves efficient integration of subthreshold distal synaptic inputs compared with the same cell type in sensory cortices.

SIGNIFICANCE STATEMENT: Understanding cortical computation requires understanding its fundamental computational subunits. Layer 5 pyramidal neurons are the main output neurons of the cortex, integrating synaptic inputs across different cortical layers. Their elaborate dendritic tree receives, propagates, and transforms synaptic inputs into action potential output. We found good coupling of slow subthreshold potentials, such as NMDA spikes or trains of EPSPs, from the distal apical dendrite to the soma in pyramidal neurons of the ACC, significantly better than in S1. This suggests that frontal pyramidal neurons use a different integration scheme than the same cell type in somatosensory cortex, which has important implications for our understanding of information processing across different parts of the neocortex.
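
The frequency-dependent dendro-somatic coupling described above can be reproduced qualitatively with a toy two-compartment passive model; the sketch below uses arbitrary round-number parameters (assumptions, not the fitted NEURON model) and compares how much of a fast versus a slow dendritic input reaches the soma.

```python
# Toy two-compartment passive model (illustrative only; parameter values are
# assumptions, not fits from the paper): a "dendrite" coupled to a "soma" by a
# coupling conductance. Slow dendritic inputs reach the soma with less
# attenuation than fast transients, the low-pass effect described above.

def attenuation(pulse_ms, g_c=10.0, g_l=5.0, C=100.0, dt=0.01, T=200.0):
    """Return peak(V_soma)/peak(V_dend) for a dendritic current pulse."""
    n = int(T / dt)
    v_d = v_s = 0.0
    peak_d = peak_s = 0.0
    for i in range(n):
        I = 50.0 if i * dt < pulse_ms else 0.0            # pA, injected into the dendrite
        dv_d = (-g_l * v_d - g_c * (v_d - v_s) + I) / C   # nS, pF, pA -> mV/ms
        dv_s = (-g_l * v_s - g_c * (v_s - v_d)) / C
        v_d += dt * dv_d
        v_s += dt * dv_s
        peak_d, peak_s = max(peak_d, v_d), max(peak_s, v_s)
    return peak_s / peak_d

print("fast 1 ms input :", round(attenuation(1.0), 2))    # strongly attenuated
print("slow 50 ms input:", round(attenuation(50.0), 2))   # approaches g_c/(g_l+g_c) ~ 0.67
```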


Subjects
Dendrites/physiology; Gyrus Cinguli/physiology; Pyramidal Cells/physiology; Somatosensory Cortex/physiology; Action Potentials/physiology; Animals; Electrophysiological Phenomena; Excitatory Postsynaptic Potentials; In Vitro Techniques; Male; Mice; Mice, Inbred C57BL; Optogenetics; Receptors, N-Methyl-D-Aspartate/physiology
4.
Cell Rep; 26(7): 1759-1773.e7, 2019 Feb 12.
Article in English | MEDLINE | ID: mdl-30759388

ABSTRACT

The dendritic tree of neurons plays an important role in information processing in the brain. While it is thought that dendrites require independent subunits to perform most of their computations, it is still not understood how they compartmentalize into functional subunits. Here, we show how these subunits can be deduced from the properties of dendrites. We devised a formalism that links the dendritic arborization to an impedance-based tree graph and show how the topology of this graph reveals independent subunits. This analysis reveals that cooperativity between synapses decreases slowly with increasing electrical separation and thus that few independent subunits coexist. We nevertheless find that balanced inputs or shunting inhibition can modify this topology and increase the number and size of the subunits in a context-dependent manner. We also find that this dynamic recompartmentalization can enable branch-specific learning of stimulus features. Analysis of dendritic patch-clamp recording experiments confirmed our theoretical predictions.
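
A crude caricature of the impedance-based analysis, under strong simplifying assumptions (steady state, passive membrane, a made-up seven-node morphology, arbitrary conductances and threshold), is sketched below: invert the nodal conductance matrix to obtain input and transfer impedances, normalize them into a coupling measure, and group nodes whose coupling exceeds a threshold into putative subunits.

```python
# Rough sketch of the impedance-tree idea under simplifying assumptions
# (steady state, passive membrane, toy morphology): build the nodal
# conductance matrix of a small branched tree, invert it to obtain input and
# transfer impedances, and group strongly coupled nodes into putative
# "independent subunits".

import numpy as np

# Toy morphology: node 0 = soma; two daughter branches 1-2-3 and 4-5-6.
edges = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6)]
n = 7
g_leak, g_axial = 1.0, 5.0            # nS, assumed values

G = np.diag(np.full(n, g_leak))
for i, j in edges:
    G[i, i] += g_axial
    G[j, j] += g_axial
    G[i, j] -= g_axial
    G[j, i] -= g_axial

Z = np.linalg.inv(G)                  # Z[i, j] = voltage at node i per current injected at node j
coupling = Z / np.sqrt(np.outer(np.diag(Z), np.diag(Z)))   # normalized; 1 on the diagonal

# Group nodes connected by above-threshold coupling (a crude proxy for the
# paper's tree-graph analysis); with these toy parameters the two branch tips
# end up in different groups.
threshold = 0.7
adj = coupling > threshold
subunits, seen = [], set()
for start in range(n):
    if start in seen:
        continue
    stack, comp = [start], set()
    while stack:
        k = stack.pop()
        if k in comp:
            continue
        comp.add(k)
        stack.extend(m for m in range(n) if adj[k, m] and m not in comp)
    seen |= comp
    subunits.append(sorted(comp))
print("putative subunits:", subunits)
```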


Subjects
Action Potentials/physiology; Neurons/metabolism; Humans
5.
Neural Comput; 27(12): 2587-622, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26496043

ABSTRACT

We prove that when a class of partial differential equations, generalized from the cable equation, is defined on tree graphs and the inputs are restricted to a spatially discrete, well-chosen set of points, the Green's function (GF) formalism can be rewritten to scale as O(n) with the number n of input locations, contrary to the previously reported O(n^2) scaling. We show that this linear scaling can be combined with an expansion of the remaining kernels as sums of exponentials to allow efficient simulation of equations from the aforementioned class. We furthermore validate this simulation paradigm on models of nerve cells and explore its relation to more traditional finite-difference approaches. Situations in which a gain in computational performance is expected are discussed.
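
The practical payoff of expanding the remaining kernels as sums of exponentials is that the convolution with the input history can be advanced recursively, with one update per exponential term and time step. The generic sketch below (not the authors' implementation; kernel amplitudes and time constants are arbitrary) checks this recursive update against brute-force convolution.

```python
# Generic illustration of the kernel-expansion trick (not the authors' code):
# once a Green's-function kernel is written as a sum of exponentials, its
# convolution with the input can be advanced with one recursive update per
# exponential term and time step, instead of re-summing the full history.

import numpy as np

def convolve_exponential_kernel(inputs, dt, amplitudes, taus):
    """Convolve `inputs` with k(t) = sum_i a_i * exp(-t / tau_i), recursively."""
    a = np.asarray(amplitudes, dtype=float)
    decay = np.exp(-dt / np.asarray(taus, dtype=float))
    state = np.zeros_like(a)                   # one running term per exponential
    out = np.empty(len(inputs))
    for t, x in enumerate(inputs):
        state = state * decay + a * x * dt     # O(#terms) work per step
        out[t] = state.sum()
    return out

# Compare against brute-force convolution on a random input train.
rng = np.random.default_rng(0)
dt, taus, amps = 0.1, [2.0, 15.0], [1.0, 0.3]
x = rng.random(500)
t = np.arange(2000) * dt
kernel = sum(a * np.exp(-t / tau) for a, tau in zip(amps, taus))
brute = np.convolve(x, kernel)[: len(x)] * dt
fast = convolve_exponential_kernel(x, dt, amps, taus)
print("max deviation from brute force:", np.max(np.abs(fast - brute)))
```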


Subjects
Dendrites/physiology; Models, Neurological; Algorithms; Axons/physiology; Computer Simulation; Linear Models; Nerve Fibers, Myelinated/physiology; Nonlinear Dynamics
6.
Biol Cybern; 107(6): 685-94, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24037222

ABSTRACT

Neurons are spatially extended structures that receive and process inputs on their dendrites. It is generally accepted that neuronal computations arise from the active integration of synaptic inputs along the dendrite between the input location and the site of spike generation in the axon initial segment. However, many applications, such as simulations of brain networks, use point neurons (neurons without a morphological component) as computational units to keep the conceptual complexity and computational costs low. These applications thus inevitably omit a fundamental property of neuronal computation. In this work, we present an approach to model an artificial synapse that mimics dendritic processing without the need to explicitly simulate dendritic dynamics. The model synapse employs an analytic solution of the cable equation to compute the neuron's membrane potential following dendritic inputs. The Green's function formalism is used to derive the closed form of the cable equation. We show that, by using this synapse model, point neurons can achieve results that were previously limited to the realm of multi-compartmental models. Moreover, a computational advantage is achieved when only a small number of simulated synapses impinge on a morphologically elaborate neuron. Opportunities and limitations are discussed.
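
A minimal sketch of the idea, with an assumed double-exponential kernel standing in for the analytically derived Green's function (all parameter values are invented): a passive point neuron receives synaptic current through a location-dependent filter, so that a "distal" synapse produces a slower and smaller somatic response than a "proximal" one.

```python
# Conceptual sketch only (assumed kernel shape and parameters, not the paper's
# Green's-function derivation): a point neuron whose synapse delivers its
# current through a slow, dendrite-like filter instead of an instantaneous
# jump, so "distal" synapses yield broader, smaller somatic responses.

import numpy as np

def somatic_response(spike_times, tau_rise, tau_decay, dt=0.1, T=100.0, tau_m=20.0):
    """Leaky point neuron driven by a unit-charge double-exponential 'dendritic' synapse."""
    n = int(T / dt)
    v, s_rise, s_decay = 0.0, 0.0, 0.0
    spikes = set(int(round(t / dt)) for t in spike_times)
    trace = np.empty(n)
    for i in range(n):
        if i in spikes:                        # presynaptic spike arrives
            s_rise += 1.0
            s_decay += 1.0
        s_rise -= dt * s_rise / tau_rise
        s_decay -= dt * s_decay / tau_decay
        I = (s_decay - s_rise) / (tau_decay - tau_rise)   # normalized difference of exponentials
        v += dt * (-v / tau_m + I)             # passive point-neuron membrane
        trace[i] = v
    return trace

# A "proximal" synapse (fast kernel) vs a "distal" one (slow, broad kernel).
proximal = somatic_response([10.0], tau_rise=0.5, tau_decay=3.0)
distal = somatic_response([10.0], tau_rise=5.0, tau_decay=20.0)
print("peak EPSP proximal:", round(proximal.max(), 3), " distal:", round(distal.max(), 3))
```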


Subjects
Computer Simulation; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Animals; Dendrites/physiology; Humans; Membrane Potentials/physiology; Nerve Net/cytology; Neurons/cytology; Synapses/physiology; Time Factors