Results 1 - 20 of 112
1.
PLoS Comput Biol ; 20(2): e1011839, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38377112

ABSTRACT

In humans and animals, surprise is a physiological reaction to an unexpected event, but how surprise can be linked to plausible models of neuronal activity is an open problem. We propose a self-supervised spiking neural network model where a surprise signal is extracted from an increase in neural activity after an imbalance of excitation and inhibition. The surprise signal modulates synaptic plasticity via a three-factor learning rule which increases plasticity at moments of surprise. The surprise signal remains small when transitions between sensory events follow a previously learned rule but increases immediately after rule switching. In a spiking network with several modules, previously learned rules are protected against overwriting, as long as the number of modules is larger than the total number of rules, making a step towards solving the stability-plasticity dilemma in neuroscience. Our model relates the subjective notion of surprise to specific predictions at the circuit level.
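The surprise-modulated three-factor rule summarized above can be sketched in a few lines. This is a minimal rate-based caricature, with all names and values assumed for illustration; the paper's spiking model uses eligibility traces and a surprise signal derived from an excitation-inhibition imbalance:

```python
import numpy as np

def three_factor_update(w, pre, post, surprise, eta=0.1):
    # Hebbian eligibility (outer product of post- and presynaptic
    # activity) gated by a global third factor: the surprise S(t).
    return w + eta * surprise * np.outer(post, pre)

w0 = np.zeros((4, 3))
pre, post = np.ones(3), np.ones(4)
w_low = three_factor_update(w0, pre, post, surprise=0.1)
w_high = three_factor_update(w0, pre, post, surprise=2.0)
print(np.abs(w_high).sum() > np.abs(w_low).sum())  # True
```

The same pre/post activity produces a larger weight change at moments of high surprise, which is the mechanism that increases plasticity after rule switching.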


Subjects
Inhibition, Psychological; Neurosciences; Animals; Humans; Learning; Neural Networks, Computer; Neuronal Plasticity
2.
PLoS Comput Biol ; 20(2): e1011844, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38346073

ABSTRACT

Cortical populations of neurons develop sparse representations adapted to the statistics of the environment. To learn efficient population codes, synaptic plasticity mechanisms must differentiate relevant latent features from spurious input correlations, which are omnipresent in cortical networks. Here, we develop a theory for sparse coding and synaptic plasticity that is invariant to second-order correlations in the input. Going beyond classical Hebbian learning, our learning objective explains the functional form of observed excitatory plasticity mechanisms, showing how Hebbian long-term depression (LTD) cancels the sensitivity to second-order correlations so that receptive fields become aligned with features hidden in higher-order statistics. Invariance to second-order correlations enhances the versatility of biologically realistic learning models, supporting optimal decoding from noisy inputs and sparse population coding from spatially correlated stimuli. In a spiking model with triplet spike-timing-dependent plasticity (STDP), we show that individual neurons can learn localized oriented receptive fields, circumventing the need for input preprocessing, such as whitening, or population-level lateral inhibition. The theory advances our understanding of local unsupervised learning in cortical circuits, offers new interpretations of the Bienenstock-Cooper-Munro and triplet STDP models, and assigns a specific functional role to synaptic LTD mechanisms in pyramidal neurons.


Subjects
Neuronal Plasticity; Neurons; Action Potentials/physiology; Neuronal Plasticity/physiology; Neurons/physiology; Pyramidal Cells/physiology; Models, Neurological
3.
PLoS Comput Biol ; 19(12): e1011727, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38117859

ABSTRACT

Empirical evidence shows that memories that are frequently revisited are easy to recall, and that familiar items involve larger hippocampal representations than less familiar ones. In line with these observations, here we develop a modelling approach to provide a mechanistic understanding of how hippocampal neural assemblies evolve differently, depending on the frequency of presentation of the stimuli. For this, we added an online Hebbian learning rule, background firing activity, neural adaptation and heterosynaptic plasticity to a rate attractor network model, thus creating dynamic memory representations that can persist, increase or fade according to the frequency of presentation of the corresponding memory patterns. Specifically, we show that a dynamic interplay between Hebbian learning and background firing activity can explain the relationship between the memory assembly sizes and their frequency of stimulation. Frequently stimulated assemblies increase their size independently from each other (i.e. creating orthogonal representations that do not share neurons, thus avoiding interference). Importantly, connections between neurons of assemblies that are not further stimulated become labile so that these neurons can be recruited by other assemblies, providing a neuronal mechanism of forgetting.


Subjects
Learning; Reinforcement, Psychology; Learning/physiology; Mental Recall/physiology; Neurons/physiology; Hippocampus/physiology; Neuronal Plasticity/physiology; Models, Neurological
4.
Behav Brain Sci ; 47: e94, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38770870

ABSTRACT

We link Ivancovsky et al.'s novelty-seeking model (NSM) to computational models of intrinsically motivated behavior and learning. We argue that dissociating different forms of curiosity, creativity, and memory based on the involvement of distinct intrinsic motivations (e.g., surprise and novelty) is essential to empirically test the conceptual claims of the NSM.


Subjects
Creativity; Exploratory Behavior; Motivation; Humans; Exploratory Behavior/physiology; Models, Psychological; Learning/physiology; Memory/physiology; Computer Simulation
5.
Neuroimage ; 246: 118780, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34875383

ABSTRACT

Learning how to reach a reward over long series of actions is a remarkable capability of humans, and potentially guided by multiple parallel learning modules. Current brain imaging of learning modules is limited by (i) simple experimental paradigms, (ii) entanglement of brain signals of different learning modules, and (iii) a limited number of computational models considered as candidates for explaining behavior. Here, we address these three limitations and (i) introduce a complex sequential decision making task with surprising events that allows us to (ii) dissociate correlates of reward prediction errors from those of surprise in functional magnetic resonance imaging (fMRI); and (iii) we test behavior against a large repertoire of model-free, model-based, and hybrid reinforcement learning algorithms, including a novel surprise-modulated actor-critic algorithm. Surprise, derived from an approximate Bayesian approach for learning the world-model, is extracted in our algorithm from a state prediction error. Surprise is then used to modulate the learning rate of a model-free actor, which itself learns via the reward prediction error from model-free value estimation by the critic. We find that action choices are well explained by pure model-free policy gradient, but reaction times and neural data are not. We identify signatures of both model-free and surprise-based learning signals in blood oxygen level dependent (BOLD) responses, supporting the existence of multiple parallel learning modules in the brain. Our results extend previous fMRI findings to a multi-step setting and emphasize the role of policy gradient and surprise signalling in human learning.


Subjects
Brain/physiology; Decision Making/physiology; Functional Neuroimaging/methods; Learning/physiology; Magnetic Resonance Imaging/methods; Adult; Brain/diagnostic imaging; Female; Humans; Male; Models, Biological; Reinforcement, Psychology; Young Adult
6.
PLoS Comput Biol ; 17(12): e1009691, 2021 12.
Article in English | MEDLINE | ID: mdl-34968383

ABSTRACT

Assemblies of neurons, called concept cells, encode acquired concepts in the human medial temporal lobe. Those concept cells that are shared between two assemblies have been hypothesized to encode associations between concepts. Here we test this hypothesis in a computational model of attractor neural networks. We find that for concepts encoded in sparse neural assemblies there is a minimal fraction c_min of neurons shared between assemblies below which associations cannot be reliably implemented, and a maximal fraction c_max of shared neurons above which single concepts can no longer be retrieved. In the presence of a periodically modulated background signal, such as hippocampal oscillations, recall takes the form of association chains reminiscent of those postulated by theories of free recall of words. Predictions of an iterative overlap-generating model match experimental data on the number of concepts to which a neuron responds.


Subjects
Memory/physiology; Models, Neurological; Neurons/cytology; Computational Biology; Hippocampus/cytology; Hippocampus/physiology; Humans; Nerve Net/cytology; Nerve Net/physiology; Temporal Lobe/cytology; Temporal Lobe/physiology
7.
PLoS Comput Biol ; 17(6): e1009070, 2021 06.
Article in English | MEDLINE | ID: mdl-34081705

ABSTRACT

Classic reinforcement learning (RL) theories cannot explain human behavior in the absence of external reward or when the environment changes. Here, we employ a deep sequential decision-making paradigm with sparse reward and abrupt environmental changes. To explain the behavior of human participants in these environments, we show that RL theories need to include surprise and novelty, each with a distinct role. While novelty drives exploration before the first encounter of a reward, surprise increases the rate of learning of a world-model as well as of model-free action-values. Even though the world-model is available for model-based RL, we find that human decisions are dominated by model-free action choices. The world-model is only marginally used for planning, but it is important to detect surprising events. Our theory predicts human action choices with high probability and allows us to dissociate surprise, novelty, and reward in EEG signals.


Subjects
Adaptation, Psychological; Exploratory Behavior; Models, Psychological; Reinforcement, Psychology; Algorithms; Choice Behavior/physiology; Computational Biology; Decision Making/physiology; Electroencephalography/statistics & numerical data; Exploratory Behavior/physiology; Humans; Learning/physiology; Models, Neurological; Reward
8.
Neural Comput ; 33(2): 269-340, 2021 02.
Article in English | MEDLINE | ID: mdl-33400898

ABSTRACT

Surprise-based learning allows agents to rapidly adapt to nonstationary stochastic environments characterized by sudden changes. We show that exact Bayesian inference in a hierarchical model gives rise to a surprise-modulated trade-off between forgetting old observations and integrating them with the new ones. The modulation depends on a probability ratio, which we call the Bayes Factor Surprise, that tests the prior belief against the current belief. We demonstrate that in several existing approximate algorithms, the Bayes Factor Surprise modulates the rate of adaptation to new observations. We derive three novel surprise-based algorithms, one in the family of particle filters, one in the family of variational learning, and one in the family of message passing, that have constant scaling in observation sequence length and particularly simple update dynamics for any distribution in the exponential family. Empirical results show that these surprise-based algorithms estimate parameters better than alternative approximate approaches and reach levels of performance comparable to computationally more expensive algorithms. The Bayes Factor Surprise is related to but different from the Shannon Surprise. In two hypothetical experiments, we make testable predictions for physiological indicators that dissociate the Bayes Factor Surprise from the Shannon Surprise. The theoretical insight of casting various approaches as surprise-based learning, as well as the proposed online algorithms, may be applied to the analysis of animal and human behavior and to reinforcement learning in nonstationary environments.
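The Bayes Factor Surprise described above is a probability ratio that can be computed in closed form for simple models. The sketch below does so for a Gaussian estimation problem; the numerical values are illustrative assumptions, not quantities from the paper:

```python
import math

def gauss_pdf(y, mean, var):
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_factor_surprise(y, m, v, m0, v0, sigma2):
    # S_BF = P(y | prior) / P(y | current belief): the predictive
    # probability of observation y under the naive prior divided by
    # its predictive probability under the learned belief.
    return gauss_pdf(y, m0, v0 + sigma2) / gauss_pdf(y, m, v + sigma2)

m, v = 0.0, 0.1      # current belief about the hidden mean (assumed)
m0, v0 = 0.0, 4.0    # broad prior belief (assumed)
sigma2 = 0.25        # observation noise variance (assumed)

s_expected = bayes_factor_surprise(0.1, m, v, m0, v0, sigma2)
s_unexpected = bayes_factor_surprise(3.0, m, v, m0, v0, sigma2)
print(s_unexpected > s_expected)  # True
```

An observation far from the current belief yields a large ratio, which in the paper's algorithms then increases the rate of adaptation toward the new observations.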


Subjects
Algorithms; Behavior/physiology; Computer Simulation; Learning/physiology; Reinforcement, Psychology; Animals; Bayes Theorem; Humans
9.
PLoS Comput Biol ; 16(4): e1007640, 2020 04.
Article in English | MEDLINE | ID: mdl-32271761

ABSTRACT

This is a PLOS Computational Biology Education paper. The idea that the brain functions so as to minimize certain costs pervades theoretical neuroscience. Because a cost function by itself does not predict how the brain finds its minima, additional assumptions about the optimization method need to be made to predict the dynamics of physiological quantities. In this context, steepest descent (also called gradient descent) is often suggested as an algorithmic principle of optimization potentially implemented by the brain. In practice, researchers often consider the vector of partial derivatives as the gradient. However, the definition of the gradient and the notion of a steepest direction depend on the choice of a metric. Because the choice of the metric involves a large number of degrees of freedom, the predictive power of models that are based on gradient descent must be called into question, unless there are strong constraints on the choice of the metric. Here, we provide a didactic review of the mathematics of gradient descent, illustrate common pitfalls of using gradient descent as a principle of brain function with examples from the literature, and propose ways forward to constrain the metric.
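The core point, that the "steepest" direction depends on the chosen metric, can be seen in two lines of linear algebra: under a metric M, steepest descent follows -M^{-1}∇f, and plain gradient descent is the special case M = I. A minimal numerical illustration (the metric chosen here is arbitrary):

```python
import numpy as np

def steepest_direction(grad, metric):
    # Steepest-descent direction under metric M: -M^{-1} grad.
    # Plain gradient descent corresponds to M = identity.
    return -np.linalg.solve(metric, grad)

grad = np.array([1.0, 2.0])
d_euclid = steepest_direction(grad, np.eye(2))
d_other = steepest_direction(grad, np.diag([1.0, 10.0]))  # assumed metric

# The two "steepest" directions are not parallel:
cos = d_euclid @ d_other / (np.linalg.norm(d_euclid) * np.linalg.norm(d_other))
print(abs(cos - 1.0) > 1e-3)  # True
```

Since any positive-definite M yields a valid descent direction, predictions based on "the brain does gradient descent" are underdetermined unless the metric is constrained, which is the article's argument.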


Subjects
Biophysics/methods; Brain/diagnostic imaging; Brain/physiology; Computational Biology/methods; Algorithms; Computer Simulation; Humans; Image Processing, Computer-Assisted; Kinetics; Mathematics; Neural Networks, Computer; Neurosciences/methods
10.
PLoS Comput Biol ; 15(6): e1007122, 2019 06.
Article in English | MEDLINE | ID: mdl-31181063

ABSTRACT

While most models of randomly connected neural networks assume single-neuron models with simple dynamics, neurons in the brain exhibit complex intrinsic dynamics over multiple timescales. We analyze how the dynamical properties of single neurons and recurrent connections interact to shape the effective dynamics in large randomly connected networks. A novel dynamical mean-field theory for strongly connected networks of multi-dimensional rate neurons shows that the power spectrum of the network activity in the chaotic phase emerges from a nonlinear sharpening of the frequency response function of single neurons. For the case of two-dimensional rate neurons with strong adaptation, we find that the network exhibits a state of "resonant chaos", characterized by robust, narrow-band stochastic oscillations. The coherence of stochastic oscillations is maximal at the onset of chaos and their correlation time scales with the adaptation timescale of single units. Surprisingly, the resonance frequency can be predicted from the properties of isolated neurons, even in the presence of heterogeneity in the adaptation parameters. In the presence of these internally-generated chaotic fluctuations, the transmission of weak, low-frequency signals is strongly enhanced by adaptation, whereas signal transmission is not influenced by adaptation in the non-chaotic regime. Our theoretical framework can be applied to other mechanisms at the level of single neurons, such as synaptic filtering, refractoriness or spike synchronization. These results advance our understanding of the interaction between the dynamics of single units and recurrent connectivity, which is a fundamental step toward the description of biologically realistic neural networks.


Subjects
Models, Neurological; Nerve Net/physiology; Neurons/physiology; Computational Biology; Computer Simulation; Signal-To-Noise Ratio
11.
PLoS Comput Biol ; 15(4): e1006928, 2019 04.
Article in English | MEDLINE | ID: mdl-31002672

ABSTRACT

Continuous attractor models of working memory store continuous-valued information in continuous state-spaces, but are sensitive to noise processes that degrade memory retention. Short-term synaptic plasticity of recurrent synapses has previously been shown to affect continuous attractor systems: short-term facilitation can stabilize memory retention, while short-term depression possibly increases continuous attractor volatility. Here, we present a comprehensive description of the combined effect of both short-term facilitation and depression on noise-induced memory degradation in one-dimensional continuous attractor models. Our theoretical description, applicable to rate models as well as spiking networks close to a stationary state, accurately describes the slow dynamics of stored memory positions as a combination of two processes: (i) diffusion due to variability caused by spikes; and (ii) drift due to random connectivity and neuronal heterogeneity. We find that facilitation decreases both diffusion and directed drifts, while short-term depression tends to increase both. Using mutual information, we evaluate the combined impact of short-term facilitation and depression on the ability of networks to retain stable working memory. Finally, our theory predicts the sensitivity of continuous working memory to distractor inputs and provides conditions for stability of memory.


Subjects
Memory, Short-Term/physiology; Models, Neurological; Neuronal Plasticity/physiology; Animals; Computational Biology; Photic Stimulation; Synapses/physiology; Visual Fields
12.
PLoS Comput Biol ; 14(7): e1006216, 2018 07.
Article in English | MEDLINE | ID: mdl-29979674

ABSTRACT

The time scale of neuronal network dynamics is determined by synaptic interactions and neuronal signal integration, both of which occur on the time scale of milliseconds. Yet many behaviors like the generation of movements or vocalizations of sounds occur on the much slower time scale of seconds. Here we ask the question of how neuronal networks of the brain can support reliable behavior on this time scale. We argue that excitable neuronal assemblies with spike-frequency adaptation may serve as building blocks that can flexibly adjust the speed of execution of neural circuit function. We show in simulations that a chain of neuronal assemblies can propagate signals reliably, similar to the well-known synfire chain, but with the crucial difference that the propagation speed is slower and tunable to the behaviorally relevant range. Moreover we study a grid of excitable neuronal assemblies as a simplified model of the somatosensory barrel cortex of the mouse and demonstrate that various patterns of experimentally observed spatial activity propagation can be explained.


Subjects
Brain/physiology; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Synaptic Transmission/physiology; Action Potentials/physiology; Animals; Mice; Somatosensory Cortex/physiology; Synaptic Potentials/physiology
13.
Cereb Cortex ; 28(4): 1396-1415, 2018 04 01.
Article in English | MEDLINE | ID: mdl-29300903

ABSTRACT

Excitatory synaptic connections in the adult neocortex consist of multiple synaptic contacts, almost exclusively formed on dendritic spines. Changes of spine volume, a correlate of synaptic strength, can be tracked in vivo for weeks. Here, we present a combined model of structural and spike-timing-dependent plasticity that explains the multicontact configuration of synapses in adult neocortical networks under steady-state and lesion-induced conditions. Our plasticity rule with Hebbian and anti-Hebbian terms stabilizes both the postsynaptic firing rate and correlations between the pre- and postsynaptic activity at an active synaptic contact. Contacts appear spontaneously at a low rate and disappear if their strength approaches zero. Many presynaptic neurons compete to make strong synaptic connections onto a postsynaptic neuron, whereas the synaptic contacts of a given presynaptic neuron co-operate via postsynaptic firing. We find that co-operation of multiple synaptic contacts is crucial for stable, long-term synaptic memories. In simulations of a simplified network model of barrel cortex, our plasticity rule reproduces whisker-trimming-induced rewiring of thalamocortical and recurrent synaptic connectivity on realistic time scales.


Subjects
Action Potentials/physiology; Models, Neurological; Neocortex/cytology; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology; Animals; Dendrites/physiology; Nerve Net/physiology; Neurons/ultrastructure; Nonlinear Dynamics; Rats; Time Factors
14.
Neural Comput ; 30(1): 34-83, 2018 01.
Article in English | MEDLINE | ID: mdl-29064784

ABSTRACT

Surprise describes a range of phenomena from unexpected events to behavioral responses. We propose a novel measure of surprise and use it for surprise-driven learning. Our surprise measure takes into account data likelihood as well as the degree of commitment to a belief via the entropy of the belief distribution. We find that surprise-minimizing learning dynamically adjusts the balance between new and old information without the need of knowledge about the temporal statistics of the environment. We apply our framework to a dynamic decision-making task and a maze exploration task. Our surprise-minimizing framework is suitable for learning in complex environments, even if the environment undergoes gradual or sudden changes, and it could eventually provide a framework to study the behavior of humans and animals as they encounter surprising events.
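The qualitative idea above, that surprise depends on both the data likelihood and the degree of commitment (entropy) of the belief, can be illustrated with a two-hypothesis toy. The formula below is an invented illustration, NOT the paper's measure; it only shows that the same unlikely datum is more surprising to a committed (low-entropy) agent:

```python
import math

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def commitment_weighted_surprise(lik, belief):
    # Illustrative only: negative log predictive probability,
    # scaled up when the belief is committed (far from maximum entropy).
    p_data = sum(l * b for l, b in zip(lik, belief))
    commitment = math.log(len(belief)) - entropy(belief)
    return -math.log(p_data) * (1.0 + commitment)

lik = [0.9, 0.1]          # P(observation | hypothesis), assumed
confident = [0.05, 0.95]  # committed to the "wrong" hypothesis
uncertain = [0.5, 0.5]    # maximum-entropy belief
print(commitment_weighted_surprise(lik, confident) >
      commitment_weighted_surprise(lik, uncertain))  # True
```

The paper's actual measure is defined via the belief distribution itself; this sketch only reproduces the qualitative dependence on likelihood and commitment described in the abstract.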


Subjects
Adaptation, Psychological/physiology; Algorithms; Decision Making; Learning/physiology; Neurons/physiology; Environment; Female; Humans; Male; Models, Neurological
15.
PLoS Comput Biol ; 13(4): e1005507, 2017 04.
Article in English | MEDLINE | ID: mdl-28422957

ABSTRACT

Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.


Subjects
Action Potentials/physiology; Models, Neurological; Neurons/physiology; Algorithms; Computational Biology; Neural Networks, Computer; Visual Cortex
16.
Neural Comput ; 29(2): 458-484, 2017 02.
Article in English | MEDLINE | ID: mdl-27870611

ABSTRACT

We show that Hopfield neural networks with synchronous dynamics and asymmetric weights admit stable orbits that form sequences of maximal length. For N units, these sequences have length 2^N; that is, they cover the full state space. We present a mathematical proof that maximal-length orbits exist for all N, and we provide a method to construct both the sequence and the weight matrix that allow its production. The orbit is relatively robust to dynamical noise, and perturbations of the optimal weights reveal other periodic orbits that are not maximal but typically still very long. We discuss how the resulting dynamics on slow time-scales can be used to generate desired output sequences.
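Synchronous Hopfield dynamics with asymmetric weights can be simulated and their orbit period measured in a few lines. The weight matrix below is a toy cyclic shift chosen so the period is easy to verify by hand; it is not the maximal-length construction of the paper:

```python
import numpy as np

def orbit_length(W, x0, max_steps=100):
    # Iterate synchronous Hopfield dynamics x <- sign(W x) from x0
    # and return the period of the orbit that is eventually reached.
    seen, x = {}, x0.copy()
    for t in range(max_steps):
        key = tuple(x)
        if key in seen:
            return t - seen[key]
        seen[key] = t
        x = np.sign(W @ x).astype(int)
    return None

# Asymmetric toy weights implementing a cyclic shift of three units.
W = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
print(orbit_length(W, np.array([1, 1, -1])))  # 3
```

Note the asymmetry W != W^T: with symmetric weights and synchronous updates, orbits of period greater than 2 are impossible, which is why asymmetric weights are essential to the long sequences of the paper.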


Subjects
Neural Networks, Computer; Animals; Brain/physiology; Humans; Learning/physiology; Models, Neurological; Neural Pathways/physiology; Neurons/physiology
17.
PLoS Comput Biol ; 12(9): e1005070, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27690349

ABSTRACT

The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities, such as auditory models or V2 development, leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
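The common principle has a compact form: the weight change is proportional to f(w·x) x for some nonlinearity f, typically with a normalization that keeps the weight vector bounded. The sketch below is a mechanical illustration of this update on toy data driven along one axis (it does not reproduce the natural-image results; initial weight and data are assumed):

```python
import numpy as np

def nonlinear_hebb(X, f, eta=0.01, epochs=20):
    # Generic nonlinear Hebbian rule: dw is proportional to f(w.x) x,
    # followed by weight normalization. Different choices of f recover
    # different classical models (sparse coding, BCM, triplet STDP, ...).
    w = np.array([0.6, 0.8])  # fixed unit-norm initial weight (assumed)
    for _ in range(epochs):
        for x in X:
            w = w + eta * f(w @ x) * x
            w = w / np.linalg.norm(w)
    return w

# Toy input stream driving only the first axis: the cubic Hebbian
# nonlinearity aligns the weight with that axis.
X = np.array([[2.0, 0.0], [-2.0, 0.0]] * 10)
w = nonlinear_hebb(X, f=lambda y: y ** 3)
print(abs(w[0]) > abs(w[1]))  # True
```

Swapping in other odd nonlinearities for f changes the update only modestly here, mirroring the abstract's claim that receptive field shapes are robust to the choice of nonlinearity.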

18.
PLoS Comput Biol ; 12(2): e1004761, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26907675

ABSTRACT

The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
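The key ingredient described above, a firing threshold with its own dynamics, can be demonstrated in a toy leaky integrate-and-fire model. This is not the paper's fitted Generalized Integrate-and-Fire model; parameter values and the simple Euler scheme are illustrative assumptions:

```python
def lif_with_moving_threshold(I, dt=1.0, tau_m=20.0, tau_theta=50.0,
                              v_rest=0.0, theta0=1.0, dtheta=0.5):
    # Leaky integrate-and-fire with a spike-triggered moving threshold:
    # each spike raises the threshold by dtheta, which then decays back
    # to theta0 with time constant tau_theta.
    v, theta, spikes = v_rest, theta0, []
    for t, i_t in enumerate(I):
        v += dt * (-(v - v_rest) + i_t) / tau_m
        theta += dt * (theta0 - theta) / tau_theta
        if v >= theta:
            spikes.append(t)
            v = v_rest
            theta += dtheta
    return spikes

# Constant suprathreshold current: the rising threshold lengthens
# successive inter-spike intervals (spike-frequency adaptation).
spikes = lif_with_moving_threshold([1.5] * 400)
isis = [b - a for a, b in zip(spikes, spikes[1:])]
print(isis[-1] > isis[0])  # True
```

Because the threshold state carries a memory of recent spiking, the neuron's effective integration window adapts to the input statistics, which is the qualitative effect the paper characterizes in L5 pyramidal neurons.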


Subjects
Action Potentials/physiology; Models, Neurological; Neocortex/cytology; Pyramidal Cells/physiology; Animals; Computational Biology; Male; Mice; Mice, Inbred C57BL; Neocortex/physiology; Nonlinear Dynamics
19.
J Neurosci ; 35(3): 1319-34, 2015 Jan 21.
Article in English | MEDLINE | ID: mdl-25609644

ABSTRACT

Synaptic plasticity, a key process for memory formation, manifests itself across different time scales ranging from a few seconds for plasticity induction up to hours or even years for consolidation and memory retention. We developed a three-layered model of synaptic consolidation that accounts for data across a large range of experimental conditions. Consolidation occurs in the model through the interaction of the synaptic efficacy with a scaffolding variable by a read-write process mediated by a tagging-related variable. Plasticity-inducing stimuli modify the efficacy, but the state of tag and scaffold can only change if a write protection mechanism is overcome. Our model makes a link from depotentiation protocols in vitro to behavioral results regarding the influence of novelty on inhibitory avoidance memory in rats.


Subjects
Action Potentials/physiology; Models, Neurological; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology; Animals; Computer Simulation
20.
PLoS Comput Biol ; 11(6): e1004275, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26083597

ABSTRACT

Single-neuron models are useful not only for studying the emergent properties of neural circuits in large-scale simulations, but also for extracting and summarizing in a principled way the information contained in electrophysiological recordings. Here we demonstrate that, using a convex optimization procedure we previously introduced, a Generalized Integrate-and-Fire model can be accurately fitted with a limited amount of data. The model is capable of predicting both the spiking activity and the subthreshold dynamics of different cell types, and can be used for online characterization of neuronal properties. A protocol is proposed that, combined with emergent technologies for automatic patch-clamp recordings, permits automated, in vitro high-throughput characterization of single neurons.


Subjects
Action Potentials/physiology; Computational Biology/methods; High-Throughput Screening Assays/methods; Models, Neurological; Neurons/physiology; Animals; Brain/cytology; Computer Simulation; Electrophysiology; Male; Mice; Mice, Inbred C57BL