Results 1 - 20 of 67
1.
Proc Natl Acad Sci U S A ; 120(32): e2300558120, 2023 08 08.
Article in English | MEDLINE | ID: mdl-37523562

ABSTRACT

While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further in the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-Methyl-D-Aspartate spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that local prediction of whether representations originate either from different inputs, or from different contextual modulations of the same input, results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.


Subjects
Models, Neurological; N-Methylaspartate; Learning/physiology; Neurons/physiology; Perception
2.
PLoS Comput Biol ; 20(6): e1012047, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38865345

ABSTRACT

A fundamental function of cortical circuits is the integration of information from different sources to form a reliable basis for behavior. While animals behave as if they optimally integrate information according to Bayesian probability theory, the implementation of the required computations in the biological substrate remains unclear. We propose a novel, Bayesian view on the dynamics of conductance-based neurons and synapses which suggests that they are naturally equipped to optimally perform information integration. In our approach apical dendrites represent prior expectations over somatic potentials, while basal dendrites represent likelihoods of somatic potentials. These are parametrized by local quantities, the effective reversal potentials and membrane conductances. We formally demonstrate that under these assumptions the somatic compartment naturally computes the corresponding posterior. We derive a gradient-based plasticity rule, allowing neurons to learn desired target distributions and weight synaptic inputs by their relative reliabilities. Our theory explains various experimental findings on the system and single-cell level related to multi-sensory integration, which we illustrate with simulations. Furthermore, we make experimentally testable predictions on Bayesian dendritic integration and synaptic plasticity.
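The central computation lends itself to a compact sketch. Assuming Gaussian "beliefs" (a simplification; the paper works with conductance-based neuron and synapse dynamics), the apical prior and basal likelihood are parametrized by effective reversal potentials (means) and conductances (precisions), and the somatic posterior mean is their conductance-weighted average, which coincides with the steady-state voltage of a passive conductance-based compartment. All names and numbers below are illustrative, not taken from the paper.

```python
import numpy as np

def somatic_posterior(E_apical, g_apical, E_basal, g_basal):
    """Combine an apical 'prior' and a basal 'likelihood', each a Gaussian
    parametrized by an effective reversal potential (mean) and a conductance
    (precision).  The posterior mean is the conductance-weighted average of
    the reversal potentials, i.e. the steady-state somatic voltage."""
    g_post = g_apical + g_basal                                   # posterior precision
    E_post = (g_apical * E_apical + g_basal * E_basal) / g_post   # posterior mean
    return E_post, g_post

# Example: a reliable (high-conductance) basal input pulls the soma away from the prior.
E, g = somatic_posterior(E_apical=-70.0, g_apical=5.0,    # prior: rest, weakly held
                         E_basal=-50.0, g_basal=20.0)     # likelihood: strong drive
print(f"posterior mean {E:.1f} mV, precision {g:.1f} nS")
```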


Subjects
Bayes Theorem; Dendrites; Models, Neurological; Neuronal Plasticity; Synapses; Dendrites/physiology; Animals; Neuronal Plasticity/physiology; Synapses/physiology; Computer Simulation; Cues; Computational Biology; Neurons/physiology; Action Potentials/physiology
3.
J Sleep Res ; 32(4): e13846, 2023 08.
Article in English | MEDLINE | ID: mdl-36806335

ABSTRACT

Slow-wave sleep (SWS) is a fundamental physiological process, and its modulation is of interest for basic science and clinical applications. However, automatised protocols for the suppression of SWS are lacking. We describe the development of a novel protocol for the automated detection (based on the whole head topography of frontal slow waves) and suppression of SWS (through closed-loop modulated randomised pulsed noise), and assessed the feasibility, efficacy and functional relevance compared to sham stimulation in 15 healthy young adults in a repeated-measure sleep laboratory study. Auditory compared to sham stimulation resulted in a highly significant reduction of SWS by 30% without affecting total sleep time. The reduction of SWS was associated with an increase in lighter non-rapid eye movement sleep and a shift of slow-wave activity towards the end of the night, indicative of a homeostatic response and functional relevance. Still, cumulative slow-wave activity across the night was significantly reduced by 23%. Undisturbed sleep led to an evening to morning reduction of wake electroencephalographic theta activity, thought to reflect synaptic downscaling during SWS, while suppression of SWS inhibited this dissipation. We provide evidence for the feasibility, efficacy, and functional relevance of a novel fully automated protocol for SWS suppression based on auditory closed-loop stimulation. Future work is needed to further test for functional relevance and potential clinical applications.


Subjects
Sleep, Slow-Wave; Young Adult; Humans; Sleep, Slow-Wave/physiology; Feasibility Studies; Sleep/physiology; Polysomnography; Electroencephalography/methods; Acoustic Stimulation/methods
4.
PLoS Comput Biol ; 18(3): e1009753, 2022 03.
Article in English | MEDLINE | ID: mdl-35324886

ABSTRACT

Being permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately. Often, this requires visiting multiple interpretations of the available information or multiple solutions to an encountered problem. This gives rise to the so-called mixing problem: since all of these "valid" states represent powerful attractors, but between themselves can be very dissimilar, switching between such states can be difficult. We propose that cortical oscillations can be effectively used to overcome this challenge. By acting as an effective temperature, background spiking activity modulates exploration. Rhythmic changes induced by cortical oscillations can then be interpreted as a form of simulated tempering. We provide a rigorous mathematical discussion of this link and study some of its phenomenological implications in computer simulations. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination, and place cell flickering.
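The link to simulated tempering can be illustrated with a toy Metropolis sampler (a drastic simplification of the spiking-network setting in the paper): an oscillating "temperature" stands in for rhythmically modulated background activity, and switching between two well-separated attractor states becomes much more frequent than at a constant temperature with the same mean. All quantities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # two well-separated attractors ("valid interpretations" of the input)
    return 2.0 * min((x + 2.0) ** 2, (x - 2.0) ** 2)

def count_switches(xs):
    modes = np.where(xs > 1, 1, np.where(xs < -1, -1, 0))
    modes = modes[modes != 0]
    return int(np.sum(np.diff(modes) != 0))

def sample(n_steps, temperature):
    """Metropolis sampling in which temperature(t) stands in for rhythmically
    modulated background activity scaling the effective exploration."""
    x, xs = -2.0, []
    for t in range(n_steps):
        prop = x + rng.normal(0.0, 0.5)
        d_energy = energy(prop) - energy(x)
        if d_energy <= 0 or rng.random() < np.exp(-d_energy / temperature(t)):
            x = prop
        xs.append(x)
    return np.array(xs)

oscillating = sample(20_000, lambda t: 2.0 + 1.5 * np.sin(2 * np.pi * t / 500))
constant = sample(20_000, lambda t: 2.0)   # same mean "temperature", no rhythm
print("mode switches with oscillating temperature:", count_switches(oscillating))
print("mode switches with constant temperature:   ", count_switches(constant))
```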


Subjects
Models, Neurological; Neurons; Action Potentials; Brain; Computer Simulation; Neural Networks, Computer
5.
J Neurosci ; 40(46): 8799-8815, 2020 11 11.
Article in English | MEDLINE | ID: mdl-33046549

ABSTRACT

Signal propagation in the dendrites of many neurons, including cortical pyramidal neurons in sensory cortex, is characterized by strong attenuation toward the soma. In contrast, using dual whole-cell recordings from the apical dendrite and soma of layer 5 (L5) pyramidal neurons in the anterior cingulate cortex (ACC) of adult male mice, we found good coupling, particularly of slow subthreshold potentials like NMDA spikes or trains of EPSPs, from dendrite to soma. Only the fastest EPSPs in the ACC were reduced to a similar degree as in primary somatosensory cortex, revealing differential low-pass filtering capabilities. Furthermore, L5 pyramidal neurons in the ACC did not exhibit dendritic Ca2+ spikes as prominently found in the apical dendrite of S1 (somatosensory cortex) pyramidal neurons. Fitting the experimental data to a NEURON model revealed that the specific distribution of I_leak, I_ir, I_m, and I_h was sufficient to explain the electrotonic dendritic structure, causing a leaky distal dendritic compartment with correspondingly low input resistance and a compact perisomatic region, resulting in a decoupling of distal tuft branches from each other while at the same time efficiently connecting them to the soma. Our results give a biophysically plausible explanation of how a class of prefrontal cortical pyramidal neurons achieves efficient integration of subthreshold distal synaptic inputs compared with the same cell type in sensory cortices. SIGNIFICANCE STATEMENT: Understanding cortical computation requires the understanding of its fundamental computational subunits. Layer 5 pyramidal neurons are the main output neurons of the cortex, integrating synaptic inputs across different cortical layers. Their elaborate dendritic tree receives, propagates, and transforms synaptic inputs into action potential output. We found good coupling of slow subthreshold potentials like NMDA spikes or trains of EPSPs from the distal apical dendrite to the soma in pyramidal neurons in the ACC, which was significantly better compared with S1. This suggests that frontal pyramidal neurons use a different integration scheme than the same cell type in somatosensory cortex, which has important implications for our understanding of information processing across different parts of the neocortex.


Subjects
Dendrites/physiology; Gyrus Cinguli/physiology; Pyramidal Cells/physiology; Somatosensory Cortex/physiology; Action Potentials/physiology; Animals; Electrophysiological Phenomena; Excitatory Postsynaptic Potentials; In Vitro Techniques; Male; Mice; Mice, Inbred C57BL; Optogenetics; Receptors, N-Methyl-D-Aspartate/physiology
6.
PLoS Comput Biol ; 12(2): e1004638, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26841235

ABSTRACT

In the last decade dendrites of cortical neurons have been shown to nonlinearly combine synaptic inputs by evoking local dendritic spikes. It has been suggested that these nonlinearities raise the computational power of a single neuron, making it comparable to a 2-layer network of point neurons. But how these nonlinearities can be incorporated into the synaptic plasticity to optimally support learning remains unclear. We present a theoretically derived synaptic plasticity rule for supervised and reinforcement learning that depends on the timing of the presynaptic, the dendritic and the postsynaptic spikes. For supervised learning, the rule can be seen as a biological version of the classical error-backpropagation algorithm applied to the dendritic case. When modulated by a delayed reward signal, the same plasticity is shown to maximize the expected reward in reinforcement learning for various coding scenarios. Our framework makes specific experimental predictions and highlights the unique advantage of active dendrites for implementing powerful synaptic plasticity rules that have access to downstream information via backpropagation of action potentials.


Subjects
Dendrites/physiology; Models, Neurological; Neuronal Plasticity/physiology; Action Potentials/physiology; Algorithms; Computational Biology
7.
PLoS Comput Biol ; 12(6): e1005003, 2016 06.
Article in English | MEDLINE | ID: mdl-27341100

ABSTRACT

Animals learn to make predictions, such as associating the sound of a bell with upcoming feeding or predicting a movement that a motor command is eliciting. How predictions are realized on the neuronal level and what plasticity rule underlies their learning is not well understood. Here we propose a biologically plausible synaptic plasticity rule to learn predictions on a single neuron level on a timescale of seconds. The learning rule allows a spiking two-compartment neuron to match its current firing rate to its own expected future discounted firing rate. For instance, if an originally neutral event is repeatedly followed by an event that elevates the firing rate of a neuron, the originally neutral event will eventually also elevate the neuron's firing rate. The plasticity rule is a form of spike timing dependent plasticity in which a presynaptic spike followed by a postsynaptic spike leads to potentiation. Even if the plasticity window has a width of 20 milliseconds, associations on the time scale of seconds can be learned. We illustrate prospective coding with three examples: learning to predict a time varying input, learning to predict the next stimulus in a delayed paired-associate task and learning with a recurrent network to reproduce a temporally compressed version of a sequence. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward and learning is closely related to the temporal difference learning algorithm TD(λ).
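A compact rate-based sketch of the prospective-coding idea (not the paper's spiking two-compartment model): a unit with a tapped-delay-line input representation learns, via a TD(λ)-style update, to report at the time of an initially neutral event the discounted future occurrence of a later, rate-elevating event. All parameters are illustrative.

```python
import numpy as np

# Toy prospective-coding sketch: a linear unit learns to output the discounted
# future sum of a delayed "US" signal from an earlier "CS" input, using a
# TD(lambda)-style update with an eligibility trace.
T, gamma, lam, eta = 20, 0.9, 0.9, 0.1
cs = np.zeros(T); cs[2] = 1.0          # initially neutral event at t = 2
us = np.zeros(T); us[8] = 1.0          # rate-elevating event at t = 8
x = np.array([np.roll(cs, k) for k in range(T)]).T   # tapped delay line of the CS

w = np.zeros(T)
for episode in range(500):
    trace = np.zeros(T)
    for t in range(T - 1):
        v, v_next = w @ x[t], w @ x[t + 1]
        delta = us[t] + gamma * v_next - v           # TD error
        trace = gamma * lam * trace + x[t]           # eligibility trace
        w += eta * delta * trace

v = x @ w
print("prediction at CS onset:", round(v[2], 3),
      "   ideal (gamma**6):", round(gamma**6, 3))
```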


Subjects
Action Potentials/physiology; Computational Biology/methods; Models, Neurological; Neurons/physiology; Animals; Dendrites/physiology; Macaca; Neuronal Plasticity/physiology
8.
J Neurosci ; 34(17): 5754-64, 2014 Apr 23.
Article in English | MEDLINE | ID: mdl-24760836

ABSTRACT

Neuropathic pain caused by peripheral nerve injury is a debilitating neurological condition of high clinical relevance. On the cellular level, the elevated pain sensitivity is induced by plasticity of neuronal function along the pain pathway. Changes in cortical areas involved in pain processing contribute to the development of neuropathic pain. Yet, it remains elusive which plasticity mechanisms occur in cortical circuits. We investigated the properties of neural networks in the anterior cingulate cortex (ACC), a brain region mediating affective responses to noxious stimuli. We performed multiple whole-cell recordings from neurons in layer 5 (L5) of the ACC of adult mice after chronic constriction injury of the sciatic nerve of the left hindpaw and observed a striking loss of connections between excitatory and inhibitory neurons in both directions. In contrast, no significant changes in synaptic efficacy in the remaining connected pairs were found. These changes were reflected on the network level by a decrease in the mEPSC and mIPSC frequency. Additionally, nerve injury resulted in a potentiation of the intrinsic excitability of pyramidal neurons, whereas the cellular properties of interneurons were unchanged. Our set of experimental parameters allowed constructing a neuronal network model of L5 in the ACC, revealing that the modification of inhibitory connectivity had the most profound effect on increased network activity. Thus, our combined experimental and modeling approach suggests that cortical disinhibition is a fundamental pathological modification associated with peripheral nerve damage. These changes at the cortical network level might therefore contribute to the neuropathic pain condition.


Subjects
Gyrus Cinguli/physiopathology; Neural Inhibition/physiology; Neuralgia/physiopathology; Peripheral Nerve Injuries/physiopathology; Sciatic Nerve/injuries; Animals; Disease Models, Animal; Male; Mice; Mice, Inbred C57BL; Neuralgia/etiology; Neurons/physiology; Pain Threshold/physiology; Peripheral Nerve Injuries/complications; Sciatic Nerve/physiopathology; Synaptic Transmission/physiology
9.
PLoS Comput Biol ; 10(6): e1003640, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24901935

ABSTRACT

Recent experiments revealed that the fruit fly Drosophila melanogaster has a dedicated mechanism for forgetting: blocking the G-protein Rac leads to slower and activating Rac to faster forgetting. This active form of forgetting lacks a satisfactory functional explanation. We investigated optimal decision making for an agent adapting to a stochastic environment where a stimulus may switch between being indicative of reward or punishment. Like Drosophila, an optimal agent shows forgetting with a rate that is linked to the time scale of changes in the environment. Moreover, to reduce the odds of missing future reward, an optimal agent may trade the risk of immediate pain for information gain and thus forget faster after aversive conditioning. A simple neuronal network reproduces these features. Our theory shows that forgetting in Drosophila appears as an optimal adaptive behavior in a changing environment. This is in line with the view that forgetting is adaptive rather than a consequence of limitations of the memory system.
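The normative argument can be condensed into a toy Bayesian change-point tracker (an illustrative simplification, not the paper's model of the fly circuit): the assumed hazard rate of environmental switches sets how strongly the belief leaks back toward ignorance between trials, so a more volatile environment implies faster forgetting. Parameter names below are not from the paper.

```python
def bayes_step(p, hazard, observation=None, lik_r=0.8, lik_p=0.2):
    """One step of a Bayesian change-point tracker for
    P(stimulus currently predicts reward).  `hazard` is the assumed per-trial
    switch probability of the environment; a larger hazard leaks the belief
    toward 0.5 faster, i.e. implements faster forgetting."""
    # predict: the hidden state may have switched since the last trial
    p = (1 - hazard) * p + hazard * (1 - p)
    if observation is None:                      # trial without feedback
        return p
    # correct: fold in the observation (1 = reward, 0 = punishment)
    l1 = lik_r if observation else 1 - lik_r     # P(obs | reward state)
    l0 = lik_p if observation else 1 - lik_p     # P(obs | punishment state)
    return l1 * p / (l1 * p + l0 * (1 - p))

for hazard in (0.02, 0.3):                       # stable vs. volatile world
    p = 0.5
    for _ in range(5):                           # appetitive conditioning trials
        p = bayes_step(p, hazard, observation=1)
    for _ in range(10):                          # retention interval, no feedback
        p = bayes_step(p, hazard)
    print(f"hazard {hazard}: belief after retention = {p:.2f}")
```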


Subjects
Drosophila melanogaster/physiology; Memory/physiology; Adaptation, Physiological; Adaptation, Psychological; Animals; Behavior, Animal/physiology; Computational Biology; Conditioning, Psychological; Decision Making/physiology; Environment; Learning/physiology; Models, Biological; Models, Psychological; Odorants; Reward; Stochastic Processes
10.
Nature ; 457(7233): 1137-41, 2009 Feb 26.
Article in English | MEDLINE | ID: mdl-19151696

ABSTRACT

The computational power of single neurons is greatly enhanced by active dendritic conductances that have a large influence on their spike activity. In cortical output neurons such as the large pyramidal cells of layer 5 (L5), activation of apical dendritic calcium channels leads to plateau potentials that increase the gain of the input/output function and switch the cell to burst-firing mode. The apical dendrites are innervated by local excitatory and inhibitory inputs as well as thalamic and corticocortical projections, which makes it a formidable task to predict how these inputs influence active dendritic properties in vivo. Here we investigate activity in populations of L5 pyramidal dendrites of the somatosensory cortex in awake and anaesthetized rats following sensory stimulation using a new fibre-optic method for recording dendritic calcium changes. We show that the strength of sensory stimulation is encoded in the combined dendritic calcium response of a local population of L5 pyramidal cells in a graded manner. The slope of the stimulus-response function was under the control of a particular subset of inhibitory neurons activated by synaptic inputs predominantly in L5. Recordings from single apical tuft dendrites in vitro showed that activity in L5 pyramidal neurons disynaptically coupled via interneurons directly blocks the initiation of dendritic calcium spikes in neighbouring pyramidal neurons. The results constitute a functional description of a cortical microcircuit in awake animals that relies on the active properties of L5 pyramidal dendrites and their very high sensitivity to inhibition. The microcircuit is organized so that local populations of apical dendrites can adaptively encode bottom-up sensory stimuli linearly across their full dynamic range.


Subjects
Dendrites/physiology; Interneurons/physiology; Somatosensory Cortex/cytology; Somatosensory Cortex/physiology; Anesthesia; Animals; Calcium/metabolism; Electric Stimulation; Excitatory Postsynaptic Potentials/physiology; Female; Models, Neurological; Rats; Rats, Wistar; Wakefulness/physiology
11.
J Neurosci ; 33(23): 9565-75, 2013 Jun 05.
Article in English | MEDLINE | ID: mdl-23739954

ABSTRACT

Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatiotemporal activity patterns in a robust way. Here we consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons. We derive a generic learning rule that is matched to the neural dynamics by minimizing an upper bound on the Kullback-Leibler divergence from the target distribution to the model distribution. The derived learning rule is consistent with spike-timing dependent plasticity in that a presynaptic spike preceding a postsynaptic spike elicits potentiation while otherwise depression emerges. Furthermore, the learning rule for synapses that target visible neurons can be matched to the recently proposed voltage-triplet rule. The learning rule for synapses that target hidden neurons is modulated by a global factor, which shares properties with astrocytes and gives rise to testable predictions.


Subjects
Action Potentials/physiology; Mental Recall/physiology; Neural Networks, Computer; Learning/physiology; Models, Neurological; Synapses/physiology
12.
Neurosci Biobehav Rev ; 157: 105508, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38097096

ABSTRACT

Semantic representations in higher sensory cortices form the basis for robust, yet flexible behavior. These representations are acquired over the course of development in an unsupervised fashion and continuously maintained over an organism's lifespan. Predictive processing theories propose that these representations emerge from predicting or reconstructing sensory inputs. However, brains are known to generate virtual experiences, such as during imagination and dreaming, that go beyond previously experienced inputs. Here, we suggest that virtual experiences may be just as relevant as actual sensory inputs in shaping cortical representations. In particular, we discuss two complementary learning principles that organize representations through the generation of virtual experiences. First, "adversarial dreaming" proposes that creative dreams support a cortical implementation of adversarial learning in which feedback and feedforward pathways engage in a productive game of trying to fool each other. Second, "contrastive dreaming" proposes that the invariance of neuronal representations to irrelevant factors of variation is acquired by trying to map similar virtual experiences together via a contrastive learning process. These principles are compatible with known cortical structure and dynamics and the phenomenology of sleep, thus providing promising directions to explain cortical learning beyond the classical predictive processing paradigm.


Subjects
Dreams; Imagination; Humans; Dreams/physiology; Imagination/physiology; Sleep; Brain; Sensation
13.
Neuron ; 112(10): 1531-1552, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38447578

ABSTRACT

How is conscious experience related to material brain processes? A variety of theories aiming to answer this age-old question have emerged from the recent surge in consciousness research, and some are now hotly debated. Although most researchers have so far focused on the development and validation of their preferred theory in relative isolation, this article, written by a group of scientists representing different theories, takes an alternative approach. Noting that various theories often try to explain different aspects or mechanistic levels of consciousness, we argue that the theories do not necessarily contradict each other. Instead, several of them may converge on fundamental neuronal mechanisms and be partly compatible and complementary, so that multiple theories can simultaneously contribute to our understanding. Here, we consider unifying, integration-oriented approaches that have so far been largely neglected, seeking to combine valuable elements from various theories.


Subjects
Brain; Consciousness; Consciousness/physiology; Humans; Brain/physiology; Models, Neurological; Neurons/physiology; Animals
14.
PLoS Comput Biol ; 8(9): e1002691, 2012.
Article in English | MEDLINE | ID: mdl-23028289

ABSTRACT

Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change over time due to the co-adaptation of others' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and the inspector game. It performs optimally according to a pure (deterministic) and mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.
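As a toy stand-in for the spiking population model (rate-based REINFORCE agents, and matching pennies instead of the inspector game, purely for brevity), two policy-gradient learners in self-play have time-averaged choice frequencies that drift toward the game's unique mixed Nash equilibrium of (0.5, 0.5). Everything below is an illustrative sketch under these assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def play(theta_a, theta_b, eta=0.05, n_rounds=100_000):
    """Two logistic REINFORCE learners play matching pennies (zero-sum);
    returns their time-averaged probabilities of playing 'heads'."""
    freq_a = freq_b = 0.0
    for _ in range(n_rounds):
        p_a = 1 / (1 + np.exp(-theta_a))              # P(agent A plays heads)
        p_b = 1 / (1 + np.exp(-theta_b))
        a = rng.random() < p_a
        b = rng.random() < p_b
        r_a = 1.0 if a == b else -1.0                 # the matcher wins on a match
        r_b = -r_a                                    # zero-sum payoff
        theta_a += eta * r_a * ((1.0 if a else 0.0) - p_a)   # REINFORCE update
        theta_b += eta * r_b * ((1.0 if b else 0.0) - p_b)
        freq_a += a / n_rounds
        freq_b += b / n_rounds
    return freq_a, freq_b

fa, fb = play(theta_a=1.0, theta_b=-0.5)   # asymmetric start
print(f"time-averaged P(heads): agent A {fa:.2f}, agent B {fb:.2f}")
```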


Subjects
Action Potentials/physiology; Brain/physiology; Competitive Behavior/physiology; Decision Making/physiology; Game Theory; Models, Neurological; Nerve Net/physiology; Computer Simulation; Humans
15.
PLoS Comput Biol ; 7(6): e1002092, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21738460

ABSTRACT

In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit-assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate and fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme to that of temporal difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.
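The trace-cascade idea can be sketched with a single rate-based decision unit (the paper uses populations of leaky integrate-and-fire neurons): a pre/decision coincidence is held in a fast synaptic trace, leaks into a slower trace, and is converted into a weight change only when the delayed reinforcement arrives. Task, sizes, and time constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Task: respond "go" exactly when cue input 0 is active; the reward arrives
# only `delay` steps after the decision and is bridged by the trace cascade.
n_in, delay, eta = 10, 5, 0.3
tau_fast, tau_slow = 2.0, 10.0
w = np.zeros(n_in + 1)                               # last weight acts as a bias

n_trials, correct_late = 4000, 0.0
for trial in range(n_trials):
    x = np.append((rng.random(n_in) < 0.5).astype(float), 1.0)
    p_go = 1 / (1 + np.exp(-(w @ x)))                # stochastic decision policy
    go = rng.random() < p_go
    fast = x * (float(go) - p_go)                    # eligibility: pre x decision
    slow = np.zeros_like(w)
    for _ in range(delay):                           # bridge the reward delay
        slow += (fast - slow) / tau_slow             # fast trace feeds slow trace
        fast -= fast / tau_fast
    reward = 1.0 if go == bool(x[0]) else -1.0       # delayed feedback
    w += eta * reward * slow                         # reward reads out the trace
    if trial >= n_trials - 500:
        correct_late += (reward > 0) / 500
print(f"accuracy over the last 500 trials: {correct_late:.2f}")
```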


Subjects
Learning/physiology; Models, Neurological; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology; Algorithms; Animals; Computational Biology; Computer Simulation; Decision Making/physiology; Dogs; Markov Chains; Memory; Reward; Signal Transduction; Time Factors
16.
Elife ; 11: 2022 04 25.
Article in English | MEDLINE | ID: mdl-35467527

ABSTRACT

In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean-gradient descent can easily lead to inconsistencies due to such parametrization dependence. The issues are solved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural-gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling, and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural-gradient descent.
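The parametrization issue and its resolution can be seen in a one-synapse toy example (a logistic unit rather than the paper's spiking neuron): describing the same synapse by a "local" weight that is attenuated by the dendrite changes the Euclidean-gradient update of the somatic efficacy, whereas the Fisher-preconditioned (natural-gradient) update is invariant, echoing dendritic democracy. Numbers and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# The same synapse is described either by its somatic efficacy w, or by a
# local weight u with w = a * u, where a is the dendritic attenuation.
x = rng.normal(size=20_000)
y = (rng.random(20_000) < 1 / (1 + np.exp(-1.5 * x))).astype(float)  # teacher w* = 1.5

def updates(a, u=0.0):
    """Return the functional (somatic) change produced by one Euclidean and
    one natural-gradient step on the local weight u, for attenuation a."""
    w = a * u
    p = 1 / (1 + np.exp(-w * x))
    grad_u = a * np.mean((y - p) * x)                # Euclidean gradient wrt u
    fisher_u = a**2 * np.mean(p * (1 - p) * x**2)    # Fisher information wrt u
    return a * grad_u, a * (grad_u / fisher_u)       # somatic changes

for a in (1.0, 0.2):
    eucl, natural = updates(a)
    print(f"attenuation {a}: Euclidean somatic change {eucl:.3f}, "
          f"natural-gradient somatic change {natural:.3f}")
```

The Euclidean update shrinks with the square of the attenuation, while the natural-gradient update of the somatic efficacy is identical for both parametrizations.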


Subjects
Neurons; Synapses; Learning/physiology; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology
17.
Elife ; 11: 2022 04 06.
Article in English | MEDLINE | ID: mdl-35384841

ABSTRACT

Humans and other animals learn to extract general concepts from sensory experience without extensive teaching. This ability is thought to be facilitated by offline states like sleep where previous experiences are systemically replayed. However, the characteristic creative nature of dreams suggests that learning semantic representations may go beyond merely replaying previous experiences. We support this hypothesis by implementing a cortical architecture inspired by generative adversarial networks (GANs). Learning in our model is organized across three different global brain states mimicking wakefulness, non-rapid eye movement (NREM), and REM sleep, optimizing different, but complementary, objective functions. We train the model on standard datasets of natural images and evaluate the quality of the learned representations. Our results suggest that generating new, virtual sensory inputs via adversarial dreaming during REM sleep is essential for extracting semantic concepts, while replaying episodic memories via perturbed dreaming during NREM sleep improves the robustness of latent representations. The model provides a new computational perspective on sleep states, memory replay, and dreams, and suggests a cortical implementation of GANs.


Subjects
Dreams; Sleep, Slow-Wave; Animals; Sleep; Sleep, REM; Wakefulness
18.
Elife ; 10: 2021 10 28.
Article in English | MEDLINE | ID: mdl-34709176

ABSTRACT

Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called 'plasticity rules', is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.
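The search idea can be caricatured in a few lines (the paper evolves symbolic expressions with genetic programming; here a (1+4) evolution strategy merely tunes the coefficients of a fixed rule template on a toy supervised task family). Everything below is an assumption-laden sketch, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

# Candidate plasticity rules have the form
#   dw = 0.05 * (a * pre * err + b * pre * post + c * pre + d)
# and are scored by how well a neuron using them imitates a random teacher.
def fitness(coeffs, n_tasks=5):
    a, b, c, d = coeffs
    score = 0.0
    for _ in range(n_tasks):
        target = rng.normal(size=3)                  # task instance: teacher weights
        w = np.zeros(3)
        for _ in range(300):
            pre = rng.normal(size=3)
            post, desired = w @ pre, target @ pre
            err = desired - post                     # supervised error signal
            w += 0.05 * (a * pre * err + b * pre * post + c * pre + d)
        score -= np.sum((w - target) ** 2)
    return score / n_tasks

parent = rng.normal(size=4) * 0.1
for generation in range(40):                         # (1+4) evolution strategy
    offspring = [parent + 0.2 * rng.normal(size=4) for _ in range(4)]
    parent = max([parent] + offspring, key=fitness)
print("evolved coefficients (a, b, c, d):", np.round(parent, 2))
# The pre*err (delta-rule-like) term should come to dominate the other terms.
```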


Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms. So far, most work towards this goal was guided by human intuition, that is, by the strategies scientists think are most likely to succeed. Despite the tremendous progress, this approach has two drawbacks. First, human time is limited and expensive. And second, researchers have a natural, and reasonable, tendency to incrementally improve upon existing models, rather than starting from scratch. Jordan, Schmidt et al. have now developed a new approach based on 'evolutionary algorithms'. These computer programs search for solutions to problems by mimicking the process of biological evolution, such as the concept of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models. The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner, an example of reinforcement learning. Finally, in the third 'supervised learning' scenario, the computer was told exactly how much its behavior deviated from the desired behavior. For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity to solve the new task successfully. Using evolutionary algorithms to study how computers 'learn' will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.


Subjects
Nerve Net; Neuronal Plasticity; Neurons/physiology; Animals; Humans; Models, Neurological
19.
Elife ; 10: 2021 01 26.
Article in English | MEDLINE | ID: mdl-33494860

ABSTRACT

Dendrites shape information flow in neurons. Yet, there is little consensus on the level of spatial complexity at which they operate. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models at any level of complexity. We show that (back-propagating) action potentials, Ca2+ spikes, and N-methyl-D-aspartate spikes can all be reproduced with few compartments. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite. Furthermore, our methodology fits reduced models directly from experimental data, without requiring morphological reconstructions. We provide software that automatizes the simplification, eliminating a common hurdle toward including dendritic computations in network models.
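A minimal sketch of the least-squares fitting idea for the simplest possible reduction, two passive compartments (the published method and software handle arbitrary reductions, active channels, and fits from experimental data). The "measured" impedance matrix below is made up for illustration.

```python
import numpy as np

# A reduced model with leak conductances g_soma, g_dend and coupling g_c has
# the conductance matrix
#   G = [[g_soma + g_c, -g_c], [-g_c, g_dend + g_c]],
# and its steady-state impedance matrix is Z = G^{-1}.  Given an impedance
# matrix obtained on the full morphology, the reduced conductances follow
# from a linear least-squares fit of G to Z^{-1}.
Z_target = np.array([[80.0, 15.0],      # MOhm: input resistances (diagonal)
                     [15.0, 250.0]])    # and transfer resistances (off-diagonal)
G_target = np.linalg.inv(Z_target)      # uS

# Each row expresses one entry of G as a linear function of (g_soma, g_dend, g_c).
A = np.array([[1.0, 0.0,  1.0],         # G[0,0] = g_soma + g_c
              [0.0, 0.0, -1.0],         # G[0,1] = -g_c
              [0.0, 0.0, -1.0],         # G[1,0] = -g_c
              [0.0, 1.0,  1.0]])        # G[1,1] = g_dend + g_c
g_soma, g_dend, g_c = np.linalg.lstsq(A, G_target.ravel(), rcond=None)[0]
print(f"g_soma = {g_soma*1e3:.2f} nS, g_dend = {g_dend*1e3:.2f} nS, "
      f"g_c = {g_c*1e3:.2f} nS")
```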


Subjects
Action Potentials/physiology; Dendrites/physiology; Synapses/physiology
20.
Sci Rep ; 11(1): 6795, 2021 03 24.
Article in English | MEDLINE | ID: mdl-33762640

ABSTRACT

Olfactory learning and conditioning in the fruit fly is typically modelled by correlation-based associative synaptic plasticity. It was shown that the conditioning of an odor-evoked response by a shock depends on the connections from Kenyon cells (KC) to mushroom body output neurons (MBONs). Although on the behavioral level conditioning is recognized to be predictive, it remains unclear how MBONs form predictions of aversive or appetitive values (valences) of odors on the circuit level. We present behavioral experiments that are not well explained by associative plasticity between conditioned and unconditioned stimuli, and we suggest two alternative models for how predictions can be formed. In error-driven predictive plasticity, dopaminergic neurons (DANs) represent the error between the predictive odor value and the shock strength. In target-driven predictive plasticity, the DANs represent the target for the predictive MBON activity. Predictive plasticity in KC-to-MBON synapses can also explain trace-conditioning, the valence-dependent sign switch in plasticity, and the observed novelty-familiarity representation. The model offers a framework to dissect MBON circuits and interpret DAN activity during olfactory learning.
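The error-driven variant amounts to a delta rule at the KC-to-MBON synapses, with the DAN signal playing the role of the prediction error; the sketch below is a minimal rate-based illustration with made-up sizes and names, not the paper's circuit model.

```python
import numpy as np

rng = np.random.default_rng(5)

n_kc, eta = 50, 0.1
w = np.zeros(n_kc)                                   # KC -> MBON weights

odor_A = (rng.random(n_kc) < 0.1).astype(float)      # sparse KC activity patterns
odor_B = (rng.random(n_kc) < 0.1).astype(float)

def trial(w, kc, shock):
    """One conditioning trial with error-driven predictive plasticity."""
    mbon = w @ kc                                    # predicted (aversive) valence
    dan = shock - mbon                               # DAN = prediction error
    return w + eta * dan * kc                        # delta-rule weight update

for _ in range(30):
    w = trial(w, odor_A, shock=1.0)                  # odor A paired with shock
    w = trial(w, odor_B, shock=0.0)                  # odor B presented unpaired
print("predicted valence, odor A:", round(float(w @ odor_A), 2))
print("predicted valence, odor B:", round(float(w @ odor_B), 2))
```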


Subjects
Avoidance Learning/physiology; Drosophila/physiology; Smell/physiology; Animals; Dopaminergic Neurons/physiology; Models, Biological; Mushroom Bodies/physiology; Neuronal Plasticity; Stochastic Processes; Synapses/physiology