Results 1 - 20 of 67
1.
Proc Natl Acad Sci U S A ; 120(32): e2300558120, 2023 08 08.
Article in English | MEDLINE | ID: mdl-37523562

ABSTRACT

While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further in the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-Methyl-D-Aspartate spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that local prediction of whether representations originate either from different inputs, or from different contextual modulations of the same input, results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.
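
One way to write the branch-gating idea schematically (an illustrative sketch with generic symbols, not the paper's notation): with fixed feedforward weights w, input x, context c, an NMDA-like branch nonlinearity \varphi, and contextual branch gains g_b(c), the somatic drive and the error-modulated Hebbian update read

    y = \sum_b g_b(c)\, \varphi\Big(\sum_i w_{ib}\, x_i\Big), \qquad \Delta g_b \propto e \cdot \varphi\Big(\sum_i w_{ib}\, x_i\Big),

so that learning acts only on the contextual modulation of each branch while the feedforward weights stay context-independent, as described in the abstract.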


Subject(s)
Models, Neurological; N-Methylaspartate; Learning/physiology; Neurons/physiology; Perception
2.
PLoS Comput Biol ; 20(6): e1012047, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38865345

ABSTRACT

A fundamental function of cortical circuits is the integration of information from different sources to form a reliable basis for behavior. While animals behave as if they optimally integrate information according to Bayesian probability theory, the implementation of the required computations in the biological substrate remains unclear. We propose a novel, Bayesian view on the dynamics of conductance-based neurons and synapses which suggests that they are naturally equipped to optimally perform information integration. In our approach apical dendrites represent prior expectations over somatic potentials, while basal dendrites represent likelihoods of somatic potentials. These are parametrized by local quantities, the effective reversal potentials and membrane conductances. We formally demonstrate that under these assumptions the somatic compartment naturally computes the corresponding posterior. We derive a gradient-based plasticity rule, allowing neurons to learn desired target distributions and weight synaptic inputs by their relative reliabilities. Our theory explains various experimental findings on the system and single-cell level related to multi-sensory integration, which we illustrate with simulations. Furthermore, we make experimentally testable predictions on Bayesian dendritic integration and synaptic plasticity.
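
The conductances-as-precisions reading can be made concrete in a minimal two-compartment sketch (generic symbols, not taken from the paper): if an apical pathway pulls the soma toward reversal potential E_a with total conductance g_a and a basal pathway toward E_b with conductance g_b, the steady-state somatic potential is

    u^{*} = \frac{g_a E_a + g_b E_b}{g_a + g_b},

which is exactly the posterior mean obtained by multiplying a Gaussian prior \mathcal{N}(E_a, g_a^{-1}) with a Gaussian likelihood \mathcal{N}(E_b, g_b^{-1}); the effective conductances play the role of precisions, so the more reliable pathway dominates the somatic estimate.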


Subject(s)
Bayes Theorem; Dendrites; Models, Neurological; Neuronal Plasticity; Synapses; Dendrites/physiology; Animals; Neuronal Plasticity/physiology; Synapses/physiology; Computer Simulation; Cues; Computational Biology; Neurons/physiology; Action Potentials/physiology
3.
J Sleep Res ; 32(4): e13846, 2023 08.
Article in English | MEDLINE | ID: mdl-36806335

ABSTRACT

Slow-wave sleep (SWS) is a fundamental physiological process, and its modulation is of interest for basic science and clinical applications. However, automatised protocols for the suppression of SWS are lacking. We describe the development of a novel protocol for the automated detection (based on the whole head topography of frontal slow waves) and suppression of SWS (through closed-loop modulated randomised pulsed noise), and assessed the feasibility, efficacy and functional relevance compared to sham stimulation in 15 healthy young adults in a repeated-measures sleep laboratory study. Auditory compared to sham stimulation resulted in a highly significant reduction of SWS by 30% without affecting total sleep time. The reduction of SWS was associated with an increase in lighter non-rapid eye movement sleep and a shift of slow-wave activity towards the end of the night, indicative of a homeostatic response and functional relevance. Still, cumulative slow-wave activity across the night was significantly reduced by 23%. Undisturbed sleep led to an evening-to-morning reduction of wake electroencephalographic theta activity, thought to reflect synaptic downscaling during SWS, while suppression of SWS inhibited this dissipation. We provide evidence for the feasibility, efficacy, and functional relevance of a novel fully automated protocol for SWS suppression based on auditory closed-loop stimulation. Future work is needed to further test for functional relevance and potential clinical applications.
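
A minimal sketch of the closed-loop idea (illustrative only; the sampling rate, filter band, and threshold below are assumptions, not the published protocol): causally band-pass the frontal EEG in the slow-wave range, detect large negative deflections, and queue a short noise burst on each detection.

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 250                                        # sampling rate in Hz (assumed)
    sos = butter(4, [0.5, 4.0], btype="bandpass", fs=fs, output="sos")

    def detect_slow_waves(eeg_uv, threshold_uv=-75.0):
        """Indices where the causally filtered EEG crosses a negative amplitude threshold."""
        sw = sosfilt(sos, eeg_uv)
        crossings = np.flatnonzero((sw[1:] < threshold_uv) & (sw[:-1] >= threshold_uv))
        return crossings + 1

    def noise_burst(duration_s=0.05):
        """Stand-in for the modulated randomised pulsed noise played on each detection."""
        return np.random.randn(int(duration_s * fs))

    eeg = 50 * np.random.randn(fs * 30)             # 30 s of synthetic 'EEG' in microvolts
    stimuli = [noise_burst() for _ in detect_slow_waves(eeg)]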


Subject(s)
Sleep, Slow-Wave; Young Adult; Humans; Sleep, Slow-Wave/physiology; Feasibility Studies; Sleep/physiology; Polysomnography; Electroencephalography/methods; Acoustic Stimulation/methods
4.
PLoS Comput Biol ; 18(3): e1009753, 2022 03.
Article in English | MEDLINE | ID: mdl-35324886

ABSTRACT

Being permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately. Often, this requires visiting multiple interpretations of the available information or multiple solutions to an encountered problem. This gives rise to the so-called mixing problem: since all of these "valid" states represent powerful attractors, but between themselves can be very dissimilar, switching between such states can be difficult. We propose that cortical oscillations can be effectively used to overcome this challenge. By acting as an effective temperature, background spiking activity modulates exploration. Rhythmic changes induced by cortical oscillations can then be interpreted as a form of simulated tempering. We provide a rigorous mathematical discussion of this link and study some of its phenomenological implications in computer simulations. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination, and place cell flickering.
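
The tempering analogy can be illustrated with a toy sampler (a sketch under arbitrary assumptions about network size, weights, and oscillation parameters): Gibbs sampling of a small binary network with two competing cell assemblies, where an oscillating effective temperature periodically flattens the distribution and lets the state escape one attractor and visit the other.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    W = np.zeros((n, n))
    W[:4, :4] = 1.0                      # assembly A: mutually excitatory units
    W[4:, 4:] = 1.0                      # assembly B: mutually excitatory units
    W[:4, 4:] = -0.5                     # the two assemblies inhibit each other
    W[4:, :4] = -0.5
    np.fill_diagonal(W, 0.0)

    def gibbs_step(s, T):
        for i in rng.permutation(n):
            h = W[i] @ s
            p = 1.0 / (1.0 + np.exp(-h / T))   # temperature scales the effective noise
            s[i] = float(rng.random() < p)
        return s

    s = rng.integers(0, 2, n).astype(float)
    trajectory = []
    for t in range(2000):
        T = 1.0 + 0.8 * np.sin(2 * np.pi * t / 200.0)   # oscillating effective temperature
        s = gibbs_step(s, T)
        trajectory.append(s.copy())
    # switches between the two assemblies tend to cluster in the high-temperature phases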


Subject(s)
Models, Neurological; Neurons; Action Potentials; Brain; Computer Simulation; Neural Networks, Computer
5.
J Neurosci ; 40(46): 8799-8815, 2020 11 11.
Article in English | MEDLINE | ID: mdl-33046549

ABSTRACT

Signal propagation in the dendrites of many neurons, including cortical pyramidal neurons in sensory cortex, is characterized by strong attenuation toward the soma. In contrast, using dual whole-cell recordings from the apical dendrite and soma of layer 5 (L5) pyramidal neurons in the anterior cingulate cortex (ACC) of adult male mice we found good coupling, particularly of slow subthreshold potentials like NMDA spikes or trains of EPSPs from dendrite to soma. Only the fastest EPSPs in the ACC were reduced to a similar degree as in primary somatosensory cortex, revealing differential low-pass filtering capabilities. Furthermore, L5 pyramidal neurons in the ACC did not exhibit dendritic Ca2+ spikes as prominently found in the apical dendrite of S1 (somatosensory cortex) pyramidal neurons. Fitting the experimental data to a NEURON model revealed that the specific distribution of I_leak, I_ir, I_m, and I_h was sufficient to explain the electrotonic dendritic structure causing a leaky distal dendritic compartment with correspondingly low input resistance and a compact perisomatic region, resulting in a decoupling of distal tuft branches from each other while at the same time efficiently connecting them to the soma. Our results give a biophysically plausible explanation of how a class of prefrontal cortical pyramidal neurons achieve efficient integration of subthreshold distal synaptic inputs compared with the same cell type in sensory cortices. SIGNIFICANCE STATEMENT: Understanding cortical computation requires the understanding of its fundamental computational subunits. Layer 5 pyramidal neurons are the main output neurons of the cortex, integrating synaptic inputs across different cortical layers. Their elaborate dendritic tree receives, propagates, and transforms synaptic inputs into action potential output. We found good coupling of slow subthreshold potentials like NMDA spikes or trains of EPSPs from the distal apical dendrite to the soma in pyramidal neurons in the ACC, which was significantly better compared with S1. This suggests that frontal pyramidal neurons use a different integration scheme compared with the same cell type in somatosensory cortex, which has important implications for our understanding of information processing across different parts of the neocortex.


Subject(s)
Dendrites/physiology; Gyrus Cinguli/physiology; Pyramidal Cells/physiology; Somatosensory Cortex/physiology; Action Potentials/physiology; Animals; Electrophysiological Phenomena; Excitatory Postsynaptic Potentials; In Vitro Techniques; Male; Mice; Mice, Inbred C57BL; Optogenetics; Receptors, N-Methyl-D-Aspartate/physiology
6.
PLoS Comput Biol ; 12(2): e1004638, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26841235

ABSTRACT

In the last decade dendrites of cortical neurons have been shown to nonlinearly combine synaptic inputs by evoking local dendritic spikes. It has been suggested that these nonlinearities raise the computational power of a single neuron, making it comparable to a 2-layer network of point neurons. But how these nonlinearities can be incorporated into the synaptic plasticity to optimally support learning remains unclear. We present a theoretically derived synaptic plasticity rule for supervised and reinforcement learning that depends on the timing of the presynaptic, the dendritic and the postsynaptic spikes. For supervised learning, the rule can be seen as a biological version of the classical error-backpropagation algorithm applied to the dendritic case. When modulated by a delayed reward signal, the same plasticity is shown to maximize the expected reward in reinforcement learning for various coding scenarios. Our framework makes specific experimental predictions and highlights the unique advantage of active dendrites for implementing powerful synaptic plasticity rules that have access to downstream information via backpropagation of action potentials.
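
In a rate-based caricature (generic notation; not the spike-based rule derived in the paper), the dendritic version of error backpropagation takes a familiar three-factor form: for a somatic output y = \sigma\big(\sum_b a_b\, \varphi(v_b)\big) with branch potentials v_b = \sum_i w_{ib} x_i and a squared error E = \tfrac{1}{2}(y^{*} - y)^2,

    \Delta w_{ib} \propto -\frac{\partial E}{\partial w_{ib}} = (y^{*} - y)\, \sigma'\!\Big(\sum_b a_b\, \varphi(v_b)\Big)\, a_b\, \varphi'(v_b)\, x_i,

i.e. a presynaptic factor x_i, a local dendritic factor \varphi'(v_b), and a somatic error factor, mirroring the dependence on presynaptic, dendritic, and postsynaptic events described in the abstract.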


Subject(s)
Dendrites/physiology; Models, Neurological; Neuronal Plasticity/physiology; Action Potentials/physiology; Algorithms; Computational Biology
7.
PLoS Comput Biol ; 12(6): e1005003, 2016 06.
Article in English | MEDLINE | ID: mdl-27341100

ABSTRACT

Animals learn to make predictions, such as associating the sound of a bell with upcoming feeding or predicting a movement that a motor command is eliciting. How predictions are realized on the neuronal level and what plasticity rule underlies their learning is not well understood. Here we propose a biologically plausible synaptic plasticity rule to learn predictions on a single neuron level on a timescale of seconds. The learning rule allows a spiking two-compartment neuron to match its current firing rate to its own expected future discounted firing rate. For instance, if an originally neutral event is repeatedly followed by an event that elevates the firing rate of a neuron, the originally neutral event will eventually also elevate the neuron's firing rate. The plasticity rule is a form of spike timing dependent plasticity in which a presynaptic spike followed by a postsynaptic spike leads to potentiation. Even if the plasticity window has a width of 20 milliseconds, associations on the time scale of seconds can be learned. We illustrate prospective coding with three examples: learning to predict a time varying input, learning to predict the next stimulus in a delayed paired-associate task and learning with a recurrent network to reproduce a temporally compressed version of a sequence. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward and learning is closely related to the temporal difference learning algorithm TD(λ).
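
The relation to TD(lambda) mentioned at the end can be illustrated with a standard discrete-time sketch (textbook notation and arbitrary parameters, not the paper's continuous-time neuron model): a prediction is adjusted toward the discounted future signal using an eligibility trace.

    import numpy as np

    gamma, lam, eta = 0.95, 0.9, 0.1
    T = 60                                  # time steps per trial
    w = np.zeros(T)                         # one weight per time step (tapped delay line)

    def features(t):
        x = np.zeros(T)
        x[t] = 1.0                          # "complete serial compound" representation
        return x

    for trial in range(300):
        e = np.zeros(T)                     # eligibility trace
        for t in range(T - 1):
            r = 1.0 if t == 49 else 0.0     # the event to be predicted arrives at t = 49
            x, x_next = features(t), features(t + 1)
            delta = r + gamma * w @ x_next - w @ x   # temporal-difference error
            e = gamma * lam * e + x                  # accumulating trace
            w += eta * delta * e
    # w[t] now approximates gamma**(49 - t), the discounted proximity of the predicted event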


Subject(s)
Action Potentials/physiology; Computational Biology/methods; Models, Neurological; Neurons/physiology; Animals; Dendrites/physiology; Macaca; Neuronal Plasticity/physiology
8.
J Neurosci ; 34(17): 5754-64, 2014 Apr 23.
Article in English | MEDLINE | ID: mdl-24760836

ABSTRACT

Neuropathic pain caused by peripheral nerve injury is a debilitating neurological condition of high clinical relevance. On the cellular level, the elevated pain sensitivity is induced by plasticity of neuronal function along the pain pathway. Changes in cortical areas involved in pain processing contribute to the development of neuropathic pain. Yet, it remains elusive which plasticity mechanisms occur in cortical circuits. We investigated the properties of neural networks in the anterior cingulate cortex (ACC), a brain region mediating affective responses to noxious stimuli. We performed multiple whole-cell recordings from neurons in layer 5 (L5) of the ACC of adult mice after chronic constriction injury of the sciatic nerve of the left hindpaw and observed a striking loss of connections between excitatory and inhibitory neurons in both directions. In contrast, no significant changes in synaptic efficacy in the remaining connected pairs were found. These changes were reflected on the network level by a decrease in the mEPSC and mIPSC frequency. Additionally, nerve injury resulted in a potentiation of the intrinsic excitability of pyramidal neurons, whereas the cellular properties of interneurons were unchanged. Our set of experimental parameters allowed constructing a neuronal network model of L5 in the ACC, revealing that the modification of inhibitory connectivity had the most profound effect on increased network activity. Thus, our combined experimental and modeling approach suggests that cortical disinhibition is a fundamental pathological modification associated with peripheral nerve damage. These changes at the cortical network level might therefore contribute to the neuropathic pain condition.


Subject(s)
Gyrus Cinguli/physiopathology; Neural Inhibition/physiology; Neuralgia/physiopathology; Peripheral Nerve Injuries/physiopathology; Sciatic Nerve/injuries; Animals; Disease Models, Animal; Male; Mice; Mice, Inbred C57BL; Neuralgia/etiology; Neurons/physiology; Pain Threshold/physiology; Peripheral Nerve Injuries/complications; Sciatic Nerve/physiopathology; Synaptic Transmission/physiology
9.
PLoS Comput Biol ; 10(6): e1003640, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24901935

ABSTRACT

Recent experiments revealed that the fruit fly Drosophila melanogaster has a dedicated mechanism for forgetting: blocking the G-protein Rac leads to slower and activating Rac to faster forgetting. This active form of forgetting lacks a satisfactory functional explanation. We investigated optimal decision making for an agent adapting to a stochastic environment where a stimulus may switch between being indicative of reward or punishment. Like Drosophila, an optimal agent shows forgetting with a rate that is linked to the time scale of changes in the environment. Moreover, to reduce the odds of missing future reward, an optimal agent may trade the risk of immediate pain for information gain and thus forget faster after aversive conditioning. A simple neuronal network reproduces these features. Our theory shows that forgetting in Drosophila appears as an optimal adaptive behavior in a changing environment. This is in line with the view that forgetting is adaptive rather than a consequence of limitations of the memory system.
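
The link between environmental volatility and forgetting rate can be sketched with a minimal Bayesian observer (illustrative numbers and symbols): the agent tracks the probability q that a stimulus currently predicts reward, assuming the contingency may switch with hazard rate h per trial; the larger h, the faster old evidence is discounted, i.e. the faster the agent "forgets".

    import numpy as np

    def update_belief(q, outcome, h, p_rew_if_good=0.8, p_rew_if_bad=0.2):
        """One trial of Bayesian belief updating in a switching two-state environment.
        q: prior probability that the stimulus currently predicts reward.
        outcome: 1 if reward was observed on this trial, 0 otherwise.
        h: hazard rate, the probability that the contingency switched since the last trial."""
        q = (1.0 - h) * q + h * (1.0 - q)        # diffuse the belief: the state may have flipped
        like_good = p_rew_if_good if outcome else 1.0 - p_rew_if_good
        like_bad = p_rew_if_bad if outcome else 1.0 - p_rew_if_bad
        return like_good * q / (like_good * q + like_bad * (1.0 - q))

    q = 0.5
    for outcome in [1, 1, 1, 0, 0, 0, 0]:        # the contingency appears to reverse mid-sequence
        q = update_belief(q, outcome, h=0.1)
    # with a larger hazard rate the belief tracks the reversal faster, i.e. forgetting is faster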


Subject(s)
Drosophila melanogaster/physiology; Memory/physiology; Adaptation, Physiological; Adaptation, Psychological; Animals; Behavior, Animal/physiology; Computational Biology; Conditioning, Psychological; Decision Making/physiology; Environment; Learning/physiology; Models, Biological; Models, Psychological; Odorants; Reward; Stochastic Processes
10.
Nature ; 457(7233): 1137-41, 2009 Feb 26.
Article in English | MEDLINE | ID: mdl-19151696

ABSTRACT

The computational power of single neurons is greatly enhanced by active dendritic conductances that have a large influence on their spike activity. In cortical output neurons such as the large pyramidal cells of layer 5 (L5), activation of apical dendritic calcium channels leads to plateau potentials that increase the gain of the input/output function and switch the cell to burst-firing mode. The apical dendrites are innervated by local excitatory and inhibitory inputs as well as thalamic and corticocortical projections, which makes it a formidable task to predict how these inputs influence active dendritic properties in vivo. Here we investigate activity in populations of L5 pyramidal dendrites of the somatosensory cortex in awake and anaesthetized rats following sensory stimulation using a new fibre-optic method for recording dendritic calcium changes. We show that the strength of sensory stimulation is encoded in the combined dendritic calcium response of a local population of L5 pyramidal cells in a graded manner. The slope of the stimulus-response function was under the control of a particular subset of inhibitory neurons activated by synaptic inputs predominantly in L5. Recordings from single apical tuft dendrites in vitro showed that activity in L5 pyramidal neurons disynaptically coupled via interneurons directly blocks the initiation of dendritic calcium spikes in neighbouring pyramidal neurons. The results constitute a functional description of a cortical microcircuit in awake animals that relies on the active properties of L5 pyramidal dendrites and their very high sensitivity to inhibition. The microcircuit is organized so that local populations of apical dendrites can adaptively encode bottom-up sensory stimuli linearly across their full dynamic range.


Subject(s)
Dendrites/physiology; Interneurons/physiology; Somatosensory Cortex/cytology; Somatosensory Cortex/physiology; Anesthesia; Animals; Calcium/metabolism; Electric Stimulation; Excitatory Postsynaptic Potentials/physiology; Female; Models, Neurological; Rats; Rats, Wistar; Wakefulness/physiology
11.
J Neurosci ; 33(23): 9565-75, 2013 Jun 05.
Article in English | MEDLINE | ID: mdl-23739954

ABSTRACT

Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatiotemporal activity patterns in a robust way. Here we consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons. We derive a generic learning rule that is matched to the neural dynamics by minimizing an upper bound on the Kullback-Leibler divergence from the target distribution to the model distribution. The derived learning rule is consistent with spike-timing dependent plasticity in that a presynaptic spike preceding a postsynaptic spike elicits potentiation while otherwise depression emerges. Furthermore, the learning rule for synapses that target visible neurons can be matched to the recently proposed voltage-triplet rule. The learning rule for synapses that target hidden neurons is modulated by a global factor, which shares properties with astrocytes and gives rise to testable predictions.
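
For orientation, likelihood-gradient derivations for stochastically spiking neurons typically yield rules of the following generic form (a sketch, not the paper's exact rule, which additionally involves an upper bound on the KL divergence and a global factor for hidden neurons): with instantaneous firing intensity \rho(u_j(t)), presynaptic trace x_i(t), and target spike train S_j(t),

    \Delta w_{ij} \propto \int \big(S_j(t) - \rho(u_j(t))\big)\, \frac{\rho'(u_j(t))}{\rho(u_j(t))}\, x_i(t)\, dt,

so that spikes in excess of the predicted rate potentiate recently active synapses while missing spikes depress them, consistent with the STDP-like behavior described in the abstract.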


Subject(s)
Action Potentials/physiology; Mental Recall/physiology; Neural Networks, Computer; Learning/physiology; Models, Neurological; Synapses/physiology
12.
Neurosci Biobehav Rev ; 157: 105508, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38097096

ABSTRACT

Semantic representations in higher sensory cortices form the basis for robust, yet flexible behavior. These representations are acquired over the course of development in an unsupervised fashion and continuously maintained over an organism's lifespan. Predictive processing theories propose that these representations emerge from predicting or reconstructing sensory inputs. However, brains are known to generate virtual experiences, such as during imagination and dreaming, that go beyond previously experienced inputs. Here, we suggest that virtual experiences may be just as relevant as actual sensory inputs in shaping cortical representations. In particular, we discuss two complementary learning principles that organize representations through the generation of virtual experiences. First, "adversarial dreaming" proposes that creative dreams support a cortical implementation of adversarial learning in which feedback and feedforward pathways engage in a productive game of trying to fool each other. Second, "contrastive dreaming" proposes that the invariance of neuronal representations to irrelevant factors of variation is acquired by trying to map similar virtual experiences together via a contrastive learning process. These principles are compatible with known cortical structure and dynamics and the phenomenology of sleep, thus providing promising directions to explain cortical learning beyond the classical predictive processing paradigm.


Subject(s)
Dreams; Imagination; Humans; Dreams/physiology; Imagination/physiology; Sleep; Brain; Sensation
13.
Neuron ; 112(10): 1531-1552, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38447578

ABSTRACT

How is conscious experience related to material brain processes? A variety of theories aiming to answer this age-old question have emerged from the recent surge in consciousness research, and some are now hotly debated. Although most researchers have so far focused on the development and validation of their preferred theory in relative isolation, this article, written by a group of scientists representing different theories, takes an alternative approach. Noting that various theories often try to explain different aspects or mechanistic levels of consciousness, we argue that the theories do not necessarily contradict each other. Instead, several of them may converge on fundamental neuronal mechanisms and be partly compatible and complementary, so that multiple theories can simultaneously contribute to our understanding. Here, we consider unifying, integration-oriented approaches that have so far been largely neglected, seeking to combine valuable elements from various theories.


Subject(s)
Brain; Consciousness; Consciousness/physiology; Humans; Brain/physiology; Models, Neurological; Neurons/physiology; Animals
14.
PLoS Comput Biol ; 8(9): e1002691, 2012.
Article in English | MEDLINE | ID: mdl-23028289

ABSTRACT

Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change in time due to the co-adaptation of others' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and the inspector game. It performs optimally according to a pure (deterministic) and mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.
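
The policy-gradient principle behind such population reinforcement learning can be stated compactly (standard REINFORCE form; the population- and spike-based specifics of the paper are omitted): the parameters \theta of a stochastic policy \pi_\theta are moved along the gradient of the expected reward,

    \Delta\theta \propto \mathbb{E}\big[\, R\, \nabla_\theta \log \pi_\theta(a \mid s)\,\big],

which increases the probability of actions followed by high reward and, unlike purely value-based schemes, can settle on genuinely stochastic (mixed-strategy) behavior of the kind required by the inspector game.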


Subject(s)
Action Potentials/physiology; Brain/physiology; Competitive Behavior/physiology; Decision Making/physiology; Game Theory; Models, Neurological; Nerve Net/physiology; Computer Simulation; Humans
15.
PLoS Comput Biol ; 7(6): e1002092, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21738460

ABSTRACT

In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit-assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate and fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme to that of temporal difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.
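
The cascade idea can be written schematically (illustrative notation; the time constants and read-out are assumptions): each stage low-pass filters the correlation between the previous trace and the next, slower variable, ending in the reinforcement.

    \tau_1 \dot e_1 = -e_1 + \mathrm{pre}(t)\,\mathrm{post}(t), \qquad
    \tau_2 \dot e_2 = -e_2 + e_1(t)\, d(t), \qquad
    \Delta w \propto R\, e_2,

where d(t) marks the behavioral decision and R the (possibly delayed) reinforcement; the slow traces are what bridge the temporal gap between an individual synaptic event and the eventual reward.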


Subject(s)
Learning/physiology; Models, Neurological; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology; Algorithms; Animals; Computational Biology; Computer Simulation; Decision Making/physiology; Dogs; Markov Chains; Memory; Reward; Signal Transduction; Time Factors
16.
Elife ; 11, 2022 04 25.
Article in English | MEDLINE | ID: mdl-35467527

ABSTRACT

In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean-gradient descent can easily lead to inconsistencies due to such parametrization dependence. The issues are solved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural-gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling, and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural-gradient descent.
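
The contrast between the two descent schemes can be stated in one line (standard notation, not specific to the paper): Euclidean-gradient plasticity updates weights as \Delta w = -\eta\, \nabla_w C, whereas natural-gradient plasticity rescales the gradient by the inverse Fisher information of the neuron's output distribution,

    \Delta w = -\eta\, F(w)^{-1}\, \nabla_w C, \qquad
    F(w) = \mathbb{E}\big[\nabla_w \log p_w(y \mid x)\, \nabla_w \log p_w(y \mid x)^{\top}\big],

which makes the update invariant under reparametrizations of the weights, for example under moving a functionally equivalent synapse to a different position along the dendrite.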


Subject(s)
Neurons; Synapses; Learning/physiology; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology
17.
Elife ; 11, 2022 04 06.
Article in English | MEDLINE | ID: mdl-35384841

ABSTRACT

Humans and other animals learn to extract general concepts from sensory experience without extensive teaching. This ability is thought to be facilitated by offline states like sleep where previous experiences are systemically replayed. However, the characteristic creative nature of dreams suggests that learning semantic representations may go beyond merely replaying previous experiences. We support this hypothesis by implementing a cortical architecture inspired by generative adversarial networks (GANs). Learning in our model is organized across three different global brain states mimicking wakefulness, non-rapid eye movement (NREM), and REM sleep, optimizing different, but complementary, objective functions. We train the model on standard datasets of natural images and evaluate the quality of the learned representations. Our results suggest that generating new, virtual sensory inputs via adversarial dreaming during REM sleep is essential for extracting semantic concepts, while replaying episodic memories via perturbed dreaming during NREM sleep improves the robustness of latent representations. The model provides a new computational perspective on sleep states, memory replay, and dreams, and suggests a cortical implementation of GANs.
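
For reference, the adversarial objective that the architecture is inspired by is the standard GAN minimax game (the textbook form, not the model's exact wake/NREM/REM objectives): a generator G and a discriminator D are trained with

    \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],

with, in this cortical reading, feedback pathways playing a generator-like role (producing virtual sensory activity) and feedforward pathways a discriminator-like role (judging whether activity is externally or internally generated).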


Subject(s)
Dreams; Sleep, Slow-Wave; Animals; Sleep; Sleep, REM; Wakefulness
18.
Elife ; 10, 2021 10 28.
Article in English | MEDLINE | ID: mdl-34709176

ABSTRACT

Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called 'plasticity rules', is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.


Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms. So far, most work towards this goal was guided by human intuition, that is, by the strategies scientists think are most likely to succeed. Despite the tremendous progress, this approach has two drawbacks. First, human time is limited and expensive. And second, researchers have a natural (and reasonable) tendency to incrementally improve upon existing models, rather than starting from scratch. Jordan, Schmidt et al. have now developed a new approach based on 'evolutionary algorithms'. These computer programs search for solutions to problems by mimicking the process of biological evolution, such as the concept of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models. The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner, an example of reinforcement learning. Finally, in the third 'supervised learning' scenario, the computer was told exactly how much its behavior deviated from the desired behavior. For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity to solve the new task successfully. Using evolutionary algorithms to study how computers 'learn' will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.
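
A toy version of the search loop described above (a sketch under strong simplifying assumptions: the candidate rule is restricted to a fixed polynomial form rather than evolved symbolic expressions, the task is a simple supervised mapping, and the evolutionary operator is plain mutation-and-selection):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    Y = X @ rng.standard_normal(5)                  # toy supervised task: learn this linear mapping

    def fitness(theta, n_steps=200, eta=0.05):
        """Train a linear neuron with the candidate rule dw = eta*(a*pre*err + b*pre + c*err + d)."""
        a, b, c, d = theta
        w = np.zeros(5)
        for _ in range(n_steps):
            i = rng.integers(len(X))
            pre, target = X[i], Y[i]
            err = target - w @ pre
            w += eta * (a * pre * err + b * pre + c * err + d)
        return -np.mean((Y - X @ w) ** 2)           # higher is better

    pop = [rng.standard_normal(4) for _ in range(20)]
    for generation in range(30):                    # (mu + lambda)-style evolution
        parents = sorted(pop, key=fitness, reverse=True)[:5]
        pop = parents + [p + 0.3 * rng.standard_normal(4) for p in parents for _ in range(3)]
    best = max(pop, key=fitness)
    # runs of this kind typically converge on a delta-rule-like solution dominated by the pre*err term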


Subject(s)
Nerve Net; Neuronal Plasticity; Neurons/physiology; Animals; Humans; Models, Neurological
19.
Elife ; 10, 2021 01 26.
Article in English | MEDLINE | ID: mdl-33494860

ABSTRACT

Dendrites shape information flow in neurons. Yet, there is little consensus on the level of spatial complexity at which they operate. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models at any level of complexity. We show that (back-propagating) action potentials, Ca2+ spikes, and N-methyl-D-aspartate spikes can all be reproduced with few compartments. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite. Furthermore, our methodology fits reduced models directly from experimental data, without requiring morphological reconstructions. We provide software that automatizes the simplification, eliminating a common hurdle toward including dendritic computations in network models.


Subject(s)
Action Potentials/physiology; Dendrites/physiology; Synapses/physiology
20.
Sci Rep ; 11(1): 6795, 2021 03 24.
Article in English | MEDLINE | ID: mdl-33762640

ABSTRACT

Olfactory learning and conditioning in the fruit fly is typically modelled by correlation-based associative synaptic plasticity. It was shown that the conditioning of an odor-evoked response by a shock depends on the connections from Kenyon cells (KC) to mushroom body output neurons (MBONs). Although on the behavioral level conditioning is recognized to be predictive, it remains unclear how MBONs form predictions of aversive or appetitive values (valences) of odors on the circuit level. We present behavioral experiments that are not well explained by associative plasticity between conditioned and unconditioned stimuli, and we suggest two alternative models for how predictions can be formed. In error-driven predictive plasticity, dopaminergic neurons (DANs) represent the error between the predictive odor value and the shock strength. In target-driven predictive plasticity, the DANs represent the target for the predictive MBON activity. Predictive plasticity in KC-to-MBON synapses can also explain trace-conditioning, the valence-dependent sign switch in plasticity, and the observed novelty-familiarity representation. The model offers a framework to dissect MBON circuits and interpret DAN activity during olfactory learning.
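
The error-driven variant can be caricatured with a delta-rule sketch (illustrative only; the actual circuit model distinguishes DAN and MBON roles and sign conventions more carefully): an MBON-like read-out predicts the aversive value of an odor, and a DAN-like teaching signal carries the difference between the experienced shock and that prediction.

    import numpy as np

    rng = np.random.default_rng(0)
    n_kc = 50
    w = np.zeros(n_kc)
    odor = (rng.random(n_kc) < 0.1).astype(float)   # sparse Kenyon-cell pattern for one odor

    def conditioning_trial(w, shock, eta=0.2):
        v = w @ odor                                # MBON-like read-out: predicted aversive value
        dan = shock - v                             # error-driven variant: DAN carries the prediction error
        return w + eta * dan * odor                 # weight change gated by KC activity

    for _ in range(10):
        w = conditioning_trial(w, shock=1.0)        # repeated pairing of the odor with a shock
    # the prediction w @ odor approaches the shock strength, after which further pairings change
    # the weights only little, the hallmark of predictive rather than purely associative plasticity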


Subject(s)
Avoidance Learning/physiology; Drosophila/physiology; Smell/physiology; Animals; Dopaminergic Neurons/physiology; Models, Biological; Mushroom Bodies/physiology; Neuronal Plasticity; Stochastic Processes; Synapses/physiology