Results 1 - 20 of 68
1.
Proc Natl Acad Sci U S A ; 120(32): e2300558120, 2023 08 08.
Article in English | MEDLINE | ID: mdl-37523562

ABSTRACT

While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further in the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-Methyl-D-Aspartate spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that local prediction of whether representations originate either from different inputs, or from different contextual modulations of the same input, results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.
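
As a toy illustration of the mechanism this abstract describes, the sketch below shows how purely multiplicative, context-dependent gating of dendritic branches, with feedforward weights held fixed, can realize a linearly nonseparable mapping (XOR over input and context). The branch threshold, gate values, and wiring are illustrative assumptions, not the paper's biophysical model:

```python
# Hedged sketch (assumed toy instance): a neuron with two dendritic
# branches and *fixed* feedforward weights. Context only gates the
# branch nonlinearities multiplicatively, yet the resulting mapping
# (x, context) -> output is XOR, which no single linear readout of
# (x, context) could produce.

def branch(drive, threshold=0.5):
    """Dendritic NMDA-like nonlinearity: all-or-none plateau."""
    return 1.0 if drive > threshold else 0.0

def neuron(x, context):
    # Fixed, context-independent feedforward weights:
    drive_a = 1.0 * x          # branch A detects x = 1
    drive_b = 1.0 - 1.0 * x    # branch B detects x = 0
    # Context-dependent multiplicative gating of the branches:
    gate_a, gate_b = (1.0, 0.0) if context == 0 else (0.0, 1.0)
    soma = gate_a * branch(drive_a) + gate_b * branch(drive_b)
    return 1 if soma > 0.5 else 0

# neuron(x, c) equals x XOR c for all four input/context combinations,
# even though the feedforward weights never change across contexts.
```

The point of the sketch is only that the nonseparability is resolved by the branch nonlinearity plus modulation, not by any change in feedforward weights.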


Subject(s)
Models, Neurological , N-Methylaspartate , Learning/physiology , Neurons/physiology , Perception
2.
PLoS Comput Biol ; 20(6): e1012047, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38865345

ABSTRACT

A fundamental function of cortical circuits is the integration of information from different sources to form a reliable basis for behavior. While animals behave as if they optimally integrate information according to Bayesian probability theory, the implementation of the required computations in the biological substrate remains unclear. We propose a novel, Bayesian view on the dynamics of conductance-based neurons and synapses which suggests that they are naturally equipped to optimally perform information integration. In our approach apical dendrites represent prior expectations over somatic potentials, while basal dendrites represent likelihoods of somatic potentials. These are parametrized by local quantities, the effective reversal potentials and membrane conductances. We formally demonstrate that under these assumptions the somatic compartment naturally computes the corresponding posterior. We derive a gradient-based plasticity rule, allowing neurons to learn desired target distributions and weight synaptic inputs by their relative reliabilities. Our theory explains various experimental findings on the system and single-cell level related to multi-sensory integration, which we illustrate with simulations. Furthermore, we make experimentally testable predictions on Bayesian dendritic integration and synaptic plasticity.
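
The core computation the abstract describes can be sketched in a few lines: at steady state, a conductance-based compartment settles at the conductance-weighted average of its reversal potentials, which is exactly the posterior mean of Gaussian opinions whose precisions are the conductances. The specific numbers and variable names below are illustrative assumptions, not values from the paper:

```python
# Hedged sketch: steady-state somatic potential as a precision-weighted
# (Bayesian) combination of dendritic "opinions". Conductances play the
# role of precisions, reversal potentials the role of means.

def posterior_potential(conductances, reversal_potentials):
    """Conductance-weighted mean: the Bayesian posterior mean when each
    input is a Gaussian with mean E_i and precision g_i."""
    total_g = sum(conductances)
    return sum(g * E for g, E in zip(conductances, reversal_potentials)) / total_g

# Apical dendrite encodes the prior, basal dendrite the likelihood:
g_prior, E_prior = 2.0, -70.0    # weaker prior centered on -70 mV
g_like,  E_like  = 6.0, -50.0    # more reliable evidence centered on -50 mV

V = posterior_potential([g_prior, g_like], [E_prior, E_like])
# V = (2*-70 + 6*-50) / 8 = -55.0 mV: pulled toward the more reliable
# (higher-conductance) input, exactly as Bayes' rule prescribes.
```

The posterior precision would correspondingly be the total conductance, so a high-conductance state signals a confident estimate.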


Subject(s)
Bayes Theorem , Dendrites , Models, Neurological , Neuronal Plasticity , Synapses , Dendrites/physiology , Animals , Neuronal Plasticity/physiology , Synapses/physiology , Computer Simulation , Cues , Computational Biology , Neurons/physiology , Action Potentials/physiology
3.
J Sleep Res ; 32(4): e13846, 2023 08.
Article in English | MEDLINE | ID: mdl-36806335

ABSTRACT

Slow-wave sleep (SWS) is a fundamental physiological process, and its modulation is of interest for basic science and clinical applications. However, automated protocols for the suppression of SWS are lacking. We describe the development of a novel protocol for the automated detection (based on the whole-head topography of frontal slow waves) and suppression of SWS (through closed-loop modulated randomised pulsed noise), and assessed its feasibility, efficacy, and functional relevance compared to sham stimulation in 15 healthy young adults in a repeated-measures sleep laboratory study. Auditory compared to sham stimulation resulted in a highly significant reduction of SWS by 30% without affecting total sleep time. The reduction of SWS was associated with an increase in lighter non-rapid eye movement sleep and a shift of slow-wave activity towards the end of the night, indicative of a homeostatic response and functional relevance. Still, cumulative slow-wave activity across the night was significantly reduced by 23%. Undisturbed sleep led to an evening-to-morning reduction of wake electroencephalographic theta activity, thought to reflect synaptic downscaling during SWS, while suppression of SWS inhibited this dissipation. We provide evidence for the feasibility, efficacy, and functional relevance of a novel fully automated protocol for SWS suppression based on auditory closed-loop stimulation. Future work is needed to further test for functional relevance and potential clinical applications.


Subject(s)
Sleep, Slow-Wave , Young Adult , Humans , Sleep, Slow-Wave/physiology , Feasibility Studies , Sleep/physiology , Polysomnography , Electroencephalography/methods , Acoustic Stimulation/methods
4.
PLoS Comput Biol ; 18(3): e1009753, 2022 03.
Article in English | MEDLINE | ID: mdl-35324886

ABSTRACT

Being permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately. Often, this requires visiting multiple interpretations of the available information or multiple solutions to an encountered problem. This gives rise to the so-called mixing problem: since all of these "valid" states represent powerful attractors, but between themselves can be very dissimilar, switching between such states can be difficult. We propose that cortical oscillations can be effectively used to overcome this challenge. By acting as an effective temperature, background spiking activity modulates exploration. Rhythmic changes induced by cortical oscillations can then be interpreted as a form of simulated tempering. We provide a rigorous mathematical discussion of this link and study some of its phenomenological implications in computer simulations. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination, and place cell flickering.
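
The tempering idea in this abstract can be made concrete with a minimal Metropolis sampler on a double-well energy landscape (a stand-in for two dissimilar attractor states). The energy function, oscillation period, and temperature range below are illustrative assumptions, not the paper's network model:

```python
# Hedged sketch: rhythmically raising the effective temperature (the
# proposed computational role of background oscillations) lets a sampler
# hop between two deep, dissimilar modes that a constant low temperature
# would trap it in -- the "mixing problem" of the abstract.
import math
import random

def metropolis(n_steps, temperature, x0=1.0, step=0.5, rng=None):
    """Metropolis sampling of the double-well energy E(x) = 4*(x^2 - 1)^2.
    `temperature` may be a constant or a function of the step index; the
    latter turns the sampler into a simple simulated-tempering scheme."""
    rng = rng or random.Random(0)            # fixed seed for reproducibility
    energy = lambda x: 4.0 * (x * x - 1.0) ** 2
    x, trace = x0, []
    for t in range(n_steps):
        T = temperature(t) if callable(temperature) else temperature
        x_new = x + rng.gauss(0.0, step)
        # Standard Metropolis acceptance at the current temperature:
        if rng.random() < math.exp(min(0.0, -(energy(x_new) - energy(x)) / T)):
            x = x_new
        trace.append(x)
    return trace

def mode_switches(trace):
    """Count transitions between the two wells (sign changes)."""
    signs = [1 if x > 0 else -1 for x in trace]
    return sum(a != b for a, b in zip(signs, signs[1:]))

# An "oscillation" periodically flattens the landscape:
oscillating_T = lambda t: 0.2 + 1.8 * (1 + math.sin(2 * math.pi * t / 200)) / 2

cold = metropolis(5000, 0.2)            # constant low temperature
tempered = metropolis(5000, oscillating_T)
```

With the oscillating schedule the chain switches wells far more often than at constant low temperature, which is the mixing benefit the abstract attributes to cortical oscillations.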


Subject(s)
Models, Neurological , Neurons , Action Potentials , Brain , Computer Simulation , Neural Networks, Computer
5.
J Neurosci ; 40(46): 8799-8815, 2020 11 11.
Article in English | MEDLINE | ID: mdl-33046549

ABSTRACT

Signal propagation in the dendrites of many neurons, including cortical pyramidal neurons in sensory cortex, is characterized by strong attenuation toward the soma. In contrast, using dual whole-cell recordings from the apical dendrite and soma of layer 5 (L5) pyramidal neurons in the anterior cingulate cortex (ACC) of adult male mice, we found good coupling, particularly of slow subthreshold potentials like NMDA spikes or trains of EPSPs, from dendrite to soma. Only the fastest EPSPs in the ACC were reduced to a similar degree as in primary somatosensory cortex, revealing differential low-pass filtering capabilities. Furthermore, L5 pyramidal neurons in the ACC did not exhibit dendritic Ca2+ spikes as prominently found in the apical dendrite of S1 (somatosensory cortex) pyramidal neurons. Fitting the experimental data to a NEURON model revealed that the specific distribution of Ileak, Iir, Im, and Ih was sufficient to explain the electrotonic dendritic structure, causing a leaky distal dendritic compartment with correspondingly low input resistance and a compact perisomatic region, resulting in a decoupling of distal tuft branches from each other while at the same time efficiently connecting them to the soma. Our results give a biophysically plausible explanation of how a class of prefrontal cortical pyramidal neurons achieves efficient integration of subthreshold distal synaptic inputs compared with the same cell type in sensory cortices.

SIGNIFICANCE STATEMENT

Understanding cortical computation requires the understanding of its fundamental computational subunits. Layer 5 pyramidal neurons are the main output neurons of the cortex, integrating synaptic inputs across different cortical layers. Their elaborate dendritic tree receives, propagates, and transforms synaptic inputs into action potential output. We found good coupling of slow subthreshold potentials like NMDA spikes or trains of EPSPs from the distal apical dendrite to the soma in pyramidal neurons in the ACC, which was significantly better compared with S1. This suggests that frontal pyramidal neurons use a different integration scheme compared with the same cell type in somatosensory cortex, which has important implications for our understanding of information processing across different parts of the neocortex.


Subject(s)
Dendrites/physiology , Gyrus Cinguli/physiology , Pyramidal Cells/physiology , Somatosensory Cortex/physiology , Action Potentials/physiology , Animals , Electrophysiological Phenomena , Excitatory Postsynaptic Potentials , In Vitro Techniques , Male , Mice , Mice, Inbred C57BL , Optogenetics , Receptors, N-Methyl-D-Aspartate/physiology
6.
PLoS Comput Biol ; 12(2): e1004638, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26841235

ABSTRACT

In the last decade dendrites of cortical neurons have been shown to nonlinearly combine synaptic inputs by evoking local dendritic spikes. It has been suggested that these nonlinearities raise the computational power of a single neuron, making it comparable to a 2-layer network of point neurons. But how these nonlinearities can be incorporated into the synaptic plasticity to optimally support learning remains unclear. We present a theoretically derived synaptic plasticity rule for supervised and reinforcement learning that depends on the timing of the presynaptic, the dendritic and the postsynaptic spikes. For supervised learning, the rule can be seen as a biological version of the classical error-backpropagation algorithm applied to the dendritic case. When modulated by a delayed reward signal, the same plasticity is shown to maximize the expected reward in reinforcement learning for various coding scenarios. Our framework makes specific experimental predictions and highlights the unique advantage of active dendrites for implementing powerful synaptic plasticity rules that have access to downstream information via backpropagation of action potentials.


Subject(s)
Dendrites/physiology , Models, Neurological , Neuronal Plasticity/physiology , Action Potentials/physiology , Algorithms , Computational Biology
7.
PLoS Comput Biol ; 12(6): e1005003, 2016 06.
Article in English | MEDLINE | ID: mdl-27341100

ABSTRACT

Animals learn to make predictions, such as associating the sound of a bell with upcoming feeding or predicting a movement that a motor command is eliciting. How predictions are realized on the neuronal level and what plasticity rule underlies their learning is not well understood. Here we propose a biologically plausible synaptic plasticity rule to learn predictions on a single neuron level on a timescale of seconds. The learning rule allows a spiking two-compartment neuron to match its current firing rate to its own expected future discounted firing rate. For instance, if an originally neutral event is repeatedly followed by an event that elevates the firing rate of a neuron, the originally neutral event will eventually also elevate the neuron's firing rate. The plasticity rule is a form of spike timing dependent plasticity in which a presynaptic spike followed by a postsynaptic spike leads to potentiation. Even if the plasticity window has a width of 20 milliseconds, associations on the time scale of seconds can be learned. We illustrate prospective coding with three examples: learning to predict a time varying input, learning to predict the next stimulus in a delayed paired-associate task and learning with a recurrent network to reproduce a temporally compressed version of a sequence. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward and learning is closely related to the temporal difference learning algorithm TD(λ).
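
The prospective-coding objective described here, matching a unit's current output to its own expected discounted future signal, is essentially the TD(0) fixed point, and the bell-before-feeding example can be reproduced with a tabular stand-in for the paper's spiking two-compartment neuron. The episode layout and parameters below are illustrative assumptions:

```python
# Hedged sketch: a unit learns, via a TD(0)-style update, to output at
# each time step the discounted future value of a signal. An initially
# neutral early event then acquires a response because it predicts the
# later, rate-elevating event (the trace-conditioning scenario).

gamma, alpha = 0.9, 0.1
T = 6
signal = [0, 0, 0, 0, 1.0, 0]   # neutral cue at t=0, elevating event at t=4
V = [0.0] * (T + 1)             # learned prediction per time step

for episode in range(500):      # repeated presentations of the episode
    for t in range(T):
        td_error = signal[t] + gamma * V[t + 1] - V[t]
        V[t] += alpha * td_error

# After learning, V[0] ~ gamma**4 = 0.6561: the cue time now carries a
# discounted prediction of the future event, and V decays geometrically
# with temporal distance, as discounting prescribes.
```

In the special case where the signal encodes reward, this is exactly the sense in which the abstract relates the learning rule to temporal difference learning.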


Subject(s)
Action Potentials/physiology , Computational Biology/methods , Models, Neurological , Neurons/physiology , Animals , Dendrites/physiology , Macaca , Neuronal Plasticity/physiology
8.
J Neurosci ; 34(17): 5754-64, 2014 Apr 23.
Article in English | MEDLINE | ID: mdl-24760836

ABSTRACT

Neuropathic pain caused by peripheral nerve injury is a debilitating neurological condition of high clinical relevance. On the cellular level, the elevated pain sensitivity is induced by plasticity of neuronal function along the pain pathway. Changes in cortical areas involved in pain processing contribute to the development of neuropathic pain. Yet, it remains elusive which plasticity mechanisms occur in cortical circuits. We investigated the properties of neural networks in the anterior cingulate cortex (ACC), a brain region mediating affective responses to noxious stimuli. We performed multiple whole-cell recordings from neurons in layer 5 (L5) of the ACC of adult mice after chronic constriction injury of the sciatic nerve of the left hindpaw and observed a striking loss of connections between excitatory and inhibitory neurons in both directions. In contrast, no significant changes in synaptic efficacy in the remaining connected pairs were found. These changes were reflected on the network level by a decrease in the mEPSC and mIPSC frequency. Additionally, nerve injury resulted in a potentiation of the intrinsic excitability of pyramidal neurons, whereas the cellular properties of interneurons were unchanged. Our set of experimental parameters allowed constructing a neuronal network model of L5 in the ACC, revealing that the modification of inhibitory connectivity had the most profound effect on increased network activity. Thus, our combined experimental and modeling approach suggests that cortical disinhibition is a fundamental pathological modification associated with peripheral nerve damage. These changes at the cortical network level might therefore contribute to the neuropathic pain condition.


Subject(s)
Gyrus Cinguli/physiopathology , Neural Inhibition/physiology , Neuralgia/physiopathology , Peripheral Nerve Injuries/physiopathology , Sciatic Nerve/injuries , Animals , Disease Models, Animal , Male , Mice , Mice, Inbred C57BL , Neuralgia/etiology , Neurons/physiology , Pain Threshold/physiology , Peripheral Nerve Injuries/complications , Sciatic Nerve/physiopathology , Synaptic Transmission/physiology
9.
PLoS Comput Biol ; 10(6): e1003640, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24901935

ABSTRACT

Recent experiments revealed that the fruit fly Drosophila melanogaster has a dedicated mechanism for forgetting: blocking the G-protein Rac leads to slower and activating Rac to faster forgetting. This active form of forgetting lacks a satisfactory functional explanation. We investigated optimal decision making for an agent adapting to a stochastic environment where a stimulus may switch between being indicative of reward or punishment. Like Drosophila, an optimal agent shows forgetting with a rate that is linked to the time scale of changes in the environment. Moreover, to reduce the odds of missing future reward, an optimal agent may trade the risk of immediate pain for information gain and thus forget faster after aversive conditioning. A simple neuronal network reproduces these features. Our theory shows that forgetting in Drosophila appears as an optimal adaptive behavior in a changing environment. This is in line with the view that forgetting is adaptive rather than a consequence of limitations of the memory system.
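
The adaptive value of forgetting claimed here can be demonstrated with a deliberately simple toy (an assumed setup, not the paper's full Bayesian agent): a stimulus is rewarding at first, then switches to punishing, and an estimator that forgets at a rate matched to the volatility re-learns the new value quickly while a perfect-memory average lags:

```python
# Hedged sketch: exponential forgetting versus perfect memory in a
# switching environment. Outcome is +1 for the first 100 trials, then -1.

def trials_to_adapt(forgetting_rate, n_pre=100, n_post=100):
    """Number of post-switch trials until the value estimate turns
    negative, or None if it never does within the window."""
    v, n = 0.0, 0
    for t in range(n_pre + n_post):
        outcome = 1.0 if t < n_pre else -1.0
        if forgetting_rate == 0.0:           # perfect memory: running mean
            n += 1
            v += (outcome - v) / n
        else:                                # exponential forgetting
            v += forgetting_rate * (outcome - v)
        if t >= n_pre and v < 0.0:
            return t - n_pre + 1
    return None

fast = trials_to_adapt(0.2)   # forgetful agent: adapts within a few trials
slow = trials_to_adapt(0.0)   # perfect-memory agent: never adapts here
```

A forgetting rate tuned to how often the environment switches is thus a feature, not a flaw, which is the sense in which Rac-dependent forgetting in Drosophila appears as optimal adaptive behavior.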


Subject(s)
Drosophila melanogaster/physiology , Memory/physiology , Adaptation, Physiological , Adaptation, Psychological , Animals , Behavior, Animal/physiology , Computational Biology , Conditioning, Psychological , Decision Making/physiology , Environment , Learning/physiology , Models, Biological , Models, Psychological , Odorants , Reward , Stochastic Processes
10.
Nature ; 457(7233): 1137-41, 2009 Feb 26.
Article in English | MEDLINE | ID: mdl-19151696

ABSTRACT

The computational power of single neurons is greatly enhanced by active dendritic conductances that have a large influence on their spike activity. In cortical output neurons such as the large pyramidal cells of layer 5 (L5), activation of apical dendritic calcium channels leads to plateau potentials that increase the gain of the input/output function and switch the cell to burst-firing mode. The apical dendrites are innervated by local excitatory and inhibitory inputs as well as thalamic and corticocortical projections, which makes it a formidable task to predict how these inputs influence active dendritic properties in vivo. Here we investigate activity in populations of L5 pyramidal dendrites of the somatosensory cortex in awake and anaesthetized rats following sensory stimulation using a new fibre-optic method for recording dendritic calcium changes. We show that the strength of sensory stimulation is encoded in the combined dendritic calcium response of a local population of L5 pyramidal cells in a graded manner. The slope of the stimulus-response function was under the control of a particular subset of inhibitory neurons activated by synaptic inputs predominantly in L5. Recordings from single apical tuft dendrites in vitro showed that activity in L5 pyramidal neurons disynaptically coupled via interneurons directly blocks the initiation of dendritic calcium spikes in neighbouring pyramidal neurons. The results constitute a functional description of a cortical microcircuit in awake animals that relies on the active properties of L5 pyramidal dendrites and their very high sensitivity to inhibition. The microcircuit is organized so that local populations of apical dendrites can adaptively encode bottom-up sensory stimuli linearly across their full dynamic range.


Subject(s)
Dendrites/physiology , Interneurons/physiology , Somatosensory Cortex/cytology , Somatosensory Cortex/physiology , Anesthesia , Animals , Calcium/metabolism , Electric Stimulation , Excitatory Postsynaptic Potentials/physiology , Female , Models, Neurological , Rats , Rats, Wistar , Wakefulness/physiology
11.
J Neurosci ; 33(23): 9565-75, 2013 Jun 05.
Article in English | MEDLINE | ID: mdl-23739954

ABSTRACT

Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatiotemporal activity patterns in a robust way. Here we consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons. We derive a generic learning rule that is matched to the neural dynamics by minimizing an upper bound on the Kullback-Leibler divergence from the target distribution to the model distribution. The derived learning rule is consistent with spike-timing dependent plasticity in that a presynaptic spike preceding a postsynaptic spike elicits potentiation while otherwise depression emerges. Furthermore, the learning rule for synapses that target visible neurons can be matched to the recently proposed voltage-triplet rule. The learning rule for synapses that target hidden neurons is modulated by a global factor, which shares properties with astrocytes and gives rise to testable predictions.


Subject(s)
Action Potentials/physiology , Mental Recall/physiology , Neural Networks, Computer , Learning/physiology , Models, Neurological , Synapses/physiology
12.
Neurosci Biobehav Rev ; 157: 105508, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38097096

ABSTRACT

Semantic representations in higher sensory cortices form the basis for robust, yet flexible behavior. These representations are acquired over the course of development in an unsupervised fashion and continuously maintained over an organism's lifespan. Predictive processing theories propose that these representations emerge from predicting or reconstructing sensory inputs. However, brains are known to generate virtual experiences, such as during imagination and dreaming, that go beyond previously experienced inputs. Here, we suggest that virtual experiences may be just as relevant as actual sensory inputs in shaping cortical representations. In particular, we discuss two complementary learning principles that organize representations through the generation of virtual experiences. First, "adversarial dreaming" proposes that creative dreams support a cortical implementation of adversarial learning in which feedback and feedforward pathways engage in a productive game of trying to fool each other. Second, "contrastive dreaming" proposes that the invariance of neuronal representations to irrelevant factors of variation is acquired by trying to map similar virtual experiences together via a contrastive learning process. These principles are compatible with known cortical structure and dynamics, as well as with the phenomenology of sleep, thus providing promising directions to explain cortical learning beyond the classical predictive processing paradigm.


Subject(s)
Dreams , Imagination , Humans , Dreams/physiology , Imagination/physiology , Sleep , Brain , Sensation
13.
PNAS Nexus ; 3(9): pgae404, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39346625

ABSTRACT

Minimization of cortical prediction errors has been considered a key computational goal of the cerebral cortex underlying perception, action, and learning. However, it is still unclear how the cortex should form and use information about uncertainty in this process. Here, we formally derive neural dynamics that minimize prediction errors under the assumption that cortical areas must not only predict the activity in other areas and sensory streams but also jointly project their confidence (inverse expected uncertainty) in their predictions. In the resulting neuronal dynamics, the integration of bottom-up and top-down cortical streams is dynamically modulated based on confidence in accordance with the Bayesian principle. Moreover, the theory predicts the existence of cortical second-order errors, comparing confidence and actual performance. These errors are propagated through the cortical hierarchy alongside classical prediction errors and are used to learn the weights of synapses responsible for formulating confidence. We propose a detailed mapping of the theory to cortical circuitry, discuss entailed functional interpretations, and provide potential directions for experimental work.

14.
Neuron ; 112(10): 1531-1552, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38447578

ABSTRACT

How is conscious experience related to material brain processes? A variety of theories aiming to answer this age-old question have emerged from the recent surge in consciousness research, and some are now hotly debated. Although most researchers have so far focused on the development and validation of their preferred theory in relative isolation, this article, written by a group of scientists representing different theories, takes an alternative approach. Noting that various theories often try to explain different aspects or mechanistic levels of consciousness, we argue that the theories do not necessarily contradict each other. Instead, several of them may converge on fundamental neuronal mechanisms and be partly compatible and complementary, so that multiple theories can simultaneously contribute to our understanding. Here, we consider unifying, integration-oriented approaches that have so far been largely neglected, seeking to combine valuable elements from various theories.


Subject(s)
Brain , Consciousness , Consciousness/physiology , Humans , Brain/physiology , Models, Neurological , Neurons/physiology , Animals
15.
PLoS Comput Biol ; 8(9): e1002691, 2012.
Article in English | MEDLINE | ID: mdl-23028289

ABSTRACT

Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change in time due to the co-adaptation of others' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and the inspector game. It performs optimally according to a pure (deterministic) and mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.


Subject(s)
Action Potentials/physiology , Brain/physiology , Competitive Behavior/physiology , Decision Making/physiology , Game Theory , Models, Neurological , Nerve Net/physiology , Computer Simulation , Humans
16.
PLoS Comput Biol ; 7(6): e1002092, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21738460

ABSTRACT

In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit-assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate-and-fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme to that of temporal-difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.
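
The temporal credit-assignment mechanism can be illustrated with a single decaying eligibility trace per synapse (an assumed minimal form, not the paper's full multi-stage cascade): coincident activity tags a synapse, the tag decays, and a later reward converts whatever tag remains into a weight change. Event times and constants below are illustrative:

```python
# Hedged sketch: an eligibility trace bridges the delay between synaptic
# activity and a later reward. Synapse A is active shortly before the
# reward, synapse B only afterwards; only A is credited.
import math

tau = 3.0                      # trace decay time constant (time steps)
lr = 0.5                       # learning rate
events = {0: "A", 6: "B"}      # synapse A active at t=0, B at t=6
reward_times = {2: 1.0}        # reward arrives shortly after A's activity

traces = {"A": 0.0, "B": 0.0}
weights = {"A": 0.0, "B": 0.0}

for t in range(10):
    for name in traces:                      # traces decay every step
        traces[name] *= math.exp(-1.0 / tau)
    if t in events:                          # activity tags its synapse
        traces[events[t]] = 1.0
    if t in reward_times:                    # reward reads out the tags
        for name in weights:
            weights[name] += lr * traces[name] * reward_times[t]

# weights["A"] > 0 while weights["B"] stays 0: the trace assigns credit
# across the delay without any explicit memory of the decision time.
```

The paper's cascade stacks several such traces (postsynaptic events, decisions, reinforcement), which is what lets credit survive intervening decisions as well.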


Subject(s)
Learning/physiology , Models, Neurological , Neuronal Plasticity/physiology , Neurons/physiology , Synapses/physiology , Algorithms , Animals , Computational Biology , Computer Simulation , Decision Making/physiology , Dogs , Markov Chains , Memory , Reward , Signal Transduction , Time Factors
17.
Elife ; 11, 2022 04 25.
Article in English | MEDLINE | ID: mdl-35467527

ABSTRACT

In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean-gradient descent can easily lead to inconsistencies due to such parametrization dependence. The issues are solved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural-gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling, and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural-gradient descent.
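
The parametrization problem this abstract raises can be shown in one dimension with a toy linear neuron (assumed attenuation factors, not the paper's spiking model): two synapses with identical somatic impact sit at different dendritic positions, modeled as a multiplicative attenuation a. Euclidean gradient descent on the raw weight updates the distal synapse far more weakly, while dividing by the metric a**2, a one-dimensional natural gradient, equalizes the functional updates, echoing dendritic democracy:

```python
# Hedged sketch: Euclidean vs natural gradient for functionally
# equivalent synapses at different dendritic positions. Somatic impact
# of a synapse is v = a * w (attenuation a times raw weight w).

def functional_update(a, natural=False, lr=0.1, v=1.0, x=1.0, target=2.0):
    """Change in somatic impact a*w after one gradient step on the
    squared error between output a*w*x and target."""
    w = v / a                                # raw synaptic parameter
    out = a * w * x                          # somatic impact times input
    grad_w = (out - target) * a * x          # Euclidean gradient wrt w
    if natural:
        grad_w /= a ** 2                     # metric correction
    return a * (w - lr * grad_w) - v         # change in functional weight

proximal_euc = functional_update(a=1.0)                 # strong update
distal_euc = functional_update(a=0.2)                   # 25x weaker update
proximal_nat = functional_update(a=1.0, natural=True)
distal_nat = functional_update(a=0.2, natural=True)     # equal to proximal
```

Under the Euclidean rule the learning outcome depends on where the synapse happens to sit; under the natural-gradient rule it depends only on the synapse's functional role, which is the parametrization invariance the paper derives in full generality.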


Subject(s)
Neurons , Synapses , Learning/physiology , Neuronal Plasticity/physiology , Neurons/physiology , Synapses/physiology
18.
Elife ; 11, 2022 04 06.
Article in English | MEDLINE | ID: mdl-35384841

ABSTRACT

Humans and other animals learn to extract general concepts from sensory experience without extensive teaching. This ability is thought to be facilitated by offline states like sleep where previous experiences are systemically replayed. However, the characteristic creative nature of dreams suggests that learning semantic representations may go beyond merely replaying previous experiences. We support this hypothesis by implementing a cortical architecture inspired by generative adversarial networks (GANs). Learning in our model is organized across three different global brain states mimicking wakefulness, non-rapid eye movement (NREM), and REM sleep, optimizing different, but complementary, objective functions. We train the model on standard datasets of natural images and evaluate the quality of the learned representations. Our results suggest that generating new, virtual sensory inputs via adversarial dreaming during REM sleep is essential for extracting semantic concepts, while replaying episodic memories via perturbed dreaming during NREM sleep improves the robustness of latent representations. The model provides a new computational perspective on sleep states, memory replay, and dreams, and suggests a cortical implementation of GANs.


Subject(s)
Dreams , Sleep, Slow-Wave , Animals , Sleep , Sleep, REM , Wakefulness
19.
Elife ; 10, 2021 10 28.
Article in English | MEDLINE | ID: mdl-34709176

ABSTRACT

Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called 'plasticity rules', is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.


Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms.

So far, most work towards this goal has been guided by human intuition, that is, by the strategies scientists think are most likely to succeed. Despite tremendous progress, this approach has two drawbacks. First, human time is limited and expensive. Second, researchers have a natural, and reasonable, tendency to incrementally improve upon existing models rather than starting from scratch.

Jordan, Schmidt et al. have now developed a new approach based on 'evolutionary algorithms'. These computer programs search for solutions to problems by mimicking the process of biological evolution, for example the principle of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models.

The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner, an example of reinforcement learning. Finally, in the third, 'supervised learning' scenario, the computer was told exactly how much its behavior deviated from the desired behavior.

For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity that solved the task successfully. Using evolutionary algorithms to study how computers 'learn' will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.
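The search the digest describes can be illustrated with a minimal genetic-programming-style sketch. This is an illustrative assumption, not the authors' actual code: the toy task, constants, and all function names below are invented for exposition. Candidate plasticity rules are small symbolic expressions over local quantities; each is scored by how well a synapse trained with it performs, and the best rules are kept and mutated. In the supervised scenario, the classic delta rule, dw = pre * err, is one compact expression such a search can rediscover, consistent with the abstract's remark that gradient-descent-like rules were recovered.

```python
import random

random.seed(42)

# Candidate plasticity rules are expression trees over local quantities:
# presynaptic activity `pre`, the current weight `w`, and an error
# signal `err` (target output minus actual output), as in the
# supervised scenario described above.
TERMINALS = ["pre", "w", "err", "1.0"]
OPS = ["+", "-", "*"]

def random_expr(depth=2):
    """Sample a random expression tree, represented as nested tuples."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, env):
    """Recursively evaluate an expression tree in a variable environment."""
    if isinstance(expr, str):
        return env[expr] if expr in env else float(expr)
    op, a, b = expr
    x, y = evaluate(a, env), evaluate(b, env)
    return x + y if op == "+" else (x - y if op == "-" else x * y)

def fitness(expr, trials=50, lr=0.3):
    """Train one synapse on a toy regression task (learn w* = 0.7) with the
    candidate rule; return the negative final squared weight error."""
    target_w, w = 0.7, 0.0
    for _ in range(trials):
        pre = random.uniform(-1.0, 1.0)
        err = target_w * pre - w * pre           # teacher minus student output
        dw = evaluate(expr, {"pre": pre, "w": w, "err": err})
        w += lr * max(-1.0, min(1.0, dw))        # clipped weight update
    return -(w - target_w) ** 2

def mutate(expr):
    """Replace a random subtree with a fresh random expression."""
    if isinstance(expr, str) or random.random() < 0.5:
        return random_expr()
    op, a, b = expr
    return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

def evolve(pop_size=40, generations=20):
    """Truncation selection: keep the top quarter, refill by mutation."""
    pop = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 4]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)
```

In this toy setting the delta rule `("*", "pre", "err")` reliably drives the weight to its target, so runs of `evolve()` tend to converge on it or on an algebraically equivalent tree. The hard part the paper addresses, which this sketch omits, is doing the same search under biophysical constraints while keeping the discovered expressions compact and interpretable.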


Subject(s)
Nerve Net , Neuronal Plasticity , Neurons/physiology , Animals , Humans , Models, Neurological
20.
Naunyn Schmiedebergs Arch Pharmacol ; 394(1): 127-135, 2021 01.
Article in English | MEDLINE | ID: mdl-32894324

ABSTRACT

Various disturbances of social behavior, such as those seen in autism, depression, or posttraumatic stress disorder, have been associated with altered steroid hormone homeostasis and a dysregulation of the hypothalamus-pituitary-adrenal axis. A link between steroid hormone antagonists and the treatment of stress-related conditions has been suggested. We evaluated the effects of stress induction on social behavior in the three-chamber test and its potential reversibility upon specific steroid hormone antagonism in mice. C57BL/6 mice were stressed twice daily for 8 days by chronic swim testing. Social behavior was evaluated by measuring, first, the preference for sociability and, second, the preference for social novelty in the three-chamber test before and after the chronic swim test. The reversibility of behavior upon stress induction was analyzed after applying steroid hormone antagonists targeting glucocorticoids with etomidate, mineralocorticoids with potassium canrenoate, and androgens with cyproterone acetate and metformin. In the chronic swim test, floating time increased from 0.8 ± 0.2 min up to 4.8 ± 0.25 min (p < 0.01). In the three-chamber test, an increased preference for sociability and a decreased preference for social novelty were detected pre- versus post-stress induction. These alterations of social behavior were barely affected by etomidate and potassium canrenoate, whereas the two androgen antagonists metformin and cyproterone acetate restored social behavior even beyond baseline conditions. The alteration of social behavior was thus better reversed by the androgen antagonists than by the glucocorticoid and mineralocorticoid antagonists, suggesting that social behavior is primarily controlled by androgen rather than by glucocorticoid or mineralocorticoid action. The stress-induced changes in preference for sociability are incompletely explained by steroid hormone action alone. As the best response was related to metformin, an effect via glucose levels might confound the results and should be subject to future research.


Subject(s)
Androgen Antagonists/pharmacology , Mineralocorticoid Receptor Antagonists/pharmacology , Receptors, Glucocorticoid/antagonists & inhibitors , Social Behavior , Stress, Psychological , Animals , Behavior, Animal/drug effects , Canrenoic Acid/pharmacology , Cyproterone Acetate/pharmacology , Etomidate/pharmacology , Female , Hormones/physiology , Hypnotics and Sedatives/pharmacology , Metformin/pharmacology , Mice, Inbred C57BL