Results 1 - 20 of 40
1.
PLoS Comput Biol ; 18(3): e1009753, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35324886

ABSTRACT

Permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately. Often, this requires visiting multiple interpretations of the available information or multiple solutions to an encountered problem. This gives rise to the so-called mixing problem: all of these "valid" states represent powerful attractors, but they can be very dissimilar from one another, so switching between them can be difficult. We propose that cortical oscillations can be used effectively to overcome this challenge. By acting as an effective temperature, background spiking activity modulates exploration; rhythmic changes induced by cortical oscillations can then be interpreted as a form of simulated tempering. We provide a rigorous mathematical discussion of this link and study some of its phenomenological implications in computer simulations. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination, and place cell flickering.
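
The proposed mechanism can be illustrated with a toy sampler (our sketch for intuition, not the authors' simulation code; network size, weights, and the oscillation schedule below are arbitrary choices): Gibbs sampling in a small Boltzmann machine whose inverse temperature is modulated sinusoidally, standing in for oscillating background activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Boltzmann machine with two dissimilar attractors (two "valid" states).
n = 20
pattern_a = rng.choice([-1, 1], size=n)
pattern_b = -pattern_a                       # a maximally dissimilar attractor
W = (np.outer(pattern_a, pattern_a) + np.outer(pattern_b, pattern_b)) / n
np.fill_diagonal(W, 0.0)

def gibbs_sweep(s, beta):
    """One Gibbs sweep over all units at inverse temperature beta."""
    for i in rng.permutation(n):
        p_on = 1.0 / (1.0 + np.exp(-2.0 * beta * (W[i] @ s)))
        s[i] = 1 if rng.random() < p_on else -1
    return s

s = rng.choice([-1, 1], size=n)
near_a = []
for t in range(2000):
    # Oscillating background activity acts as an effective temperature:
    # the sampler is alternately cold (exploitation) and hot (exploration).
    beta = 1.5 + 1.2 * np.sin(2 * np.pi * t / 100)
    s = gibbs_sweep(s, beta)
    near_a.append(pattern_a @ s > 0)

# With the oscillation the chain keeps switching between both attractors;
# at a fixed cold temperature it would typically remain stuck in one.
print("fraction of time near pattern A:", np.mean(near_a))
```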


Subject(s)
Models, Neurological; Neurons; Action Potentials; Brain; Computer Simulation; Neural Networks, Computer
2.
PLoS Comput Biol ; 16(7): e1008087, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32701953

ABSTRACT

The dynamics and the sharp onset of action potential (AP) generation have recently been the subject of intense experimental and theoretical investigation. According to the resistive coupling theory, an electrotonic interplay between the site of AP initiation in the axon and the somato-dendritic load determines the AP waveform. This phenomenon not only alters the shape of APs recorded at the soma, but also determines the dynamics of excitability across a variety of time scales. In support of this picture, we here generalize a previous numerical study and extend it to the quantification of the input-output gain of the neuronal dynamical response. We consider three classes of multicompartmental mathematical models, ranging from ball-and-stick simplified descriptions of neuronal excitability to 3D-reconstructed biophysical models of excitatory neurons of rodent and human cortical tissue. For each model, we demonstrate that increasing the distance between the axonal site of AP initiation and the soma markedly increases the bandwidth of neuronal response properties. Finally, we consider the Liquid State Machine paradigm, explore the impact of altering the site of AP initiation at the level of a neuronal population, and demonstrate that an optimal distance exists that boosts the computational performance of the network in a simple classification task.
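
A minimal two-compartment caricature of the resistive-coupling picture (our illustration, not one of the paper's three model classes; all constants are placeholders): soma and axon initial segment (AIS) are coupled by an axial resistance that stands in for the soma-AIS distance, and only the AIS carries an exponential spike-initiation current. Varying the axial resistance changes the excitability of the cell, which is the knob the paper studies quantitatively.

```python
import numpy as np

def simulate(r_axial, T=200.0, dt=0.025, I_soma=3.5):
    """Count spikes of a soma+AIS pair coupled by resistance r_axial."""
    C_s, C_a = 1.0, 0.1        # membrane capacitances (arbitrary units)
    g_L, E_L = 0.05, -65.0     # leak conductance and reversal potential
    V_T, d_T = -50.0, 2.0      # spike-initiation threshold and sharpness (mV)
    v_s = v_a = E_L
    spikes = 0
    for _ in range(int(T / dt)):
        # The exponential spike-initiation current lives only in the AIS.
        I_spike = g_L * d_T * np.exp((v_a - V_T) / d_T)
        dv_s = (-g_L * (v_s - E_L) + (v_a - v_s) / r_axial + I_soma) / C_s
        dv_a = (-g_L * (v_a - E_L) + (v_s - v_a) / r_axial + I_spike) / C_a
        v_s += dt * dv_s
        v_a += dt * dv_a
        if v_a > -30.0:        # spike detected at the AIS; reset both
            v_s = v_a = E_L
            spikes += 1
    return spikes

# Larger axial resistance ~ AP initiation site electrotonically farther away.
for r in (2.0, 10.0, 40.0):
    print(f"r_axial = {r:5.1f} -> {simulate(r)} spikes in 200 ms")
```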


Subject(s)
Action Potentials; Axon Initial Segment/physiology; Axons/physiology; Neurons/physiology; Algorithms; Animals; Cerebral Cortex/pathology; Computational Biology; Computer Simulation; Dendrites/physiology; Humans; Imaging, Three-Dimensional; Linear Models; Machine Learning; Models, Neurological; Neocortex/physiology; Potassium Channels/physiology; Rats
3.
Cereb Cortex ; 30(3): 952-968, 2020 Mar 14.
Article in English | MEDLINE | ID: mdl-31403679

ABSTRACT

Memory traces and associations between them are fundamental for cognitive brain function. Neuron recordings suggest that distributed assemblies of neurons in the brain serve as memory traces for spatial information, real-world items, and concepts. However, there is conflicting evidence regarding neural codes for associated memory traces. Some studies suggest the emergence of overlaps between assemblies during an association, while others suggest that the assemblies themselves remain largely unchanged and new assemblies emerge as neural codes for associated memory items. Here we study the emergence of neural codes for associated memory items in a generic computational model of recurrent networks of spiking neurons with a data-constrained rule for spike-timing-dependent plasticity. The model depends critically on two parameters, which control the excitability of neurons and the scale of initial synaptic weights. By modifying these two parameters, the model can reproduce both experimental data from the human brain on the fast formation of associations through emergent overlaps between assemblies, and rodent data where new neurons are recruited to encode the associated memories. Hence, our findings suggest that the brain can use both of these neural codes for associations and dynamically switch between them during consolidation.
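
The kind of learning rule such models build on can be sketched as a pair-based STDP update (a generic textbook rule, not the paper's data-constrained one; the two highlighted constants only mimic the role of the paper's two critical parameters):

```python
import numpy as np

# Two knobs analogous to the paper's critical parameters:
EXCITABILITY = 0.1      # intrinsic excitability bias of the postsynaptic cell
W_INIT_SCALE = 0.5      # scale of the initial synaptic weights

A_plus, A_minus = 0.01, 0.012   # potentiation / depression amplitudes
tau = 20.0                      # STDP time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:     # pre before post -> potentiation
        return A_plus * np.exp(-dt / tau)
    else:          # post before (or with) pre -> depression
        return -A_minus * np.exp(dt / tau)

rng = np.random.default_rng(1)
w = W_INIT_SCALE * rng.random(100)              # initial weights
# Toy episode: random presynaptic spike times; a more excitable neuron
# fires its postsynaptic spike earlier, which shifts the STDP outcome.
t_pre = rng.uniform(0.0, 50.0, size=100)
t_post = 25.0 - 100.0 * EXCITABILITY
w += np.array([stdp_dw(tp, t_post) for tp in t_pre])
w = np.clip(w, 0.0, 1.0)
print("mean weight after one episode:", round(w.mean(), 4))
```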


Subject(s)
Memory/physiology; Models, Neurological; Neural Networks, Computer; Neuronal Plasticity; Neurons/physiology; Action Potentials; Humans; Learning/physiology
4.
J Neurosci ; 37(35): 8511-8523, 2017 Aug 30.
Article in English | MEDLINE | ID: mdl-28760861

ABSTRACT

Cortical microcircuits are very complex networks, but they are composed of a relatively small number of stereotypical motifs. Hence, one strategy for throwing light on the computational function of cortical microcircuits is to analyze the emergent computational properties of these stereotypical microcircuit motifs. Here we address the question of how spike-timing-dependent plasticity shapes the computational properties of one motif that has frequently been studied experimentally: interconnected populations of pyramidal cells and parvalbumin-positive inhibitory cells in layer 2/3. Experimental studies suggest that these inhibitory neurons exert some form of divisive inhibition on the pyramidal cells. We show that this data-based form of feedback inhibition, which is softer than that of the winner-take-all models commonly considered in theoretical analyses, contributes to the emergence of an important computational function through spike-timing-dependent plasticity: the capability to disentangle superimposed firing patterns in upstream networks and to represent their information content through a sparse assembly code.

SIGNIFICANCE STATEMENT: We analyze emergent computational properties of a ubiquitous cortical microcircuit motif: populations of pyramidal cells that are densely interconnected with inhibitory neurons. Simulations of this model predict that sparse assembly codes emerge in this microcircuit motif under spike-timing-dependent plasticity. Furthermore, we show that different assemblies will represent different hidden sources of upstream firing activity. Hence, we propose that spike-timing-dependent plasticity enables this microcircuit motif to perform a fundamental computational operation on neural activity patterns.
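
The difference between divisive feedback inhibition and hard winner-take-all can be made concrete in a small rate model (our sketch; parameters are arbitrary): the pooled inhibitory signal divides pyramidal activity instead of silencing all but one winner, so a sparse set of strong units can stay active together.

```python
import numpy as np

def divisive_circuit(x, beta=0.5, n_iter=50):
    """Pyramidal rates y under divisive feedback inhibition."""
    y = np.maximum(x, 0.0)
    for _ in range(n_iter):
        inhibition = beta * y.sum()                  # PV cells pool activity
        y = np.maximum(x, 0.0) / (1.0 + inhibition)  # divisive, not subtractive
    return y

def hard_wta(x):
    """Hard winner-take-all for comparison: only the strongest unit survives."""
    y = np.zeros_like(x)
    y[np.argmax(x)] = x.max()
    return y

x = np.array([1.0, 0.9, 0.2, 0.05])   # two superimposed strong sources
print("divisive :", np.round(divisive_circuit(x), 3))  # both strong units remain
print("hard WTA :", np.round(hard_wta(x), 3))          # a single winner
```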


Subject(s)
Action Potentials/physiology; Feedback, Physiological/physiology; Models, Neurological; Nerve Net/physiology; Neural Inhibition/physiology; Neuronal Plasticity/physiology; Pyramidal Cells/physiology; Computer Simulation; Synaptic Transmission/physiology
5.
PLoS Comput Biol ; 11(11): e1004485, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26545099

ABSTRACT

General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference. However, a model for this has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains from a functional perspective a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared to be quite puzzling.
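
The core idea, sampling network configurations from a posterior rather than converging to a point estimate, can be sketched as Langevin dynamics on the weights (an illustrative toy, not the paper's synaptic sampling equations; the regression task, prior, and temperature are arbitrary): drift up the log-posterior gradient plus noise, so the weights wander through the posterior.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "network": linear readout with Gaussian likelihood and Gaussian prior.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.5 * rng.normal(size=200)

def grad_log_posterior(w, sigma2=0.25, prior_var=1.0):
    grad_lik = X.T @ (y - X @ w) / sigma2    # gradient of log likelihood
    grad_prior = -w / prior_var              # gradient of log Gaussian prior
    return grad_lik + grad_prior

eta, temp = 1e-4, 1.0                        # step size and temperature
w = np.zeros(5)
samples = []
for step in range(20000):
    noise = rng.normal(size=5)               # stochastic spine dynamics
    w += eta * grad_log_posterior(w) + np.sqrt(2.0 * eta * temp) * noise
    if step > 5000 and step % 10 == 0:
        samples.append(w.copy())

samples = np.array(samples)
# The weights keep fluctuating: they sample a posterior distribution
# instead of converging to the maximum likelihood solution.
print("posterior mean:", np.round(samples.mean(axis=0), 2))
print("posterior std :", np.round(samples.std(axis=0), 2))
print("true weights  :", np.round(w_true, 2))
```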


Subject(s)
Models, Neurological; Nerve Net/physiology; Neuronal Plasticity/physiology; Neurons/physiology; Action Potentials/physiology; Bayes Theorem; Computational Biology; Computer Simulation
7.
PLoS Comput Biol ; 10(10): e1003859, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25340749

ABSTRACT

It has recently been shown that networks of spiking neurons with noise can emulate simple forms of probabilistic inference through "neural sampling", i.e., by treating spikes as samples from a probability distribution of network states that is encoded in the network. Deficiencies of the existing model are its reliance on single neurons for sampling from each random variable, and the resulting limitation in representing quickly varying probabilistic information. We show that both deficiencies can be overcome by moving to a biologically more realistic encoding of each salient random variable through the stochastic firing activity of an ensemble of neurons. The resulting model demonstrates that networks of spiking neurons with noise can easily track and carry out basic computational operations on rapidly varying probability distributions, such as the odds of getting rewarded for a specific behavior. We demonstrate the viability of this new approach towards neural coding and computation, which makes use of the inherent parallelism of generic neural circuits, by showing that this model can explain experimentally observed firing activity of cortical neurons for a variety of tasks that require rapid temporal integration of sensory information.
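
The ensemble code can be caricatured in a few lines (our sketch, not the paper's spiking network; rates and window are arbitrary): the value of a random variable, here a reward probability, is read out as the rescaled fraction of ensemble neurons spiking in a short window, which lets the estimate follow abrupt changes.

```python
import numpy as np

rng = np.random.default_rng(3)

n_neurons = 100            # ensemble jointly encoding one random variable
dt = 0.005                 # 5 ms readout windows
r_on, r_off = 80.0, 5.0    # firing rates (Hz) of "on" vs "off" neurons

def ensemble_estimate(p_true_t):
    """Track a time-varying probability from ensemble spiking activity."""
    estimates = []
    for p in p_true_t:
        # Each neuron stochastically reflects the encoded probability.
        rates = np.where(rng.random(n_neurons) < p, r_on, r_off)
        spikes = rng.random(n_neurons) < rates * dt
        # Readout: fraction of active neurons, rescaled by the rate range.
        estimates.append((spikes.mean() / dt - r_off) / (r_on - r_off))
    return np.clip(estimates, 0.0, 1.0)

# Reward odds that switch abruptly; a single sampling neuron would need
# a long time average, while the ensemble tracks the switch immediately.
p_true = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])
p_hat = ensemble_estimate(p_true)
print("estimate before the switch:", round(float(p_hat[:100].mean()), 2))
print("estimate after  the switch:", round(float(p_hat[100:].mean()), 2))
```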


Subject(s)
Action Potentials/physiology; Models, Neurological; Models, Statistical; Neurons/physiology; Animals; Computational Biology; Markov Chains; Monte Carlo Method; Motor Cortex/cytology; Motor Cortex/physiology; Rats
8.
Cereb Cortex ; 24(3): 677-690, 2014 Mar.
Article in English | MEDLINE | ID: mdl-23146969

ABSTRACT

This paper addresses the question of how generic microcircuits of neurons in different parts of the cortex can attain and maintain different computational specializations. We show that if stochastic variations in the dynamics of local microcircuits are correlated with signals related to functional improvements of the brain (e.g., in the control of behavior), the computational operation of these microcircuits can become optimized for specific tasks, such as the generation of specific periodic signals and task-dependent routing of information. Furthermore, we show that working memory can autonomously emerge through reward-modulated Hebbian learning if needed for specific tasks. Altogether, our results suggest that reward-modulated synaptic plasticity can not only optimize the network parameters for specific computational tasks, but also initiate a functional rewiring that re-programs microcircuits, thereby generating diverse computational functions in different generic cortical microcircuits. On a more general level, this work provides a new perspective on a standard model for computations in generic cortical microcircuits (the liquid computing model). It shows that the arguably most problematic assumption of this model, the postulate of a teacher that trains neural readouts through supervised learning, can be eliminated. We show that generic networks of neurons can learn numerous biologically relevant computations through trial and error.
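
How a teacher-free readout can learn from trial and error is easy to sketch (a toy stand-in for the paper's simulations; reservoir size, task, and learning constants are arbitrary): a fixed random recurrent "liquid" supplies dynamics, and the readout weights correlate their own exploratory noise with a scalar reward.

```python
import numpy as np

rng = np.random.default_rng(4)

N = 200                                        # liquid (reservoir) size
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # fixed random recurrent weights
w_in = rng.normal(size=N)                      # fixed input weights
w_out = np.zeros(N)                            # readout, shaped by reward only
eta, r_avg = 0.05, 0.0

def target(u):                                 # the task to be discovered
    return np.sin(3.0 * u)

x = np.zeros(N)
for trial in range(5000):
    u = rng.uniform(-1.0, 1.0)
    x = np.tanh(W @ x + w_in * u)              # liquid state
    noise = 0.1 * rng.normal()                 # exploratory perturbation
    y = w_out @ x + noise
    reward = -(y - target(u)) ** 2             # scalar feedback, no teacher
    # Reward-modulated Hebbian rule: correlate exploration with reward.
    w_out += eta * (reward - r_avg) * noise * x
    r_avg += 0.01 * (reward - r_avg)           # running reward baseline

errors = []
for _ in range(200):
    u = rng.uniform(-1.0, 1.0)
    x = np.tanh(W @ x + w_in * u)
    errors.append((w_out @ x - target(u)) ** 2)
print("mean squared error after learning:", round(float(np.mean(errors)), 4))
```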


Subject(s)
Computer Simulation; Learning/physiology; Models, Neurological; Neural Networks, Computer; Neurons/physiology; Reward; Action Potentials/physiology; Humans
9.
Sci Rep ; 14(1): 8557, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38609429

ABSTRACT

Spiking neural networks are currently of high interest, both as models of neural networks of the brain and as a way of porting the brain's fast learning capability and energy efficiency into neuromorphic hardware. So far, however, the fast learning capabilities of the brain have not been reproduced in spiking neural networks. Biological data suggest that a synergy of synaptic plasticity on a slow time scale with network dynamics on a faster time scale is responsible for the fast learning capabilities of the brain. We show here that a suitable orchestration of this synergy between synaptic plasticity and network dynamics does in fact reproduce fast learning capabilities of generic recurrent networks of spiking neurons. This points to the important role of recurrent connections in spiking networks, since these are necessary for enabling salient network dynamics. We show more specifically that the proposed synergy enables synaptic weights to encode more general information such as priors and task structures, since moment-to-moment processing of new information can be delegated to the network dynamics.


Subject(s)
Brain; Learning; Neuronal Plasticity; Neural Networks, Computer
10.
Nanotechnology ; 24(38): 384010, 2013 Sep 27.
Article in English | MEDLINE | ID: mdl-23999381

ABSTRACT

Conventional neuro-computing architectures and artificial neural networks have often been developed with no or only loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low power consumption or their ability to carry out robust and efficient computation using massively parallel arrays of limited-precision, highly variable, and unreliable components. Recent developments in nanotechnology are making available extremely compact and low-power, but also variable and unreliable, solid-state devices that can potentially extend the offerings of existing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element, and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristive nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit that represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and in the proposed hybrid memristor-CMOS circuit, and argue that this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault-tolerant by design.
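
The synapse-like behavior that makes memristors attractive can be captured by a standard phenomenological model (a generic sketch, not the hybrid circuit proposed in the paper; all constants are placeholders): a bounded internal state integrates programming pulses and sets an analog conductance.

```python
import numpy as np

class MemristiveSynapse:
    """Phenomenological memristor: bounded state -> analog conductance."""

    def __init__(self, g_min=1e-6, g_max=1e-4, x0=0.5):
        self.g_min, self.g_max = g_min, g_max   # conductance range (siemens)
        self.x = x0                             # internal state in [0, 1]

    def conductance(self):
        return self.g_min + self.x * (self.g_max - self.g_min)

    def apply_pulse(self, v, dt=1e-6, k=1e5):
        """Programming pulse: the state drifts with voltage, with soft bounds."""
        window = self.x * (1.0 - self.x)        # slows the drift near 0 and 1
        self.x = float(np.clip(self.x + k * v * window * dt, 0.0, 1.0))

syn = MemristiveSynapse()
for _ in range(20):
    syn.apply_pulse(+1.0)                       # 20 potentiating pulses
print("conductance after potentiation:", f"{syn.conductance():.2e} S")
for _ in range(10):
    syn.apply_pulse(-1.0)                       # 10 depressing pulses
print("conductance after depression  :", f"{syn.conductance():.2e} S")
```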


Subject(s)
Electronics/instrumentation; Models, Neurological; Nanotechnology/instrumentation; Neural Networks, Computer; Synapses; Equipment Design
11.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10346-10357, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37021892

ABSTRACT

Image restoration under adverse weather conditions has been of significant interest for various computer vision applications. Recent successful methods rely on current progress in deep neural network architectural designs (e.g., with vision transformers). Motivated by recent progress achieved with state-of-the-art conditional generative models, we present a novel patch-based image restoration algorithm based on denoising diffusion probabilistic models. Our patch-based diffusion modeling approach enables size-agnostic image restoration by using a guided denoising process with smoothed noise estimates across overlapping patches during inference. We empirically evaluate our model on benchmark datasets for image desnowing, combined deraining and dehazing, and raindrop removal. We demonstrate that our approach achieves state-of-the-art performance on both weather-specific and multi-weather image restoration, and experimentally show strong generalization to real-world test images.
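
The central trick, smoothing noise estimates across overlapping patches during the reverse diffusion process, can be sketched independently of any trained network (our illustration of the mechanism; `eps_model` is a stand-in for a trained patch denoiser, and the noise schedule below is a placeholder):

```python
import numpy as np

def smoothed_eps(x, eps_model, patch=64, stride=32):
    """Average per-patch noise estimates over all overlapping patches."""
    H, W, _ = x.shape
    eps_sum = np.zeros_like(x)
    counts = np.zeros((H, W, 1))
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            tile = x[i:i + patch, j:j + patch]
            eps_sum[i:i + patch, j:j + patch] += eps_model(tile)
            counts[i:i + patch, j:j + patch] += 1.0
    return eps_sum / counts        # seam-free estimate for any image size

def reverse_step(x_t, t, eps_model, alpha, alpha_bar, sigma, rng):
    """One DDPM reverse step driven by the patch-smoothed noise estimate."""
    eps = smoothed_eps(x_t, eps_model)
    mean = (x_t - (1.0 - alpha[t]) / np.sqrt(1.0 - alpha_bar[t]) * eps)
    mean /= np.sqrt(alpha[t])
    return mean + sigma[t] * rng.normal(size=x_t.shape)

# Usage with a dummy denoiser standing in for the trained network:
rng = np.random.default_rng(5)
x = rng.normal(size=(128, 128, 3))             # any size >= one patch works
alpha = np.full(10, 0.98)
alpha_bar = np.cumprod(alpha)
sigma = np.full(10, 0.01)
eps_model = lambda tile: 0.1 * tile            # placeholder, not a real model
for t in reversed(range(10)):
    x = reverse_step(x, t, eps_model, alpha, alpha_bar, sigma, rng)
print("restored image shape:", x.shape)
```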


Subject(s)
Algorithms; Neural Networks, Computer; Signal-To-Noise Ratio
12.
Front Neurosci ; 17: 1276706, 2023.
Article in English | MEDLINE | ID: mdl-38357522

ABSTRACT

The unique characteristics of neocortical pyramidal neurons are thought to be crucial for many aspects of information processing and learning in the brain. Experimental data suggest that their segregation into two distinct compartments, the basal dendrites close to the soma and the apical dendrites branching out from the thick apical dendritic tuft, plays an essential role in cortical organization. A recent hypothesis states that layer 5 pyramidal cells associate top-down contextual information arriving at their apical tuft with features of the sensory input that predominantly arrives at their basal dendrites. It has, however, remained unclear whether such context association could be established by synaptic plasticity processes. In this work, we formalize the objective of such context-association learning through a mathematical loss function and derive a plasticity rule for apical synapses that optimizes this loss. The resulting plasticity rule utilizes information that is available either locally at the synapse, through branch-local NMDA spikes, or through global Ca2+ events, both of which have been observed experimentally in layer 5 pyramidal cells. We show in computer simulations that the plasticity rule enables pyramidal cells to associate top-down contextual input patterns with high somatic activity. Furthermore, it enables networks of pyramidal neuron models to perform context-dependent tasks and enables continual learning by allocating new dendritic branches to novel contexts.
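
The flavor of such a loss-derived rule can be sketched as a gated Hebbian update (our simplified reading of the setup, not the rule derived in the paper; sizes and rates are arbitrary): apical weights change only when a global gating event occurs, moving the apical drive toward the somatic activity it should predict.

```python
import numpy as np

rng = np.random.default_rng(6)

n_ctx = 50                        # top-down contextual inputs on the apical tuft
w_apical = 0.1 * rng.random(n_ctx)
eta, theta_gate = 0.05, 0.6

def apical_update(ctx, somatic_rate, ca_event):
    """Gated plasticity: consolidate context only during a Ca2+ event.

    ctx          -- contextual input pattern on the apical tuft (0/1 vector)
    somatic_rate -- current somatic activity in [0, 1]
    ca_event     -- True if a global Ca2+ (or NMDA plateau) event occurred
    """
    global w_apical
    if ca_event:
        apical_drive = (w_apical @ ctx) / max(ctx.sum(), 1.0)
        # Move the apical prediction toward the somatic activity.
        w_apical = np.clip(w_apical + eta * (somatic_rate - apical_drive) * ctx,
                           0.0, 1.0)

# Repeatedly pair one context pattern with strong bottom-up (somatic) drive.
context = (rng.random(n_ctx) < 0.3).astype(float)
for _ in range(100):
    soma = 0.9
    apical_update(context, soma, ca_event=soma > theta_gate)

novel = (rng.random(n_ctx) < 0.3).astype(float)
print("apical drive, learned context:",
      round(float(w_apical @ context / context.sum()), 2))
print("apical drive, novel context  :",
      round(float(w_apical @ novel / novel.sum()), 2))
```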

13.
Article in English | MEDLINE | ID: mdl-38113154

ABSTRACT

Spiking neural networks (SNNs) are the basis for many energy-efficient neuromorphic hardware systems. While there has been substantial progress in SNN research, artificial SNNs still lack many capabilities of their biological counterparts. In biological neural systems, memory is a key component that enables the retention of information over a huge range of temporal scales, ranging from hundreds of milliseconds up to years. While Hebbian plasticity is believed to play a pivotal role in biological memory, it has so far been analyzed mostly in the context of pattern completion and unsupervised learning in artificial neural networks and SNNs. Here, we propose that Hebbian plasticity is fundamental for computations in biological and artificial spiking neural systems. We introduce a novel memory-augmented SNN architecture that is enriched by Hebbian synaptic plasticity. We show that Hebbian enrichment renders SNNs surprisingly versatile in terms of their computational as well as learning capabilities. It improves their abilities for out-of-distribution generalization, one-shot learning, cross-modal generative association, language processing, and reward-based learning. This suggests that powerful cognitive neuromorphic systems can be built based on this principle.
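
The heart of a Hebbian memory module fits in a few lines (a generic outer-product associator, shown as a sketch of the principle rather than the paper's architecture): key-value pairs are written in one shot by Hebbian co-activation and retrieved by a matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 256                                   # dimensionality of key/value codes

class HebbianMemory:
    """One-shot associative memory via outer-product Hebbian plasticity."""

    def __init__(self, dim, lam=1.0):
        self.M = np.zeros((dim, dim))     # plastic synaptic matrix
        self.lam = lam                    # write strength

    def write(self, key, value):
        # Hebbian update: co-active key/value units strengthen their synapse.
        self.M += self.lam * np.outer(value, key)

    def read(self, key):
        return np.sign(self.M @ key)      # clean-up by thresholding

mem = HebbianMemory(d)
pairs = [(rng.choice([-1, 1], d), rng.choice([-1, 1], d)) for _ in range(10)]
for k, v in pairs:
    mem.write(k, v)                       # one-shot storage, no gradient steps

k0, v0 = pairs[0]
print("recall accuracy:", (mem.read(k0) == v0).mean())  # ~1.0 for few items
```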

14.
J Neurosci ; 31(30): 10787-10802, 2011 Jul 27.
Article in English | MEDLINE | ID: mdl-21795531

ABSTRACT

It has been conjectured that nonlinear processing in dendritic branches endows individual neurons with the capability to perform complex computational operations that are needed to solve, for example, the binding problem. However, it is not clear how single neurons could acquire such functionality in a self-organized manner, because most theoretical studies of synaptic plasticity and learning concentrate on neuron models without nonlinear dendritic properties. In the meantime, a complex picture of information processing with dendritic spikes and a variety of plasticity mechanisms in single neurons has emerged from experiments. In particular, new experimental data on dendritic branch strength potentiation in rat hippocampus have not yet been incorporated into such models. In this article, we investigate how experimentally observed plasticity mechanisms, such as depolarization-dependent spike-timing-dependent plasticity and branch-strength potentiation, could be integrated to self-organize nonlinear neural computations with dendritic spikes. We provide a mathematical proof that, in a simplified setup, these plasticity mechanisms induce a competition between dendritic branches, a novel concept in the analysis of single-neuron adaptivity. We show via computer simulations that such dendritic competition enables a single neuron to become a member of several neuronal ensembles and to acquire nonlinear computational capabilities, such as the capability to bind multiple input features. Hence, our results suggest that nonlinear neural computation may self-organize in single neurons through the interaction of local synaptic and dendritic plasticity mechanisms.
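
The competition effect at the core of the proof can be caricatured in a few lines (our toy version with arbitrary constants, not the paper's model): with one upstream ensemble and two branches, depolarization-gated potentiation on the most depolarized branch, combined with branch-strength potentiation under a conservation constraint, lets a single branch win the pattern.

```python
import numpy as np

rng = np.random.default_rng(8)

n_branches, n_inputs = 2, 30
W = 0.3 * rng.random((n_branches, n_inputs))   # synaptic weights per branch
b = np.ones(n_branches)                        # branch strengths (local gains)
eta_w, eta_b = 0.05, 0.05

pattern = (rng.random(n_inputs) < 0.3).astype(float)  # one upstream ensemble

print("initial branch responses:", np.round(b * (W @ pattern), 2))
for _ in range(200):
    local = b * (W @ pattern)        # branch-local depolarization
    win = int(np.argmax(local))      # the more depolarized branch spikes
    # Depolarization-gated potentiation acts only on the spiking branch ...
    W[win] += eta_w * pattern * (1.0 - W[win])
    # ... and branch-strength potentiation under a conservation constraint
    # lets that branch grow at the expense of the other: competition.
    b[win] += eta_b
    b *= n_branches / b.sum()
print("final branch responses  :", np.round(b * (W @ pattern), 2))
print("final branch strengths  :", np.round(b, 2))
```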


Subject(s)
Computer Simulation; Dendrites/physiology; Models, Neurological; Neuronal Plasticity/physiology; Neurons/cytology; Nonlinear Dynamics; Action Potentials; Adaptation, Physiological; Animals; Neurons/physiology
15.
Front Neurosci ; 16: 838054, 2022.
Article in English | MEDLINE | ID: mdl-35495034

ABSTRACT

Spike-based neuromorphic hardware has great potential for low-energy brain-machine interfaces, leading to a novel paradigm for neuroprosthetics in which spiking neurons in silicon read out and control the activity of brain circuits. Neuromorphic processors can receive rich information about brain activity from both spikes and local field potentials (LFPs) recorded by implanted neural probes. However, it was unclear whether spiking neural networks (SNNs) implemented on such devices can effectively process that information. Here, we demonstrate that SNNs can be trained to classify whisker deflections of different amplitudes from evoked responses in a single barrel of the rat somatosensory cortex. We show that the classification performance is comparable or even superior to state-of-the-art machine learning approaches. We find that SNNs are rather insensitive to the recorded signal type: both multi-unit spiking activity and LFPs yield similar results, where LFPs from cortical layers III and IV seem better suited than those of deep layers. In addition, no hand-crafted features need to be extracted from the data: multi-unit activity can be fed directly into these networks, and a simple event-encoding of LFPs is sufficient for good performance. Furthermore, we find that the performance of SNNs is insensitive to the network state: their performance is similar during UP and DOWN states.
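
One plausible form of such an event-encoding is thresholding the LFP slope (our guess at a minimal variant, not necessarily the encoder used in the paper): steep upward and downward deflections become two spike-event channels that an SNN can ingest directly.

```python
import numpy as np

def lfp_to_events(lfp, fs=1000.0, threshold=1.0):
    """Encode an LFP trace as up/down events at slope threshold crossings.

    Returns two arrays of event times (seconds): steep upward and steep
    downward deflections of the signal, after slope normalization.
    """
    slope = np.diff(lfp) * fs                     # derivative in units/s
    z = (slope - slope.mean()) / slope.std()      # normalized slope
    up = np.flatnonzero((z[1:] >= threshold) & (z[:-1] < threshold))
    down = np.flatnonzero((z[1:] <= -threshold) & (z[:-1] > -threshold))
    return up / fs, down / fs

# Synthetic LFP: slow oscillation + an evoked deflection + noise.
rng = np.random.default_rng(9)
t = np.arange(0.0, 1.0, 1e-3)
lfp = np.sin(2 * np.pi * 4 * t) + 0.3 * rng.normal(size=t.size)
lfp[400:420] -= np.hanning(20) * 3.0              # "whisker-evoked" dip
up_ev, down_ev = lfp_to_events(lfp)
print(f"{up_ev.size} up-events, {down_ev.size} down-events")
print("down-events near the evoked response:",
      down_ev[(down_ev > 0.38) & (down_ev < 0.44)])
```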

16.
Neuroscience ; 489: 275-289, 2022 May 1.
Article in English | MEDLINE | ID: mdl-34656706

ABSTRACT

In this paper, we discuss the nonlinear computational power provided by dendrites in biological and artificial neurons. We start by briefly presenting biological evidence about the types of dendritic nonlinearities, the respective plasticity rules, and their effect on biological learning as assessed by computational models. Four major computational implications are identified: improved expressivity, more efficient use of resources, utilization of internal learning signals, and enabling of continual learning. We then discuss examples of how dendritic computations have been used to solve real-world classification problems, with performance reported on well-known datasets used in machine learning. The works are categorized according to the three primary methods of plasticity used: structural plasticity, weight plasticity, or plasticity of synaptic delays. Finally, we show the recent trend of confluence between concepts of deep learning and dendritic computations and highlight some future research directions.


Subject(s)
Dendrites; Models, Neurological; Dendrites/physiology; Machine Learning; Neuronal Plasticity/physiology; Neurons/physiology
17.
J Neurosci ; 30(25): 8400-8410, 2010 Jun 23.
Article in English | MEDLINE | ID: mdl-20573887

ABSTRACT

It has recently been shown in a brain-computer interface experiment that motor cortical neurons change their tuning properties selectively to compensate for errors induced by displaced decoding parameters. In particular, it was shown that the three-dimensional tuning curves of neurons whose decoding parameters were reassigned changed more than those of neurons whose decoding parameters had not been reassigned. In this article, we propose a simple learning rule that can reproduce this effect. Our learning rule uses Hebbian weight updates driven by a global reward signal and neuronal noise. In contrast to most previously proposed learning rules, this approach does not require extrinsic information to separate noise from signal. The learning rule is able to optimize the performance of a model system within biologically realistic periods of time under high noise levels. Furthermore, when the model parameters are matched to data recorded during the brain-computer interface learning experiments described above, the model produces learning effects strikingly similar to those found in the experiments.


Subject(s)
Learning/physiology; Nerve Net/physiology; Neural Networks, Computer; Neuronal Plasticity/physiology; Reward; Action Potentials/physiology; Cerebral Cortex/physiology; Computer Simulation; Models, Neurological; Motor Neurons/physiology; Synapses/physiology; Synaptic Transmission
18.
PLoS Comput Biol ; 6(8)2010 Aug 19.
Article in English | MEDLINE | ID: mdl-20808883

ABSTRACT

Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies identified learning mechanisms that are based on reward and punishment, such that animals tend to avoid actions that lead to punishment, whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises of how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible, generic two-stage learning system that can be applied directly to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA) network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time comparable to that needed when an explicit, highly informative, low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
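
The slowness principle itself fits in a few lines (textbook linear SFA, our sketch rather than the paper's hierarchical network): whiten the signal, then take the directions along which the temporal derivative has minimal variance.

```python
import numpy as np

def linear_sfa(X, n_components=2):
    """Linear slow feature analysis on signals X of shape (time, dims)."""
    X = X - X.mean(axis=0)
    # Whiten: rotate and scale so the data has identity covariance.
    eigval, eigvec = np.linalg.eigh(np.cov(X.T))
    Z = X @ (eigvec / np.sqrt(eigval))
    # Slowest directions = smallest-variance directions of the derivative.
    dval, dvec = np.linalg.eigh(np.cov(np.diff(Z, axis=0).T))
    return Z @ dvec[:, :n_components]    # slowest features first

# Demo: recover a slow latent variable hidden in fast, mixed observations.
rng = np.random.default_rng(10)
t = np.linspace(0.0, 2.0 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(29.0 * t)
X = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast, fast + 0.1 * slow])
X += 0.01 * rng.normal(size=X.shape)     # small noise keeps whitening stable
features = linear_sfa(X, n_components=1)
corr = np.corrcoef(features[:, 0], slow)[0, 1]
print("|correlation| of slowest feature with hidden slow signal:",
      round(abs(float(corr)), 3))
```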


Subject(s)
Models, Biological; Psychomotor Performance/physiology; Reinforcement, Psychology; Algorithms; Animals; Brain/physiology; Computer Simulation; Humans; Learning/physiology; Neural Networks, Computer; Rats; Visual Perception/physiology
19.
Elife ; 10, 2021 Jul 26.
Article in English | MEDLINE | ID: mdl-34310281

ABSTRACT

For solving tasks such as recognizing a song, answering a question, or inverting a sequence of symbols, cortical microcircuits need to integrate and manipulate information that was dispersed over time during the preceding seconds. Creating biologically realistic models for the underlying computations, especially with spiking neurons and for behaviorally relevant integration time spans, is notoriously difficult. We examine the role of spike frequency adaptation in such computations and find that it has a surprisingly large impact. Including this well-known property of a substantial fraction of neurons in the neocortex, especially in higher areas of the human neocortex, raises the performance of spiking neural network models on temporally dispersed network inputs from a fairly low level up to the performance level of the human brain.
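
Spike frequency adaptation is a one-line addition to a standard leaky integrate-and-fire neuron (a textbook adaptive-threshold LIF, sketching the mechanism the paper exploits; all constants are arbitrary): every spike raises the firing threshold, and this trace decays over seconds, carrying information far beyond the membrane time constant.

```python
import numpy as np

def alif(I, dt=1e-3, tau_m=0.02, tau_a=2.0, v_th=1.0, beta=0.5):
    """Adaptive LIF: the threshold rises with each spike, decays slowly.

    tau_a (seconds) sets the adaptation time scale; this slow variable is
    what lets the neuron retain information over behaviorally long spans.
    """
    v, a, spikes = 0.0, 0.0, []
    for step, i_t in enumerate(I):
        v += dt / tau_m * (-v + i_t)
        a *= np.exp(-dt / tau_a)             # slow decay of the adaptation trace
        if v > v_th + beta * a:              # effective, adaptive threshold
            spikes.append(step * dt)
            v = 0.0                          # reset the membrane
            a += 1.0                         # bump the threshold trace
    return np.array(spikes)

# Constant drive: the adapting neuron fires quickly at first, then slows,
# and its threshold trace keeps a memory of the recent input.
s = alif(np.full(3000, 5.0))                 # 3 s of constant input
print("spikes in 1st second:", int(np.sum(s < 1.0)))
print("spikes in 3rd second:", int(np.sum(s >= 2.0)))
```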


Subject(s)
Action Potentials/physiology; Models, Neurological; Neocortex/physiology; Nerve Net/physiology; Neurons/physiology; Adaptation, Physiological; Computers, Molecular; Humans; Neural Networks, Computer
20.
Neural Comput ; 22(5): 1272-1311, 2010 May.
Article in English | MEDLINE | ID: mdl-20028227

ABSTRACT

Reservoir computing (RC) systems are powerful models for online computations on input sequences. They consist of a memoryless readout neuron that is trained on top of a randomly connected recurrent neural network. RC systems are commonly used in two flavors: with analog or binary (spiking) neurons in the recurrent circuits. Previous work indicated a fundamental difference in the behavior of these two implementations of the RC idea. The performance of an RC system built from binary neurons seems to depend strongly on the network connectivity structure. In networks of analog neurons, such clear dependency has not been observed. In this letter, we address this apparent dichotomy by investigating the influence of the network connectivity (parameterized by the neuron in-degree) on a family of network models that interpolates between analog and binary networks. Our analyses are based on a novel estimation of the Lyapunov exponent of the network dynamics with the help of branching process theory, rank measures that estimate the kernel quality and generalization capabilities of recurrent networks, and a novel mean field predictor for computational performance. These analyses reveal that the phase transition between ordered and chaotic network behavior of binary circuits qualitatively differs from the one in analog circuits, leading to differences in the integration of information over short and long timescales. This explains the decreased computational performance observed in binary circuits that are densely connected. The mean field predictor is also used to bound the memory function of recurrent circuits of binary neurons.
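
The ordered-chaotic transition can be probed directly with a damage-spreading experiment (a standard perturbation analysis in the spirit of the letter, not its branching-process estimator; sizes and weight statistics are arbitrary): flip one unit in a copy of the network and measure how fast the Hamming distance between the two trajectories grows as the in-degree increases.

```python
import numpy as np

rng = np.random.default_rng(11)

def binary_net(N, k):
    """Random threshold network in which every unit has in-degree k."""
    W = np.zeros((N, N))
    for i in range(N):
        idx = rng.choice(N, size=k, replace=False)
        W[i, idx] = rng.normal(size=k)
    return W

def damage_growth(W, steps=50):
    """Average log growth rate of a one-bit perturbation (Lyapunov proxy)."""
    N = W.shape[0]
    x = rng.integers(0, 2, size=N)
    y = x.copy()
    y[0] ^= 1                                      # flip a single unit
    dists = []
    for _ in range(steps):
        x = (W @ x > 0).astype(int)
        y = (W @ y > 0).astype(int)
        dists.append(max(int(np.sum(x != y)), 1))  # clamp to avoid log(0)
    dists = np.array(dists, dtype=float)
    return float(np.mean(np.log(dists[1:] / dists[:-1])))

N = 500
for k in (2, 8, 32, 128):
    lam = np.mean([damage_growth(binary_net(N, k)) for _ in range(5)])
    print(f"in-degree k = {k:3d}: damage growth rate {lam:+.3f}")
```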


Subject(s)
Memory; Neural Networks, Computer; Neurons; Algorithms; Animals; Computer Simulation; Time Factors