Results 1 - 20 of 40
1.
Sci Rep ; 14(1): 8557, 2024 04 12.
Article in English | MEDLINE | ID: mdl-38609429

ABSTRACT

Spiking neural networks are of high current interest, both from the perspective of modelling neural networks of the brain and for porting their fast learning capability and energy efficiency into neuromorphic hardware. But so far, the fast learning capabilities of the brain have not been reproduced in spiking neural networks. Biological data suggest that a synergy of synaptic plasticity on a slow time scale with network dynamics on a faster time scale is responsible for the fast learning capabilities of the brain. We show here that a suitable orchestration of this synergy between synaptic plasticity and network dynamics does in fact reproduce fast learning capabilities of generic recurrent networks of spiking neurons. This points to the important role of recurrent connections in spiking networks, since these are necessary for enabling salient network dynamics. We show more specifically that the proposed synergy enables synaptic weights to encode more general information such as priors and task structures, since moment-to-moment processing of new information can be delegated to the network dynamics.


Subjects
Brain; Learning; Neuronal Plasticity; Generic Drugs; Neural Networks (Computer)
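The division of labor described above, slow plasticity shaping weights that encode priors and task structure while fast recurrent dynamics handle moment-to-moment processing, can be caricatured in a few lines. The following is a minimal sketch, not the paper's model: a recurrent network of leaky integrate-and-fire (LIF) neurons whose weights receive one slow, Hebbian-style update per episode; all parameter values and the update rule itself are illustrative assumptions.

```python
# Minimal sketch (not the paper's model): fast network dynamics in an inner
# loop, one slow weight update per episode in the outer loop.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 100, 200, 1e-3            # neurons, time steps, step size (s)
tau_m, v_thresh = 20e-3, 1.0         # membrane time constant, spike threshold
W = rng.normal(0, 0.1, (N, N))       # recurrent weights (the "slow" variables)
eta = 1e-3                           # slow learning rate

for episode in range(50):            # slow time scale: one update per episode
    v = np.zeros(N)
    rates = np.zeros(N)
    inp = rng.normal(0, 0.5, (T, N)) # stand-in for episode-specific input
    for t in range(T):               # fast time scale: network dynamics
        spikes = (v >= v_thresh).astype(float)
        v = v * (1 - spikes)                          # reset after a spike
        v += dt / tau_m * (-v + W @ spikes + inp[t])  # leaky integration
        rates += spikes / T
    # slow Hebbian-style update; a real model would use a task-derived signal
    W += eta * np.outer(rates - rates.mean(), rates - rates.mean())
```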
2.
Article in English | MEDLINE | ID: mdl-38113154

ABSTRACT

Spiking neural networks (SNNs) are the basis for many energy-efficient neuromorphic hardware systems. While there has been substantial progress in SNN research, artificial SNNs still lack many capabilities of their biological counterparts. In biological neural systems, memory is a key component that enables the retention of information over a huge range of temporal scales, ranging from hundreds of milliseconds up to years. While Hebbian plasticity is believed to play a pivotal role in biological memory, it has so far been analyzed mostly in the context of pattern completion and unsupervised learning in artificial neural networks and SNNs. Here, we propose that Hebbian plasticity is fundamental for computations in biological and artificial spiking neural systems. We introduce a novel memory-augmented SNN architecture that is enriched by Hebbian synaptic plasticity. We show that Hebbian enrichment renders SNNs surprisingly versatile in terms of their computational as well as learning capabilities. It improves their abilities for out-of-distribution generalization, one-shot learning, cross-modal generative association, language processing, and reward-based learning. This suggests that powerful cognitive neuromorphic systems can be built based on this principle.
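At its core, a Hebbian memory module of the kind described above writes associations with a local outer-product update and reads them back by content. The sketch below shows only that core in plain numpy, with assumed decay and write-rate constants; the SNN embedding and the architecture of the paper are omitted.

```python
# Core of a Hebbian associative memory: local outer-product writes,
# matrix-vector reads. Constants lam and eta are assumed values.
import numpy as np

rng = np.random.default_rng(1)
d = 64                               # dimensionality of (rate-coded) activity
M = np.zeros((d, d))                 # Hebbian memory matrix
lam, eta = 0.9, 1.0                  # trace decay and write rate

def write(M, key, value):
    """One Hebbian write: decay old traces, imprint the new association."""
    return lam * M + eta * np.outer(value, key)

def read(M, key):
    """Content-based recall of the value associated with `key`."""
    return M @ key

key = rng.normal(size=d); key /= np.linalg.norm(key)
value = rng.normal(size=d)
M = write(M, key, value)
recalled = read(M, key)              # exact here: single item, unit-norm key
print(np.allclose(recalled, eta * value))  # True
```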

3.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10346-10357, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37021892

ABSTRACT

Image restoration under adverse weather conditions has been of significant interest for various computer vision applications. Recent successful methods rely on the current progress in deep neural network architectural designs (e.g., with vision transformers). Motivated by the recent progress achieved with state-of-the-art conditional generative models, we present a novel patch-based image restoration algorithm based on denoising diffusion probabilistic models. Our patch-based diffusion modeling approach enables size-agnostic image restoration by using a guided denoising process with smoothed noise estimates across overlapping patches during inference. We empirically evaluate our model on benchmark datasets for image desnowing, combined deraining and dehazing, and raindrop removal. We demonstrate that our approach achieves state-of-the-art performance on both weather-specific and multi-weather image restoration, and experimentally show strong generalization to real-world test images.


Subjects
Algorithms; Neural Networks (Computer); Signal-to-Noise Ratio
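The size-agnostic mechanism described above can be outlined independently of any particular diffusion model: at each reverse step, per-patch noise predictions are averaged over overlapping patches into one full-image estimate. In the sketch below, `denoiser` is a hypothetical stand-in for a trained noise-prediction network, and patch and stride sizes are arbitrary.

```python
# Smoothed noise estimate over overlapping patches, as used at each reverse
# diffusion step. `denoiser` is a hypothetical trained network.
import numpy as np

def smoothed_noise_estimate(x, denoiser, patch=64, stride=32):
    """Average per-patch noise predictions over overlapping patches."""
    H, W = x.shape[:2]
    eps = np.zeros_like(x)
    count = np.zeros((H, W) + (1,) * (x.ndim - 2))
    for i in range(0, max(H - patch, 0) + 1, stride):
        for j in range(0, max(W - patch, 0) + 1, stride):
            eps[i:i+patch, j:j+patch] += denoiser(x[i:i+patch, j:j+patch])
            count[i:i+patch, j:j+patch] += 1
    return eps / np.maximum(count, 1)  # smoothed full-image estimate

# Toy usage with an identity "denoiser" on a random image:
x = np.random.rand(128, 96, 3)
eps_hat = smoothed_noise_estimate(x, denoiser=lambda p: p)
```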
4.
Front Neurosci ; 17: 1276706, 2023.
Article in English | MEDLINE | ID: mdl-38357522

ABSTRACT

The unique characteristics of neocortical pyramidal neurons are thought to be crucial for many aspects of information processing and learning in the brain. Experimental data suggest that their segregation into two distinct compartments, the basal dendrites close to the soma and the apical dendrites branching out from the thick apical dendritic tuft, plays an essential role in cortical organization. A recent hypothesis states that layer 5 pyramidal cells associate top-down contextual information arriving at their apical tuft with features of the sensory input that predominantly arrives at their basal dendrites. It has however remained unclear whether such context association could be established by synaptic plasticity processes. In this work, we formalize the objective of such context association learning through a mathematical loss function and derive a plasticity rule for apical synapses that optimizes this loss. The resulting plasticity rule utilizes information that is available either locally at the synapse, through branch-local NMDA spikes, or through global Ca2+ events, both of which have been observed experimentally in layer 5 pyramidal cells. We show in computer simulations that the plasticity rule enables pyramidal cells to associate top-down contextual input patterns with high somatic activity. Furthermore, it enables networks of pyramidal neuron models to perform context-dependent tasks and enables continual learning by allocating new dendritic branches to novel contexts.
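A hedged sketch of a rule in this spirit: apical weights change only when a branch-local (NMDA-spike-like) or global (Ca2+-like) gating event occurs, and the change is local and Hebbian-like, driving somatic activity toward a target. The exact loss-derived rule of the paper differs; all names and constants below are illustrative.

```python
# Gated, local plasticity for apical synapses: updates occur only during
# NMDA-like or Ca2+-like events. Illustrative rule, not the paper's exact one.
import numpy as np

def apical_update(w, pre, somatic_rate, target_rate,
                  nmda_event, ca_event, eta=0.01):
    """Update apical weights w given presynaptic context input `pre`."""
    gate = float(nmda_event or ca_event)         # branch-local or global gate
    error = target_rate - somatic_rate           # drive toward high activity
    return w + eta * gate * error * pre          # gated, local weight change

w = np.zeros(32)
pre = np.random.rand(32)                         # top-down context pattern
w = apical_update(w, pre, somatic_rate=0.2, target_rate=1.0,
                  nmda_event=True, ca_event=False)
```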

5.
Front Neurosci ; 16: 838054, 2022.
Article in English | MEDLINE | ID: mdl-35495034

ABSTRACT

Spike-based neuromorphic hardware has great potential for low-energy brain-machine interfaces, leading to a novel paradigm for neuroprosthetics where spiking neurons in silicon read out and control activity of brain circuits. Neuromorphic processors can receive rich information about brain activity from both spikes and local field potentials (LFPs) recorded by implanted neural probes. However, it was unclear whether spiking neural networks (SNNs) implemented on such devices can effectively process that information. Here, we demonstrate that SNNs can be trained to classify whisker deflections of different amplitudes from evoked responses in a single barrel of the rat somatosensory cortex. We show that the classification performance is comparable to, or even better than, that of state-of-the-art machine learning approaches. We find that SNNs are rather insensitive to the recorded signal type: both multi-unit spiking activity and LFPs yield similar results, where LFPs from cortical layers III and IV seem better suited than those of deep layers. In addition, no hand-crafted features need to be extracted from the data: multi-unit activity can be fed directly into these networks, and a simple event encoding of LFPs is sufficient for good performance. Furthermore, we find that the performance of SNNs is insensitive to the network state: their performance is similar during UP and DOWN states.
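A simple event encoding of an LFP trace, of the general kind the abstract alludes to, is delta (level-crossing) modulation: emit an up or down event whenever the signal has moved by more than a fixed threshold since the last event. The paper's exact encoding may differ; the sketch below is one plausible variant.

```python
# Delta-modulation event encoding of an analog LFP trace: one plausible
# "simple event-encoding" that an SNN can consume as spikes.
import numpy as np

def lfp_to_events(lfp, threshold):
    events = []                      # list of (sample_index, +1 or -1)
    ref = lfp[0]
    for t, v in enumerate(lfp):
        while v - ref >= threshold:  # upward crossing(s)
            events.append((t, +1)); ref += threshold
        while ref - v >= threshold:  # downward crossing(s)
            events.append((t, -1)); ref -= threshold
    return events

t = np.linspace(0, 1, 1000)
lfp = np.sin(2 * np.pi * 5 * t)      # toy 5 Hz oscillation
spikes = lfp_to_events(lfp, threshold=0.2)
```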

6.
PLoS Comput Biol ; 18(3): e1009753, 2022 03.
Article in English | MEDLINE | ID: mdl-35324886

ABSTRACT

Being permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately. Often, this requires visiting multiple interpretations of the available information or multiple solutions to an encountered problem. This gives rise to the so-called mixing problem: all of these "valid" states represent powerful attractors, but they can be very dissimilar from one another, so switching between such states can be difficult. We propose that cortical oscillations can be effectively used to overcome this challenge. By acting as an effective temperature, background spiking activity modulates exploration. Rhythmic changes induced by cortical oscillations can then be interpreted as a form of simulated tempering. We provide a rigorous mathematical discussion of this link and study some of its phenomenological implications in computer simulations. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination, and place cell flickering.


Subjects
Neurological Models; Neurons; Action Potentials; Brain; Computer Simulation; Neural Networks (Computer)
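The tempering idea can be illustrated in a small Boltzmann machine: Gibbs sampling whose temperature is modulated by a slow oscillation, so that high-temperature phases flatten the energy landscape and ease switching between dissimilar attractors. The network, parameters, and oscillation below are toy assumptions, not the paper's spiking model.

```python
# Gibbs sampling in a toy Boltzmann machine with an oscillating temperature,
# mimicking simulated tempering driven by a background rhythm.
import numpy as np

rng = np.random.default_rng(2)
N, steps = 20, 5000
W = rng.normal(0, 1, (N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
b = rng.normal(0, 0.5, N)
s = rng.integers(0, 2, N).astype(float)

for t in range(steps):
    T = 1.0 + 0.8 * np.sin(2 * np.pi * t / 500)        # oscillating temperature
    i = rng.integers(N)                                # random unit to update
    u = (W[i] @ s + b[i]) / T                          # temperature-scaled input
    s[i] = float(rng.random() < 1 / (1 + np.exp(-u)))  # Gibbs update
# High-T phases ease mixing between dissimilar states; low-T phases let
# sampling concentrate on strong attractors.
```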
7.
Neuroscience ; 489: 275-289, 2022 05 01.
Article in English | MEDLINE | ID: mdl-34656706

ABSTRACT

In this paper, we discuss the nonlinear computational power provided by dendrites in biological and artificial neurons. We start by briefly presenting biological evidence about the types of dendritic nonlinearities, the respective plasticity rules, and their effect on biological learning as assessed by computational models. Four major computational implications are identified: improved expressivity, more efficient use of resources, utilization of internal learning signals, and enabling of continual learning. We then discuss examples of how dendritic computations have been used to solve real-world classification problems, with performance reported on well-known data sets used in machine learning. The works are categorized according to the three primary methods of plasticity used: structural plasticity, weight plasticity, or plasticity of synaptic delays. Finally, we show the recent trend of confluence between concepts of deep learning and dendritic computations and highlight some future research directions.


Subjects
Dendrites; Neurological Models; Dendrites/physiology; Machine Learning; Neuronal Plasticity/physiology; Neurons/physiology
8.
Elife ; 10, 2021 07 26.
Article in English | MEDLINE | ID: mdl-34310281

ABSTRACT

For solving tasks such as recognizing a song, answering a question, or inverting a sequence of symbols, cortical microcircuits need to integrate and manipulate information that was dispersed over time during the preceding seconds. Creating biologically realistic models for the underlying computations, especially with spiking neurons and for behaviorally relevant integration time spans, is notoriously difficult. We examine the role of spike frequency adaptation in such computations and find that it has a surprisingly large impact. The inclusion of this well-known property of a substantial fraction of neurons in the neocortex, especially in higher areas of the human neocortex, moves the performance of spiking neural network models for computations on temporally dispersed network inputs from a fairly low level up to the performance level of the human brain.


Subjects
Action Potentials/physiology; Neurological Models; Neocortex/physiology; Nerve Net/physiology; Neurons/physiology; Physiological Adaptation; Molecular Computers; Humans; Neural Networks (Computer)
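Spike frequency adaptation is commonly modeled by an adaptive threshold: each spike increments an adaptation variable that decays on a time scale of seconds, far longer than the membrane time constant, giving the neuron a long memory. The minimal adaptive LIF ("ALIF") neuron below illustrates this; parameter values are illustrative, not taken from the paper.

```python
# Minimal adaptive LIF ("ALIF") neuron: each spike raises an adaptation
# variable added to the threshold, which decays slowly (seconds).
import numpy as np

T, dt = 2000, 1e-3                   # 2 s simulation, 1 ms steps
tau_m, tau_a = 20e-3, 1.0            # membrane vs. adaptation time constants
v_th0, beta = 1.0, 0.5               # base threshold, adaptation strength
v, a = 0.0, 0.0
spikes = []

for t in range(T):
    I = 1.5                          # constant drive
    v += dt / tau_m * (-v + I)
    if v >= v_th0 + beta * a:        # effective, adapting threshold
        spikes.append(t * dt)
        v = 0.0                      # reset membrane potential
        a += 1.0                     # increment adaptation trace
    a -= dt / tau_a * a              # slow decay of adaptation
# Inter-spike intervals grow over time: the hallmark of adaptation.
```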
9.
Front Comput Neurosci ; 14: 57, 2020.
Article in English | MEDLINE | ID: mdl-32848681

ABSTRACT

The connectivity structure of neuronal networks in cortex is highly dynamic. This ongoing cortical rewiring is assumed to serve important functions for learning and memory. We analyze in this article a model for the self-organization of synaptic inputs onto dendritic branches of pyramidal cells. The model combines a generic stochastic rewiring principle with a simple synaptic plasticity rule that depends on local dendritic activity. In computer simulations, we find that this synaptic rewiring model leads to synaptic clustering, that is, temporally correlated inputs become locally clustered on dendritic branches. This empirical finding is backed up by a theoretical analysis which shows that rewiring in our model favors network configurations with synaptic clustering. We propose that synaptic clustering plays an important role in the organization of computation and memory in cortical circuits: we find that synaptic clustering through the proposed rewiring mechanism can serve as a mechanism to protect memories from subsequent modifications on a medium time scale. Rewiring of synaptic connections onto specific dendritic branches may thus counteract the general problem of catastrophic forgetting in neural networks.
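The interplay of stochastic rewiring and branch-local plasticity can be caricatured in a toy simulation: synapses whose weight decays below a threshold are re-placed on a random branch, while a cooperative, branch-local Hebbian term strengthens synapses that are co-active with other inputs on the same branch. Everything below, the input groups, rates, and constants, is an illustrative assumption rather than the paper's model.

```python
# Toy rewiring-plus-local-plasticity loop: correlated inputs tend to end up
# clustered on the same dendritic branches.
import numpy as np

rng = np.random.default_rng(3)
n_syn, n_branch, steps = 60, 6, 2000
branch = rng.integers(n_branch, size=n_syn)      # branch of each synapse
w = np.full(n_syn, 0.5)
group = np.repeat(np.arange(3), n_syn // 3)      # 3 correlated input groups

for _ in range(steps):
    x = (rng.random(3) < 0.3)[group].astype(float)   # group-correlated input
    for b in range(n_branch):
        on_b = branch == b
        local = x[on_b].sum()                        # branch-local activity
        w[on_b] += 0.01 * x[on_b] * (local - 1)      # cooperative plasticity
    w = np.clip(w - 0.002, 0, 1)                     # slow weight decay
    dead = w < 0.05
    branch[dead] = rng.integers(n_branch, size=dead.sum())  # stochastic rewire
    w[dead] = 0.5
# After many steps, synapses of the same group share branches more often.
```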

10.
Nat Commun ; 11(1): 3625, 2020 07 17.
Article in English | MEDLINE | ID: mdl-32681001

ABSTRACT

Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.


Subjects
Brain/physiology; Neurological Models; Nerve Net/physiology; Neurons/physiology; Reward; Action Potentials/physiology; Animals; Brain/cytology; Deep Learning; Humans; Mice; Neuronal Plasticity/physiology
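The factorization behind e-prop can be written down for a single synapse: an eligibility trace, computed forward in time from locally available quantities, is multiplied by an online learning signal, so no backpropagation through time is needed. The sketch below is a simplified LIF version with a toy learning signal; the paper derives the exact traces for several neuron models.

```python
# Schematic e-prop update for one synapse: eligibility trace x learning signal,
# accumulated forward in time. Simplified LIF variant with toy signals.
import numpy as np

rng = np.random.default_rng(4)
T, dt = 100, 1e-3
tau_m, v_th, gamma = 20e-3, 1.0, 0.3
alpha = np.exp(-dt / tau_m)                      # membrane decay factor
z_pre = (rng.random(T) < 0.1).astype(float)      # presynaptic spike train
L = rng.normal(0, 0.1, T)                        # toy online learning signal

v_post, z_bar, delta_w = 0.0, 0.0, 0.0
for t in range(T):
    z_bar = alpha * z_bar + z_pre[t]             # filtered presynaptic trace
    v_post = alpha * v_post + 0.5 * z_pre[t]     # toy postsynaptic membrane
    psi = gamma * max(0.0, 1.0 - abs((v_post - v_th) / v_th))  # pseudo-derivative
    e = psi * z_bar                              # eligibility trace
    delta_w += L[t] * e                          # local, online weight change
```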
11.
PLoS Comput Biol ; 16(7): e1008087, 2020 07.
Article in English | MEDLINE | ID: mdl-32701953

ABSTRACT

The dynamics and the sharp onset of action potential (AP) generation have recently been the subject of intense experimental and theoretical investigations. According to the resistive coupling theory, an electrotonic interplay between the site of AP initiation in the axon and the somato-dendritic load determines the AP waveform. This phenomenon not only alters the shape of APs recorded at the soma, but also determines the dynamics of excitability across a variety of time scales. Supporting this statement, here we generalize a previous numerical study and extend it to the quantification of the input-output gain of the neuronal dynamical response. We consider three classes of multicompartmental mathematical models, ranging from ball-and-stick simplified descriptions of neuronal excitability to 3D-reconstructed biophysical models of excitatory neurons of rodent and human cortical tissue. For each model, we demonstrate that increasing the distance between the axonal site of AP initiation and the soma markedly increases the bandwidth of neuronal response properties. We finally consider the Liquid State Machine paradigm, exploring the impact of altering the site of AP initiation at the level of a neuronal population, and demonstrate that an optimal distance exists to boost the computational performance of the network in a simple classification task.


Subjects
Action Potentials; Axon Initial Segment/physiology; Axons/physiology; Neurons/physiology; Algorithms; Animals; Cerebral Cortex/pathology; Computational Biology; Computer Simulation; Dendrites/physiology; Humans; Three-Dimensional Imaging; Linear Models; Machine Learning; Neurological Models; Neocortex/physiology; Potassium Channels/physiology; Rats
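Resistive coupling can be illustrated with a toy two-compartment model in which soma and axonal initiation site are joined by an axial resistance that grows with their distance. The sketch below is only a caricature of the ball-and-stick case, with arbitrary demonstration values rather than fits to the models used in the paper.

```python
# Toy two-compartment model of resistive coupling between soma and the
# axonal AP-initiation site; axial resistance grows with their distance.
import numpy as np

def simulate(distance_um, T=5000, dt=1e-5):
    C_s, C_a = 250e-12, 5e-12        # somatic / axonal capacitance (F)
    g_L = 10e-9                      # somatic leak conductance (S)
    r_axial = 1e6 * distance_um      # axial resistance grows with distance
    V_s, V_a = 0.0, 0.0
    trace = np.empty(T)
    for t in range(T):
        I_in = 300e-12               # constant somatic input current (A)
        I_ax = (V_s - V_a) / r_axial # current flowing soma -> axon
        V_s += dt / C_s * (-g_L * V_s + I_in - I_ax)
        V_a += dt / C_a * (-0.1 * g_L * V_a + I_ax)
        trace[t] = V_a
    return trace

near, far = simulate(5), simulate(50)   # AP site close vs. far from the soma
```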
12.
eNeuro ; 7(3), 2020.
Article in English | MEDLINE | ID: mdl-32381648

ABSTRACT

Humans can reason at an abstract level and structure information into abstract categories, but the underlying neural processes have remained unknown. Recent experimental data provide the hint that this is likely to involve specific subareas of the brain from which structural information can be decoded. Based on these data, we introduce the concept of assembly projections, a general principle for attaching structural information to content in generic networks of spiking neurons. According to the assembly projections principle, structure-encoding assemblies emerge and are dynamically attached to content representations through Hebbian plasticity mechanisms. This model explains a number of experimental findings and provides a basis for modeling abstract computational operations of the brain.


Subjects
Neurological Models; Neural Networks (Computer); Brain; Humans; Neurons
13.
Cereb Cortex ; 30(3): 952-968, 2020 03 14.
Article in English | MEDLINE | ID: mdl-31403679

ABSTRACT

Memory traces and associations between them are fundamental for cognitive brain function. Neuron recordings suggest that distributed assemblies of neurons in the brain serve as memory traces for spatial information, real-world items, and concepts. However, there is conflicting evidence regarding neural codes for associated memory traces. Some studies suggest the emergence of overlaps between assemblies during an association, while others suggest that the assemblies themselves remain largely unchanged and new assemblies emerge as neural codes for associated memory items. Here we study the emergence of neural codes for associated memory items in a generic computational model of recurrent networks of spiking neurons with a data-constrained rule for spike-timing-dependent plasticity. The model depends critically on 2 parameters, which control the excitability of neurons and the scale of initial synaptic weights. By modifying these 2 parameters, the model can reproduce both experimental data from the human brain on the fast formation of associations through emergent overlaps between assemblies, and rodent data where new neurons are recruited to encode the associated memories. Hence, our findings suggest that the brain can use both of these neural codes for associations, and dynamically switch between them during consolidation.


Subjects
Memory/physiology; Neurological Models; Neural Networks (Computer); Neuronal Plasticity; Neurons/physiology; Action Potentials; Humans; Learning/physiology
14.
Front Neurorobot ; 13: 81, 2019.
Article in English | MEDLINE | ID: mdl-31632262

ABSTRACT

The endeavor to understand the brain involves multiple collaborating research fields. Classically, synaptic plasticity rules derived by theoretical neuroscientists are evaluated in isolation on pattern classification tasks. This contrasts with the biological brain, whose purpose is to control a body in closed loop. This paper contributes to bringing the fields of computational neuroscience and robotics closer together by integrating open-source software components from these two fields. The resulting framework allows the validity of biologically plausible plasticity models to be evaluated in closed-loop robotics environments. We demonstrate this framework by evaluating Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following. We show that SPORE is capable of learning to perform policies within the course of simulated hours for both tasks. Provisional parameter explorations indicate that the learning rate and the temperature driving the stochastic processes that govern synaptic learning dynamics need to be regulated for performance improvements to be retained. We conclude by discussing recent deep reinforcement learning techniques that could increase the functionality of SPORE on visuomotor tasks.
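Synaptic sampling, the rule family SPORE builds on, updates each synaptic parameter by a stochastic differential equation that combines a prior term, a reward-gradient term, and temperature-scaled exploration noise. In the Euler-style sketch below the reward gradient is a stand-in function; SPORE estimates it online from reward events.

```python
# Euler discretization of a synaptic-sampling SDE: prior drift, reward drift,
# and temperature-scaled exploration noise. Gradients here are stand-ins.
import numpy as np

rng = np.random.default_rng(5)
n, steps = 100, 1000
theta = rng.normal(0, 1, n)          # synaptic parameters
dt, beta, T = 1e-3, 0.1, 0.5         # step size, learning rate, temperature

def grad_log_prior(theta, sigma=1.0):
    return -theta / sigma**2         # Gaussian prior on parameters

def grad_reward(theta):
    return -(theta - 0.5)            # stand-in for an online reward gradient

for _ in range(steps):
    drift = grad_log_prior(theta) + grad_reward(theta)
    theta += dt * beta * drift + np.sqrt(2 * beta * T * dt) * rng.normal(size=n)
# Lowering T anneals exploration; as the abstract notes, learning rate and
# temperature must be regulated for improvements to be retained.
```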

15.
IEEE Trans Biomed Circuits Syst ; 13(3): 579-591, 2019 06.
Article in English | MEDLINE | ID: mdl-30932847

ABSTRACT

Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources. However, the efficiency is often lost when one tries to port these findings to a silicon substrate, since brain-inspired algorithms often make extensive use of complex functions, such as random number generators, that are expensive to compute on standard general-purpose hardware. The prototype chip of the second-generation SpiNNaker system is designed to overcome this problem. Low-power advanced RISC machine (ARM) processors equipped with a random number generator and an exponential function accelerator enable the efficient execution of brain-inspired algorithms. We implement the recently introduced reward-based synaptic sampling model that employs structural plasticity to learn a function or task. The numerical simulation of the model requires updating the synapse variables in each time step, including an explorative random term. To the best of our knowledge, this is the most complex synapse model implemented so far on the SpiNNaker system. By making efficient use of the hardware accelerators and numerical optimizations, the computation time of one plasticity update is reduced by a factor of 2. This, combined with fitting the model into the local static random access memory (SRAM), leads to a 62% energy reduction compared to the case without accelerators and the use of external dynamic random access memory (DRAM). The model implementation is integrated into the SpiNNaker software framework, allowing for scalability onto larger systems. The hardware-software system presented in this paper paves the way for power-efficient mobile and biomedical applications with biologically plausible brain-inspired algorithms.


Subjects
Brain/physiology; Machine Learning; Neurological Models; Neural Networks (Computer); Software; Synapses/physiology; Humans
16.
Front Neurosci ; 12: 840, 2018.
Article in English | MEDLINE | ID: mdl-30505263

ABSTRACT

The memory requirement of deep learning algorithms is considered incompatible with the memory restrictions of energy-efficient hardware. A low memory footprint can be achieved by pruning obsolete connections or reducing the precision of connection strengths after the network has been trained. Yet these techniques are not applicable when neural networks have to be trained directly on hardware under such hard memory constraints. Deep Rewiring (DEEP R) is a training algorithm which continuously rewires the network while preserving very sparse connectivity all along the training procedure. We apply DEEP R to a deep neural network implementation on a prototype chip of the 2nd-generation SpiNNaker system. The local memory of a single core on this chip is limited to 64 KB, and a deep network architecture is trained entirely within this constraint without the use of external memory. Throughout training, the proportion of active connections is limited to 1.3%. On the handwritten digits dataset MNIST, this extremely sparse network achieves 96.6% classification accuracy at convergence. Utilizing the multi-processor feature of the SpiNNaker system, we found very good scaling in terms of computation time, per-core memory consumption, and energy constraints. Compared to an x86 CPU implementation, neural network training on the SpiNNaker 2 prototype reduces power and energy consumption by two orders of magnitude.
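One DEEP R training step can be summarized as: take a noisy gradient step with an L1 term on the active connections, set dormant any connection whose parameter crosses zero, and re-activate the same number of randomly chosen dormant connections, so connectivity stays fixed and sparse throughout training. The sketch below follows this published scheme, but with assumed hyperparameters and a stand-in gradient.

```python
# One DEEP R step: noisy gradient + L1 on active connections, prune sign
# changes, regrow the same number of random dormant connections.
import numpy as np

rng = np.random.default_rng(6)
n_params, k_active = 10000, 130      # ~1.3% active connections
theta = np.abs(rng.normal(0, 0.1, n_params))
active = np.zeros(n_params, bool)
active[rng.choice(n_params, k_active, replace=False)] = True
eta, alpha, T = 0.05, 1e-4, 1e-6     # lr, L1 strength, noise temperature

def deep_r_step(theta, active, grad):
    noise = np.sqrt(2 * eta * T) * rng.normal(size=active.sum())
    theta[active] += -eta * (grad[active] + alpha) + noise
    newly_dormant = active & (theta < 0)          # sign change -> prune
    active[newly_dormant] = False
    n_grow = newly_dormant.sum()                  # keep #connections constant
    grow = rng.choice(np.flatnonzero(~active), n_grow, replace=False)
    active[grow] = True
    theta[grow] = 1e-12                           # re-activate at ~zero weight
    return theta, active

grad = rng.normal(0, 0.1, n_params)               # stand-in for a loss gradient
theta, active = deep_r_step(theta, active, grad)
```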

17.
eNeuro ; 5(2), 2018.
Article in English | MEDLINE | ID: mdl-29696150

ABSTRACT

Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous, synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After good computational performance has been reached, these changes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.


Subjects
Connectome; Theoretical Models; Nerve Net/physiology; Neural Networks (Computer); Neuronal Plasticity/physiology; Reward; Synapses/physiology; Animals; Computer Simulation; Humans
18.
J Neurosci ; 37(35): 8511-8523, 2017 08 30.
Article in English | MEDLINE | ID: mdl-28760861

ABSTRACT

Cortical microcircuits are very complex networks, but they are composed of a relatively small number of stereotypical motifs. Hence, one strategy for throwing light on the computational function of cortical microcircuits is to analyze emergent computational properties of these stereotypical microcircuit motifs. We address here the question of how spike-timing-dependent plasticity shapes the computational properties of one motif that has frequently been studied experimentally: interconnected populations of pyramidal cells and parvalbumin-positive inhibitory cells in layer 2/3. Experimental studies suggest that these inhibitory neurons exert some form of divisive inhibition on the pyramidal cells. We show that this data-based form of feedback inhibition, which is softer than that of winner-take-all models that are commonly considered in theoretical analyses, contributes to the emergence of an important computational function through spike-timing-dependent plasticity: the capability to disentangle superimposed firing patterns in upstream networks, and to represent their information content through a sparse assembly code.

SIGNIFICANCE STATEMENT: We analyze emergent computational properties of a ubiquitous cortical microcircuit motif: populations of pyramidal cells that are densely interconnected with inhibitory neurons. Simulations of this model predict that sparse assembly codes emerge in this microcircuit motif under spike-timing-dependent plasticity. Furthermore, we show that different assemblies will represent different hidden sources of upstream firing activity. Hence, we propose that spike-timing-dependent plasticity enables this microcircuit motif to perform a fundamental computational operation on neural activity patterns.


Subjects
Action Potentials/physiology; Physiological Feedback/physiology; Neurological Models; Nerve Net/physiology; Neural Inhibition/physiology; Neuronal Plasticity/physiology; Pyramidal Cells/physiology; Computer Simulation; Synaptic Transmission/physiology
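A rate-level caricature of the motif: pyramidal activations are divisively normalized by pooled inhibition (softer than winner-take-all), and an STDP-like rule moves each neuron's weights toward the inputs it responds to, so different neurons come to represent different hidden sources. The rate-based rule below only gestures at the spiking model of the paper; all constants are assumptions.

```python
# Divisive feedback inhibition plus an STDP-like rule, in rate form:
# neurons specialize on distinct hidden sources of the input.
import numpy as np

rng = np.random.default_rng(7)
n_in, n_pyr, steps = 50, 10, 3000
W = rng.random((n_pyr, n_in)) * 0.1
sources = rng.random((3, n_in)) < 0.2            # 3 hidden binary patterns

for _ in range(steps):
    x = sources[rng.integers(3)].astype(float)   # sample one hidden source
    u = W @ x                                    # feedforward drive
    r = u / (1.0 + u.sum())                      # divisive feedback inhibition
    W += 0.01 * r[:, None] * (x - W)             # STDP-like: move rows toward x
# Each row of W drifts toward one source pattern: a sparse assembly code.
```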
19.
Nat Commun ; 7: 12611, 2016 Sep 29.
Article in English | MEDLINE | ID: mdl-27681181

ABSTRACT

In an increasingly data-rich world, the need for computing systems that can not only process but ideally also interpret big data is becoming ever more pressing. Brain-inspired concepts have shown great promise towards addressing this need. Here we demonstrate unsupervised learning in a probabilistic neural network that utilizes metal-oxide memristive devices as multi-state synapses. Our approach can be exploited for processing unlabelled data and can adapt to time-varying clusters that underlie incoming data by supporting the capability of reversible unsupervised learning. The potential of this work is showcased through the demonstration of successful learning in the presence of corrupted input data and probabilistic neurons, thus paving the way towards robust big-data processors.

20.
PLoS Comput Biol ; 11(11): e1004485, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26545099

ABSTRACT

General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference. But a model for that has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains from a functional perspective a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared to be quite puzzling.


Subjects
Neurological Models; Nerve Net/physiology; Neuronal Plasticity/physiology; Neurons/physiology; Action Potentials/physiology; Bayes Theorem; Computational Biology; Computer Simulation