Results 1 - 20 of 56
1.
Curr Opin Neurobiol ; 83: 102809, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37980802

ABSTRACT

Theories and models of working memory (WM) have, at least since the mid-1990s, been dominated by the persistent activity hypothesis. The past decade has seen rising concerns about the shortcomings of sustained activity as the mechanism for short-term maintenance of WM information, in light of accumulating experimental evidence for so-called activity-silent WM and the fundamental difficulty of explaining robust multi-item WM. In consequence, alternative theories are now being explored, mostly in the direction of fast synaptic plasticity as the underlying mechanism. The question of non-Hebbian vs. Hebbian synaptic plasticity arises naturally in this context. In this review, we focus on fast Hebbian plasticity and trace the origins of WM theories and models building on this form of associative learning.


Subject(s)
Short-Term Memory, Neuronal Plasticity
2.
IEEE Trans Biomed Circuits Syst ; 17(5): 1153-1165, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37390002

ABSTRACT

The memristor has been extensively used to facilitate synaptic online learning of brain-inspired spiking neural networks (SNNs). However, current memristor-based work cannot support the widely used yet sophisticated trace-based learning rules, including trace-based Spike-Timing-Dependent Plasticity (STDP) and the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. This paper proposes a learning engine consisting of memristor-based blocks and analog computing blocks to implement trace-based online learning. The memristor mimics the synaptic trace dynamics by exploiting the nonlinear physical properties of the device. The analog computing blocks perform the addition, multiplication, logarithmic, and integral operations. By organizing these building blocks, a reconfigurable learning engine is architected and realized to simulate the STDP and BCPNN online learning rules, using memristors and 180 nm analog CMOS technology. The results show that the proposed learning engine achieves an energy consumption of 10.61 pJ and 51.49 pJ per synaptic update for the STDP and BCPNN learning rules, respectively, amounting to a 147.03× and 93.61× reduction compared to the 180 nm ASIC counterparts, and a 9.39× and 5.63× reduction compared to the 40 nm ASIC counterparts. Compared with the state-of-the-art work of Loihi and eBrainII, the learning engine reduces the energy per synaptic update by 11.31× and 13.13× for the trace-based STDP and BCPNN learning rules, respectively.
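For readers unfamiliar with trace-based rules, the following is a minimal sketch of a trace-based STDP update; it is the exponential decay of the x and y traces that the paper emulates with memristor conductance dynamics. All constants and names are illustrative, not taken from the paper.

```python
import math

TAU_PRE = TAU_POST = 20e-3     # trace time constants (s), assumed values
A_PLUS, A_MINUS = 0.01, 0.012  # learning amplitudes, assumed values

class TraceSynapse:
    """One plastic synapse with exponentially decaying pre/post traces."""

    def __init__(self):
        self.w = 0.5   # synaptic weight
        self.x = 0.0   # presynaptic trace (the memristor-emulated quantity)
        self.y = 0.0   # postsynaptic trace
        self.t = 0.0   # time of the last processed event

    def _decay_to(self, t):
        dt = t - self.t
        self.x *= math.exp(-dt / TAU_PRE)
        self.y *= math.exp(-dt / TAU_POST)
        self.t = t

    def pre_spike(self, t):
        self._decay_to(t)
        self.w -= A_MINUS * self.y   # pre after post: depression
        self.x += 1.0

    def post_spike(self, t):
        self._decay_to(t)
        self.w += A_PLUS * self.x    # post after pre: potentiation
        self.y += 1.0
```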


Subject(s)
Distance Education, Bayes Theorem, Neural Networks (Computer), Learning, Brain
3.
eNeuro ; 2022 Jul 08.
Article in English | MEDLINE | ID: mdl-35803714

ABSTRACT

Episodic memory is the recollection of past personal experiences associated with particular times and places. This kind of memory is commonly subject to loss of contextual information or "semantization", which gradually decouples the encoded memory items from their associated contexts while transforming them into semantic or gist-like representations. Novel extensions to the classical Remember/Know behavioral paradigm attribute the loss of episodicity to multiple exposures of an item in different contexts. Despite recent advancements explaining semantization at a behavioral level, the underlying neural mechanisms remain poorly understood. In this study, we suggest and evaluate a novel hypothesis proposing that Bayesian-Hebbian synaptic plasticity mechanisms might cause semantization of episodic memory. We implement a cortical spiking neural network model with a Bayesian-Hebbian learning rule called the Bayesian Confidence Propagation Neural Network (BCPNN), which captures the semantization phenomenon and offers a mechanistic explanation for it. Encoding items across multiple contexts leads to item-context decoupling akin to semantization. We compare BCPNN plasticity with the more commonly used spike-timing-dependent plasticity (STDP) learning rule in the same episodic memory task. Unlike BCPNN, STDP does not explain the decontextualization process. We further examine how selective plasticity modulation of isolated salient events may enhance preferential retention and resistance to semantization. Our model reproduces important features of episodicity on behavioral timescales under various biological constraints while also offering a novel neural and synaptic explanation for semantization, thereby casting new light on the interplay between episodic and semantic memory processes.

Significance Statement: Remembering single episodes is a fundamental attribute of cognition. Difficulty recollecting contextual information is a key sign of episodic memory loss or semantization. Behavioral studies demonstrate that semantization of episodic memory can occur rapidly, yet the neural mechanisms underlying this effect are insufficiently investigated. In line with recent behavioral findings, we show that multiple stimulus exposures in different contexts may advance item-context decoupling. We suggest a Bayesian-Hebbian synaptic plasticity hypothesis of memory semantization and further show that a transient modulation of plasticity during salient events may disrupt the decontextualization process by strengthening memory traces and thus enhancing preferential retention. The proposed cortical network-of-networks model thus bridges micro- and mesoscale synaptic effects with network dynamics and behavior.

4.
Front Neurosci ; 15: 750458, 2021.
Article in English | MEDLINE | ID: mdl-34955716

ABSTRACT

The Bayesian Confidence Propagation Neural Network (BCPNN) has been implemented in a way that allows mapping to neural and synaptic processes in the human cortex and has been used extensively in detailed spiking models of cortical associative memory function, and recently also for machine learning applications. In conventional digital implementations of BCPNN, the von Neumann bottleneck is a major challenge, with synaptic storage and access to it as the dominant cost. The memristor is a non-volatile device ideal for artificial synapses that fuses computation and storage and thus fundamentally overcomes the von Neumann bottleneck. While the implementation of other neural networks, like Spiking Neural Networks (SNNs) and even Convolutional Neural Networks (CNNs), on memristors has been studied, the implementation of BCPNN has not. In this paper, the BCPNN learning rule is mapped to a memristor model and implemented with a memristor-based architecture. The implementation of the BCPNN learning rule is a mixed-signal design with the main computation and storage happening in the analog domain. In particular, the nonlinear dopant drift phenomenon of the memristor is exploited to simulate the exponential decay of the synaptic state variables in the BCPNN learning rule. The consistency between the memristor-based solution and the BCPNN learning rule is simulated and verified in Matlab, with a correlation coefficient as high as 0.99. The analog circuit is designed and implemented in the SPICE simulation environment, demonstrating good emulation of the BCPNN learning rule with a correlation coefficient as high as 0.98. This work focuses on demonstrating the feasibility of mapping the BCPNN learning rule to in-circuit computation in memristors. The feasibility of the memristor-based implementation is evaluated and validated in the paper, paving the way for a more efficient BCPNN implementation toward a real-time brain emulation engine.
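For reference, the synaptic state variables whose exponential decay the memristor emulates are the low-pass-filtered traces of the BCPNN learning rule. A compact restatement, with spike-scaling constants omitted and notation following the BCPNN literature:

```latex
\tau_z \frac{dZ_i}{dt} = S_i - Z_i, \qquad
\tau_e \frac{dE_{ij}}{dt} = Z_i Z_j - E_{ij}, \qquad
\tau_p \frac{dP_{ij}}{dt} = \kappa \left( E_{ij} - P_{ij} \right),
```

and analogously for the unit traces $E_i$, $P_i$, $E_j$, $P_j$; the weight and bias then read

```latex
w_{ij} = \log \frac{P_{ij}}{P_i \, P_j}, \qquad \beta_j = \log P_j .
```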

5.
Front Neurosci ; 14: 878, 2020.
Article in English | MEDLINE | ID: mdl-32982673

ABSTRACT

Simulation of large-scale biologically plausible spiking neural networks, e.g., the Bayesian Confidence Propagation Neural Network (BCPNN), usually requires high-performance supercomputers with dedicated accelerators, such as GPUs, FPGAs, or even Application-Specific Integrated Circuits (ASICs). Almost all of these computers are based on the von Neumann architecture, which separates storage and computation. In all these solutions, memory access is the dominant cost, even for highly customized computation and memory architectures such as ASICs. In this paper, we propose an optimization technique that makes the BCPNN simulation memory-access friendly by avoiding a dual-access pattern. The BCPNN synaptic traces and weights are organized as matrices accessed both row-wise and column-wise, and accessing data stored in DRAM with such a dual-access pattern is extremely expensive. A post-synaptic history buffer and an approximation function are therefore introduced to eliminate the troublesome column update, as sketched below. An error analysis combining theory and experiments suggests that the probability of introducing intolerable errors through this optimization can be bounded to a very small number, making it almost negligible; the derivation and validation of this bound is the core contribution of the paper. Experiments on a GPU platform show that, compared to the previously reported baseline simulation strategy, the proposed optimization technique reduces the storage requirement by 33%, the global memory access demand by more than 27%, and the DRAM access rate by more than 5%, while the latency of updating synaptic traces decreases by roughly 50%. Compared with a similar optimization technique reported in the literature, our method shows considerably better results. Although BCPNN is used as the target neural network model, the proposed optimization method can be applied to other artificial neural network models based on a Hebbian learning rule.
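A minimal sketch of the idea, with hypothetical names and a toy data layout: postsynaptic spikes are logged in a history buffer instead of triggering a column write, and are folded into a row the next time that row is updated. The paper's actual approximation function and buffer management are more elaborate.

```python
import numpy as np

N = 4                        # toy network size
TAU = 0.05                   # trace time constant (s), assumed
Z = np.zeros((N, N))         # synaptic traces, row-major (DRAM-friendly)
last_update = np.zeros(N)    # per-row time of the last row-wise update
post_history = []            # stands in for a bounded history buffer

def on_pre_spike(i, t):
    """Row-wise access only: decay row i, then replay the postsynaptic
    spikes that arrived since the row was last touched."""
    Z[i, :] *= np.exp(-(t - last_update[i]) / TAU)   # sequential row access
    for ts, j in post_history:
        if ts > last_update[i]:
            Z[i, j] += np.exp(-(t - ts) / TAU)       # deferred column event
    last_update[i] = t

def on_post_spike(j, t):
    """No column write into Z: just log the event for later replay."""
    post_history.append((t, j))
```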

6.
eNeuro ; 7(2)2020.
Article in English | MEDLINE | ID: mdl-32127347

ABSTRACT

Working memory (WM) is a key component of human memory and cognition. Computational models have been used to study the underlying neural mechanisms, but have neglected the important role of short-term memory (STM) and long-term memory (LTM) interactions for WM. Here, we investigate these using a novel multiarea spiking neural network model of prefrontal cortex (PFC) and two parietotemporal cortical areas based on macaque data. We propose a WM indexing theory that explains how PFC could associate, maintain, and update multimodal LTM representations. Our simulations demonstrate how simultaneous, brief multimodal memory cues could build a temporary joint memory representation as an "index" in PFC by means of fast Hebbian synaptic plasticity. This index can then reactivate spontaneously and thereby also the associated LTM representations. Cueing one LTM item rapidly pattern-completes the associated uncued item via PFC. The PFC-STM network updates flexibly as new stimuli arrive, thereby gradually overwriting older representations.


Subject(s)
Short-Term Memory, Neurological Models, Cognition, Humans, Neural Networks (Computer), Prefrontal Cortex
7.
J Comput Neurosci ; 47(2-3): 223-230, 2019 12.
Article in English | MEDLINE | ID: mdl-31502234

ABSTRACT

We present an electrophysiological model of double bouquet cells and integrate them into an established cortical columnar microcircuit model that has previously been used as a spiking attractor model for memory. Learning in that model relies on a Hebbian-Bayesian learning rule to condition recurrent connectivity between pyramidal cells. We here demonstrate that the inclusion of a biophysically plausible double bouquet cell model can solve earlier concerns about learning rules that simultaneously learn excitation and inhibition and might thus violate Dale's principle. We show that learning ability and resulting effective connectivity between functional columns of previous network models is preserved when pyramidal synapses onto double bouquet cells are plastic under the same Hebbian-Bayesian learning rule. The proposed architecture draws on experimental evidence on double bouquet cells and effectively solves the problem of duplexed learning of inhibition and excitation by replacing recurrent inhibition between pyramidal cells in functional columns of different stimulus selectivity with a plastic disynaptic pathway. We thus show that the resulting change to the microcircuit architecture improves the model's biological plausibility without otherwise impacting the model's spiking activity, basic operation, and learning abilities.


Subject(s)
Association Learning/physiology, Cerebral Cortex/physiology, Memory/physiology, Nerve Net/physiology, Neuronal Plasticity/physiology, Neurons/physiology, Bayes Theorem, Computer Simulation, Humans, Neurological Models
8.
PLoS One ; 14(8): e0220161, 2019.
Article in English | MEDLINE | ID: mdl-31369571

ABSTRACT

From memorizing a musical tune to navigating a well-known route, many of our behaviors have a strong temporal component. While the mechanisms behind the sequential nature of the underlying brain activity are likely multifarious and multi-scale, in this work we attempt to characterize to what degree some of these properties can be explained as a consequence of simple associative learning. To this end, we employ a parsimonious firing-rate attractor network equipped with the Hebbian-like Bayesian Confidence Propagating Neural Network (BCPNN) learning rule, which relies on synaptic traces with asymmetric temporal characteristics. The proposed network model is able to encode and reproduce temporal aspects of the input and offers internal control of the recall dynamics by gain modulation. We provide an analytical characterisation of the relationship between the structure of the weight matrix, the dynamical network parameters, and the temporal aspects of sequence recall. We also present a computational study of the performance of the system under the effects of noise for an extensive region of the parameter space. Finally, we show how the inclusion of modularity in our network structure facilitates the learning and recall of multiple overlapping sequences even in a noisy regime.
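As a toy illustration of how asymmetric trace time constants turn temporal order into weight asymmetry; the constants here are invented for the example, not taken from the paper:

```python
import math

TAU_SLOW, TAU_FAST = 0.150, 0.005   # asymmetric trace decays (s), assumed
ISI = 0.050                         # pattern A precedes pattern B by 50 ms

w_ab = math.exp(-ISI / TAU_SLOW)    # A's slow trace is still large when B fires
w_ba = math.exp(-ISI / TAU_FAST)    # the reverse pairing sees an almost-dead trace

print(f"w(A->B) = {w_ab:.3f}, w(B->A) = {w_ba:.2e}")
# w(A->B) ~ 0.717 dominates w(B->A) ~ 4.5e-5, so recall initiated at A
# flows forward to B rather than backward.
```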


Subject(s)
Association Learning/physiology, Bayes Theorem, Memory/physiology, Mental Recall, Neurological Models, Neural Networks (Computer), Action Potentials, Humans
9.
Cognition ; 178: 37-49, 2018 09.
Article in English | MEDLINE | ID: mdl-29763790

ABSTRACT

The olfactory sense is a particularly challenging domain for cognitive science investigations of perception, memory, and language. Although many studies show that odors often are difficult to describe verbally, little is known about the associations between olfactory percepts and the words that describe them. Quantitative models of how odor experiences are described in natural language are therefore needed to understand how odors are perceived and communicated. In this study, we develop a computational method to characterize the olfaction-related semantic content of words in a large text corpus of internet sites in English. We introduce two new metrics: olfactory association index (OAI, how strongly a word is associated with olfaction) and olfactory specificity index (OSI, how specific a word is in its description of odors). We validate the OAI and OSI metrics using psychophysical datasets by showing that terms with high OAI have high ratings of perceived olfactory association and are used to describe highly familiar odors. In contrast, terms with high OSI have high inter-individual consistency in how they are applied to odors. Finally, we analyze Dravnieks's (1985) dataset of odor ratings in terms of OAI and OSI. This analysis reveals that terms that are used broadly (applied often but with moderate ratings) tend to be olfaction-unrelated and abstract (e.g., "heavy" or "light"; low OAI and low OSI) while descriptors that are used selectively (applied seldom but with high ratings) tend to be olfaction-related (e.g., "vanilla" or "licorice"; high OAI). Thus, OAI and OSI provide behaviorally meaningful information about olfactory language. These statistical tools are useful for future studies of olfactory perception and cognition, and might help integrate research on odor perception, neuroimaging, and corpus-based linguistic models of semantic organization.
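The abstract does not spell out how OAI is computed; the sketch below is one plausible PMI-style realization under assumed inputs (a hypothetical seed set of olfaction terms plus corpus frequency and co-occurrence counts), intended only to make the notion of "association with olfaction" concrete.

```python
import math

def olfactory_association_index(word, seeds, freq, cooc, total):
    """Mean pointwise mutual information between `word` and a seed set of
    unambiguous olfaction terms ('odor', 'smell', ...); higher values mean
    the word co-occurs with smell vocabulary more often than chance predicts."""
    pmis = []
    for s in seeds:
        p_joint = cooc.get(frozenset((word, s)), 0) / total
        if p_joint > 0:
            p_word, p_seed = freq[word] / total, freq[s] / total
            pmis.append(math.log2(p_joint / (p_word * p_seed)))
    return sum(pmis) / len(pmis) if pmis else float("-inf")
```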


Subject(s)
Linguistics, Odorants, Olfactory Perception, Psychophysics, Humans, Recognition (Psychology), Semantics, Smell
10.
J Neurosci ; 37(1): 83-96, 2017 01 04.
Article in English | MEDLINE | ID: mdl-28053032

ABSTRACT

A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain WM items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism. SIGNIFICANCE STATEMENT: Working memory (WM) is a key component of cognition. Hypotheses about the neural mechanism behind WM are currently under revision. Reflecting recent findings of fast Hebbian synaptic plasticity in cortex, we test whether a cortical spiking neural network model with such a mechanism can learn a multi-item WM task (word list learning). We show that our model can reproduce human cognitive phenomena and achieve comparable memory performance in both free and cued recall while being simultaneously compatible with experimental data on structure, connectivity, and neurophysiology of the underlying cortical tissue. These findings are directly relevant to the ongoing paradigm shift in the WM field.


Subject(s)
Short-Term Memory/physiology, Neurological Models, Neuronal Plasticity/physiology, Computer Simulation, Electrophysiological Phenomena/physiology, Humans, Mental Recall, Neural Networks (Computer), Pyramidal Cells/physiology, Verbal Learning
11.
Front Neural Circuits ; 10: 53, 2016.
Article in English | MEDLINE | ID: mdl-27493625

ABSTRACT

The brain enables animals to behaviorally adapt in order to survive in a complex and dynamic environment, but how reward-oriented behaviors are achieved and computed by its underlying neural circuitry is an open question. To address this question, we have developed a spiking model of the basal ganglia (BG) that learns to disinhibit the action leading to a reward despite ongoing changes in the reward schedule. The architecture of the network features the two pathways commonly described in the BG, the direct (denoted D1) and the indirect (denoted D2) pathway, as well as a loop involving the striatum and the dopaminergic system. The activity of these dopaminergic neurons conveys the reward prediction error (RPE), which determines the magnitude of synaptic plasticity within the different pathways. All plastic connections implement a versatile four-factor learning rule derived from Bayesian inference that depends upon pre- and post-synaptic activity, receptor type, and dopamine level. Synaptic weight updates occur in the D1 or D2 pathways depending on the sign of the RPE, and an efference copy informs upstream nuclei about the action selected. We demonstrate successful performance of the system in a multiple-choice learning task with a transiently changing reward schedule. We simulate lesioning of the various pathways and show that a condition without the D2 pathway fares worse than one without D1. Additionally, we simulate the degeneration observed in Parkinson's disease (PD) by decreasing the number of dopaminergic neurons during learning. The results suggest that the D1 pathway impairment in PD might have been overlooked. Furthermore, an analysis of the alterations in the synaptic weights shows that using the absolute reward value instead of the RPE leads to a larger change in D1.
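A schematic of the sign-dependent routing described above, with placeholder amplitudes; the model's actual rule is the Bayesian four-factor rule, which additionally depends on receptor type and dopamine level.

```python
def update_pathways(w_d1, w_d2, pre, post, rpe, lr=0.1):
    """Update one corticostriatal synapse pair given the reward prediction
    error (RPE): positive RPE reinforces the direct (D1, 'Go') pathway,
    negative RPE the indirect (D2, 'NoGo') pathway."""
    hebb = pre * post              # pre/post coincidence factor
    if rpe > 0:                    # outcome better than predicted
        w_d1 += lr * rpe * hebb
    elif rpe < 0:                  # outcome worse than predicted
        w_d2 += lr * (-rpe) * hebb
    return w_d1, w_d2
```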


Subject(s)
Basal Ganglia/physiology, Neural Networks (Computer), Neural Pathways/physiology, Neuronal Plasticity/physiology, Parkinson Disease/physiopathology, Reinforcement (Psychology), Reward, Humans
12.
PLoS Comput Biol ; 12(5): e1004954, 2016 05.
Article in English | MEDLINE | ID: mdl-27213810

ABSTRACT

Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depends on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.


Subject(s)
Learning/physiology, Neurological Models, Action Potentials/physiology, Animals, Bayes Theorem, Computational Biology, Humans, N-Methylaspartate/metabolism, Neocortex/cytology, Neocortex/physiology, Neural Networks (Computer), Neuronal Plasticity/physiology, alpha-Amino-3-hydroxy-5-methyl-4-isoxazolepropionic Acid/metabolism
13.
Front Neuroanat ; 10: 37, 2016.
Article in English | MEDLINE | ID: mdl-27092061

ABSTRACT

SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.

14.
PLoS Comput Biol ; 11(12): e1004584, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26657024

ABSTRACT

Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks, or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphologies and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
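In spirit, such a proxy reduces to a delayed, weighted combination of the population synaptic currents, as in the sketch below; the weight, delay, and sign convention here are placeholders, not the fitted values from the paper.

```python
import numpy as np

def lfp_proxy(i_ampa, i_gaba, dt=1e-4, w_gaba=1.65, delay_s=6e-3):
    """Fixed linear combination of rectified population AMPA and GABA
    currents, with the AMPA component delayed (all values assumed)."""
    lag = int(round(delay_s / dt))
    ampa = np.abs(np.roll(i_ampa, lag))
    ampa[:lag] = 0.0                      # discard wrapped-around samples
    return ampa - w_gaba * np.abs(i_gaba)
```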


Subject(s)
Action Potentials/physiology, Brain Mapping/methods, Brain/physiology, Neurological Models, Nerve Net/physiology, Neurons/physiology, Computer Simulation, Electromagnetic Fields, Humans, Membrane Potentials/physiology, Synaptic Transmission/physiology
15.
Article in English | MEDLINE | ID: mdl-26528143

ABSTRACT

Although the importance of long-range connections for cortical information processing has been acknowledged for a long time, most studies have focused on the long-range interactions between excitatory cortical neurons. Inhibitory interneurons play an important role in cortical computation and have thus far been studied mainly with respect to their local synaptic interactions within the cortical microcircuitry. A recent study showed that long-range excitatory connections onto Martinotti cells (MC) mediate surround suppression. Here we have extended our previously reported attractor network of pyramidal cells (PC) and MC by introducing long-range connections targeting MC. We have demonstrated how the network with Martinotti cell-mediated long-range inhibition gives rise to surround suppression and also promotes saliency of locations at which simple non-uniformities in the stimulus field are introduced. Furthermore, our analysis suggests that the presynaptic dynamics of MC are only ancillary to their orientation tuning in enabling saliency detection in the network. Lastly, we have also implemented a disinhibitory pathway mediated by another interneuron type (VIP interneurons), which inhibits MC and abolishes surround suppression.


Subject(s)
Cerebral Cortex/physiology, Interneurons/physiology, Neural Inhibition/physiology, Neural Networks (Computer), Pyramidal Cells/physiology
16.
Neuron ; 88(1): 33-46, 2015 Oct 07.
Article in English | MEDLINE | ID: mdl-26447571

ABSTRACT

A crucial role for working memory in temporary information processing and guidance of complex behavior has been recognized for many decades. There is emerging consensus that working-memory maintenance results from the interactions among long-term memory representations and basic processes, including attention, that are instantiated as reentrant loops between frontal and posterior cortical areas, as well as sub-cortical structures. The nature of such interactions can account for capacity limitations, lifespan changes, and restricted transfer after working-memory training. Recent data and models indicate that working memory may also be based on synaptic plasticity and that working memory can operate on non-consciously perceived information.


Subject(s)
Brain/physiology, Long-Term Memory/physiology, Short-Term Memory/physiology, Frontal Lobe/physiology, Humans, Neuronal Plasticity/physiology, Parietal Lobe/physiology
17.
Front Neurosci ; 9: 2, 2015.
Article in English | MEDLINE | ID: mdl-25657618

ABSTRACT

The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule, the presynaptic, postsynaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule, as sketched below. Further speedup is achieved first by rewriting the model to halve the number of basic arithmetic operations per update, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow real-time simulation of a reduced cortex model based on BCPNN in high-performance computing. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.
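A minimal sketch of the event-driven idea under assumed constants: since every trace decays exponentially between spikes, it can be advanced in closed form at event times, with a look-up table serving the common short inter-event intervals.

```python
import math

TAU = 0.05                      # trace time constant (s), assumed
LUT_DT, LUT_MAX = 1e-4, 1.0     # table resolution and range (s), assumed
EXP_LUT = [math.exp(-k * LUT_DT / TAU) for k in range(int(LUT_MAX / LUT_DT))]

def advance(trace, dt):
    """Closed-form decay trace * exp(-dt / TAU), served from the table
    when possible instead of stepping with fixed-step Euler."""
    k = int(dt / LUT_DT)
    if k < len(EXP_LUT):
        return trace * EXP_LUT[k]
    return trace * math.exp(-dt / TAU)   # rare long gaps: compute directly
```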

18.
PLoS One ; 9(10): e108590, 2014.
Article in English | MEDLINE | ID: mdl-25303102

ABSTRACT

Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.


Subject(s)
Computer Simulation, Computer Systems, Neural Networks (Computer), Computers, Equipment Design, Neurological Models, Neurons/physiology, Software
19.
Article in English | MEDLINE | ID: mdl-25071536

ABSTRACT

Declarative long-term memories are not created in an instant. The gradual stabilization and temporally shifting dependence of acquired declarative memories in different brain regions, called systems consolidation, can be tracked in time by lesion experiments. The observation of temporally graded retrograde amnesia (RA) following hippocampal lesions points to a gradual transfer of memory from hippocampus to neocortical long-term memory. Spontaneous reactivations of hippocampal memories, as observed in place-cell reactivations during slow-wave sleep, are thought to drive neocortical reinstatements and facilitate this process. We propose a functional neural network implementation of these ideas and furthermore suggest an extended three-state framework that includes the prefrontal cortex (PFC). It bridges the temporal chasm between working memory percepts on the scale of seconds and consolidated long-term memory on the scale of weeks or months. We show that our three-stage model can autonomously produce the necessary stochastic reactivation dynamics for successful episodic memory consolidation. The resulting learning system is shown to exhibit classical memory effects seen in experimental studies, such as retrograde and anterograde amnesia (AA) after simulated hippocampal lesioning; furthermore, the model reproduces peculiar biological findings on memory modulation, such as retrograde facilitation of memory after suppressed acquisition of new long-term memories, similar to the effects of benzodiazepines on memory.

20.
Article in English | MEDLINE | ID: mdl-24782758

ABSTRACT

Learning and memory operations in neural circuits are believed to involve molecular cascades of synaptic and nonsynaptic changes that lead to a diverse repertoire of dynamical phenomena at higher levels of processing. Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability all conspire to form and maintain memories. But it is still unclear how these seemingly redundant mechanisms could jointly orchestrate learning in a more unified system. To this end, a Hebbian learning rule for spiking neurons inspired by Bayesian statistics is proposed. In this model, synaptic weights and intrinsic currents are adapted on-line upon arrival of single spikes, which initiate a cascade of temporally interacting memory traces that locally estimate probabilities associated with relative neuronal activation levels. Trace dynamics enable synaptic learning to readily demonstrate a spike-timing dependence, stably return to a set-point over long time scales, and remain competitive despite this stability. Beyond unsupervised learning, linking the traces with an external plasticity-modulating signal enables spike-based reinforcement learning. At the postsynaptic neuron, the traces are represented by an activity-dependent ion channel that is shown to regulate the input received by a postsynaptic cell and generate intrinsic graded persistent firing levels. We show how spike-based Hebbian-Bayesian learning can be performed in a simulated inference task using integrate-and-fire (IAF) neurons that are Poisson-firing and background-driven, similar to the preferred regime of cortical neurons. Our results support the view that neurons can represent information in the form of probability distributions, and that probabilistic inference could be a functional by-product of coupled synaptic and nonsynaptic mechanisms operating over several timescales. The model provides a biophysical realization of Bayesian computation by reconciling several observed neural phenomena whose functional effects are only partially understood in concert.
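A minimal numeric illustration of the Hebbian-Bayesian mapping from locally estimated probabilities to synaptic weights and intrinsic biases; the epsilon guarding the logarithms is an assumed regularization value.

```python
import math

EPS = 1e-4   # assumed lower bound keeping the logarithms finite

def bcpnn_weight(p_i, p_j, p_ij):
    """w_ij = log(P_ij / (P_i * P_j)): positive for correlated units,
    near zero for independent units, negative for anti-correlated ones."""
    return math.log((p_ij + EPS * EPS) / ((p_i + EPS) * (p_j + EPS)))

def bcpnn_bias(p_j):
    """beta_j = log(P_j): the intrinsic excitability of unit j."""
    return math.log(p_j + EPS)

print(round(bcpnn_weight(0.2, 0.3, 0.06), 3))   # independent units -> ~0.0
```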
