Results 1 - 6 of 6
1.
Biol Cybern; 117(4-5): 345-361, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37589728

ABSTRACT

The ability to sequentially learn multiple tasks without forgetting is a key skill of biological brains, whereas it represents a major challenge to the field of deep learning. To avoid catastrophic forgetting, various continual learning (CL) approaches have been devised. However, these usually require discrete task boundaries. This requirement seems biologically implausible and often limits the application of CL methods in the real world, where tasks are not always well defined. Here, we take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent catastrophic forgetting. As in the brain, we argue that these sparse representations should be chosen on the basis of feedforward (stimulus-specific) as well as top-down (context-specific) information. To implement such selective sparsity, we use a bio-plausible form of hierarchical credit assignment known as Deep Feedback Control (DFC) and combine it with a winner-take-all sparsity mechanism. In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations. We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation. Our method achieves performance similar to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries. Overall, we showcase the idea of adopting computational principles from the brain to derive new, task-free learning algorithms for CL.


Subjects
Algorithms, Neural Networks (Computer), Brain, Neurons/physiology, Feedback
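As a concrete illustration of the winner-take-all sparsity mechanism mentioned in the abstract above, the Python sketch below keeps only the k most active units per sample. It is a minimal toy, not the authors' Deep Feedback Control implementation: the lateral recurrent connections and the DFC credit-assignment scheme are not reproduced, and the function name and the value of k are illustrative assumptions.

    import numpy as np

    def k_winner_take_all(h, k):
        """Zero out all but the k largest activations in each row of h."""
        # Threshold each row at its k-th largest value.
        thresh = np.partition(h, -k, axis=-1)[..., -k, None]
        return np.where(h >= thresh, h, 0.0)

    rng = np.random.default_rng(0)
    hidden = rng.normal(size=(4, 100))          # batch of 4 samples, 100 hidden units
    sparse_hidden = k_winner_take_all(hidden, k=10)
    print((sparse_hidden != 0).sum(axis=-1))    # -> [10 10 10 10] (barring ties)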
2.
Science; 384(6693): 338-343, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38635709

ABSTRACT

The computational capabilities of neuronal networks are fundamentally constrained by their specific connectivity. Previous studies of cortical connectivity have mostly been carried out in rodents; whether the principles established therein also apply to the evolutionarily expanded human cortex is unclear. We studied network properties within the human temporal cortex using samples obtained from brain surgery. We analyzed multineuron patch-clamp recordings from layer 2-3 pyramidal neurons and identified substantial differences compared with rodents. Reciprocity followed a random distribution, synaptic strength was independent of connection probability, and the connectivity of the supragranular temporal cortex followed a directed and mostly acyclic graph topology. Applying these principles in neuronal models increased the dimensionality of network dynamics, suggesting that this connectivity structure plays a critical role in cortical computation.


Subjects
Nerve Net, Pyramidal Cells, Synapses, Temporal Lobe, Animals, Humans, Nerve Net/physiology, Nerve Net/ultrastructure, Pyramidal Cells/physiology, Pyramidal Cells/ultrastructure, Rodents, Synapses/physiology, Synapses/ultrastructure, Temporal Lobe/physiology, Patch-Clamp Techniques
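The two connectivity measures highlighted in the abstract above, reciprocity and (near-)acyclicity, can be computed directly from a binary directed connectivity matrix. The Python sketch below does so for random placeholder data; it is not the authors' analysis pipeline, and the network size, connection probability, and out-degree ordering heuristic are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 0.1                            # assumed network size and connection probability
    A = (rng.random((n, n)) < p).astype(int)  # A[i, j] = 1 if neuron i connects to neuron j
    np.fill_diagonal(A, 0)

    # Reciprocity: fraction of existing connections whose reverse connection also exists.
    n_edges = A.sum()
    reciprocity = (A * A.T).sum() / n_edges
    print(f"reciprocity = {reciprocity:.3f} (chance level ~ p = {p})")

    # "Mostly acyclic": order neurons (here by out-degree, a simple heuristic) and count
    # edges pointing against that order; a directed acyclic graph admits an order with none.
    order = np.argsort(-A.sum(axis=1))
    rank = np.empty(n, dtype=int)
    rank[order] = np.arange(n)
    src, dst = np.nonzero(A)
    backward = int((rank[src] > rank[dst]).sum())
    print(f"{backward} of {n_edges} edges point against the out-degree ordering")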
3.
Front Comput Neurosci; 17: 1136010, 2023.
Article in English | MEDLINE | ID: mdl-37293353

ABSTRACT

A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple levels of abstraction. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may also be present in biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and thus alternative biologically plausible training methods have been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of these models propose that local errors are calculated for each neuron by comparing apical and somatic activities. However, from a neuroscience perspective, it is not clear how a neuron could compare compartmental signals. Here, we propose a solution to this problem: we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions, the inference latency and the amount of top-down feedback necessary, which we show to be equivalent to the error-based losses used in machine learning. Moreover, we show that differential Hebbian updates work similarly well in other feedback-based deep learning frameworks such as Predictive Coding and Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that would explain how temporal Hebbian learning rules can implement supervised hierarchical learning.
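A minimal Python toy of the differential Hebbian idea discussed above, under the stated assumption that apical feedback simply nudges the postsynaptic rate toward a target: the rate change caused by the feedback is then proportional to the error, so the pre-times-rate-change update behaves like a delta rule. This is an illustration of the principle only, not the paper's hierarchical model; the single-neuron setup, feedback gain, and learning rate are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in = 20
    w = rng.normal(scale=0.1, size=n_in)   # feedforward weights onto one neuron
    x = rng.random(n_in)                   # presynaptic firing rates
    target = 1.0                           # desired postsynaptic rate
    g, eta = 0.5, 0.02                     # feedback gain and learning rate (assumed)

    for trial in range(200):
        r_ff = w @ x                       # somatic rate from feedforward input alone
        r_fb = r_ff + g * (target - r_ff)  # rate after the apical feedback nudge
        dr = r_fb - r_ff                   # change of the postsynaptic rate
        w += eta * x * dr                  # differential Hebbian update: pre * rate change

    print(f"rate without feedback after learning: {w @ x:.3f} (target {target})")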

4.
iScience; 23(9): 101440, 2020 Sep 25.
Article in English | MEDLINE | ID: mdl-32827856

ABSTRACT

As one of the most important paradigms of recurrent neural networks, the echo state network (ESN) has been applied to a wide range of fields, from robotics to medicine, finance, and language processing. A key feature of the ESN paradigm is its reservoir: a directed and weighted network of neurons that projects the input time series into a high-dimensional space where linear regression or classification can be applied. By analyzing the dynamics of the reservoir, we show that the ensemble of eigenvalues of the network contributes to the ESN memory capacity. Moreover, we find that adding short loops to the reservoir network can tailor the ESN to specific tasks and optimize learning. We validate our findings by applying ESNs to forecast both synthetic and real benchmark time series. Our results provide a simple way to design task-specific ESNs and offer insights into other recurrent neural networks.
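For readers unfamiliar with the paradigm, here is a generic echo state network sketch in Python (not the authors' code): a random reservoir is rescaled via its eigenvalues to a chosen spectral radius, driven by a toy input, and read out with ridge regression for one-step-ahead forecasting. The eigenvalue-versus-memory analysis and the short-loop modifications studied in the paper are not reproduced; the sizes, the spectral radius of 0.9, and the sine-wave data are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(2)
    n_res, n_steps = 200, 1000
    u = np.sin(0.2 * np.arange(n_steps + 1))          # toy input time series

    W = rng.normal(size=(n_res, n_res)) / np.sqrt(n_res)
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()     # rescale to spectral radius 0.9
    W_in = rng.uniform(-0.5, 0.5, size=n_res)

    # Drive the reservoir and collect its states.
    states = np.zeros((n_steps, n_res))
    xs = np.zeros(n_res)
    for t in range(n_steps):
        xs = np.tanh(W @ xs + W_in * u[t])
        states[t] = xs

    # Ridge-regression readout predicting the next input value.
    washout, lam = 100, 1e-6
    S, y = states[washout:], u[washout + 1 : n_steps + 1]
    W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ y)
    print("training MSE:", np.mean((S @ W_out - y) ** 2))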

5.
Front Neurosci; 14: 420, 2020.
Article in English | MEDLINE | ID: mdl-32528239

ABSTRACT

Precise spike timing and temporal coding are used extensively within the nervous system of insects and in the sensory periphery of higher-order animals. However, conventional Artificial Neural Networks (ANNs) and machine learning algorithms cannot take advantage of this coding strategy, due to their rate-based representation of signals. Even in the case of artificial Spiking Neural Networks (SNNs), identifying applications where temporal coding outperforms the rate coding strategies of ANNs is still an open challenge. Neuromorphic sensory-processing systems provide an ideal context for exploring the potential advantages of temporal coding, as they are able to efficiently extract the information required to cluster or classify spatio-temporal activity patterns from relative spike timing. Here, we propose a neuromorphic model inspired by the sand scorpion to explore the benefits of temporal coding, and validate it in an event-based sensory-processing task. The task consists of localizing a target using only the relative spike timing of eight spatially separated vibration sensors. We propose two different approaches in which the SNN learns to cluster spatio-temporal patterns in an unsupervised manner, and we demonstrate how the task can be solved both analytically and through numerical simulation of multiple SNN models. We argue that the models presented are optimal for spatio-temporal pattern classification using precise spike timing in a task that could serve as a standard benchmark for evaluating event-based sensory-processing models based on temporal coding.
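The analytic route mentioned in the abstract can be illustrated with a simple Python toy (not the paper's SNN models): for eight sensors evenly spaced on a circle and a plane surface wave of known speed, the relative arrival times form a sinusoid in sensor angle, so the source direction follows from a single projection. The sensor radius, wave speed, and timing jitter below are assumed values.

    import numpy as np

    n_sensors, radius, v = 8, 0.025, 50.0        # 8 sensors on a 2.5 cm circle, 50 m/s wave
    angles = 2 * np.pi * np.arange(n_sensors) / n_sensors

    def arrival_times(source_dir, jitter, rng):
        """Relative arrival time at each sensor for a plane wave from source_dir."""
        t = -(radius / v) * np.cos(angles - source_dir)
        return t + rng.normal(scale=jitter, size=n_sensors)

    def estimate_direction(t):
        """Project the arrival-time pattern onto cos/sin of the sensor angles."""
        return np.arctan2(-(t * np.sin(angles)).sum(), -(t * np.cos(angles)).sum())

    rng = np.random.default_rng(3)
    true_dir = np.deg2rad(130.0)
    t = arrival_times(true_dir, jitter=1e-5, rng=rng)   # ~10 microseconds of timing jitter
    est = np.rad2deg(estimate_direction(t)) % 360
    print(f"true direction 130.0 deg, estimated {est:.1f} deg")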

6.
Phys Rev E; 100(1-1): 010302, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31499759

ABSTRACT

The celebrated elliptic law describes the distribution of eigenvalues of random matrices with correlations between off-diagonal pairs of elements, with applications to a wide range of physical and biological systems. Here, we investigate the generalization of this law to random matrices exhibiting higher-order cyclic correlations among k-tuples of matrix entries. We show that the eigenvalue spectrum in this ensemble is bounded by a hypotrochoid curve with k-fold rotational symmetry. This hypotrochoid law applies to full matrices as well as sparse ones, and therefore holds with remarkable universality. We further extend our analysis to matrices and graphs with competing cycle motifs, which are described more generally by polytrochoid spectral boundaries.
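As a numerical baseline only: the k = 2 case of cyclic correlations is the classical elliptic law cited at the start of the abstract, and it is easy to check in Python that a correlation tau between A_ij and A_ji squeezes the circular eigenvalue support into an ellipse with semi-axes 1 + tau and 1 - tau. The higher-order (k >= 3) ensembles and the hypotrochoid boundary itself are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(4)
    N, tau = 500, 0.5

    X, Y = rng.normal(size=(2, N, N))
    S = (X + X.T) / np.sqrt(2)                 # symmetric part
    T = (Y - Y.T) / np.sqrt(2)                 # antisymmetric part
    A = np.sqrt((1 + tau) / 2) * S + np.sqrt((1 - tau) / 2) * T   # corr(A_ij, A_ji) = tau

    eigs = np.linalg.eigvals(A / np.sqrt(N))
    print("max |Re(lambda)|:", np.abs(eigs.real).max(), " (elliptic-law semi-axis ~", 1 + tau, ")")
    print("max |Im(lambda)|:", np.abs(eigs.imag).max(), " (elliptic-law semi-axis ~", 1 - tau, ")")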
