Results 1 - 5 of 5
1.
Proc Natl Acad Sci U S A ; 120(34): e2219150120, 2023 08 22.
Article in English | MEDLINE | ID: mdl-37579149

ABSTRACT

Glial cells account for between 50% and 90% of all human brain cells, and serve a variety of important developmental, structural, and metabolic functions. Recent experimental efforts suggest that astrocytes, a type of glial cell, are also directly involved in core cognitive processes such as learning and memory. While it is well established that astrocytes and neurons are connected to one another in feedback loops across many timescales and spatial scales, there is a gap in understanding the computational role of neuron-astrocyte interactions. To help bridge this gap, we draw on recent advances in AI and astrocyte imaging technology. In particular, we show that neuron-astrocyte networks can naturally perform the core computation of a Transformer, a particularly successful type of AI architecture. In doing so, we provide a concrete, normative, and experimentally testable account of neuron-astrocyte communication. Because Transformers are so successful across a wide variety of task domains, such as language, vision, and audition, our analysis may help explain the ubiquity, flexibility, and power of the brain's neuron-astrocyte networks.
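The "core computation of a Transformer" that the abstract refers to is scaled dot-product self-attention. The paper's contribution is a biological mapping of this computation onto neuron-astrocyte networks; the sketch below shows only the standard attention operation itself, with illustrative shapes and randomly initialized weights (all variable names are assumptions, not from the paper).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: the Transformer core
    that the paper argues neuron-astrocyte networks can perform."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # pairwise token similarities
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))           # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Each output row is a convex combination of the value vectors, weighted by the softmax of query-key similarities; it is this pairwise, normalized mixing that the paper maps onto astrocyte-mediated signaling.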


Subjects
Astrocytes; Neurons; Humans; Astrocytes/physiology; Neurons/physiology; Neuroglia/physiology; Brain
2.
Proc Natl Acad Sci U S A ; 116(16): 7723-7731, 2019 04 16.
Article in English | MEDLINE | ID: mdl-30926658

ABSTRACT

It is widely believed that end-to-end training with the backpropagation algorithm is essential for learning good feature detectors in the early layers of artificial neural networks, so that these detectors are useful for the task performed by the higher layers of that network. At the same time, the traditional form of backpropagation is biologically implausible. In the present paper we propose an unusual learning rule, which has a degree of biological plausibility and which is motivated by Hebb's idea that the change in synapse strength should be local, i.e., should depend only on the activities of the pre- and postsynaptic neurons. We design a learning algorithm that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way. These learned lower-layer feature detectors can then be used to train higher-layer weights in the usual supervised way, so that the performance of the full network is comparable to that of standard feedforward networks trained end-to-end with backpropagation on simple tasks.
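The flavor of such a rule can be sketched with a simplified competitive Hebbian update. This is not the paper's exact algorithm (which uses a softer form of competition and an anti-Hebbian term); here global inhibition is reduced to winner-take-all, and the update is local in the sense the abstract describes: it depends only on the input and the winning unit's own weights.

```python
import numpy as np

def local_unsupervised_step(W, x, lr=0.01):
    """One update of a simplified competitive Hebbian rule.
    Global inhibition is modeled as winner-take-all: only the most
    active hidden unit updates, pulling its weight vector toward
    the input (a local, Hebbian-style change)."""
    h = W @ x                   # hidden activations
    winner = np.argmax(h)       # global inhibition leaves one winner
    # local update: uses only the presynaptic input x and this unit's weights
    W[winner] += lr * (x - W[winner])
    return W

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 20))   # 10 hidden units, 20-dim inputs
for _ in range(100):
    x = rng.normal(size=20)
    W = local_unsupervised_step(W, x)
```

After unsupervised training, the rows of W act as learned feature detectors; a supervised readout layer can then be trained on top of the hidden activations, as the abstract describes.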

3.
Neural Comput ; 30(12): 3151-3167, 2018 12.
Article in English | MEDLINE | ID: mdl-30314425

ABSTRACT

Deep neural networks (DNNs) trained in a supervised way suffer from two known problems. First, the minima of the objective function used in learning correspond to data points (also known as rubbish examples or fooling images) that lack semantic similarity with the training data. Second, a clean input can be changed by a small, and often imperceptible to human vision, perturbation so that the resulting deformed input is misclassified by the network. These findings emphasize the differences between the ways DNNs and humans classify patterns and raise the question of how to design learning algorithms that mimic human perception more accurately than existing methods. Our article examines these questions within the framework of dense associative memory (DAM) models. These models are defined by an energy function with higher-order (higher than quadratic) interactions between the neurons. We show that in the limit when the power of the interaction vertex in the energy function is sufficiently large, these models have the following three properties. First, the minima of the objective function are free from rubbish images, so that each minimum is a semantically meaningful pattern. Second, artificial patterns poised precisely at the decision boundary look ambiguous to human subjects and share aspects of both classes that are separated by that decision boundary. Third, adversarial images constructed by models with a small power of the interaction vertex, which are equivalent to DNNs with rectified linear units, fail to transfer to and fool the models with higher-order interactions. This opens up the possibility of using higher-order models for detecting and stopping malicious adversarial attacks. The results we present suggest that DAMs with higher-order energy functions are more robust to adversarial and rubbish inputs than DNNs with rectified linear units.
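A minimal sketch of the dense associative memory idea, under assumed conventions: stored patterns are ±1 vectors, the energy uses a rectified-polynomial interaction F(x) = max(x, 0)^n, and retrieval greedily flips spins that lower the energy. The power n here plays the role of the "power of the interaction vertex" in the abstract; this is an illustration, not the paper's exact model or experiments.

```python
import numpy as np

def dam_energy(memories, state, n=4):
    """Energy of a dense associative memory with interaction power n.
    F(x) = max(x, 0)**n; larger n sharpens the minima around stored
    patterns, which the paper links to robustness against rubbish
    and adversarial inputs."""
    overlaps = memories @ state
    return -np.sum(np.maximum(overlaps, 0.0) ** n)

def dam_update(memories, state, n=4):
    # one asynchronous sweep: accept spin flips that lower the energy
    s = state.copy()
    for i in range(len(s)):
        flipped = s.copy()
        flipped[i] *= -1
        if dam_energy(memories, flipped, n) < dam_energy(memories, s, n):
            s = flipped
    return s

rng = np.random.default_rng(2)
memories = rng.choice([-1, 1], size=(5, 30))   # 5 stored +/-1 patterns
noisy = memories[0].copy()
noisy[:5] *= -1                                # corrupt 5 bits
recovered = dam_update(memories, noisy)
print((recovered == memories[0]).mean())       # fraction of bits restored
```

With n = 2 this reduces to the classical quadratic Hopfield energy; raising n deepens and separates the minima, which is the regime where the abstract's three properties are claimed to hold.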


Subjects
Brain/physiology; Neural Networks, Computer; Humans; Pattern Recognition, Visual/physiology
4.
Proc Natl Acad Sci U S A ; 111(10): 3683-8, 2014 Mar 11.
Article in English | MEDLINE | ID: mdl-24516161

ABSTRACT

Spatial patterns in the early fruit fly embryo emerge from a network of interactions among transcription factors, the gap genes, driven by maternal inputs. Such networks can exhibit many qualitatively different behaviors, separated by critical surfaces. At criticality, we should observe strong correlations in the fluctuations of different genes around their mean expression levels, a slowing of the dynamics along some but not all directions in the space of possible expression levels, correlations of expression fluctuations over long distances in the embryo, and departures from a Gaussian distribution of these fluctuations. Analysis of recent experiments on the gap gene network shows that all these signatures are observed, and that the different signatures are related in ways predicted by theory. Although there might be other explanations for these individual phenomena, the confluence of evidence suggests that this genetic network is tuned to criticality.
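Two of the statistical signatures the abstract lists, correlated fluctuations around mean expression and departures from Gaussianity, can be computed as follows. The data here is synthetic (a shared fluctuation mode driving two anti-correlated "genes"), standing in for the real gap-gene imaging measurements the paper analyzes; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in for expression fluctuations of two gap genes
# measured across many embryos at a fixed position.
shared = rng.normal(size=500)               # common fluctuation mode
g1 = shared + 0.3 * rng.normal(size=500)
g2 = -shared + 0.3 * rng.normal(size=500)   # anti-correlated partner

# Signature 1: strong correlation of fluctuations around the mean
r = np.corrcoef(g1 - g1.mean(), g2 - g2.mean())[0, 1]

# Signature 2: departure from Gaussianity, via excess kurtosis of
# the standardized fluctuations (zero for a Gaussian)
z = (g1 - g1.mean()) / g1.std()
excess_kurtosis = (z ** 4).mean() - 3.0
print(round(r, 2), round(excess_kurtosis, 2))
```

For this Gaussian toy data the correlation is strong while the excess kurtosis stays near zero; the paper's point is that the real gap-gene data shows both strong correlations and non-Gaussian fluctuations, together with critical slowing, in the combination predicted by criticality.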


Subjects
Biological Evolution; Drosophila/physiology; Gene Expression Regulation, Developmental/physiology; Gene Regulatory Networks/physiology; Models, Biological; Morphogenesis/physiology; Animals; DNA-Binding Proteins/genetics; DNA-Binding Proteins/metabolism; Drosophila Proteins/genetics; Drosophila Proteins/metabolism; Embryo, Nonmammalian/physiology; Kruppel-Like Transcription Factors/genetics; Kruppel-Like Transcription Factors/metabolism; Thermodynamics; Transcription Factors/genetics; Transcription Factors/metabolism
5.
Front Big Data ; 5: 1044709, 2022.
Article in English | MEDLINE | ID: mdl-36466714

ABSTRACT

The network embedding task is to represent a node in a network as a low-dimensional vector while incorporating the topological and structural information. Most existing approaches solve this problem by factorizing a proximity matrix, either directly or implicitly. In this work, we introduce a network embedding method from a new perspective, which leverages Modern Hopfield Networks (MHN) for associative learning. Our network learns associations between the content of each node and that node's neighbors. These associations serve as memories in the MHN. The recurrent dynamics of the network make it possible to recover the masked node, given that node's neighbors. Our proposed method is evaluated on different benchmark datasets for downstream tasks such as node classification, link prediction, and graph coarsening. The results show competitive performance compared to common matrix-factorization techniques and deep-learning-based methods.
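The "recurrent dynamics" used to recover a masked node can be sketched with the standard Modern Hopfield retrieval update (the softmax-based rule of Ramsauer et al.): a query is pulled toward the stored memory it most resembles. The node/neighbor framing in the comments is an assumption matching the abstract; the paper's full pipeline (how memories are built from node content and neighborhoods) is not reproduced here.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mhn_retrieve(memories, query, beta=8.0):
    """One step of Modern Hopfield retrieval: the query is mapped to
    a softmax-weighted combination of stored patterns, dominated by
    the memory it most resembles."""
    return memories.T @ softmax(beta * (memories @ query))

rng = np.random.default_rng(4)
# Each row stands in for one stored association (e.g. a node's context).
memories = rng.normal(size=(6, 16))
memories /= np.linalg.norm(memories, axis=1, keepdims=True)
masked = memories[2] + 0.2 * rng.normal(size=16)  # noisy/"masked" view
recovered = mhn_retrieve(memories, masked)
print(np.argmax(memories @ recovered))  # index of the retrieved pattern
```

The inverse temperature beta controls how sharply the update commits to a single memory; large beta makes retrieval nearly one-shot, which is what allows a masked node to be recovered from its neighbors' pattern.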
