Results 1 - 11 of 11
1.
Curr Opin Neurobiol ; 83: 102780, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37757585

ABSTRACT

Neural circuits, both in the brain and in "artificial" neural network models, learn to solve a remarkable variety of tasks, and there is a great current opportunity to use neural networks as models for brain function. Key to this endeavor is the ability to characterize the representations formed by both artificial and biological brains. Here, we investigate this potential through the lens of recent theory that characterizes neural networks as "lazy" or "rich" depending on the approach they use to solve tasks: lazy networks solve tasks by making small changes in connectivity, while rich networks solve tasks by significantly modifying weights throughout the network (including "hidden layers"). We further elucidate rich networks through the lens of compression and "neural collapse", ideas that have recently been of significant interest to neuroscience and machine learning. We then show how these ideas apply to a domain of increasing importance to both fields: extracting latent structures through self-supervised learning.
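
One standard way to make the lazy/rich distinction precise (a formalization common in this literature, not spelled out in the abstract itself): in the lazy regime, the trained network stays close to its first-order Taylor expansion around the initial parameters,

```latex
f(x;\theta) \;\approx\; f(x;\theta_0) \;+\; \nabla_\theta f(x;\theta_0)^{\top}\,(\theta - \theta_0),
```

so learning amounts to fitting a linear readout of fixed features, whereas rich learning moves the parameters far enough from \theta_0 that the hidden-layer representations themselves reorganize.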


Subjects
Neural Networks, Computer; Neurosciences; Brain; Machine Learning
2.
Patterns (N Y) ; 3(8): 100555, 2022 Aug 12.
Article in English | MEDLINE | ID: mdl-36033586

ABSTRACT

A fundamental problem in science is uncovering the effective number of degrees of freedom in a complex system: its dimensionality. A system's dimensionality depends on its spatiotemporal scale. Here, we introduce a scale-dependent generalization of a classic enumeration of latent variables, the participation ratio. We demonstrate how the scale-dependent participation ratio identifies the appropriate dimension at local, intermediate, and global scales in several systems, such as the Lorenz attractor, hidden Markov models, and switching linear dynamical systems. We show analytically how, at different limiting scales, the scale-dependent participation ratio relates to well-established measures of dimensionality. Applied to neural population recordings across multiple brain areas and brain states, the measure reveals fundamental trends in the dimensionality of neural activity, for example in behaviorally engaged versus spontaneous states. Our method unifies widely used measures of dimensionality and applies broadly to multivariate data across several fields of science.
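
The participation ratio itself has a standard closed form, (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues). The sketch below computes it in Python and adds one plausible local-scale variant in the spirit of the abstract; the neighborhood construction is an illustrative assumption, and the paper's exact estimator may differ.

```python
import numpy as np

def participation_ratio(X):
    """PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues).

    X: array of shape (n_samples, n_features).
    """
    evals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return evals.sum() ** 2 / (evals ** 2).sum()

def local_participation_ratio(X, center, radius):
    """PR restricted to samples within `radius` of `center` -- one plausible
    scale-dependent construction; the paper's estimator may differ in detail."""
    mask = np.linalg.norm(X - center, axis=1) < radius
    return participation_ratio(X[mask])

# Example: points on a noisy circle look 2D globally but ~1D at small scales.
t = np.random.uniform(0, 2 * np.pi, 5000)
X = np.c_[np.cos(t), np.sin(t)] + 0.01 * np.random.randn(5000, 2)
print(participation_ratio(X))                   # close to 2
print(local_participation_ratio(X, X[0], 0.2))  # close to 1
```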

3.
Neural Comput ; 34(3): 541-594, 2022 02 17.
Article in English | MEDLINE | ID: mdl-35016220

ABSTRACT

As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them. This is a particular case of a flexible architecture that permits multiple related computations within a single circuit. Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of static and moving contexts. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. The circuit removes noise from videos by using appropriate lateral connections for contextual spatiotemporal surround modulation, and it achieves superior denoising performance compared with circuits in which only one context is learned. Our findings shed light on a minimally complex architecture that can switch between two naturalistic contexts using few switching units.
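
A minimal caricature of the disinhibitory motif described here (weights and drives below are illustrative choices, not values from the paper): VIP inhibits SST, and SST inhibits the excitatory population, so top-down drive to VIP releases the E cells from suppression.

```python
import numpy as np

def step(r, vip_drive, ff_drive=1.0, dt=0.1):
    """One Euler step of a 3-population rate model (E, SST, VIP).

    Weights are illustrative, not fitted values from the paper.
    """
    rE, rS, rV = r
    drE = -rE + max(0.0, ff_drive - 1.5 * rS)  # E: feedforward input minus SST inhibition
    drS = -rS + max(0.0, 0.8 - 2.0 * rV)       # SST: tonically active, suppressed by VIP
    drV = -rV + max(0.0, vip_drive)            # VIP: set by the context ("switch") signal
    return r + dt * np.array([drE, drS, drV])

r = np.zeros(3)
for t in range(600):
    r = step(r, vip_drive=1.0 if t >= 300 else 0.0)  # flip the context switch halfway
    if t % 100 == 0:
        print(f"t={t:3d}  E={r[0]:.2f}  SST={r[1]:.2f}  VIP={r[2]:.2f}")
```

With the switch off, SST suppresses E; once VIP is driven, SST shuts down and E responds to the feedforward input, which is the essence of the switching mechanism the abstract describes.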


Subjects
Visual Cortex; Animals; Brain; Learning; Neurons/physiology; Photic Stimulation; Visual Cortex/physiology; Visual Perception/physiology
4.
Neuron ; 110(1): 139-153.e9, 2022 01 05.
Article in English | MEDLINE | ID: mdl-34717794

ABSTRACT

The timing of self-initiated actions shows large variability even when they are executed in stable, well-learned sequences. Could this mix of reliability and stochasticity arise within the same neural circuit? We trained rats to perform a stereotyped sequence of self-initiated actions and recorded neural ensemble activity in secondary motor cortex (M2), which is known to reflect trial-by-trial action-timing fluctuations. Using hidden Markov models, we established a dictionary between activity patterns and actions. We then showed that metastable attractors, representing activity patterns with a reliable sequential structure and large transition timing variability, could be produced by reciprocally coupling a high-dimensional recurrent network and a low-dimensional feedforward one. Transitions between attractors relied on correlated variability in this mesoscale feedback loop, predicting a specific structure of low-dimensional correlations that we verified empirically in M2 recordings. Our results suggest a novel mesoscale network motif, based on correlated variability, that supports naturalistic animal behavior.
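
The pattern-to-action "dictionary" step can be sketched with an off-the-shelf HMM. The snippet below uses hmmlearn with Gaussian emissions on random placeholder data; the paper's emission model and preprocessing may differ.

```python
import numpy as np
from hmmlearn import hmm

# Placeholder for binned M2 ensemble activity: (n_timebins, n_neurons).
rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(1000, 30)).astype(float)

# Fit an HMM; each hidden state is a recurring population activity pattern.
model = hmm.GaussianHMM(n_components=8, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(X)
states = model.predict(X)  # state sequence: the basis for a pattern-action dictionary

# Dwell times in each state capture the (variable) transition timing.
dwell = np.diff(np.flatnonzero(np.r_[True, np.diff(states) != 0, True]))
print(states[:20], dwell[:10])
```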


Subjects
Motor Cortex; Animals; Behavior, Animal; Rats; Reproducibility of Results
5.
Nat Comput Sci ; 2(8): 475-476, 2022 Aug.
Article in English | MEDLINE | ID: mdl-38177793
6.
Neural Netw ; 141: 330-343, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33957382

ABSTRACT

Advances in electron microscopy and data processing techniques are leading to increasingly large and complete microscale connectomes. At the same time, advances in artificial neural networks have produced model systems that perform comparably rich computations with perfectly specified connectivity. This raises an exciting scientific opportunity for the study of both biological and artificial neural networks: to infer the underlying circuit function from the structure of its connectivity. A potential roadblock, however, is that, even with well-constrained neural dynamics, there are in principle many different connectomes that could support a given computation. Here, we define a tractable setting in which the problem of inferring circuit function from circuit connectivity can be analyzed in detail: the function of input compression and reconstruction, in an autoencoder network with a single hidden layer. In this setting there is, in general, substantial ambiguity in the weights that can produce the same circuit function, because largely arbitrary changes to input weights can be undone by applying the inverse modifications to the output weights. However, we use mathematical arguments and simulations to show that adding simple, biologically motivated regularization of connectivity resolves this ambiguity in an interesting way: weights are constrained such that the latent variable structure underlying the inputs can be extracted from the weights by using nonlinear dimensionality reduction methods.
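
The ambiguity is easy to exhibit in the linear case (a simplification; with nonlinear hidden units the symmetry group is smaller, but an analogous ambiguity remains): mixing the hidden layer by any invertible matrix and undoing it at the output leaves the circuit's input-output function unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 10, 4
W_in = rng.normal(size=(n_hid, n_in))   # encoder weights
W_out = rng.normal(size=(n_in, n_hid))  # decoder weights
A = rng.normal(size=(n_hid, n_hid))     # arbitrary invertible hidden-layer mixing

x = rng.normal(size=n_in)
y_original = W_out @ (W_in @ x)
y_remixed = (W_out @ np.linalg.inv(A)) @ ((A @ W_in) @ x)

# Same input-output function, two different "connectomes".
assert np.allclose(y_original, y_remixed)
```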


Subjects
Connectome; Neural Networks, Computer
7.
Nat Commun ; 12(1): 1417, 2021 03 03.
Article in English | MEDLINE | ID: mdl-33658520

ABSTRACT

Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task's low-dimensional latent structure in the network activity, i.e., in the learned neural representations. Here, we investigate the hypothesis that a means of generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization, is through learning to predict observations about the world. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations, we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and we provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data.
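
"Linear decodability of latent variables" can be quantified with cross-validated ridge regression from network states to latents; a sketch under the assumption that both are available as arrays (the names, shapes, and placeholder data below are hypothetical):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def linear_decodability(H, Z):
    """Mean cross-validated R^2 of ridge regression from activity to latents.

    H: network-state array, shape (n_timesteps, n_units)     -- hypothetical
    Z: latent-variable array, shape (n_timesteps, n_latents) -- hypothetical
    """
    return np.mean([cross_val_score(Ridge(alpha=1.0), H, Z[:, j], cv=5).mean()
                    for j in range(Z.shape[1])])

# Random placeholders (in practice H would come from the trained RNN):
H = np.random.randn(500, 64)
Z = H @ np.random.randn(64, 3) + 0.1 * np.random.randn(500, 3)
print(linear_decodability(H, Z))  # near 1.0 when latents are linear in activity
```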

8.
Sci Rep ; 9(1): 10448, 2019 07 18.
Article in English | MEDLINE | ID: mdl-31320693

ABSTRACT

Structured information is easier to remember and recall than unstructured information. In real life, information exhibits multi-level hierarchical organization, such as clauses, sentences, episodes, and narratives in language. Here we show that multi-level grouping emerges even when participants perform memory recall experiments with random sets of words. To quantitatively probe the brain mechanisms involved in memory structuring, we consider an experimental protocol in which participants perform 'final free recall' (FFR) of several random lists of words, each of which was first presented and recalled individually. We observe a hierarchy of grouping organizations in FFR; most notably, many participants sequentially recalled relatively long chunks of words from each list before recalling words from another list. Moreover, participants who exhibited the strongest organization during FFR achieved the highest levels of performance. Based on these results, we develop a hierarchical model of memory recall that is broadly compatible with our findings. Our study shows how highly controlled memory experiments with random and meaningless material, combined with simple models, can be used to quantitatively probe the way meaningful information can be efficiently organized and processed in the brain.


Subjects
Generalization, Psychological/physiology; Memory, Short-Term/physiology; Mental Recall/physiology; Models, Theoretical; Adolescent; Adult; Humans; Language; Young Adult
9.
PLoS Comput Biol ; 15(7): e1006446, 2019 07.
Article in English | MEDLINE | ID: mdl-31299044

ABSTRACT

The dimensionality of a network's collective activity is of increasing interest in neuroscience. This is because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed, low-dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, dimensionality is a better indicator than average correlations of how constrained neural activity is. Third, stimulus-evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.
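
A toy version of the first finding, using a linear rate network as a stand-in for the paper's spiking model (all parameters are illustrative): driving the same noise through connectivity with a few strongly amplified modes collapses the participation-ratio dimensionality well below the network size.

```python
import numpy as np

def participation_ratio(X):
    ev = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return ev.sum() ** 2 / (ev ** 2).sum()

def simulate(W, T=5000, seed=1):
    rng = np.random.default_rng(seed)
    x, traj = np.zeros(W.shape[0]), []
    for _ in range(T):
        x = W @ x + rng.normal(size=W.shape[0])  # linear rate dynamics + noise
        traj.append(x.copy())
    return np.asarray(traj)

N = 100
U, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(N, 5)))  # 5 amplified modes
print("no recurrence:", participation_ratio(simulate(np.zeros((N, N)))))  # ~N
print("low-rank W   :", participation_ratio(simulate(0.95 * U @ U.T)))    # much lower
```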


Subjects
Action Potentials/physiology; Nerve Net/physiology; Humans; Models, Neurological
10.
Neural Comput ; 29(10): 2684-2711, 2017 10.
Article in English | MEDLINE | ID: mdl-28777725

ABSTRACT

Human memory can retrieve memories similar to the one just retrieved. This associative ability is at the core of our everyday processing of information. Current models of memory have not pinpointed the mechanism that the brain could use to actively exploit similarities between memories. The prevailing idea is that inducing transitions in attractor neural networks requires extinguishing the current memory. We introduce a novel mechanism for inducing transitions between memories in which similarities between memories are actively exploited by the neural dynamics to retrieve a new memory. Populations of neurons that are selective for multiple memories play a crucial role in this mechanism by becoming attractors in their own right. The mechanism is based on the ability of the neural network to control the excitation-inhibition balance.


Subjects
Neural Networks, Computer; Association; Brain/physiology; Humans; Memory/physiology; Models, Neurological; Neurons/physiology
11.
Front Comput Neurosci ; 9: 149, 2015.
Article in English | MEDLINE | ID: mdl-26732491

ABSTRACT

Human memory can store large amounts of information. Nevertheless, recall is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model of memory retrieval based on a Hopfield neural network in which transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding to (1) single memory representations and (2) intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predict the distribution of time intervals required to recall new memory items observed in experiments. The model shows that items with a larger number of neurons in their representation are statistically easier to recall, and it reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval that is broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013).
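
A heavily simplified caricature of the proposed dynamics (all parameters below are illustrative, not taken from the paper): a sparse Hopfield network whose global inhibition oscillates, so that when inhibition peaks the current attractor destabilizes, and noise plus overlaps between memory representations can carry the state toward a similar memory.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, f = 500, 8, 0.1                          # neurons, memories, sparseness
xi = (rng.random((P, N)) < f).astype(float)    # sparse binary memory patterns
J = (xi - f).T @ (xi - f) / (N * f * (1 - f))  # Hebbian couplings
np.fill_diagonal(J, 0.0)

s = xi[0].copy()                               # cue the network with memory 0
for t in range(2000):
    theta = 0.5 + 0.4 * np.sin(2 * np.pi * t / 400)  # oscillating global inhibition
    s = (J @ s - theta + 0.1 * rng.normal(size=N) > 0).astype(float)
    if t % 200 == 0:
        m = xi @ s / (N * f)                   # overlap with each stored memory
        print(f"t={t:4d}  best-matching memory {m.argmax()}  overlap {m.max():.2f}")
```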
