Results 1 - 20 of 66
1.
PLoS Comput Biol ; 19(10): e1011509, 2023 10.
Article in English | MEDLINE | ID: mdl-37824442

ABSTRACT

A major goal of computational neuroscience is to build accurate models of the activity of neurons that can be used to interpret their function in circuits. Here, we explore using functional cell types to refine single-cell models by grouping them into functionally relevant classes. Formally, we define a hierarchical generative model for cell types, single-cell parameters, and neural responses, and then derive an expectation-maximization algorithm with variational inference that maximizes the likelihood of the neural recordings. We apply this "simultaneous" method to estimate cell types and fit single-cell models from simulated data, and find that it accurately recovers the ground truth parameters. We then apply our approach to in vitro neural recordings from neurons in mouse primary visual cortex, and find that it yields improved prediction of single-cell activity. We demonstrate that the discovered cell-type clusters are well separated and generalizable, and thus amenable to interpretation. We then compare discovered cluster memberships with locational, morphological, and transcriptomic data. Our findings reveal the potential to improve models of neural responses by explicitly allowing for shared functional properties across neurons.
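The hierarchical idea — cluster neurons into functional types while fitting per-cell parameters — can be illustrated with a toy expectation-maximization loop. This is a minimal sketch, not the authors' algorithm: it clusters simulated one-dimensional "single-cell parameters" into two types with a fixed-variance Gaussian mixture, whereas the paper fits full single-cell response models with variational inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-cell parameters drawn from two hypothetical functional cell types
params = np.concatenate([rng.normal(-2.0, 0.5, 100), rng.normal(2.0, 0.5, 100)])

def em_two_clusters(x, n_iter=50):
    """Toy EM for a 1-D two-component Gaussian mixture (equal weights, unit variance)."""
    mu = np.array([x.min(), x.max()])  # crude initialization at the data extremes
    for _ in range(n_iter):
        # E-step: responsibility of each cluster for each cell
        log_lik = -0.5 * (x[:, None] - mu[None, :]) ** 2
        resp = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: cluster means become responsibility-weighted averages
        mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    return mu

means = em_two_clusters(params)
print(np.sort(means))
```

With well-separated types, the recovered means land near the ground-truth cluster centers, mirroring the paper's simulated-data validation.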


Subjects
Algorithms; Neurons; Mice; Animals; Computer Simulation; Neurons/physiology; Probability; Models, Neurological; Action Potentials/physiology
2.
Proc Natl Acad Sci U S A ; 118(8)2021 02 23.
Article in English | MEDLINE | ID: mdl-33593894

ABSTRACT

Neural circuits are structured with layers of converging and diverging connectivity and selectivity-inducing nonlinearities at neurons and synapses. These components have the potential to hamper an accurate encoding of the circuit inputs. Past computational studies have optimized the nonlinearities of single neurons, or connection weights in networks, to maximize encoded information, but have not grappled with the simultaneous impact of convergent circuit structure and nonlinear response functions for efficient coding. Our approach is to compare model circuits with different combinations of convergence, divergence, and nonlinear neurons to discover how interactions between these components affect coding efficiency. We find that a convergent circuit with divergent parallel pathways can encode more information with nonlinear subunits than with linear subunits, despite the compressive loss induced by the convergence and the nonlinearities when considered separately.


Subjects
Models, Neurological; Nonlinear Dynamics; Retina/physiology; Synapses/physiology; Synaptic Transmission; Visual Pathways/physiology; Humans
3.
Proc Natl Acad Sci U S A ; 118(51)2021 12 21.
Article in English | MEDLINE | ID: mdl-34916291

ABSTRACT

Brains learn tasks via experience-driven differential adjustment of their myriad individual synaptic connections, but the mechanisms that target appropriate adjustment to particular connections remain deeply enigmatic. While Hebbian synaptic plasticity, synaptic eligibility traces, and top-down feedback signals surely contribute to solving this synaptic credit-assignment problem, alone, they appear to be insufficient. Inspired by new genetic perspectives on neuronal signaling architectures, here, we present a normative theory for synaptic learning, where we predict that neurons communicate their contribution to the learning outcome to nearby neurons via cell-type-specific local neuromodulation. Computational tests suggest that neuron-type diversity and neuron-type-specific local neuromodulation may be critical pieces of the biological credit-assignment puzzle. They also suggest algorithms for improved artificial neural network learning efficiency.


Subjects
Nerve Net/physiology; Neurons/physiology; Synapses/physiology; Computer Simulation; Learning/physiology; Ligands; Models, Neurological; Neural Networks, Computer; Neuronal Plasticity/genetics; Receptors, G-Protein-Coupled/genetics; Receptors, G-Protein-Coupled/metabolism; Spatio-Temporal Analysis; Synaptic Transmission
4.
Neural Comput ; 35(4): 555-592, 2023 03 18.
Article in English | MEDLINE | ID: mdl-36827598

ABSTRACT

Individual neurons in the brain have complex intrinsic dynamics that are highly diverse. We hypothesize that the complex dynamics produced by networks of complex and heterogeneous neurons may contribute to the brain's ability to process and respond to temporally complex data. To study the role of complex and heterogeneous neuronal dynamics in network computation, we develop a rate-based neuronal model, the generalized-leaky-integrate-and-fire-rate (GLIFR) model, which is a rate equivalent of the generalized-leaky-integrate-and-fire model. The GLIFR model has multiple dynamical mechanisms, which add to the complexity of its activity while maintaining differentiability. We focus on the role of after-spike currents, currents induced or modulated by neuronal spikes, in producing rich temporal dynamics. We use machine learning techniques to learn both synaptic weights and parameters underlying intrinsic dynamics to solve temporal tasks. The GLIFR model allows the use of standard gradient descent techniques rather than surrogate gradient descent, which has been used in spiking neural networks. After establishing the ability to optimize parameters using gradient descent in single neurons, we ask how networks of GLIFR neurons learn and perform on temporally challenging tasks, such as sequential MNIST. We find that these networks learn diverse parameters, which gives rise to diversity in neuronal dynamics, as demonstrated by clustering of neuronal parameters. GLIFR networks have mixed performance when compared to vanilla recurrent neural networks, with higher performance in pixel-by-pixel MNIST but lower in line-by-line MNIST. However, they appear to be more robust to random silencing. We find that the ability to learn heterogeneity and the presence of after-spike currents contribute to these gains in performance. 
Our work demonstrates both the computational robustness of neuronal complexity and diversity in networks and a feasible method of training such models using exact gradients.
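A rate neuron with an after-spike (adaptation) current can be sketched in a few lines. The parameter names and values below are illustrative, not the GLIFR paper's; the point is that a smooth rate nonlinearity keeps every quantity differentiable, so standard gradient descent applies.

```python
import numpy as np

def rate_neuron_step(v, a, inp, dt=1.0, tau_v=20.0, tau_a=100.0, k_a=0.5, w_a=-1.0):
    """One Euler step of a toy rate neuron with an adaptation ('after-spike') current.
    v: membrane-like variable, a: adaptation current, inp: input drive."""
    r = 1.0 / (1.0 + np.exp(-v))       # smooth (differentiable) rate nonlinearity
    dv = (-v + inp + w_a * a) / tau_v  # adaptation current feeds back negatively
    da = (-a + k_a * r) / tau_a        # adaptation builds up with the firing rate
    return v + dt * dv, a + dt * da, r

v, a = 0.0, 0.0
rates = []
for t in range(500):
    v, a, r = rate_neuron_step(v, a, inp=2.0)
    rates.append(r)
# With adaptation, the rate rises, peaks, then relaxes toward a lower steady state
```

The slow adaptation variable is what produces the rich temporal dynamics the abstract attributes to after-spike currents: the neuron's response to a constant input is transient rather than flat.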


Subjects
Time Perception; Action Potentials/physiology; Models, Neurological; Neurons/physiology; Neural Networks, Computer
5.
PLoS Comput Biol ; 18(9): e1010427, 2022 09.
Article in English | MEDLINE | ID: mdl-36067234

ABSTRACT

Convolutional neural networks trained on object recognition derive inspiration from the neural architecture of the visual system in mammals, and have been used as models of the feedforward computation performed in the primate ventral stream. In contrast to the deep hierarchical organization of primates, the visual system of the mouse has a shallower arrangement. Since mice and primates are both capable of visually guided behavior, this raises questions about the role of architecture in neural computation. In this work, we introduce a novel framework for building a biologically constrained convolutional neural network model of the mouse visual cortex. The architecture and structural parameters of the network are derived from experimental measurements, specifically the 100-micrometer resolution interareal connectome, the estimates of numbers of neurons in each area and cortical layer, and the statistics of connections between cortical layers. This network is constructed to support detailed task-optimized models of mouse visual cortex, with neural populations that can be compared to specific corresponding populations in the mouse brain. Using a well-studied image classification task as our working example, we demonstrate the computational capability of this mouse-sized network. Given its relatively small size, MouseNet achieves roughly two-thirds of VGG16's performance level on ImageNet. In combination with the large-scale Allen Brain Observatory Visual Coding dataset, we use representational similarity analysis to quantify the extent to which MouseNet recapitulates the neural representation in mouse visual cortex. Importantly, we provide evidence that optimizing for task performance does not improve similarity to the corresponding biological system beyond a certain point. We demonstrate that the distributions of some physiological quantities are closer to the observed distributions in the mouse brain after task training.
We encourage the use of the MouseNet architecture by making the code freely available.
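Representational similarity analysis of the kind used to compare MouseNet with neural data can be sketched with correlation-distance representational dissimilarity matrices (RDMs). The data shapes and dimensions below are synthetic placeholders, not the Allen Observatory pipeline.

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation between the
    population response vectors to each pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(resp_a, resp_b):
    """Similarity of two representations: correlation of the upper triangles
    of their RDMs (a common representational similarity analysis measure)."""
    a, b = rdm(resp_a), rdm(resp_b)
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

rng = np.random.default_rng(1)
stim = rng.normal(size=(8, 40))           # 8 stimuli x 40 "model" features
same = stim @ rng.normal(size=(40, 30))   # random linear readout: similar geometry
diff = rng.normal(size=(8, 30))           # unrelated "neural" responses
score_same = rsa_score(stim, same)
score_diff = rsa_score(stim, diff)
```

Because RSA compares stimulus-by-stimulus geometry rather than raw responses, it can relate populations of different sizes, which is what makes model-to-brain comparisons like this one possible.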


Subjects
Neural Networks, Computer; Visual Cortex; Animals; Mammals; Mice; Neurons/physiology; Visual Cortex/physiology; Visual Perception
6.
Neural Comput ; 34(3): 541-594, 2022 02 17.
Article in English | MEDLINE | ID: mdl-35016220

ABSTRACT

As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them. This is a particular case of flexible architecture that permits multiple related computations within a single circuit. Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of the static context and the moving context. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. This circuit eliminates noise in videos by using appropriate lateral connections for contextual spatiotemporal surround modulation, having superior denoising performance compared to circuits where only one context is learned. Our findings shed light on a minimally complex architecture that is capable of switching between two naturalistic contexts using few switching units.


Subjects
Visual Cortex; Animals; Brain; Learning; Neurons/physiology; Photic Stimulation; Visual Cortex/physiology; Visual Perception/physiology
7.
PLoS Comput Biol ; 15(7): e1006446, 2019 07.
Article in English | MEDLINE | ID: mdl-31299044

ABSTRACT

The dimensionality of a network's collective activity is of increasing interest in neuroscience. This is because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed low dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, the dimensionality is a better indicator than average correlations in determining how constrained neural activity is. Third, stimulus evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.
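Dimensionality as "the number of modes activity independently explores" is often operationalized as the participation ratio of the covariance eigenvalues. The sketch below uses that common definition, which may differ in detail from the paper's measure.

```python
import numpy as np

def participation_ratio(activity):
    """Dimensionality of activity as the participation ratio of covariance
    eigenvalues: (sum λ_i)^2 / sum λ_i^2. It approaches N for isotropic
    activity and 1 when a single mode dominates."""
    lam = np.linalg.eigvalsh(np.cov(activity))
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
independent = rng.normal(size=(50, 2000))                # 50 uncorrelated neurons
shared = rng.normal(size=(1, 2000)) + 0.1 * independent  # one dominant shared mode
```

Uncorrelated activity yields a participation ratio near the number of neurons, while a strong shared mode collapses it toward 1 — the low-dimensional regime the abstract links to compressed neural codes.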


Subjects
Action Potentials/physiology; Nerve Net/physiology; Humans; Models, Neurological
8.
PLoS Comput Biol ; 14(10): e1006490, 2018 10.
Article in English | MEDLINE | ID: mdl-30346943

ABSTRACT

A major obstacle to understanding neural coding and computation is the fact that experimental recordings typically sample only a small fraction of the neurons in a circuit. Measured neural properties are skewed by interactions between recorded neurons and the "hidden" portion of the network. To properly interpret neural data and determine how biological structure gives rise to neural circuit function, we thus need a better understanding of the relationships between measured effective neural properties and the true underlying physiological properties. Here, we focus on how the effective spatiotemporal dynamics of the synaptic interactions between neurons are reshaped by coupling to unobserved neurons. We find that the effective interactions from a pre-synaptic neuron r' to a post-synaptic neuron r can be decomposed into a sum of the true interaction from r' to r plus corrections from every directed path from r' to r through unobserved neurons. Importantly, the resulting formula reveals when the hidden units have-or do not have-major effects on reshaping the interactions among observed neurons. As a particular example of interest, we derive a formula for the impact of hidden units in random networks with "strong" coupling-connection weights that scale with 1/√N, where N is the network size, precisely the scaling observed in recent experiments. With this quantitative relationship between measured and true interactions, we can study how network properties shape effective interactions, which properties are relevant for neural computations, and how to manipulate effective interactions.
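In a linear caricature of this decomposition, the sum over all directed paths through hidden units has a closed form. The sketch below illustrates it; the sizes and the subcritical scaling factor are illustrative choices (the scaling keeps the hidden subnetwork's path sum convergent), not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_obs = 60, 10
# Weights scale like 1/sqrt(N), as in the abstract; the 0.8 keeps the hidden
# subnetwork subcritical so the geometric series over paths converges
W = rng.normal(scale=0.8 / np.sqrt(N), size=(N, N))

obs = np.arange(n_obs)        # "recorded" neurons
hid = np.arange(n_obs, N)     # unobserved neurons
Woo, Woh = W[np.ix_(obs, obs)], W[np.ix_(obs, hid)]
Who, Whh = W[np.ix_(hid, obs)], W[np.ix_(hid, hid)]

# Effective interactions among observed neurons: the true weights plus
# corrections from every directed path through hidden units, using
# (I - Whh)^{-1} = I + Whh + Whh @ Whh + ...
W_eff = Woo + Woh @ np.linalg.inv(np.eye(N - n_obs) - Whh) @ Who
```

The correction term is exactly the resummed series of length-1, length-2, ... hidden paths, which is the intuition behind the paper's path decomposition.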


Subjects
Models, Neurological; Models, Statistical; Neurons/physiology; Synapses/physiology; Computational Biology
9.
Neural Comput ; 30(5): 1209-1257, 2018 05.
Article in English | MEDLINE | ID: mdl-29566355

ABSTRACT

The primate visual system has an exquisite ability to discriminate partially occluded shapes. Recent electrophysiological recordings suggest that response dynamics in intermediate visual cortical area V4, shaped by feedback from prefrontal cortex (PFC), may play a key role. To probe the algorithms that may underlie these findings, we build and test a model of V4 and PFC interactions based on a hierarchical predictive coding framework. We propose that probabilistic inference occurs in two steps. Initially, V4 responses are driven solely by bottom-up sensory input and are thus strongly influenced by the level of occlusion. After a delay, V4 responses combine both feedforward input and feedback signals from the PFC; the latter reflect predictions made by PFC about the visual stimulus underlying V4 activity. We find that this model captures key features of V4 and PFC dynamics observed in experiments. Specifically, PFC responses are strongest for occluded stimuli and delayed responses in V4 are less sensitive to occlusion, supporting our hypothesis that the feedback signals from PFC underlie robust discrimination of occluded shapes. Thus, our study proposes that area V4 and PFC participate in hierarchical inference, with feedback signals encoding top-down predictions about occluded shapes.


Subjects
Discrimination Learning/physiology; Neurons/physiology; Nonlinear Dynamics; Visual Cortex/cytology; Visual Pathways/physiology; Visual Perception/physiology; Action Potentials/physiology; Algorithms; Animals; Macaca mulatta; Models, Neurological; Photic Stimulation; Prefrontal Cortex/cytology; Probability
10.
PLoS Comput Biol ; 13(6): e1005583, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28644840

ABSTRACT

Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks' spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities-including those of different cell types-combine with connectivity to shape population activity and function.


Subjects
Action Potentials/physiology; Connectome/methods; Models, Neurological; Nerve Net/cytology; Nerve Net/physiology; Nonlinear Dynamics; Animals; Computer Simulation; Humans; Models, Anatomic; Models, Statistical; Structure-Activity Relationship
11.
PLoS Comput Biol ; 13(4): e1005497, 2017 04.
Article in English | MEDLINE | ID: mdl-28419098

ABSTRACT

Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina's performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with "differential correlations", which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can-in some cases-optimize robustness against noise.


Subjects
Models, Neurological; Nerve Net/physiology; Sensory Receptor Cells/physiology; Computational Biology; Computer Simulation
12.
Entropy (Basel) ; 20(7)2018 Jun 23.
Article in English | MEDLINE | ID: mdl-33265579

ABSTRACT

Correlations in neural activity have been demonstrated to have profound consequences for sensory encoding. To understand how neural populations represent stimulus information, it is therefore necessary to model how pairwise and higher-order spiking correlations between neurons contribute to the collective structure of population-wide spiking patterns. Maximum entropy models are an increasingly popular method for capturing collective neural activity by including successively higher-order interaction terms. However, incorporating higher-order interactions in these models is difficult in practice due to two factors. First, the number of parameters exponentially increases as higher orders are added. Second, because triplet (and higher) spiking events occur infrequently, estimates of higher-order statistics may be contaminated by sampling noise. To address this, we extend previous work on the Reliable Interaction class of models to develop a normalized variant that adaptively identifies the specific pairwise and higher-order moments that can be estimated from a given dataset for a specified confidence level. The resulting "Reliable Moment" model is able to capture cortical-like distributions of population spiking patterns. Finally, we show that, compared with the Reliable Interaction model, the Reliable Moment model infers fewer strong spurious higher-order interactions and is better able to predict the frequencies of previously unobserved spiking patterns.

13.
J Neurophysiol ; 118(4): 2070-2088, 2017 10 01.
Article in English | MEDLINE | ID: mdl-28615332

ABSTRACT

Unraveling the interplay of excitation and inhibition within rhythm-generating networks remains a fundamental issue in neuroscience. We use a biophysical model to investigate the different roles of local and long-range inhibition in the respiratory network, a key component of which is the pre-Bötzinger complex inspiratory microcircuit. Increasing inhibition within the microcircuit results in a limited number of out-of-phase neurons before rhythmicity and synchrony degenerate. Thus unstructured local inhibition is destabilizing and cannot support the generation of more than one rhythm. A two-phase rhythm requires restructuring the network into two microcircuits coupled by long-range inhibition in the manner of a half-center. In this context, inhibition leads to greater stability of the two out-of-phase rhythms. We support our computational results with in vitro recordings from mouse pre-Bötzinger complex. Partial excitation block leads to increased rhythmic variability, but this recovers after blockade of inhibition. Our results support the idea that local inhibition in the pre-Bötzinger complex is present to allow for descending control of synchrony or robustness to adverse conditions like hypoxia. We conclude that the balance of inhibition and excitation determines the stability of rhythmogenesis, but with opposite roles within and between areas. These different inhibitory roles may apply to a variety of rhythmic behaviors that emerge in widespread pattern-generating circuits of the nervous system. NEW & NOTEWORTHY The roles of inhibition within the pre-Bötzinger complex (preBötC) are a matter of debate. Using a combination of modeling and experiment, we demonstrate that inhibition affects synchrony, period variability, and overall frequency of the preBötC and coupled rhythmogenic networks. This work expands our understanding of ubiquitous motor and cognitive oscillatory networks.


Subjects
Central Pattern Generators/physiology; Models, Neurological; Respiration; Respiratory Center/physiology; Animals; Mice; Neural Inhibition
14.
PLoS Comput Biol ; 12(12): e1005258, 2016 12.
Article in English | MEDLINE | ID: mdl-27973557

ABSTRACT

Highly connected recurrent neural networks often produce chaotic dynamics, meaning their precise activity is sensitive to small perturbations. What are the consequences of chaos for how such networks encode streams of temporal stimuli? On the one hand, chaos is a strong source of randomness, suggesting that small changes in stimuli will be obscured by intrinsically generated variability. On the other hand, recent work shows that the type of chaos that occurs in spiking networks can have a surprisingly low-dimensional structure, suggesting that there may be room for fine stimulus features to be precisely resolved. Here we show that strongly chaotic networks produce patterned spikes that reliably encode time-dependent stimuli: using a decoder sensitive to spike times on timescales of tens of milliseconds, one can easily distinguish responses to very similar inputs. Moreover, recurrence serves to distribute signals throughout chaotic networks so that small groups of cells can encode substantial information about signals arriving elsewhere. We conclude that the presence of strong chaos in recurrent networks need not exclude precise encoding of temporal stimuli via spike patterns.


Subjects
Action Potentials/physiology; Models, Neurological; Nonlinear Dynamics; Computational Biology; Neurons/physiology
15.
PLoS Comput Biol ; 12(10): e1005150, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27741248

ABSTRACT

Neural circuits reliably encode and transmit signals despite the presence of noise at multiple stages of processing. The efficient coding hypothesis, a guiding principle in computational neuroscience, suggests that a neuron or population of neurons allocates its limited range of responses as efficiently as possible to best encode inputs while mitigating the effects of noise. Previous work on this question relies on specific assumptions about where noise enters a circuit, limiting the generality of the resulting conclusions. Here we systematically investigate how noise introduced at different stages of neural processing impacts optimal coding strategies. Using simulations and a flexible analytical approach, we show how these strategies depend on the strength of each noise source, revealing under what conditions the different noise sources have competing or complementary effects. We draw two primary conclusions: (1) differences in encoding strategies between sensory systems-or even adaptational changes in encoding properties within a given system-may be produced by changes in the structure or location of neural noise, and (2) characterization of both circuit nonlinearities as well as noise are necessary to evaluate whether a circuit is performing efficiently.


Subjects
Information Storage and Retrieval/methods; Models, Neurological; Models, Statistical; Nerve Net/physiology; Sensory Receptor Cells/physiology; Synaptic Transmission/physiology; Animals; Computer Simulation; Humans; Signal-to-Noise Ratio
16.
J Neurosci ; 35(28): 10112-34, 2015 Jul 15.
Article in English | MEDLINE | ID: mdl-26180189

ABSTRACT

While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denève and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denève, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. SIGNIFICANCE STATEMENT: We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation.
We then show that our network reproduces a number of key features of cortical networks including irregular, Poisson-like spike times, and a tight balance between excitation and inhibition. These results significantly increase the biological plausibility of the spike-based approach to network computation, and uncover how several components of biological networks may work together to efficiently carry out computation.


Subjects
Action Potentials/physiology; Biophysical Phenomena/physiology; Models, Neurological; Nerve Net/physiology; Neural Networks, Computer; Neurons/physiology; Animals; Biophysics; Computer Simulation; Synapses/physiology
17.
PLoS Comput Biol ; 10(2): e1003469, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24586128

ABSTRACT

Over repeat presentations of the same stimulus, sensory neurons show variable responses. This "noise" is typically correlated between pairs of cells, and a question with rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem - neural tuning curves, etc. - held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) - if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all.
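The sign rule can be checked in the simplest setting: two neurons with identical tuning slopes (hence positive signal correlation) and unit-variance noise, scored by linear Fisher information. This is a textbook illustration of the rule, not the paper's general proof, which covers arbitrary tuning curves and several coding metrics.

```python
import numpy as np

def linear_fisher_info(f_prime, sigma):
    """Linear Fisher information: f'^T Σ^{-1} f'."""
    return f_prime @ np.linalg.solve(sigma, f_prime)

# Both neurons increase firing with the stimulus => positive signal correlation
f_prime = np.array([1.0, 1.0])

def cov(rho):
    """Noise covariance with unit variances and correlation rho."""
    return np.array([[1.0, rho], [rho, 1.0]])

# For this symmetric case the information is 2 / (1 + rho), so noise
# correlations of opposite sign to the signal correlation help:
info_opposite = linear_fisher_info(f_prime, cov(-0.5))  # 2 / 0.5 = 4
info_indep = linear_fisher_info(f_prime, cov(0.0))      # 2
info_same = linear_fisher_info(f_prime, cov(0.5))       # 2 / 1.5 ≈ 1.33
```

Opposite-sign noise correlations push the noise into directions the tuning derivative does not occupy, which is the geometric content of the sign rule.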


Subjects
Models, Neurological; Sensory Receptor Cells/physiology; Animals; Brain/physiology; Computational Biology; Linear Models; Mathematical Concepts; Nerve Net/physiology; Normal Distribution; Signal-to-Noise Ratio
18.
ArXiv ; 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-37873007

ABSTRACT

In theoretical neuroscience, recent work leverages deep learning tools to explore how some network attributes critically influence a network's learning dynamics. Notably, initial weight distributions with small (resp. large) variance may yield a rich (resp. lazy) regime, where significant (resp. minor) changes to network states and representation are observed over the course of learning. However, in biology, neural circuit connectivity could exhibit a low-rank structure and therefore differs markedly from the random initializations generally used for these studies. As such, here we investigate how the structure of the initial weights -- in particular their effective rank -- influences the network learning regime. Through both empirical and theoretical analyses, we discover that high-rank initializations typically yield smaller network changes indicative of lazier learning, a finding we also confirm with experimentally driven initial connectivity in recurrent neural networks. Conversely, low-rank initializations bias networks toward richer learning. Importantly, however, as an exception to this rule, we find lazier learning can still occur with a low-rank initialization that aligns with task and data statistics. Our research highlights the pivotal role of initial weight structures in shaping learning regimes, with implications for metabolic costs of plasticity and risks of catastrophic forgetting.
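Effective rank can be measured in several ways; the sketch below uses the participation ratio of squared singular values, a common proxy that may differ from the paper's exact definition, and contrasts a generic random initialization with a rank-2 one.

```python
import numpy as np

def effective_rank(W):
    """Effective rank via the participation ratio of squared singular values:
    (sum s_i^2)^2 / sum s_i^4. One common proxy among several in the literature."""
    s2 = np.linalg.svd(W, compute_uv=False) ** 2
    return s2.sum() ** 2 / (s2 ** 2).sum()

rng = np.random.default_rng(0)
n = 100
full_rank = rng.normal(size=(n, n))                           # generic random init
low_rank = rng.normal(size=(n, 2)) @ rng.normal(size=(2, n))  # rank-2 init
```

A dense Gaussian matrix has effective rank on the order of n, while the rank-2 initialization is capped at 2 regardless of n — the low-rank structure the abstract associates with richer learning.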

19.
bioRxiv ; 2024 May 15.
Article in English | MEDLINE | ID: mdl-38798582

ABSTRACT

Recurrent neural networks exhibit chaotic dynamics when the variance in their connection strengths exceeds a critical value. Recent work indicates connection variance also modulates learning strategies; networks learn "rich" representations when initialized with low coupling and "lazier" solutions with larger variance. Using Watts-Strogatz networks of varying sparsity, structure, and hidden weight variance, we find that the critical coupling strength dividing chaotic from ordered dynamics also differentiates rich and lazy learning strategies. Training moves both stable and chaotic networks closer to the edge of chaos, with networks learning richer representations before the transition to chaos. In contrast, biologically realistic connectivity structures foster stability over a wide range of variances. The transition to chaos is also reflected in a measure that clinically discriminates levels of consciousness, the perturbational complexity index (PCIst). Networks with high values of PCIst exhibit stable dynamics and rich learning, suggesting a consciousness prior may promote rich learning. The results suggest a clear relationship between critical dynamics, learning regimes, and complexity-based measures of consciousness.
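In the classic dense-Gaussian setting, the critical coupling dividing ordered from chaotic rate dynamics is where the spectral radius of the gain-scaled weight matrix crosses 1. The sketch below checks that criterion for a dense random network; the paper's Watts-Strogatz graphs modify where exactly the transition sits.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

def spectral_radius(g):
    """Largest |eigenvalue| of a random coupling matrix with gain g. For
    i.i.d. N(0, g^2/N) entries this concentrates at g, so g = 1 marks the
    classic edge-of-chaos transition in random rate networks."""
    W = rng.normal(scale=g / np.sqrt(N), size=(N, N))
    return np.abs(np.linalg.eigvals(W)).max()

ordered = spectral_radius(0.5)   # below critical coupling: stable dynamics
chaotic = spectral_radius(1.5)   # above critical coupling: chaotic dynamics
```

The same radius-versus-1 comparison is what "connection variance exceeds a critical value" amounts to for dense networks, since the radius scales directly with the weight standard deviation.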

20.
J Neurophysiol ; 109(10): 2542-59, 2013 May.
Article in English | MEDLINE | ID: mdl-23446688

ABSTRACT

A key step in many perceptual decision tasks is the integration of sensory inputs over time, but fundamental questions remain about how this is accomplished in neural circuits. One possibility is to balance decay modes of membranes and synapses with recurrent excitation. To allow integration over long timescales, however, this balance must be exceedingly precise. The need for fine tuning can be overcome via a "robust integrator" mechanism in which momentary inputs must be above a preset limit to be registered by the circuit. The degree of this limiting embodies a tradeoff between sensitivity to the input stream and robustness against parameter mistuning. Here, we analyze the consequences of this tradeoff for decision-making performance. For concreteness, we focus on the well-studied random dot motion discrimination task and constrain stimulus parameters by experimental data. We show that mistuning feedback in an integrator circuit decreases decision performance but that the robust integrator mechanism can limit this loss. Intriguingly, even for perfectly tuned circuits with no immediate need for a robustness mechanism, including one often does not impose a substantial penalty for decision-making performance. The implication is that robust integrators may be well suited to subserve the basic function of evidence integration in many cognitive tasks. We develop these ideas using simulations of coupled neural units and the mathematics of sequential analysis.
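The robust-integrator mechanism — registering only momentary inputs above a preset limit — can be sketched next to a perfect integrator. The drift, noise level, and limit below are illustrative, not the experimentally constrained values used in the paper.

```python
import numpy as np

def integrate(evidence, floor=0.0):
    """Accumulate a stream of momentary evidence. A 'robust integrator'
    registers only samples whose magnitude exceeds a preset limit (floor);
    floor=0 recovers the perfect integrator."""
    kept = np.where(np.abs(evidence) > floor, evidence, 0.0)
    return np.cumsum(kept)

rng = np.random.default_rng(0)
# Random-dot-motion-style evidence: weak positive drift in strong noise
evidence = 0.2 + rng.normal(scale=1.0, size=2000)

perfect = integrate(evidence)            # fine-tuned integrator
robust = integrate(evidence, floor=0.5)  # trades input sensitivity for robustness
```

Both accumulators track the positive drift, but the robust one discards sub-threshold samples — the sensitivity cost that the abstract weighs against robustness to parameter mistuning.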


Subjects
Decision Making; Models, Neurological; Nerve Net/physiology; Discrimination, Psychological; Feedback, Physiological; Humans; Nerve Net/cytology; Sensory Receptor Cells/physiology; Synapses/physiology