Results 1 - 20 of 61
1.
Dan Med J ; 71(6), 2024 May 27.
Article in English | MEDLINE | ID: mdl-38847414

ABSTRACT

This essay is dedicated to the memory of my father, David Sompolinsky. As a student of veterinary medicine in Copenhagen, with the support of his professors and the Danish Resistance, David organised the rescue of 700 Danish Jews in October 1943, helping them escape Nazi persecution and find safety in Sweden.


Subjects
National Socialism; Humans; History, 20th Century; Denmark; Animals; National Socialism/history; Jews/history
2.
Proc Natl Acad Sci U S A ; 121(27): e2311805121, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38913896

ABSTRACT

Humans and animals excel at generalizing from limited data, a capability yet to be fully replicated in artificial intelligence. This perspective investigates generalization in biological and artificial deep neural networks (DNNs), in both in-distribution and out-of-distribution contexts. We introduce two hypotheses. First, the geometric properties of the neural manifolds associated with discrete cognitive entities, such as objects, words, and concepts, are powerful order parameters: they link the neural substrate to generalization capabilities and provide a unified methodology that bridges gaps between neuroscience, machine learning, and cognitive science. We review recent progress in studying the geometry of neural manifolds, particularly in visual object recognition, and discuss theories connecting manifold dimension and radius to generalization capacity. Second, we suggest that the theory of learning in wide DNNs, especially in the thermodynamic limit, provides mechanistic insights into the learning processes that generate the desired neural representational geometries and generalization. This includes the role of weight norm regularization, network architecture, and hyperparameters. We explore recent advances in this theory and its ongoing challenges. We also discuss the dynamics of learning and its relevance to the issue of representational drift in the brain.


Subjects
Brain; Neural Networks, Computer; Brain/physiology; Humans; Animals; Artificial Intelligence; Models, Neurological; Generalization, Psychological/physiology; Cognition/physiology
3.
iScience ; 26(4): 106373, 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37009217

ABSTRACT

Some decisions make a difference, but most are arbitrary and inconsequential, like which of several identical new pairs of socks to wear. Healthy people make such decisions swiftly, even with no rational reason to rely on, and arbitrary decisions have in fact been suggested as a demonstration of "free will". However, several clinical populations, and some healthy individuals, have significant difficulty making such arbitrary decisions. Here, we investigate the mechanisms involved in arbitrary picking decisions. We show that these decisions, arguably based on a whim, are subject to control mechanisms similar to those of reasoned decisions. Specifically, an error-related negativity (ERN) brain response is elicited in the EEG following a change of intention, without any external definition of error, and motor activity in the non-responding hand resembles that of actual errors, both in its EMG temporal dynamics and in its lateralized readiness potential (LRP) pattern. This provides new directions for understanding decision-making and its deficits.

4.
Neural Comput ; 35(2): 105-155, 2023 01 20.
Article in English | MEDLINE | ID: mdl-36543330

ABSTRACT

The binding operation is fundamental to many cognitive processes, such as cognitive map formation, relational reasoning, and language comprehension. In these processes, two different modalities, such as locations and objects, events and their contextual cues, or words and their roles, need to be bound together, but little is known about the underlying neural mechanisms. Previous work introduced a binding model based on quadratic functions of the bound pairs, followed by vector summation of multiple pairs. Based on this framework, we address the following questions: which classes of quadratic matrices are optimal for decoding relational structures, and what is the resultant accuracy? We introduce a new class of binding matrices based on a matrix representation of octonion algebra, an eight-dimensional extension of the complex numbers. We show that these matrices enable more accurate unbinding than previously known methods when a small number of pairs are present. Moreover, numerical optimization of a binding operator converges to this octonion binding. However, we also show that when there is a large number of bound pairs, a random quadratic binding performs as well as the octonion and previously proposed binding methods. This study thus provides new insight into potential neural mechanisms of binding operations in the brain.


Subjects
Brain; Language
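A minimal sketch of the bind-and-sum framework, assuming random orthogonal binding matrices as a stand-in for the paper's octonion-based construction; sizes and names are illustrative.

```python
# Quadratic binding of role/filler pairs, superposed into one memory vector,
# with unbinding by the (orthogonal) inverse of a role matrix.
import numpy as np

rng = np.random.default_rng(1)
d, n_pairs = 256, 4

def random_orthogonal(d):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

roles = [random_orthogonal(d) for _ in range(n_pairs)]        # binding matrices
fillers = [v / np.linalg.norm(v) for v in rng.normal(size=(n_pairs, d))]

memory = sum(R @ f for R, f in zip(roles, fillers))           # vector summation

# Unbind role 0; crosstalk from the other pairs is small when n_pairs << d.
estimate = roles[0].T @ memory
estimate /= np.linalg.norm(estimate)
print("overlap with each filler:",
      np.round([estimate @ f for f in fillers], 2))
```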
5.
Sci Rep ; 12(1): 21808, 2022 12 17.
Article in English | MEDLINE | ID: mdl-36528630

ABSTRACT

A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures, as well as their individual building blocks (e.g., events and attributes), can subsequently be retrieved from partial cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.


Subjects
Artificial Intelligence; Neural Networks, Computer; Memory/physiology; Mental Recall; Neurons/physiology
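A toy sketch of the pipeline, assuming elementwise-product binding (one common vector-symbolic choice) and standard Hopfield storage; the paper's exact scheme and plasticity rule may differ.

```python
# Bind event/attribute pairs, superpose and binarize each structure, store the
# patterns Hebbianly in a recurrent net, then retrieve from a partial cue.
import numpy as np

rng = np.random.default_rng(2)
N, n_structs, n_pairs = 1000, 3, 5

def rand_sign(*shape):
    return rng.choice([-1, 1], size=shape)

events = rand_sign(n_pairs, N)                    # shared event vectors
attrs = rand_sign(n_structs, n_pairs, N)          # per-structure attributes
patterns = np.sign((events[None] * attrs).sum(axis=1))  # bound + binarized

W = patterns.T @ patterns / N                     # Hebbian storage
np.fill_diagonal(W, 0.0)

state = patterns[0].copy()                        # partial cue: half randomized
state[: N // 2] = rand_sign(N // 2)
for _ in range(10):
    state = np.sign(W @ state + 1e-9)

print("overlap with stored structure:", float(state @ patterns[0]) / N)
```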
6.
Proc Natl Acad Sci U S A ; 119(43): e2200800119, 2022 10 25.
Article in English | MEDLINE | ID: mdl-36251997

ABSTRACT

Understanding the neural basis of the remarkable human cognitive capacity to learn novel concepts from just one or a few sensory experiences constitutes a fundamental problem. We propose a simple, biologically plausible, mathematically tractable, and computationally powerful neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. We further posit that a single plastic downstream readout neuron learns to discriminate new concepts based on few examples using a simple plasticity rule. We demonstrate the computational power of our proposal by showing that it can achieve high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations and can even learn novel visual concepts specified only through linguistic descriptors. Moreover, we develop a mathematical theory of few-shot learning that links neurophysiology to predictions about behavioral outcomes by delineating several fundamental and measurable geometric properties of neural representations that can accurately predict the few-shot learning performance of naturalistic concepts across all our numerical simulations. This theory reveals, for instance, that high-dimensional manifolds enhance the ability to learn new concepts from few examples. Intriguingly, we observe striking mismatches between the geometry of manifolds in the primate visual pathway and in trained DNNs. We discuss testable predictions of our theory for psychophysics and neurophysiological experiments.


Subjects
Concept Formation; Neural Networks, Computer; Animals; Humans; Learning/physiology; Macaca; Plastics; Primates; Visual Pathways/physiology
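A sketch of the proposed readout in its simplest, prototype-like form, on synthetic concept manifolds; dimensions, radii, and the exact plasticity rule here are illustrative assumptions, not the paper's code.

```python
# Few-shot learning: average k examples per concept into prototypes and let a
# single readout neuron classify along the prototype difference vector.
import numpy as np

rng = np.random.default_rng(3)
N, k_shot, n_test, radius = 500, 5, 200, 2.0

def sample_concept(center, n):
    return center + radius * rng.normal(size=(n, N)) / np.sqrt(N)

c_a = rng.normal(size=N) / np.sqrt(N)             # concept centroids
c_b = rng.normal(size=N) / np.sqrt(N)

proto_a = sample_concept(c_a, k_shot).mean(axis=0)
proto_b = sample_concept(c_b, k_shot).mean(axis=0)
w = proto_a - proto_b                             # readout weights
b = w @ (proto_a + proto_b) / 2                   # decision threshold

acc_a = (sample_concept(c_a, n_test) @ w - b > 0).mean()
acc_b = (sample_concept(c_b, n_test) @ w - b < 0).mean()
print(f"{k_shot}-shot accuracy: {(acc_a + acc_b) / 2:.2f}")
```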
7.
Phys Rev E ; 106(2-1): 024126, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36109959

ABSTRACT

A neural population responding to multiple appearances of a single object defines a manifold in the neural response space. The ability to classify such manifolds is of interest, as object recognition and other computational tasks require a response that is insensitive to variability within a manifold. Linear classification of object manifolds was previously studied for max-margin classifiers. Soft-margin classifiers are a larger class of algorithms; they provide an additional regularization parameter, used in applications to optimize performance outside the training set by trading off training errors against more robust classifiers. Here we develop a mean-field theory describing the behavior of soft-margin classifiers applied to object manifolds. Analyzing manifolds of increasing complexity, from points through spheres to general manifolds, the mean-field theory describes the expected value of the linear classifier's norm, as well as the distribution of fields and slack variables. By analyzing the robustness of the learned classification to noise, we can predict the probability of classification errors and their dependence on regularization, demonstrating a finite optimal choice of the regularization parameter. The theory describes a previously unknown phase transition, corresponding to the disappearance of a nontrivial solution, thus providing a soft version of the well-known classification capacity of max-margin classifiers. Furthermore, for high-dimensional manifolds of any shape, the theory prescribes how to define manifold radius and dimension, two measurable geometric quantities that capture the aspects of manifold shape relevant to soft classification.
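An empirical companion to the theory (a simulation sketch, not the mean-field calculation): sweep the soft-margin regularization parameter C on noisily labeled data and look for a finite optimum; the data model and values are assumptions.

```python
# Test error of a linear soft-margin classifier as a function of C.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
N, P = 50, 400

X = rng.normal(size=(P, N))                         # point "manifolds"
w_true = rng.normal(size=N)
y = np.sign(X @ w_true + 1.5 * rng.normal(size=P))  # noisy teacher labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for C in [1e-3, 1e-2, 1e-1, 1.0, 10.0]:
    clf = LinearSVC(C=C, max_iter=20000).fit(X_tr, y_tr)
    print(f"C={C:g}: test error = {1 - clf.score(X_te, y_te):.3f}")
```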

8.
Sci Rep ; 12(1): 13107, 2022 07 30.
Article in English | MEDLINE | ID: mdl-35907920

ABSTRACT

Humans have the remarkable ability to continually store new memories, while maintaining old memories for a lifetime. How the brain avoids catastrophic forgetting due to interference between encoded memories is an open problem in computational neuroscience. Here we present a model for continual learning in a recurrent neural network combining Hebbian learning, synaptic decay, and a novel memory consolidation mechanism: memories undergo stochastic rehearsals with rates proportional to the memory's basin of attraction, causing self-amplified consolidation. This mechanism gives rise to memory lifetimes that extend much longer than the synaptic decay time, and to a retrieval probability that decays gracefully with memory age. The number of retrievable memories is proportional to a power of the number of neurons. Perturbations to the circuit model cause temporally graded retrograde and anterograde deficits, mimicking the memory impairments observed following neurological trauma.


Subjects
Memory Consolidation; Memory; Humans; Learning/physiology; Memory/physiology; Neural Networks, Computer; Neurons/physiology
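A toy sketch of the consolidation idea, with simplifying assumptions: standard Hopfield dynamics, and rehearsal implemented by re-storing whatever attractor a random probe settles into, which naturally favors memories with larger basins. Parameters are illustrative.

```python
# Hebbian encoding with synaptic decay plus stochastic rehearsal of attractors.
import numpy as np

rng = np.random.default_rng(5)
N, T, decay, eta = 300, 30, 0.9, 1.0 / 300

W = np.zeros((N, N))
memories = []

def converge(state, steps=20):
    for _ in range(steps):
        state = np.sign(W @ state + 1e-9)
    return state

for t in range(T):
    xi = rng.choice([-1, 1], size=N)              # encode a new memory
    memories.append(xi)
    W = decay * W + eta * np.outer(xi, xi)

    for _ in range(3):                            # stochastic rehearsal
        attractor = converge(rng.choice([-1, 1], size=N))
        W = decay * W + eta * np.outer(attractor, attractor)
    np.fill_diagonal(W, 0.0)

overlaps = [float(converge(m.copy()) @ m) / N for m in memories]
print("retrieval overlap, oldest to newest:", np.round(overlaps, 2))
```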
9.
PLoS Comput Biol ; 18(7): e1010327, 2022 07.
Article in English | MEDLINE | ID: mdl-35862445

ABSTRACT

A key question in theoretical neuroscience is the relation between the connectivity structure and the collective dynamics of a network of neurons. Here we study the connectivity-dynamics relation as reflected in the distribution of eigenvalues of the covariance matrix of the dynamic fluctuations of neuronal activities, which is closely related to principal component analysis (PCA) of the network dynamics and the associated effective dimensionality. We consider the spontaneous fluctuations around a steady state in a randomly connected recurrent network of stochastic neurons. An exact analytical expression for the covariance eigenvalue distribution in the large-network limit can be obtained using results from random matrix theory. The distribution has a finitely supported smooth bulk spectrum and exhibits an approximate power-law tail for coupling matrices near the critical edge. We generalize the results to include second-order connectivity motifs and discuss extensions to excitatory-inhibitory networks. The theoretical results are compared with those from finite-size networks, and the effects of temporal and spatial sampling are studied. A preliminary application to whole-brain imaging data is presented. Using simple connectivity models, our work provides theoretical predictions for the covariance spectrum, a fundamental property of recurrent neuronal dynamics, that can be compared with experimental data.


Subjects
Models, Neurological; Neurosciences; Brain/physiology; Neurons/physiology; Principal Component Analysis
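For the linearized version of such a network (an Ornstein-Uhlenbeck process), the stationary covariance follows from the connectivity via a Lyapunov equation, so the predicted spectrum is easy to inspect numerically; parameters below are illustrative.

```python
# Covariance spectrum of fluctuations in a linear randomly connected network.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(6)
N, g, sigma2 = 800, 0.8, 1.0                      # g < 1: stable linear regime

J = rng.normal(size=(N, N)) / np.sqrt(N)
A = -np.eye(N) + g * J                            # dx = A x dt + noise

# Stationary covariance C solves A C + C A^T = -sigma2 * I.
C = solve_continuous_lyapunov(A, -sigma2 * np.eye(N))
eigs = np.sort(np.linalg.eigvalsh(C))[::-1]

dim = eigs.sum() ** 2 / (eigs ** 2).sum()         # PCA effective dimensionality
print(f"effective dimensionality: {dim:.0f} of {N}")
print("largest covariance eigenvalues:", np.round(eigs[:5], 2))
```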
10.
Phys Rev E ; 106(6-1): 064406, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36671118

ABSTRACT

Perceptual learning (PL) involves long-lasting improvement in perceptual tasks following extensive training and is accompanied by modified neuronal responses in sensory cortical areas of the brain. Understanding the dynamics of PL and the resultant synaptic changes is important for causally connecting PL to the observed neural plasticity. This is theoretically challenging because learning-related changes are distributed across many stages of the sensory hierarchy. In this paper, we modeled the sensory hierarchy as a deep nonlinear neural network and studied PL of fine discrimination, a common and well-studied PL paradigm. Using tools from statistical physics, we developed a mean-field theory of the network in the limit of a large number of neurons and a large number of examples. Our theory suggests that, in this thermodynamic limit, the input-output function of the network can be exactly mapped to that of a deep linear network, allowing us to characterize the space of solutions for the task. Surprisingly, we found that modifying synaptic weights in the first layer of the hierarchy is both sufficient and necessary for PL. To address the degeneracy of the space of solutions, we postulate that PL dynamics are constrained by a normative minimum perturbation (MP) principle, which favors weight matrices with minimal changes relative to their pre-learning values. Interestingly, MP plasticity induces changes to weights and neural representations in all layers of the network, except for the readout weight vector. While weight changes in higher layers are not necessary for learning, they help reduce the overall perturbation to the network. In addition, such plasticity can be achieved simply through slow learning. We further elucidate the properties of MP changes and compare them against experimental findings. Overall, our statistical mechanics theory of PL provides a mechanistic and normative understanding of several important empirical findings of PL.


Subjects
Brain; Learning; Learning/physiology; Neural Networks, Computer; Neuronal Plasticity/physiology; Neurons/physiology
11.
Nature ; 596(7872): 404-409, 2021 08.
Article in English | MEDLINE | ID: mdl-34381211

ABSTRACT

As animals navigate on a two-dimensional surface, neurons in the medial entorhinal cortex (MEC) known as grid cells are activated when the animal passes through multiple locations (firing fields) arranged in a hexagonal lattice that tiles the locomotion surface [1]. However, although our world is three-dimensional, it is unclear how the MEC represents 3D space [2]. Here we recorded from MEC cells in freely flying bats and identified several classes of spatial neurons, including 3D border cells, 3D head-direction cells, and neurons with multiple 3D firing fields. Many of these multifield neurons were 3D grid cells, whose neighbouring fields were separated by a characteristic distance, forming local order, but lacked any global lattice arrangement of the fields. Thus, whereas 2D grid cells form a global lattice, characterized by both local and global order, 3D grid cells exhibited only local order, creating a locally ordered metric for space. We modelled grid cells as emerging from pairwise interactions between fields, which yielded a hexagonal lattice in 2D and local order in 3D, thereby describing both 2D and 3D grid cells with one unifying model. Together, these data and the model illuminate the fundamental differences and similarities between neural codes for 3D and 2D space in the mammalian brain.


Subjects
Chiroptera/physiology; Depth Perception/physiology; Entorhinal Cortex/cytology; Entorhinal Cortex/physiology; Grid Cells/physiology; Models, Neurological; Animals; Behavior, Animal/physiology; Flight, Animal/physiology; Male
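A schematic version of the pairwise-interaction idea: field centers relax under a pairwise potential with a preferred spacing d0, which in 2D produces local order with a characteristic nearest-neighbor distance. The Lennard-Jones-style potential and all parameters are assumptions for illustration, not the paper's fitted model.

```python
# Field centers minimizing a pairwise potential with preferred spacing d0.
import numpy as np

rng = np.random.default_rng(7)
n, d0, lr, box = 40, 1.0, 0.005, 6.0

pos = rng.uniform(0, box, size=(n, 2))

def forces(pos):
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.linalg.norm(diff, axis=-1) + np.eye(n)      # avoid divide-by-zero
    # dU/dr for U(r) = (d0/r)^12 - 2 (d0/r)^6, minimum exactly at r = d0
    dU = -12 * d0**12 / r**13 + 12 * d0**6 / r**7
    f = -(dU / r)[:, :, None] * diff
    f[np.arange(n), np.arange(n)] = 0.0
    return f.sum(axis=1)

for _ in range(5000):
    pos += lr * np.clip(forces(pos), -1, 1)            # clipped descent step
    pos = np.clip(pos, 0, box)                         # confine to the arena

d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1) + 1e9 * np.eye(n)
print("nearest-neighbor distances cluster near d0:",
      np.round(np.sort(d.min(axis=1))[:10], 2))
```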
12.
Phys Rev E ; 103(2-1): 022404, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33736047

ABSTRACT

Many sensory pathways in the brain include sparsely active populations of neurons downstream from the input stimuli. The biological purpose of this expanded structure is unclear, but it may be beneficial due to the increased expressive power of the network. In this work, we show that certain ways of expanding a neural network can improve its generalization performance even when the expanded structure is pruned after the learning period. To study this setting, we use a teacher-student framework in which a perceptron teacher network generates labels corrupted with small amounts of noise. We then train a student network structurally matched to the teacher; in this scenario, the student can achieve optimal accuracy if given the teacher's synaptic weights. We find that sparse expansion of the input layer of a student perceptron network both increases its capacity and improves its generalization performance when learning a noisy rule from a teacher perceptron, even when the expansion is pruned after learning. We find similar behavior when the expanded units are stochastic and uncorrelated with the input, and we analyze this network in the mean-field limit. By solving the mean-field equations, we show that the generalization error of the stochastic expanded student network continues to drop as the size of the network increases. This improvement in generalization performance occurs despite the increased complexity of the student network relative to the teacher it is trying to learn. We show that this effect is closely related to the addition of slack variables in artificial neural networks and suggest possible implications for artificial and biological neural networks.


Subjects
Learning; Models, Neurological; Nerve Net/physiology
13.
Phys Rev E ; 104(6-1): 064301, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35030925

ABSTRACT

A recent line of work has studied wide deep neural networks (DNNs) by approximating them as Gaussian processes (GPs). A DNN trained with gradient flow was shown to map to a GP governed by the neural tangent kernel (NTK), whereas earlier works showed that a DNN with an i.i.d. prior over its weights maps to the so-called neural network Gaussian process (NNGP). Here we consider a DNN training protocol, involving noise, weight decay, and finite width, whose outcome corresponds to a certain non-Gaussian stochastic process. An analytical framework is then introduced to analyze this non-Gaussian process, whose deviation from a GP is controlled by the finite width. Our contribution is threefold: (i) In the infinite-width limit, we establish a correspondence between DNNs trained with noisy gradients and the NNGP, not the NTK. (ii) We provide a general analytical form for the finite-width correction (FWC) for DNNs with arbitrary activation functions and depth, and use it to predict the outputs of empirical finite networks with high accuracy. Analyzing the FWC as a function of the training set size n, we find that it is negligible both in the very small n regime and, surprisingly, in the large n regime (where the GP error scales as O(1/n)). (iii) We flesh out algebraically how these FWCs can improve the performance of finite convolutional neural networks (CNNs) relative to their GP counterparts on image classification tasks.
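The NNGP side of this correspondence is straightforward to compute: for ReLU layers the kernel obeys the arccosine (Cho-Saul) recursion. Below is a sketch using it for GP regression on toy data; the depth and the hyperparameters sw2 and sb2 are illustrative choices.

```python
# ReLU NNGP kernel via the arccosine-kernel recursion, used as a GP prior.
import numpy as np

def nngp_kernel(X1, X2, depth=3, sw2=2.0, sb2=0.1):
    d = X1.shape[1]
    K12 = sw2 * (X1 @ X2.T) / d + sb2
    K11 = sw2 * np.sum(X1**2, axis=1) / d + sb2   # diagonal entries
    K22 = sw2 * np.sum(X2**2, axis=1) / d + sb2
    for _ in range(depth):
        norms = np.sqrt(np.outer(K11, K22))
        cos = np.clip(K12 / norms, -1.0, 1.0)
        theta = np.arccos(cos)
        K12 = sb2 + sw2 / (2 * np.pi) * norms * (np.sin(theta) + (np.pi - theta) * cos)
        K11 = sb2 + sw2 / 2 * K11                 # same recursion at theta = 0
        K22 = sb2 + sw2 / 2 * K22
    return K12

rng = np.random.default_rng(8)
X_tr, X_te = rng.normal(size=(40, 10)), rng.normal(size=(5, 10))
y_tr = np.sin(2 * X_tr[:, 0])

K = nngp_kernel(X_tr, X_tr)
k = nngp_kernel(X_te, X_tr)
pred = k @ np.linalg.solve(K + 1e-3 * np.eye(len(X_tr)), y_tr)  # GP mean
print("NNGP regression predictions:", np.round(pred, 2))
```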

14.
Neural Netw ; 132: 428-446, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33022471

ABSTRACT

We perform an analysis of the average generalization dynamics of large neural networks trained using gradient descent. We study the practically relevant "high-dimensional" regime, where the number of free parameters in the network is on the order of, or even larger than, the number of examples in the dataset. Using random matrix theory and exact solutions in linear models, we derive the generalization and training error dynamics of learning and analyze how they depend on the dimensionality of the data and the signal-to-noise ratio of the learning problem. We find that the dynamics of gradient descent learning naturally protect against overtraining and overfitting in large networks. Overtraining is worst at intermediate network sizes, when the effective number of free parameters equals the number of samples, and thus can be reduced by making a network smaller or larger. Additionally, in the high-dimensional regime, low generalization error requires starting with small initial weights. We then turn to non-linear neural networks and show that making networks very large does not harm their generalization performance. On the contrary, it can in fact reduce overtraining, even without early stopping or regularization of any sort. We identify two novel phenomena underlying this behavior in overcomplete models: first, there is a frozen subspace of the weights in which no learning occurs under gradient descent; and second, the statistical properties of the high-dimensional regime yield better-conditioned input correlations, which protect against overtraining. We demonstrate that standard application of theories such as Rademacher complexity is inaccurate in predicting the generalization performance of deep neural networks, and we derive an alternative bound which incorporates the frozen-subspace and conditioning effects and qualitatively matches the behavior observed in simulation.


Subjects
Deep Learning
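The worst-overtraining-at-intermediate-size effect is easy to reproduce in a toy linear problem: minimum-norm least squares with a varying number of features shows test error peaking when the parameter count matches the sample count. Sizes and noise level below are illustrative.

```python
# Test error of minimum-norm least squares vs. model size (peak near p = n).
import numpy as np

rng = np.random.default_rng(9)
n, d, noise = 100, 200, 0.5

w_true = rng.normal(size=d) / np.sqrt(d)
X_tr, X_te = rng.normal(size=(n, d)), rng.normal(size=(1000, d))
y_tr = X_tr @ w_true + noise * rng.normal(size=n)
y_te = X_te @ w_true

for p in [20, 50, 90, 100, 110, 150, 200]:
    # The student sees only the first p features (a crude way to vary size).
    w_hat, *_ = np.linalg.lstsq(X_tr[:, :p], y_tr, rcond=None)
    err = np.mean((X_te[:, :p] @ w_hat - y_te) ** 2)
    print(f"p={p:3d}: test MSE = {err:.2f}")
```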
15.
Nat Commun ; 11(1): 746, 2020 02 06.
Article in English | MEDLINE | ID: mdl-32029727

ABSTRACT

Stimuli are represented in the brain by the collective population responses of sensory neurons, and an object presented under varying conditions gives rise to a collection of neural population responses called an 'object manifold'. Changes in the object representation along a hierarchical sensory system are associated with changes in the geometry of those manifolds, and recent theoretical progress connects this geometry with 'classification capacity', a quantitative measure of the ability to support object classification. Deep neural networks trained on object classification tasks are a natural testbed for the applicability of this relation. We show how classification capacity improves along the hierarchies of deep neural networks with different architectures. We demonstrate that changes in the geometry of the associated object manifolds underlie this improved capacity, and we shed light on the functional roles that different levels of the hierarchy play in achieving it, through an orchestrated reduction of manifold radius, dimensionality, and inter-manifold correlations.


Subjects
Models, Neurological; Neural Networks, Computer; Visual Perception/physiology; Algorithms; Brain/physiology; Deep Learning; Humans; Pattern Recognition, Visual/physiology; Photic Stimulation; Sensory Receptor Cells/physiology
16.
PLoS Comput Biol ; 15(11): e1007476, 2019 11.
Article in English | MEDLINE | ID: mdl-31725714

ABSTRACT

In many sensory systems the neural signal is coded by the coordinated response of heterogeneous populations of neurons. What computational benefit does this diversity confer on information processing? We derive an efficient coding framework assuming that neurons have evolved to communicate signals optimally given natural stimulus statistics and metabolic constraints. Incorporating nonlinearities and realistic noise, we study optimal population coding of the same sensory variable using two measures: maximizing the mutual information between stimuli and responses, and minimizing the error incurred by the optimal linear decoder of the responses. We apply our theory to a commonly observed splitting of sensory neurons into ON and OFF populations that signal stimulus increases or decreases, and to populations in which all neurons have monotonically increasing (ON) responses. Depending on the optimality measure, we make different predictions about how to optimally split a population into ON and OFF, and how to allocate the firing thresholds of individual neurons given realistic stimulus distributions and noise; these predictions accord with certain biases observed experimentally.


Subjects
Nerve Net/physiology; Sensory Receptor Cells/metabolism; Action Potentials/physiology; Animals; Brain/physiology; Humans; Models, Neurological; Models, Theoretical; Sensory Receptor Cells/physiology
17.
Front Hum Neurosci ; 13: 264, 2019.
Article in English | MEDLINE | ID: mdl-31417383

ABSTRACT

Decision making often requires making arbitrary choices ("picking") between alternatives that make no difference to the agent, that are equally desirable, or whose potential reward is unknown. Using event-related potentials, we tested the effect of age on this common type of decision making. We compared two age groups, ages 18-25 and ages 41-67, on a masked-priming paradigm while recording EEG and EMG. Participants pressed a right or left button following either an instructive arrow cue or a neutral free-choice picking cue, both preceded by a masked arrow or neutral prime. The prime affected behavior in both the Instructed and Free-choice picking conditions, in both the younger and older groups. Moreover, electrophysiological "Change of Intention" (ChoI) was observed via the lateralized readiness potential (LRP) in both age groups: the polarity of the LRP indicated first preparation to move the primed hand and then preparation to move the other hand. However, the older participants were more conservative in responding to the instructive cue, exhibiting a speed-accuracy trade-off, with slower response times, fewer errors in incongruent trials, and a reduced probability of EMG activity in the non-responding hand. Additionally, ChoI was observed in both age groups in slow-RT trials with a neutral prime, as a result of an endogenous early intention to respond in the direction opposite to the eventual instructing arrow cue. We conclude that the basic behavioral and electrophysiological signatures of implicit ChoI are common to a wide range of ages. However, older subjects, despite showing a similar dynamic decision trajectory to younger adults, are slower and more prudent, and finalize the decision-making process before letting the information affect the peripheral motor system. In contrast, in younger subjects the flow of information occurs in parallel to the decision process.

18.
Front Neural Circuits ; 13: 82, 2019.
Article in English | MEDLINE | ID: mdl-32047424

ABSTRACT

Associative learning of pure tones is known to cause tonotopic map expansion in the auditory cortex (ACx), but the function this plasticity subserves is unclear. We developed an automated training platform called the "Educage," which was used to train mice on a go/no-go auditory discrimination task to their perceptual limits, for difficult discriminations among pure tones or natural sounds. Spiking responses of excitatory and inhibitory parvalbumin-expressing (PV+) L2/3 neurons in mouse ACx revealed learning-induced overrepresentation of the learned frequencies, as expected from the previous literature. The coordinated plasticity of excitatory and inhibitory neurons supports a role for PV+ neurons in the homeostatic maintenance of the excitation-inhibition balance within the circuit. Using a novel computational model to study auditory tuning curves, we show that overrepresentation of the learned tones does not necessarily improve the network's discrimination performance for these tones. In a separate set of experiments, we trained mice to discriminate among natural sounds. Perceptual learning of natural sounds induced "sparsening" and decorrelation of the neural response, consequently improving discrimination of these complex sounds. This signature of plasticity in A1 highlights its role in the coding of natural sounds.


Subjects
Acoustic Stimulation/methods; Auditory Cortex/physiology; Auditory Perception/physiology; Discrimination Learning/physiology; Neuronal Plasticity/physiology; Animals; Female; Mice; Mice, Inbred C57BL; Psychomotor Performance/physiology
19.
PLoS Comput Biol ; 14(12): e1006309, 2018 12.
Article in English | MEDLINE | ID: mdl-30543634

ABSTRACT

We present a simple model for coherent, spatially correlated chaos in a recurrent neural network. Networks of randomly connected neurons exhibit chaotic fluctuations and have been studied as a model for capturing the temporal variability of cortical activity. The dynamics generated by such networks, however, are spatially uncorrelated and do not generate coherent fluctuations, which are commonly observed across spatial scales of the neocortex. In our model we introduce a structured component of connectivity, in addition to random connections, which effectively embeds a feedforward structure via unidirectional coupling between a pair of orthogonal modes. Local fluctuations driven by the random connectivity are summed by an output mode and drive coherent activity along an input mode. The orthogonality between input and output mode preserves chaotic fluctuations by preventing feedback loops. In the regime of weak structured connectivity we apply a perturbative approach to solve the dynamic mean-field equations, showing that in this regime coherent fluctuations are driven passively by the chaos of local residual fluctuations. When we introduce a row balance constraint on the random connectivity, stronger structured connectivity puts the network in a distinct dynamical regime of self-tuned coherent chaos. In this regime the coherent component of the dynamics self-adjusts intermittently to yield periods of slow, highly coherent chaos. The dynamics display longer time-scales and switching-like activity. We show how in this regime the dynamics depend qualitatively on the particular realization of the connectivity matrix: a complex leading eigenvalue can yield coherent oscillatory chaos while a real leading eigenvalue can yield chaos with broken symmetry. The level of coherence grows with increasing strength of structured connectivity until the dynamics are almost entirely constrained to a single spatial mode. We examine the effects of network-size scaling and show that these results are not finite-size effects. Finally, we show that in the regime of weak structured connectivity, coherent chaos emerges also for a generalized structured connectivity with multiple input-output modes.


Subjects
Computer Simulation/statistics & numerical data; Nonlinear Dynamics; Models, Neurological; Neural Networks, Computer; Neurons/physiology
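A simulation sketch of this model class: a random coupling matrix plus a weak unidirectional rank-one structure between orthogonal modes (v summing local fluctuations, u receiving the drive); g, J1, and the integration settings are illustrative choices.

```python
# Rate network with random connectivity plus a rank-one feedforward structure.
import numpy as np

rng = np.random.default_rng(10)
N, g, J1, dt, T = 500, 1.5, 0.5, 0.05, 4000

J = g * rng.normal(size=(N, N)) / np.sqrt(N)
u = rng.normal(size=N); u /= np.linalg.norm(u)           # input mode (driven)
v = rng.normal(size=N); v -= (v @ u) * u; v /= np.linalg.norm(v)  # output mode

W = J + J1 * np.outer(u, v)        # unidirectional coupling: v -> u

x = rng.normal(size=N)
coherent = []
for _ in range(T):
    x += dt * (-x + W @ np.tanh(x))                      # Euler integration
    coherent.append(u @ np.tanh(x))                      # activity along u

coherent = np.array(coherent[T // 2:])                   # discard transient
print("variance of the coherent mode:", round(float(coherent.var()), 3))
```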
20.
Neuron ; 100(4): 876-890.e5, 2018 11 21.
Article in English | MEDLINE | ID: mdl-30473013

ABSTRACT

Simultaneous recordings of large populations of neurons in behaving animals allow detailed observation of high-dimensional, complex brain activity. However, experimental approaches often focus on a single behavioral paradigm or brain area. Here, we recorded whole-brain neuronal activity of larval zebrafish presented with a battery of visual stimuli while recording fictive motor output. We identified neurons tuned to each stimulus type and to motor output, and discovered groups of neurons in the anterior hindbrain that respond to different stimuli eliciting similar behavioral responses. These convergent sensorimotor representations were only weakly correlated with instantaneous motor activity, suggesting that they critically inform, but do not directly generate, behavioral choices. To catalog brain-wide activity beyond explicit sensorimotor processing, we developed an unsupervised clustering technique that organizes neurons into functional groups. These analyses enabled a broad overview of the functional organization of the brain and revealed numerous brain nuclei whose neurons exhibit concerted activity patterns.


Subjects
Brain Chemistry/physiology; Brain/physiology; Larva/physiology; Neurons/physiology; Psychomotor Performance/physiology; Animals; Animals, Genetically Modified; Brain/cytology; Larva/chemistry; Larva/cytology; Motor Activity/physiology; Neurons/chemistry; Optogenetics/methods; Photic Stimulation/methods; Zebrafish
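A generic illustration of the clustering step (not the paper's custom technique): k-means on z-scored response profiles groups synthetic neurons into functional clusters; the data generation and the scikit-learn call are assumptions for demonstration.

```python
# Group neurons into functional clusters by their response-profile shapes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(11)
n_neurons, n_time, n_groups = 300, 120, 4

motifs = rng.normal(size=(n_groups, n_time))         # underlying motifs
labels_true = rng.integers(0, n_groups, size=n_neurons)
resp = motifs[labels_true] + 0.5 * rng.normal(size=(n_neurons, n_time))

# Z-score each profile so clustering reflects shape, not amplitude.
z = (resp - resp.mean(1, keepdims=True)) / resp.std(1, keepdims=True)
labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(z)

print("cluster sizes:", np.bincount(labels))
```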