Results 1 - 20 of 157
1.
Cell; 179(6): 1382-1392.e10, 2019 Nov 27.
Article in English | MEDLINE | ID: mdl-31735497

ABSTRACT

Distributing learning across multiple layers has proven extremely powerful in artificial neural networks. However, little is known about how multi-layer learning is implemented in the brain. Here, we provide an account of learning across multiple processing layers in the electrosensory lobe (ELL) of mormyrid fish and report how it solves problems well known from machine learning. Because the ELL operates and learns continuously, it must reconcile learning and signaling functions without switching its mode of operation. We show that this is accomplished through a functional compartmentalization within intermediate layer neurons in which inputs driving learning differentially affect dendritic and axonal spikes. We also find that connectivity based on learning rather than sensory response selectivity assures that plasticity at synapses onto intermediate-layer neurons is matched to the requirements of output neurons. The mechanisms we uncover have relevance to learning in the cerebellum, hippocampus, and cerebral cortex, as well as in artificial systems.


Subject(s)
Electric Fish/physiology, Learning, Nerve Net/physiology, Action Potentials/physiology, Animal Structures/cytology, Animal Structures/physiology, Animals, Axons/metabolism, Biophysical Phenomena, Electric Fish/anatomy & histology, Female, Male, Models, Neurological, Neuronal Plasticity, Predatory Behavior, Sensation, Time Factors
2.
Cell; 169(5): 956-969.e17, 2017 May 18.
Article in English | MEDLINE | ID: mdl-28502772

ABSTRACT

Animals exhibit a behavioral response to novel sensory stimuli about which they have no prior knowledge. We have examined the neural and behavioral correlates of novelty and familiarity in the olfactory system of Drosophila. Novel odors elicit strong activity in output neurons (MBONs) of the α'3 compartment of the mushroom body that is rapidly suppressed upon repeated exposure to the same odor. This transition in neural activity upon familiarization requires odor-evoked activity in the dopaminergic neuron innervating this compartment. Moreover, exposure of a fly to novel odors evokes an alerting response that can also be elicited by optogenetic activation of α'3 MBONs. Silencing these MBONs eliminates the alerting behavior. These data suggest that the α'3 compartment plays a causal role in the behavioral response to novel and familiar stimuli as a consequence of dopamine-mediated plasticity at the Kenyon cell-MBONα'3 synapse.


Subject(s)
Drosophila melanogaster/physiology, Mushroom Bodies/physiology, Animals, Dopaminergic Neurons/physiology, Learning, Memory, Mushroom Bodies/cytology, Odorants, Smell
3.
Cell; 165(1): 220-233, 2016 Mar 24.
Article in English | MEDLINE | ID: mdl-26949187

ABSTRACT

Documenting the extent of cellular diversity is a critical step in defining the functional organization of tissues and organs. To infer cell-type diversity from partial or incomplete transcription factor expression data, we devised a sparse Bayesian framework that is able to handle estimation uncertainty and can incorporate diverse cellular characteristics to optimize experimental design. Focusing on spinal V1 inhibitory interneurons, for which the spatial expression of 19 transcription factors has been mapped, we infer the existence of ~50 candidate V1 neuronal types, many of which localize in compact spatial domains in the ventral spinal cord. We have validated the existence of inferred cell types by direct experimental measurement, establishing this Bayesian framework as an effective platform for cell-type characterization in the nervous system and elsewhere.


Subject(s)
Bayes Theorem, Renshaw Cells/chemistry, Renshaw Cells/cytology, Spinal Cord/cytology, Transcription Factors/analysis, Animals, Mice, Renshaw Cells/classification, Transcriptome
4.
Nature; 626(8000): 808-818, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38326612

ABSTRACT

Neuronal signals that are relevant for spatial navigation have been described in many species1-10. However, a circuit-level understanding of how such signals interact to guide navigational behaviour is lacking. Here we characterize a neuronal circuit in the Drosophila central complex that compares internally generated estimates of the heading and goal angles of the fly-both of which are encoded in world-centred (allocentric) coordinates-to generate a body-centred (egocentric) steering signal. Past work has suggested that the activity of EPG neurons represents the fly's moment-to-moment angular orientation, or heading angle, during navigation2,11. An animal's moment-to-moment heading angle, however, is not always aligned with its goal angle-that is, the allocentric direction in which it wishes to progress forward. We describe FC2 cells12, a second set of neurons in the Drosophila brain with activity that correlates with the fly's goal angle. Focal optogenetic activation of FC2 neurons induces flies to orient along experimenter-defined directions as they walk forward. EPG and FC2 neurons connect monosynaptically to a third neuronal class, PFL3 cells12,13. We found that individual PFL3 cells show conjunctive, spike-rate tuning to both the heading angle and the goal angle during goal-directed navigation. Informed by the anatomy and physiology of these three cell classes, we develop a model that explains how this circuit compares allocentric heading and goal angles to build an egocentric steering signal in the PFL3 output terminals. Quantitative analyses and optogenetic manipulations of PFL3 activity support the model. Finally, using a new navigational memory task, we show that flies expressing disruptors of synaptic transmission in subsets of PFL3 cells have a reduced ability to orient along arbitrary goal directions, with an effect size in quantitative accordance with the prediction of our model. The biological circuit described here reveals how two population-level allocentric signals are compared in the brain to produce an egocentric output signal that is appropriate for motor control.


Subject(s)
Brain, Drosophila melanogaster, Goals, Head, Neural Pathways, Spatial Orientation, Spatial Navigation, Animals, Action Potentials, Brain/cytology, Brain/physiology, Drosophila melanogaster/cytology, Drosophila melanogaster/physiology, Head/physiology, Locomotion, Neurons/metabolism, Optogenetics, Spatial Orientation/physiology, Space Perception/physiology, Spatial Memory/physiology, Spatial Navigation/physiology, Synaptic Transmission
5.
Nature; 601(7891): 92-97, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34912112

ABSTRACT

Many behavioural tasks require the manipulation of mathematical vectors, but, outside of computational models1-7, it is not known how brains perform vector operations. Here we show how the Drosophila central complex, a region implicated in goal-directed navigation7-10, performs vector arithmetic. First, we describe a neural signal in the fan-shaped body that explicitly tracks the allocentric travelling angle of a fly, that is, the travelling angle in reference to external cues. Past work has identified neurons in Drosophila8,11-13 and mammals14 that track the heading angle of an animal referenced to external cues (for example, head direction cells), but this new signal illuminates how the sense of space is properly updated when travelling and heading angles differ (for example, when walking sideways). We then characterize a neuronal circuit that performs an egocentric-to-allocentric (that is, body-centred to world-centred) coordinate transformation and vector addition to compute the allocentric travelling direction. This circuit operates by mapping two-dimensional vectors onto sinusoidal patterns of activity across distinct neuronal populations, with the amplitude of the sinusoid representing the length of the vector and its phase representing the angle of the vector. The principles of this circuit may generalize to other brains and to domains beyond navigation where vector operations or reference-frame transformations are required.
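
The sinusoidal ("phasor") encoding summarized above can be sketched directly: a 2D vector is represented as a cosine-shaped activity bump across a population of units indexed by preferred angle, with amplitude encoding vector length and phase encoding vector angle; summing two such patterns then implements vector addition. This is a toy illustration of the encoding principle, not the paper's circuit model, and all names and parameters here are our own:

```python
import numpy as np

def encode(length, angle, n=16):
    """Represent a 2D vector as a sinusoidal population activity pattern."""
    prefs = np.linspace(0, 2 * np.pi, n, endpoint=False)  # preferred angles
    return length * np.cos(prefs - angle)

def decode(activity, n=16):
    """Recover (length, angle) from the population sinusoid's Fourier component."""
    prefs = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = (activity @ np.exp(1j * prefs)) * (2.0 / n)
    return np.abs(z), np.angle(z) % (2 * np.pi)

# Summing the two activity patterns adds the underlying vectors.
a = encode(1.0, 0.0)          # unit vector at 0 rad
b = encode(1.0, np.pi / 2)    # unit vector at 90 degrees
length, angle = decode(a + b)
# length ~= sqrt(2), angle ~= pi/4: the vector sum, read out from amplitude and phase
```

Because a sum of equal-frequency sinusoids is again a sinusoid, the population never has to represent vector components explicitly; the arithmetic falls out of the encoding.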


Subject(s)
Brain/physiology, Cues, Drosophila melanogaster/physiology, Mathematics, Models, Neurological, Spatial Memory/physiology, Spatial Navigation/physiology, Animals, Brain/cytology, Drosophila melanogaster/cytology, Female, Flight, Animal, Goals, Head/physiology, Neurons/physiology, Space Perception/physiology, Walking
6.
Proc Natl Acad Sci U S A; 121(28): e2314511121, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38968113

ABSTRACT

Humans and animals routinely infer relations between different items or events and generalize these relations to novel combinations of items. This allows them to respond appropriately to radically novel circumstances and is fundamental to advanced cognition. However, how learning systems (including the brain) can implement the necessary inductive biases has been unclear. We investigated transitive inference (TI), a classic relational task paradigm in which subjects must learn a relation (A > B and B > C) and generalize it to new combinations of items (A > C). Through mathematical analysis, we found that a broad range of biologically relevant learning models (e.g., gradient flow or ridge regression) perform TI successfully and recapitulate signature behavioral patterns long observed in living subjects. First, we found that models with item-wise additive representations automatically encode transitive relations. Second, for more general representations, a single scalar "conjunctivity factor" determines model behavior on TI and, further, the principle of norm minimization (a standard statistical inductive bias) enables models with fixed, partly conjunctive representations to generalize transitively. Finally, neural networks in the "rich regime," which enables representation learning and improves generalization on many tasks, unexpectedly show poor generalization and anomalous behavior on TI. We find that such networks implement a form of norm minimization (over hidden weights) that yields a local encoding mechanism lacking transitivity. Our findings show how minimal statistical learning principles give rise to a classical relational inductive bias (transitivity), explain empirically observed behaviors, and establish a formal approach to understanding the neural basis of relational abstraction.
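
The claim that simple learners with item-wise additive representations generalize transitively can be checked with ridge regression: train only on adjacent premise pairs, each coded as the difference of one-hot item vectors, then test on all held-out non-adjacent pairs. This is our own toy sketch of the idea, not the paper's exact setup or parameters:

```python
import numpy as np

n_items = 7                        # items ranked A > B > ... > G

# Training data: adjacent pairs only, presented in both orders.
X, y = [], []
for i in range(n_items - 1):
    x = np.zeros(n_items)
    x[i], x[i + 1] = 1.0, -1.0     # additive code: difference of one-hots
    X.append(x);  y.append(1.0)    # "first item wins"
    X.append(-x); y.append(-1.0)   # reversed presentation
X, y = np.array(X), np.array(y)

# Ridge regression, closed form: w = (X'X + lam I)^-1 X'y.
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(n_items), X.T @ y)

# Transitive generalization: every non-adjacent pair is ranked correctly,
# because the learned item scores w come out monotonically ordered.
correct = all(
    w[i] > w[j]
    for i in range(n_items) for j in range(i + 2, n_items)
)
```

The difference-of-one-hots code forces the model to assign each item a scalar score, and any consistent set of scores for the adjacent premises is automatically transitive.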


Subject(s)
Generalization, Psychological, Humans, Generalization, Psychological/physiology, Learning/physiology, Cognition/physiology, Models, Theoretical, Brain/physiology
7.
PLoS Comput Biol; 20(4): e1011954, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38662797

ABSTRACT

Relational cognition-the ability to infer relationships that generalize to novel combinations of objects-is fundamental to human and animal intelligence. Despite this importance, it remains unclear how relational cognition is implemented in the brain due in part to a lack of hypotheses and predictions at the levels of collective neural activity and behavior. Here we discovered, analyzed, and experimentally tested neural networks (NNs) that perform transitive inference (TI), a classic relational task (if A > B and B > C, then A > C). We found NNs that (i) generalized perfectly, despite lacking overt transitive structure prior to training, (ii) generalized when the task required working memory (WM), a capacity thought to be essential to inference in the brain, (iii) emergently expressed behaviors long observed in living subjects, in addition to a novel order-dependent behavior, and (iv) expressed different task solutions yielding alternative behavioral and neural predictions. Further, in a large-scale experiment, we found that human subjects performing WM-based TI showed behavior inconsistent with a class of NNs that characteristically expressed an intuitive task solution. These findings provide neural insights into a classical relational ability, with wider implications for how the brain realizes relational cognition.


Subject(s)
Brain, Cognition, Memory, Short-Term, Neural Networks, Computer, Humans, Cognition/physiology, Brain/physiology, Memory, Short-Term/physiology, Models, Neurological, Computational Biology, Male, Adult, Female, Young Adult, Nerve Net/physiology, Task Performance and Analysis
8.
Nature; 576(7785): 126-131, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31748750

ABSTRACT

Many animals rely on an internal heading representation when navigating in varied environments1-10. How this representation is linked to the sensory cues that define different surroundings is unclear. In the fly brain, heading is represented by 'compass' neurons that innervate a ring-shaped structure known as the ellipsoid body3,11,12. Each compass neuron receives inputs from 'ring' neurons that are selective for particular visual features13-16; this combination provides an ideal substrate for the extraction of directional information from a visual scene. Here we combine two-photon calcium imaging and optogenetics in tethered flying flies with circuit modelling, and show how the correlated activity of compass and visual neurons drives plasticity17-22, which flexibly transforms two-dimensional visual cues into a stable heading representation. We also describe how this plasticity enables the fly to convert a partial heading representation, established from orienting within part of a novel setting, into a complete heading representation. Our results provide mechanistic insight into the memory-related computations that are essential for flexible navigation in varied surroundings.


Subject(s)
Visual Perception, Animals, Calcium/physiology, Drosophila melanogaster, Head, Neuronal Plasticity, Neurons/physiology, Optogenetics, Spatial Orientation
9.
Phys Rev Lett; 131(11): 118401, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37774280

ABSTRACT

Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units. Understanding how biological and machine-learning networks function and learn requires knowledge of the structure of this coordinated activity, information contained, for example, in cross covariances between units. Self-consistent dynamical mean field theory (DMFT) has elucidated several features of random neural networks-in particular, that they can generate chaotic activity-however, a calculation of cross covariances using this approach has not been provided. Here, we calculate cross covariances self-consistently via a two-site cavity DMFT. We use this theory to probe spatiotemporal features of activity coordination in a classic random-network model with independent and identically distributed (i.i.d.) couplings, showing an extensive but fractionally low effective dimension of activity and a long population-level timescale. Our formulas apply to a wide range of single-unit dynamics and generalize to non-i.i.d. couplings. As an example of the latter, we analyze the case of partially symmetric couplings.
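
The classic random-network model referenced here (rate units with i.i.d. Gaussian couplings of strength g) can be simulated directly: for g > 1 the network settles into ongoing chaotic fluctuations, and the cross covariances that the cavity DMFT characterizes analytically can be estimated numerically. A minimal sketch, with our own parameter choices and not the paper's calculation:

```python
import numpy as np

rng = np.random.default_rng(1)
N, g, dt, T = 200, 2.0, 0.05, 4000

# i.i.d. couplings with variance g^2 / N (chaotic regime for g > 1).
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

x = rng.normal(0.0, 1.0, N)
xs = []
for t in range(T):
    x = x + dt * (-x + J @ np.tanh(x))   # dx/dt = -x + J phi(x), Euler step
    if t > T // 2:                        # discard the transient
        xs.append(x.copy())
xs = np.array(xs)

# Equal-time covariance matrix of the chaotic activity.
C = np.cov(xs.T)
mean_var = np.mean(np.diag(C))                           # self-covariances, O(1)
mean_cross = np.mean(np.abs(C - np.diag(np.diag(C))))    # cross terms, individually small
```

The single-site statistics (the diagonal of C) are what standard DMFT delivers; the off-diagonal structure is the quantity the two-site cavity calculation targets.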

10.
PLoS Comput Biol; 18(12): e1010759, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36516226

ABSTRACT

Feedforward network models performing classification tasks rely on highly convergent output units that collect the information passed on by preceding layers. Although convergent output-unit-like neurons may exist in some biological neural circuits, notably the cerebellar cortex, neocortical circuits do not exhibit any obvious candidates for this role; instead, they are highly recurrent. We investigate whether a sparsely connected recurrent neural network (RNN) can perform classification in a distributed manner without ever bringing all of the relevant information to a single convergence site. Our model is based on a sparse RNN that performs classification dynamically. Specifically, the interconnections of the RNN are trained to resonantly amplify the magnitude of responses to some external inputs but not others. The amplified and non-amplified responses then form the basis for binary classification. Furthermore, the network acts as an evidence accumulator and maintains its decision even after the input is turned off. Despite highly sparse connectivity, learned recurrent connections allow input information to flow to every neuron of the RNN, providing the basis for distributed computation. In this arrangement, the minimum number of synapses per neuron required to reach maximum memory capacity scales only logarithmically with network size. The model is robust to various types of noise, works with different activation and loss functions and with both backpropagation- and Hebbian-based learning rules. The RNN can also be constructed with a split excitation-inhibition architecture with little reduction in performance.


Subject(s)
Learning, Neural Networks, Computer, Learning/physiology, Neurons/physiology, Synapses/physiology
11.
PLoS Comput Biol; 18(2): e1008836, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35139071

ABSTRACT

Cortical circuits generate excitatory currents that must be cancelled by strong inhibition to assure stability. The resulting excitatory-inhibitory (E-I) balance can generate spontaneous irregular activity but, in standard balanced E-I models, this requires that an extremely strong feedforward bias current be included along with the recurrent excitation and inhibition. The absence of experimental evidence for such large bias currents inspired us to examine an alternative regime that exhibits asynchronous activity without requiring unrealistically large feedforward input. In these networks, irregular spontaneous activity is supported by a continually changing sparse set of neurons. To support this activity, synaptic strengths must be drawn from high-variance distributions. Unlike standard balanced networks, these sparse balance networks exhibit robust nonlinear responses to uniform inputs and non-Gaussian input statistics. Interestingly, the speed, not the size, of synaptic fluctuations dictates the degree of sparsity in the model. In addition to simulations, we provide a mean-field analysis to illustrate the properties of these networks.
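
The hallmark of the balanced regime invoked above, individually strong excitatory and inhibitory currents that largely cancel, rests on a simple scaling argument: with K inputs of strength ~1/sqrt(K), excitation and inhibition are each O(sqrt(K)) while their sum stays O(1). A back-of-the-envelope check under our own simplified assumptions (uniform presynaptic rates, equal pool sizes):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 1000                                   # inputs per neuron
J = 1.0 / np.sqrt(K)                       # "strong" coupling scaling

# Presynaptic rates for an excitatory and an inhibitory pool.
rates = rng.uniform(0.0, 1.0, size=(2, K))
excitation = J * rates[0].sum()            # O(sqrt(K)): grows with pool size
inhibition = -J * rates[1].sum()           # O(sqrt(K)): cancels excitation
net = excitation + inhibition              # O(1) residual fluctuation

# Each component is large, but the net input is comparable to unity,
# which is what leaves the neuron fluctuation-driven and irregular.
```

The standard balanced-network result adds a large feedforward bias to set the operating point; the abstract's point is that the sparse-balance regime avoids needing that bias.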


Subject(s)
Cerebral Cortex, Models, Neurological, Nerve Net, Neurons, Synaptic Potentials/physiology, Animals, Cerebral Cortex/cytology, Cerebral Cortex/physiology, Computational Biology, Nerve Net/cytology, Nerve Net/physiology, Neurons/cytology, Neurons/physiology
12.
PLoS Comput Biol; 18(12): e1010590, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36469504

ABSTRACT

Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally-generated chaotic variability, strongly depends on correlations in the input. A distinctive feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far more difficult to suppress chaos with common input into each neuron than through independent input. To study this phenomenon, we develop a non-stationary dynamic mean-field theory for driven networks. The theory explains how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input. We further show that uncorrelated inputs facilitate learning in balanced networks.


Subject(s)
Models, Neurological, Nerve Net, Action Potentials/physiology, Nerve Net/physiology, Neurons/physiology, Learning
13.
Nature; 548(7666): 175-182, 2017 Aug 09.
Article in English | MEDLINE | ID: mdl-28796202

ABSTRACT

Associating stimuli with positive or negative reinforcement is essential for survival, but a complete wiring diagram of a higher-order circuit supporting associative memory has not been previously available. Here we reconstruct one such circuit at synaptic resolution, the Drosophila larval mushroom body. We find that most Kenyon cells integrate random combinations of inputs but that a subset receives stereotyped inputs from single projection neurons. This organization maximizes performance of a model output neuron on a stimulus discrimination task. We also report a novel canonical circuit in each mushroom body compartment with previously unidentified connections: reciprocal Kenyon cell to modulatory neuron connections, modulatory neuron to output neuron connections, and a surprisingly high number of recurrent connections between Kenyon cells. Stereotyped connections found between output neurons could enhance the selection of learned behaviours. The complete circuit map of the mushroom body should guide future functional studies of this learning and memory centre.


Subject(s)
Brain/cytology, Brain/physiology, Connectome, Drosophila melanogaster/cytology, Drosophila melanogaster/physiology, Memory/physiology, Animals, Feedback, Physiological, Female, Larva/cytology, Larva/physiology, Mushroom Bodies/cytology, Mushroom Bodies/physiology, Neural Pathways, Synapses/metabolism
14.
Nat Methods; 15(10): 805-815, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30224673

ABSTRACT

Neuroscience is experiencing a revolution in which simultaneous recording of thousands of neurons is revealing population dynamics that are not apparent from single-neuron responses. This structure is typically extracted from data averaged across many trials, but deeper understanding requires studying phenomena detected in single trials, which is challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. We introduce latent factor analysis via dynamical systems, a deep learning method to infer latent dynamics from single-trial neural spiking data. When applied to a variety of macaque and human motor cortical datasets, latent factor analysis via dynamical systems accurately predicts observed behavioral variables, extracts precise firing rate estimates of neural dynamics on single trials, infers perturbations to those dynamics that correlate with behavioral choices, and combines data from non-overlapping recording sessions spanning months to improve inference of underlying dynamics.


Subject(s)
Action Potentials, Algorithms, Models, Neurological, Motor Cortex/physiology, Neurons/physiology, Animals, Humans, Male, Middle Aged, Population Dynamics, Primates
15.
Nature; 509(7498): 43-8, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24784215

ABSTRACT

The precision of skilled movement depends on sensory feedback and its refinement by local inhibitory microcircuits. One specialized set of spinal GABAergic interneurons forms axo-axonic contacts with the central terminals of sensory afferents, exerting presynaptic inhibitory control over sensory-motor transmission. The inability to achieve selective access to the GABAergic neurons responsible for this unorthodox inhibitory mechanism has left unresolved the contribution of presynaptic inhibition to motor behaviour. We used Gad2 as a genetic entry point to manipulate the interneurons that contact sensory terminals, and show that activation of these interneurons in mice elicits the defining physiological characteristics of presynaptic inhibition. Selective genetic ablation of Gad2-expressing interneurons severely perturbs goal-directed reaching movements, uncovering a pronounced and stereotypic forelimb motor oscillation, the core features of which are captured by modelling the consequences of sensory feedback at high gain. Our findings define the neural substrate of a genetically hardwired gain control system crucial for the smooth execution of movement.
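
The closing modelling claim, that the forelimb oscillation is captured by "sensory feedback at high gain", can be illustrated with the textbook one-dimensional delayed negative-feedback loop: at low gain the limb settles to its target, while the same loop at high gain overshoots and oscillates. This is a hypothetical minimal model of ours, not the paper's:

```python
import numpy as np

def simulate(gain, delay=10, steps=600, dt=0.05):
    """Position driven by delayed negative feedback of its own error."""
    x = np.zeros(steps)
    x[0] = 1.0                              # start away from the target x = 0
    for t in range(1, steps):
        feedback = x[max(t - delay, 0)]     # sensed position, delayed
        x[t] = x[t - 1] - dt * gain * feedback
    return x

low = simulate(gain=1.0)    # low gain: converges to the target
high = simulate(gain=8.0)   # high gain + delay: overshoots and oscillates
```

Delayed feedback x'(t) = -g x(t - tau) is stable only while g*tau is small; raising the gain past that point converts smooth correction into a growing oscillation, which is the qualitative signature reported after ablating the inhibitory interneurons.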


Subject(s)
Feedback, Sensory/physiology, Motor Skills/physiology, Movement/physiology, Neural Inhibition/physiology, Presynaptic Terminals/physiology, Spinal Cord/physiology, Animals, Axons/physiology, Efferent Pathways/physiology, Female, Forelimb/physiology, GABAergic Neurons/cytology, GABAergic Neurons/metabolism, Glutamate Decarboxylase/genetics, Glutamate Decarboxylase/metabolism, Interneurons/cytology, Interneurons/metabolism, Male, Mice, Models, Neurological, Neurotransmitter Agents/metabolism
16.
Proc Natl Acad Sci U S A; 114(44): E9366-E9375, 2017 Oct 31.
Article in English | MEDLINE | ID: mdl-29042519

ABSTRACT

Neurons and networks in the cerebral cortex must operate reliably despite multiple sources of noise. To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks. We find that robustness to output noise requires synaptic connections to be in a balanced regime in which excitation and inhibition are strong and largely cancel each other. We evaluate the conditions required for this regime to exist and determine the properties of networks operating within it. A plausible synaptic plasticity rule for learning that balances weight configurations is presented. Our theory predicts an optimal ratio of the number of excitatory and inhibitory synapses for maximizing the encoding capacity of balanced networks for given statistics of afferent activations. Previous work has shown that balanced networks amplify spatiotemporal variability and account for observed asynchronous irregular states. Here we present a distinct type of balanced network that amplifies small changes in the impinging signals and emerges automatically from learning to perform neuronal and network functions robustly.


Subject(s)
Action Potentials/physiology, Cerebral Cortex/physiology, Nerve Net/physiology, Neurons/physiology, Animals, Learning/physiology, Memory/physiology, Models, Neurological, Neuronal Plasticity/physiology, Synapses/physiology
17.
Nature; 497(7447): 113-7, 2013 May 02.
Article in English | MEDLINE | ID: mdl-23615618

ABSTRACT

The mushroom body in the fruitfly Drosophila melanogaster is an associative brain centre that translates odour representations into learned behavioural responses. Kenyon cells, the intrinsic neurons of the mushroom body, integrate input from olfactory glomeruli to encode odours as sparse distributed patterns of neural activity. We have developed anatomic tracing techniques to identify the glomerular origin of the inputs that converge onto 200 individual Kenyon cells. Here we show that each Kenyon cell integrates input from a different and apparently random combination of glomeruli. The glomerular inputs to individual Kenyon cells show no discernible organization with respect to their odour tuning, anatomic features or developmental origins. Moreover, different classes of Kenyon cells do not seem to preferentially integrate inputs from specific combinations of glomeruli. This organization of glomerular connections to the mushroom body could allow the fly to contextualize novel sensory experiences, a feature consistent with the role of this brain centre in mediating learned olfactory associations and behaviours.
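
The connectivity logic described above, each Kenyon cell sampling a small random combination of glomeruli, is commonly modelled as a sparse random projection followed by a high firing threshold, which produces sparse, decorrelated odour codes. A schematic sketch with our own parameter choices (the "~5% active" target is illustrative, not a figure from this paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n_glomeruli, n_kcs, claws = 50, 2000, 7    # ~7 glomerular inputs per Kenyon cell

# Each Kenyon cell draws a random, unordered combination of glomeruli.
W = np.zeros((n_kcs, n_glomeruli))
for k in range(n_kcs):
    W[k, rng.choice(n_glomeruli, size=claws, replace=False)] = 1.0

def kc_response(odor, sparseness=0.05):
    """Threshold the summed glomerular drive so only the top ~5% of cells fire."""
    drive = W @ odor
    theta = np.quantile(drive, 1.0 - sparseness)
    return (drive > theta).astype(float)

odor = rng.uniform(0.0, 1.0, n_glomeruli)  # one odour's glomerular activity
code = kc_response(odor)
# code is a sparse binary pattern over the 2000 Kenyon cells
```

Because the combinations are random, two Kenyon cells rarely share inputs, so even similar odours evoke well-separated sparse patterns, a useful property for the associative readout downstream.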


Subject(s)
Drosophila melanogaster/physiology, Mushroom Bodies/physiology, Olfactory Pathways/physiology, Smell/physiology, Animals, Arthropod Antennae/anatomy & histology, Arthropod Antennae/innervation, Arthropod Antennae/physiology, Coloring Agents, Drosophila melanogaster/anatomy & histology, Drosophila melanogaster/cytology, Female, Learning/physiology, Male, Models, Neurological, Mushroom Bodies/anatomy & histology, Mushroom Bodies/cytology, Neuroanatomical Tract-Tracing Techniques, Neurons/physiology, Odorants/analysis, Olfactory Pathways/cytology, Staining and Labeling
18.
PLoS Comput Biol; 12(3): e1004750, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26939080

ABSTRACT

Spike-timing dependent plasticity (STDP) is a widespread plasticity mechanism in the nervous system. The simplest description of STDP only takes into account pairs of pre- and postsynaptic spikes, with potentiation of the synapse when a presynaptic spike precedes a postsynaptic spike and depression otherwise. In light of experiments that explored a variety of spike patterns, the pair-based STDP model has been augmented to account for multiple pre- and postsynaptic spike interactions. As a result, a number of different "multi-spike" STDP models have been proposed based on different experimental observations. The behavior of these models at the population level is crucial for understanding mechanisms of learning and memory. The challenging balance between the stability of a population of synapses and their competitive modification is well studied for pair-based models, but it has not yet been fully analyzed for multi-spike models. Here, we address this issue through numerical simulations of an integrate-and-fire model neuron with excitatory synapses subject to STDP described by three different proposed multi-spike models. We also analytically calculate average synaptic changes and fluctuations about these averages. Our results indicate that the different multi-spike models behave quite differently at the population level. Although each model can produce synaptic competition in certain parameter regions, none of them induces synaptic competition with its originally fitted parameters. The dichotomy between synaptic stability and Hebbian competition, which is well characterized for pair-based STDP models, persists in multi-spike models. However, anti-Hebbian competition can coexist with synaptic stability in some models. We propose that the collective behavior of synaptic plasticity models at the population level should be used as an additional guideline in applying phenomenological models based on observations of single synapses.
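
For reference, the pair-based rule that these multi-spike models extend maps each pre/post spike-time difference to a weight change: exponentially decaying potentiation when the presynaptic spike leads, exponentially decaying depression otherwise. A minimal sketch with illustrative (not experimentally fitted) parameters:

```python
import numpy as np

def pair_stdp(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre in ms."""
    if dt > 0:    # pre before post: potentiation
        return a_plus * np.exp(-dt / tau_plus)
    else:         # post before (or coincident with) pre: depression
        return -a_minus * np.exp(dt / tau_minus)

ltp = pair_stdp(10.0)     # potentiation: pre leads post by 10 ms
ltd = pair_stdp(-10.0)    # depression: post leads pre by 10 ms
```

Multi-spike models modify how these pairwise contributions combine (e.g., nearest-neighbour interactions or triplet terms), which is exactly where the population-level behaviours studied in this paper start to diverge.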


Subject(s)
Action Potentials/physiology, Models, Neurological, Nerve Net/physiology, Neuronal Plasticity/physiology, Neurons/physiology, Synaptic Transmission/physiology, Computer Simulation, Humans, Models, Statistical, Neural Inhibition/physiology
19.
PLoS Comput Biol; 12(5): e1004910, 2016 May.
Article in English | MEDLINE | ID: mdl-27224735

ABSTRACT

Neuronal responses characterized by regular tuning curves are typically assumed to arise from structured synaptic connectivity. However, many responses exhibit both regular and irregular components. To address the relationship between tuning curve properties and underlying circuitry, we analyzed neuronal activity recorded from primary motor cortex (M1) of monkeys performing a 3D arm posture control task and compared the results with a neural network model. Posture control is well suited for examining M1 neuronal tuning because it avoids the dynamic complexity of time-varying movements. As a function of hand position, the neuronal responses have a linear component, as has previously been described, as well as heterogeneous and highly irregular nonlinearities. These nonlinear components involve high spatial frequencies and therefore do not support explicit encoding of movement parameters. Yet both the linear and nonlinear components contribute to the decoding of EMG of major muscles used in the task. Remarkably, despite the presence of a strong linear component, a feedforward neural network model with entirely random connectivity can replicate the data, including both the mean and distributions of the linear and nonlinear components as well as several other features of the neuronal responses. This result shows that smoothness provided by the regularity in the inputs to M1 can impose apparent structure on neural responses, in this case a strong linear (also known as cosine) tuning component, even in the absence of ordered synaptic connectivity.
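
The key modelling point, that a feedforward network with entirely random weights driven by inputs varying smoothly with hand position still yields a strong linear tuning component, can be checked by regressing simulated responses on position. A toy version with our own choice of inputs (linear in position) and nonlinearity (rectification), not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_out, n_pos = 40, 100, 500

# Hand positions in 3D, and input activity that varies smoothly (linearly) with them.
pos = rng.uniform(-1.0, 1.0, size=(n_pos, 3))
A = rng.normal(size=(3, n_in))
inputs = pos @ A

# Random feedforward weights and a pointwise nonlinearity.
W = rng.normal(size=(n_in, n_out)) / np.sqrt(n_in)
resp = np.maximum(inputs @ W, 0.0)        # rectified responses of n_out "neurons"

# Fit the linear (plus constant) component of each neuron's positional tuning.
X = np.hstack([pos, np.ones((n_pos, 1))])
beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((resp - pred) ** 2) / np.sum((resp - resp.mean(0)) ** 2)
# r2 is well above zero: random weights plus smooth inputs yield a strong
# linear tuning component, with the rest left as nonlinear residuals.
```

The linear component here is inherited entirely from the smoothness of the inputs; no structured connectivity was built in, which mirrors the abstract's conclusion about apparent cosine tuning in M1.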


Subject(s)
Models, Neurological, Motor Cortex/physiology, Posture/physiology, Animals, Arm/physiology, Computational Biology, Electromyography, Feedback, Sensory, Female, Macaca fascicularis, Male, Motor Cortex/cytology, Movement/physiology, Neural Networks, Computer, Neurons/physiology, User-Computer Interface
20.
Neural Comput; 26(10): 2163-93, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25058706

ABSTRACT

We propose and develop a hierarchical approach to network control of complex tasks. In this approach, a low-level controller directs the activity of a "plant," the system that performs the task. However, the low-level controller may be able to solve only fairly simple problems involving the plant. To accomplish more complex tasks, we introduce a higher-level controller that controls the lower-level controller. We use this system to direct an articulated truck to a specified location through an environment filled with static or moving obstacles. The final system consists of networks that have memorized associations between the sensory data they receive and the commands they issue. These networks are trained on a set of optimal associations generated by minimizing cost functions. Cost function minimization requires predicting the consequences of sequences of commands, which is achieved by constructing forward models, including a model of the lower-level controller. The forward models and cost minimization are used only during training, allowing the trained networks to respond rapidly. In general, the hierarchical approach can be extended to larger numbers of levels, dividing complex tasks into more manageable subtasks. The optimization procedure and the construction of the forward models and controllers can be performed in similar ways at each level of the hierarchy, which allows the system to be modified to perform other tasks or to be extended for more complex tasks without retraining lower-levels.


Subject(s)
Brain/physiology, Models, Neurological, Nerve Net/physiology, Neural Networks, Computer, Computer Simulation, Humans