Results 1 - 19 of 19
1.
Neural Comput; 36(1): 33-74, 2023 Dec 12.
Article in English | MEDLINE | ID: mdl-38052088

ABSTRACT

Under difficult viewing conditions, the brain's visual system uses a variety of recurrent modulatory mechanisms to augment feedforward processing. One resulting phenomenon is contour integration, which occurs in the primary visual (V1) cortex and strengthens neural responses to edges if they belong to a larger smooth contour. Computational models have contributed to an understanding of the circuit mechanisms of contour integration, but less is known about its role in visual perception. To address this gap, we embedded a biologically grounded model of contour integration in a task-driven artificial neural network and trained it using a gradient-descent variant. We used this model to explore how brain-like contour integration may be optimized for high-level visual objectives as well as its potential roles in perception. When the model was trained to detect contours in a background of random edges, a task commonly used to examine contour integration in the brain, it closely mirrored the brain in terms of behavior, neural responses, and lateral connection patterns. When trained on natural images, the model enhanced weaker contours and distinguished whether two points lay on the same versus different contours. The model learned robust features that generalized well to out-of-training-distribution stimuli. Surprisingly, and in contrast with the synthetic task, a parameter-matched control network without recurrence performed the same as or better than the model on the natural-image tasks. Thus, a contour integration mechanism is not essential to perform these more naturalistic contour-related tasks. Finally, the best performance in all tasks was achieved by a modified contour integration model that did not distinguish between excitatory and inhibitory neurons.
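
As a rough illustration of the kind of recurrent lateral facilitation such a model embeds (not the authors' implementation), the sketch below iterates multiplicative modulation among oriented edge responses, with connection strength falling off with distance and orientation difference. All names, sizes, and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "V1" population: edge detectors with positions (1D) and preferred orientations.
n = 50
pos = np.linspace(0.0, 1.0, n)                 # positions along a line
ori = rng.uniform(0.0, np.pi, n)               # preferred orientations (radians)
ori[10:20] = 0.0                               # a smooth "contour": collinear detectors

# Feedforward drive: weak edges everywhere; contour edges are no stronger than the rest.
ff = rng.uniform(0.2, 0.4, n)

# Lateral association-field weights: stronger for nearby, similarly oriented units.
dist = np.abs(pos[:, None] - pos[None, :])
dori = np.abs(np.angle(np.exp(2j * (ori[:, None] - ori[None, :])))) / 2.0
W = np.exp(-dist / 0.1) * np.cos(2.0 * dori)
W = np.clip(W, 0.0, None)
np.fill_diagonal(W, 0.0)

# Recurrent multiplicative modulation of the feedforward response.
r = ff.copy()
for _ in range(10):
    r = ff * (1.0 + 0.5 * np.tanh(W @ r))      # facilitation saturates via tanh

print("mean response, contour units:   ", r[10:20].mean())
print("mean response, background units:", np.delete(r, np.arange(10, 20)).mean())
```

Responses of the collinear "contour" units end up enhanced relative to the background units, which is the qualitative effect the abstract describes.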


Subject(s)
Form Perception; Visual Cortex; Visual Cortex/physiology; Photic Stimulation/methods; Form Perception/physiology; Visual Perception/physiology; Learning
2.
PLoS Comput Biol; 18(9): e1010427, 2022 09.
Article in English | MEDLINE | ID: mdl-36067234

ABSTRACT

Convolutional neural networks trained on object recognition derive inspiration from the neural architecture of the visual system in mammals, and have been used as models of the feedforward computation performed in the primate ventral stream. In contrast to the deep hierarchical organization of primates, the visual system of the mouse has a shallower arrangement. Since mice and primates are both capable of visually guided behavior, this raises questions about the role of architecture in neural computation. In this work, we introduce a novel framework for building a biologically constrained convolutional neural network model of the mouse visual cortex. The architecture and structural parameters of the network are derived from experimental measurements, specifically the 100-micrometer resolution interareal connectome, the estimates of numbers of neurons in each area and cortical layer, and the statistics of connections between cortical layers. This network is constructed to support detailed task-optimized models of mouse visual cortex, with neural populations that can be compared to specific corresponding populations in the mouse brain. Using a well-studied image classification task as our working example, we demonstrate the computational capability of this mouse-sized network. Given its relatively small size, MouseNet achieves roughly two-thirds of VGG16's performance on ImageNet. In combination with the large-scale Allen Brain Observatory Visual Coding dataset, we use representational similarity analysis to quantify the extent to which MouseNet recapitulates the neural representation in mouse visual cortex. Importantly, we provide evidence that optimizing for task performance does not improve similarity to the corresponding biological system beyond a certain point. We demonstrate that the distributions of some physiological quantities are closer to the observed distributions in the mouse brain after task training. We encourage the use of the MouseNet architecture by making the code freely available.
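
A minimal sketch of the representational similarity analysis step described above, using synthetic data in place of MouseNet activations and Allen Brain Observatory responses; the variable names and the simple Pearson-based comparison are assumptions, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation between
    response patterns for each pair of stimuli (rows = stimuli)."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(rdm_a, rdm_b):
    """Similarity of two RDMs: correlation of their upper triangles."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

n_stimuli, n_model_units, n_neurons = 100, 500, 80

# Stand-ins for model-layer activations and recorded neural responses
# to the same stimuli (rows aligned by stimulus).
model_acts = rng.normal(size=(n_stimuli, n_model_units))
neural_resp = model_acts[:, :n_neurons] + 0.5 * rng.normal(size=(n_stimuli, n_neurons))

print("model-vs-brain RSA score:", rsa_score(rdm(model_acts), rdm(neural_resp)))
```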


Subject(s)
Neural Networks, Computer; Visual Cortex; Animals; Mammals; Mice; Neurons/physiology; Visual Cortex/physiology; Visual Perception
3.
Neural Comput; 31(8): 1551-1591, 2019 08.
Article in English | MEDLINE | ID: mdl-31260392

ABSTRACT

Deep convolutional neural networks (CNNs) have certain structural, mechanistic, representational, and functional parallels with primate visual cortex and also many differences. However, perhaps some of the differences can be reconciled. This study develops a cortex-like CNN architecture via (1) a loss function that quantifies the consistency of a CNN architecture with neural data from tract tracing, cell reconstruction, and electrophysiology studies; (2) a hyperparameter-optimization approach for reducing this loss; and (3) heuristics for organizing units into convolutional-layer grids. The optimized hyperparameters are consistent with neural data. The cortex-like architecture differs from typical CNN architectures. In particular, it has longer skip connections, larger kernels and strides, and qualitatively different connection sparsity. Importantly, layers of the cortex-like network have one-to-one correspondences with cortical neuron populations. This should allow unambiguous comparison of model and brain representations in the future and, consequently, more precise measurement of progress toward more biologically realistic deep networks.
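
A schematic of the general approach, not the paper's actual loss or optimizer: a scalar loss compares a candidate architecture's connection statistics with anatomical targets, and a simple random search reduces it. All target values, statistics, and parameter ranges below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical anatomical targets (e.g., derived from tract tracing / reconstructions).
targets = {"fan_in": 1000.0, "kernel_width_deg": 2.0, "sparsity": 0.1}

def architecture_stats(hparams):
    """Map CNN hyperparameters to the statistics we can compare with data."""
    return {
        "fan_in": hparams["channels"] * hparams["kernel"] ** 2 * hparams["density"],
        "kernel_width_deg": hparams["kernel"] * hparams["deg_per_pixel"],
        "sparsity": hparams["density"],
    }

def consistency_loss(hparams):
    """Sum of squared log-ratios between model stats and anatomical targets."""
    stats = architecture_stats(hparams)
    return sum(np.log(stats[k] / targets[k]) ** 2 for k in targets)

def sample_hparams():
    return {
        "channels": rng.integers(8, 512),
        "kernel": rng.integers(3, 15),
        "density": rng.uniform(0.01, 1.0),
        "deg_per_pixel": 0.25,
    }

best = min((sample_hparams() for _ in range(2000)), key=consistency_loss)
print("best hyperparameters:", best, "loss:", consistency_loss(best))
```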


Subject(s)
Models, Neurological; Neural Networks, Computer; Visual Cortex; Animals; Macaca; Neural Pathways/cytology; Neural Pathways/physiology; Neurons/cytology; Neurons/physiology; Visual Cortex/cytology; Visual Cortex/physiology
4.
Neural Comput; 27(6): 1186-222, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25774544

ABSTRACT

Because different parts of the brain have rich interconnections, it is not possible to model small parts realistically in isolation. However, it is also impractical to simulate large neural systems in detail. This article outlines a new approach to multiscale modeling of neural systems that involves constructing efficient surrogate models of populations. Given a population of neuron models with correlated activity and with specific, nonrandom connections, a surrogate model is constructed in order to approximate the aggregate outputs of the population. The surrogate model requires less computation than the neural model, but it has a clear and specific relationship with the neural model. For example, approximate spike rasters for specific neurons can be derived from a simulation of the surrogate model. This article deals specifically with neural engineering framework (NEF) circuits of leaky-integrate-and-fire point neurons. Weighted sums of spikes are modeled by interpolating over latent variables in the population activity, and linear filters operate on gaussian random variables to approximate spike-related fluctuations. It is found that the surrogate models can often closely approximate network behavior with orders-of-magnitude reduction in computational demands, although there are certain systematic differences between the spiking and surrogate models. Since individual spikes are not modeled, some simulations can be performed with much longer step sizes (e.g., 20 ms). Possible extensions to non-NEF networks and to more complex neuron models are discussed.
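
A stripped-down sketch of the surrogate idea: replace a spiking population's decoded output with the ideal value plus low-pass-filtered Gaussian noise that stands in for spike-related fluctuations. The filter time constant and noise level here are arbitrary placeholders, not values fit as in the article.

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 0.02                                   # surrogate step size (20 ms, as in the abstract)
t = np.arange(0.0, 2.0, dt)
x = np.sin(2 * np.pi * 1.0 * t)             # latent variable represented by the population

# Surrogate of the population's decoded output: deterministic part (here the
# identity mapping of the latent variable) plus filtered Gaussian "spike noise".
tau = 0.05                                  # synaptic filter time constant (s)
alpha = dt / tau
noise = np.empty_like(t)
state = 0.0
for i in range(len(t)):
    state += alpha * (rng.normal(scale=0.2) - state)   # one-pole low-pass filter
    noise[i] = state

surrogate_output = x + noise
print("noise std after filtering:", noise.std())
```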


Asunto(s)
Potenciales de Acción/fisiología , Encéfalo/fisiología , Simulación por Computador , Modelos Neurológicos , Redes Neurales de la Computación , Humanos , Red Nerviosa/fisiología , Neuronas/fisiología
5.
Neuroimage; 180(Pt A): 114-116, 2018 10 15.
Article in English | MEDLINE | ID: mdl-29292137
6.
Neural Comput; 24(4): 867-94, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22168562

ABSTRACT

Response variability is often positively correlated in pairs of similarly tuned neurons in the visual cortex. Many authors have argued that correlated variability prevents postsynaptic neurons from obtaining reliable stimulus estimates by averaging across large groups of inputs. However, simple averaging ignores nonlinearities in cortical signal integration. This study shows that feedforward divisive normalization of a neuron's inputs effectively decorrelates their variability. Furthermore, we show that optimal linear estimates of a stimulus parameter that are based on normalized inputs are more accurate than those based on nonnormalized inputs, due partly to reduced correlations, and that these estimates improve with increasing population size up to several thousand neurons. This suggests that neurons may possess a simple mechanism for substantially decorrelating noise in their inputs. Further work is needed to reconcile this conclusion with past evidence that correlated noise impairs visual perception.
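
A small numerical illustration of the central claim (shared gain fluctuations induce correlations that divisive normalization largely removes); the gain model and normalization constant below are assumptions for the demo, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

n_neurons, n_trials = 200, 5000
tuning = rng.uniform(5.0, 50.0, size=n_neurons)           # mean drive per neuron

# Shared multiplicative gain fluctuation creates positive noise correlations.
gain = 1.0 + 0.3 * rng.normal(size=(n_trials, 1))
raw = gain * tuning + rng.normal(scale=2.0, size=(n_trials, n_neurons))

# Feedforward divisive normalization by the summed (here, averaged) population input.
sigma = 10.0
normalized = raw / (sigma + raw.mean(axis=1, keepdims=True))

def mean_pairwise_corr(r):
    c = np.corrcoef(r, rowvar=False)
    iu = np.triu_indices_from(c, k=1)
    return c[iu].mean()

print("mean pairwise correlation, raw:       ", mean_pairwise_corr(raw))
print("mean pairwise correlation, normalized: ", mean_pairwise_corr(normalized))
```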


Asunto(s)
Neuronas/fisiología , Corteza Visual/fisiología , Percepción Visual/fisiología , Humanos , Modelos Neurológicos
7.
Neurorehabil Neural Repair; 35(3): 290-299, 2021 03.
Article in English | MEDLINE | ID: mdl-33559531

ABSTRACT

BACKGROUND: Freezing of gait (FOG) is arguably the most disabling motor symptom experienced with Parkinson's disease (PD), but treatments are extremely limited due to our poor understanding of the underlying mechanisms. Recent research postulates three cortical domains underlying FOG (i.e., the cognitive, limbic, and sensorimotor domains); thus, treatments targeting these mechanisms may potentially be effective. Cognitive training, cognitive behavioral therapy (CBT, a well-known anxiety intervention), and proprioceptive training may address the cognitive, limbic, and sensorimotor domains, respectively. OBJECTIVE: To investigate whether these 3 treatments could improve functional outcomes of FOG. METHODS: In a single-blind, randomized crossover design, 15 individuals with PD and FOG were randomized into different, counterbalanced orders of receiving the interventions. Each intervention consisted of eight 1-hour sessions, delivered twice weekly for 4 weeks. FOG severity was assessed as the primary outcome using a novel gait paradigm aimed at evoking FOG when the cognitive, limbic, or sensorimotor domains were independently challenged. RESULTS: FOG severity significantly improved after the cognitive intervention, with strong trends toward improvement specifically in the baseline and cognitive-challenge assessment conditions. CBT, as the anxiety intervention, resulted in significantly worse FOG severity. In contrast, proprioceptive training significantly improved FOG severity, with consistent trends across all conditions. CONCLUSIONS: The cognitive and proprioceptive treatments appeared to improve different aspects of FOG. Thus, either of these interventions could potentially be a viable treatment for FOG. However, although the results were statistically significant, they could be sensitive to the relatively small number of participants in the study. Considering the significant results together with nonsignificant trends in both FOG and gait measures, and given equal time for each intervention, proprioceptive training produced the most consistent indications of benefit in this study. (clinicaltrials.gov NCT03065127).


Subject(s)
Anxiety/rehabilitation; Cognitive Behavioral Therapy; Cognitive Dysfunction/rehabilitation; Cognitive Remediation; Gait Disorders, Neurologic/rehabilitation; Neurological Rehabilitation; Parkinson Disease/rehabilitation; Proprioception; Sensation Disorders/rehabilitation; Aged; Aged, 80 and over; Anxiety/etiology; Cognitive Dysfunction/etiology; Cross-Over Studies; Female; Gait Disorders, Neurologic/etiology; Humans; Limbic System/physiopathology; Male; Middle Aged; Neuropsychological Tests; Outcome Assessment, Health Care; Parkinson Disease/complications; Proprioception/physiology; Sensation Disorders/etiology; Severity of Illness Index; Single-Blind Method
8.
Neural Comput; 22(3): 621-59, 2010 Mar.
Article in English | MEDLINE | ID: mdl-19922294

ABSTRACT

Temporal derivatives are computed by a wide variety of neural circuits, but the problem of performing this computation accurately has received little theoretical study. Here we systematically compare the performance of diverse networks that calculate derivatives using cell-intrinsic adaptation and synaptic depression dynamics, feedforward network dynamics, and recurrent network dynamics. Examples of each type of network are compared by quantifying the errors they introduce into the calculation and their rejection of high-frequency input noise. This comparison is based on both analytical methods and numerical simulations with spiking leaky-integrate-and-fire (LIF) neurons. Both adapting and feedforward-network circuits provide good performance for signals with frequency bands that are well matched to the time constants of postsynaptic current decay and adaptation, respectively. The synaptic depression circuit performs similarly to the adaptation circuit, although strictly speaking, precisely linear differentiation based on synaptic depression is not possible, because depression scales synaptic weights multiplicatively. Feedback circuits introduce greater errors than functionally equivalent feedforward circuits, but they have the useful property that their dynamics are determined by feedback strength. For this reason, these circuits are better suited for calculating the derivatives of signals that evolve on timescales outside the range of membrane dynamics and, possibly, for providing the wide range of timescales needed for precise fractional-order differentiation.
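
To make the feedforward-network case concrete, here is a minimal sketch (not taken from the article) in which the derivative is approximated as the scaled difference of two exponentially filtered copies of the signal with different time constants.

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 2.0, dt)
signal = np.sin(2 * np.pi * 2.0 * t)          # input confined to a 2 Hz frequency band

def lowpass(x, tau):
    """First-order (exponential) synaptic filter."""
    y = np.empty_like(x)
    state = 0.0
    for i, xi in enumerate(x):
        state += (dt / tau) * (xi - state)
        y[i] = state
    return y

tau_fast, tau_slow = 0.005, 0.050
# Fast pathway minus slow pathway approximates a band-limited derivative;
# the scale factor sets the correct gain at low frequencies.
deriv_est = (lowpass(signal, tau_fast) - lowpass(signal, tau_slow)) / (tau_slow - tau_fast)
deriv_true = 2 * np.pi * 2.0 * np.cos(2 * np.pi * 2.0 * t)

print("RMS error vs. true derivative:", np.sqrt(np.mean((deriv_est - deriv_true) ** 2)))
```

The same structure also shows the trade-off the abstract describes: the approximation degrades as the signal's frequency content moves away from the band set by the two time constants.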


Asunto(s)
Redes Neurales de la Computación , Percepción del Tiempo , Potenciales de Acción , Algoritmos , Simulación por Computador , Discriminación en Psicología/fisiología , Humanos , Potenciales de la Membrana/fisiología , Neuronas/fisiología , Sinapsis/fisiología , Factores de Tiempo , Percepción del Tiempo/fisiología
9.
Neural Netw; 121: 122-131, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31541880

ABSTRACT

Neurons in the primate middle temporal area (MT) respond to moving stimuli, with strong tuning for motion speed and direction. These responses have been characterized in detail, but the functional significance of these details (e.g., shapes and widths of speed tuning curves) is unclear, because they cannot be selectively manipulated. To estimate their functional significance, we used a detailed model of MT population responses as input to convolutional networks that performed sophisticated motion processing tasks (visual odometry and gesture recognition). We manipulated the distributions of speed and direction tuning widths, and studied the effects on task performance. We also studied performance with random linear mixtures of the responses, and with responses that had the same representational dissimilarity as the model populations, but were otherwise randomized. The widths of speed and direction tuning both affected task performance, despite the networks having been optimized individually for each tuning variation, but the specific effects were different in each task. Random linear mixing improved performance of the odometry task, but not the gesture recognition task. Randomizing the responses while maintaining representational dissimilarity resulted in poor odometry performance. In summary, despite full optimization of the deep networks in each case, each manipulation of the representation affected performance of sophisticated visual tasks. Representation properties such as tuning width and representational similarity have been studied extensively from other perspectives, but this work provides new insight into their possible roles in sophisticated visual inference.


Asunto(s)
Percepción de Movimiento/fisiología , Reconocimiento de Normas Patrones Automatizadas/métodos , Estimulación Luminosa/métodos , Lóbulo Temporal/fisiología , Animales , Movimiento (Física) , Neuronas/fisiología
10.
Neural Netw; 108: 424-444, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30312959

ABSTRACT

Neurons in the primate middle temporal area (MT) encode information about visual motion and binocular disparity. MT has been studied intensively for decades, so there is a great deal of information in the literature about MT neuron tuning. In this study, our goal is to consolidate some of this information into a statistical model of the MT population response. The model accepts arbitrary stereo video as input. It uses computer-vision methods to calculate known correlates of the responses (such as motion velocity), and then predicts activity using a combination of tuning functions that have previously been used to describe data in various experiments. To construct the population response, we also estimate the distributions of many model parameters from data in the electrophysiology literature. We show that the model accounts well for a separate dataset of MT speed tuning that was not used in developing the model. The model may be useful for studying relationships between MT activity and behavior in ethologically relevant tasks. As an example, we show that the model can provide regression targets for internal activity in a deep convolutional network that performs a visual odometry task, so that its representations become more physiologically realistic.
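
One ingredient of such a population model is the speed tuning curve. The log-Gaussian form below is a standard choice in the MT literature and a plausible stand-in for one of the tuning functions the model combines; the parameter values are invented for illustration.

```python
import numpy as np

def mt_speed_tuning(speed, preferred_speed, sigma=1.2, s0=0.3, peak_rate=60.0):
    """Log-Gaussian speed tuning curve (rate in spikes/s).

    speed, preferred_speed: stimulus and preferred speeds in deg/s.
    s0 offsets the logarithm so the curve is well behaved near zero speed.
    """
    q = np.log((speed + s0) / (preferred_speed + s0))
    return peak_rate * np.exp(-q ** 2 / (2.0 * sigma ** 2))

speeds = np.logspace(-1, 2, 7)          # 0.1 to 100 deg/s
print(mt_speed_tuning(speeds, preferred_speed=8.0).round(1))
```

A full population model would draw preferred speeds, tuning widths, and peak rates from distributions estimated from the electrophysiology literature, as the abstract describes.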


Asunto(s)
Modelos Biológicos , Percepción de Movimiento , Reconocimiento Visual de Modelos , Estimulación Luminosa/métodos , Grabación en Video/métodos , Corteza Visual , Animales , Humanos , Percepción de Movimiento/fisiología , Neuronas/fisiología , Reconocimiento Visual de Modelos/fisiología , Primates , Disparidad Visual/fisiología , Corteza Visual/fisiología , Vías Visuales/fisiología
11.
Neural Netw; 87: 8-21, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28039780

ABSTRACT

There are compelling computational models of many properties of the primate ventral visual stream, but a gap remains between the models and the physiology. To facilitate ongoing refinement of these models, we have compiled diverse information from the electrophysiology literature into a statistical model of inferotemporal (IT) cortex responses. This is a purely descriptive model, so it has little explanatory power. However, it is able to directly incorporate a rich and extensible set of tuning properties. So far, we have approximated tuning curves and statistics of tuning diversity for occlusion, clutter, size, orientation, position, and object selectivity in early versus late response phases. We integrated the model with the V-REP simulator, which provides stimulus properties in a simulated physical environment. In contrast with the empirical model presented here, mechanistic models are ultimately more useful for understanding neural systems. However, a detailed empirical model may be useful as a source of labeled data for optimizing and validating mechanistic models, or as a source of input to models of other brain areas.


Asunto(s)
Investigación Empírica , Estimulación Luminosa/métodos , Lóbulo Temporal/fisiología , Animales , Corteza Cerebral/fisiología , Simulación por Computador , Macaca , Neuronas/fisiología , Orientación/fisiología , Corteza Visual/fisiología , Vías Visuales/fisiología
12.
Neural Netw; 90: 29-41, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28388471

ABSTRACT

The visual cortex is both extensive and intricate. Computational models are needed to clarify the relationships between its local mechanisms and high-level functions. The Stabilized Supralinear Network (SSN) model was recently shown to account for many receptive field phenomena in V1, and also to predict subtle receptive field properties that were subsequently confirmed in vivo. In this study, we performed a preliminary exploration of whether the SSN is suitable for incorporation into large, functional models of the visual cortex, considering both its extensibility and computational tractability. First, whereas the SSN receives abstract orientation signals as input, we extended it to receive images (through a linear-nonlinear stage), and found that the extended version behaved similarly. Secondly, whereas the SSN had previously been studied in a monocular context, we found that it could also reproduce data on interocular transfer of surround suppression. Finally, we reformulated the SSN as a convolutional neural network, and found that it scaled well on parallel hardware. These results provide additional support for the plausibility of the SSN as a model of lateral interactions in V1, and suggest that the SSN is well suited as a component of complex vision models. Future work will use the SSN to explore relationships between local network interactions and sophisticated vision processes in large networks.
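
For concreteness, the core SSN dynamics can be stated in a few lines: each unit's rate follows a supralinear (power-law) function of its recurrent plus external input. The sketch below is a generic two-population (excitatory/inhibitory) SSN with illustrative weights and inputs, not the image-driven or convolutional version developed in the paper.

```python
import numpy as np

# Supralinear input-output function: k * [z]_+ ** n
k, n = 0.04, 2.0
f = lambda z: k * np.maximum(z, 0.0) ** n

# Two units (excitatory, inhibitory) with illustrative connection weights.
W = np.array([[1.25, -0.65],
              [1.20, -0.50]])
tau = np.array([0.020, 0.010])          # membrane time constants (s)
ext = np.array([10.0, 10.0])            # external drive

dt, r = 0.0005, np.zeros(2)
for _ in range(4000):                   # 2 s of simulated time, forward Euler
    drdt = (-r + f(W @ r + ext)) / tau
    r = r + dt * drdt

print("steady-state rates (E, I):", r.round(2))
```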


Asunto(s)
Redes Neurales de la Computación , Estimulación Luminosa/métodos , Visión Binocular , Animales , Simulación por Computador , Humanos , Neuronas/fisiología , Orientación/fisiología , Visión Binocular/fisiología , Corteza Visual/fisiología , Percepción Visual/fisiología
13.
Neural Netw; 77: 95-106, 2016 May.
Article in English | MEDLINE | ID: mdl-26963256

ABSTRACT

In performance-optimized artificial neural networks, such as convolutional networks, each neuron makes excitatory connections with some of its targets and inhibitory connections with others. In contrast, physiological neurons are typically either excitatory or inhibitory, not both. This is a puzzle, because it seems to constrain computation, and because there are several counter-examples that suggest that it may not be a physiological necessity. Parisien et al. (2008) showed that any mixture of excitatory and inhibitory functional connections could be realized by a purely excitatory projection in parallel with a two-synapse projection through an inhibitory population. They showed that this works well with ratios of excitatory and inhibitory neurons that are realistic for the neocortex, suggesting that perhaps the cortex efficiently works around this apparent computational constraint. Extending this work, we show here that mixed excitatory and inhibitory functional connections can also be realized in networks that are dominated by inhibition, such as those of the basal ganglia. Further, we show that the function-approximation capacity of such connections is comparable to that of idealized mixed-weight connections. We also study whether such connections are viable in recurrent networks, and find that such recurrent networks can flexibly exhibit a wide range of dynamics. These results offer a new perspective on computation in the basal ganglia, and also perhaps on inhibitory networks within the cortex.
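
The core idea can be illustrated with plain linear algebra (omitting the spiking NEF populations the paper actually uses): shift all direct weights up by a constant so they are nonnegative, and let a parallel inhibitory pathway subtract the constant bias, so the net functional connection is unchanged. Names and sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

n_pre, n_post = 40, 30
W_mixed = rng.normal(scale=0.5, size=(n_post, n_pre))    # desired functional weights
x = rng.uniform(0.0, 100.0, n_pre)                        # presynaptic firing rates (>= 0)

# Direct pathway: add a uniform bias b so every direct weight is excitatory.
b = -W_mixed.min()
W_direct = W_mixed + b                                    # all entries >= 0

# Parallel inhibitory pathway: an interneuron pool that sums presynaptic
# activity (excitatory weights) and inhibits every postsynaptic neuron by b.
interneuron_drive = x.sum()                               # excitatory input to the pool
inhibition = b * interneuron_drive                        # inhibitory output weight = b

y_mixed = W_mixed @ x
y_separated = W_direct @ x - inhibition

print("max difference:", np.max(np.abs(y_mixed - y_separated)))   # ~0 (exact here)
```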


Asunto(s)
Algoritmos , Simulación por Computador , Inhibición Neural , Redes Neurales de la Computación , Encéfalo/fisiología , Red Nerviosa/fisiología
14.
Front Comput Neurosci; 10: 15, 2016.
Article in English | MEDLINE | ID: mdl-26973503

ABSTRACT

A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders, which are obtained by solving an optimization problem that requires a large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants that converge to the desired function in mean-squared error at a rate of 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie asymptotically on low-dimensional hypersurfaces. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well-known dynamical systems such as the neural integrator, the Van der Pol oscillator, and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
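
For orientation, here is the standard NEF decoder computation that the analytic, scale-invariant decoders are meant to replace: a regularized least-squares problem over the population's rate tuning curves. The tuning-curve model and regularization constant are generic choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

N = 200                                     # number of neurons
x = np.linspace(-1.0, 1.0, 500)             # samples of the represented variable

# Heterogeneous rectified-linear tuning curves: a_i(x) = max(gain_i * e_i * x + bias_i, 0)
gains = rng.uniform(50.0, 150.0, N)
biases = rng.uniform(-50.0, 50.0, N)
encoders = rng.choice([-1.0, 1.0], N)
A = np.maximum(gains * encoders * x[:, None] + biases, 0.0)   # (samples, N) activities

# Decoders for a target function f(x), via regularized least squares:
#   d = argmin_d || A d - f(x) ||^2 + lam * ||d||^2
target = x ** 2
lam = 0.1 * N
d = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ target)

approx = A @ d
print("RMS error of decoded x**2:", np.sqrt(np.mean((approx - target) ** 2)))
```

The matrix solve above is the "large matrix inversion" the abstract refers to; the paper's contribution is to bypass it with closed-form decoders.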

15.
IEEE Trans Neural Syst Rehabil Eng; 12(1): 140-52, 2004 Mar.
Article in English | MEDLINE | ID: mdl-15068197

ABSTRACT

Stepping reactions are often triggered rapidly in response to loss of balance. It has been unclear whether spatial step parameters are defined at the time of step initiation or whether they can be modulated online, during step execution, in response to sensory feedback about the evolving state of instability. This study explored the capacity to actively alter step direction after step initiation in six healthy young-adult subjects. To elicit forward-step reactions, subjects were released suddenly from a tethered forward lean. A second perturbation (medio-lateral support-surface translation) was applied at lags of 0-200 ms. Active reaction to the second perturbation was determined primarily through analysis of swing-leg hip-abductor activation. In addition, to gauge the biomechanical consequence of the changes in muscle activation, we compared the measured medio-lateral swing-foot displacement to that predicted by a simple passive mechanical model. Perturbations at 0-100 ms lag evoked active medio-lateral swing-foot deviation, allowing balance to be recovered with a single step. However, when the second perturbation occurred near foot-off (200-ms lag), there was no evidence of active alteration of step direction, and subjects typically required additional steps to recover balance. The results suggest that step direction can be reparameterized during the early stages of stepping reactions, but that it was not actively modulated in response to perturbations arising near the start of the swing phase.


Subject(s)
Feedback/physiology; Leg/physiology; Models, Biological; Movement; Muscle, Skeletal/physiology; Physical Stimulation/methods; Postural Balance/physiology; Posture/physiology; Acceleration; Adaptation, Physiological/physiology; Adult; Electromyography; Female; Humans; Male; Muscle Contraction/physiology; Reproducibility of Results; Sensitivity and Specificity; Stress, Mechanical; Torque
16.
Front Comput Neurosci; 8: 132, 2014.
Article in English | MEDLINE | ID: mdl-25386134

ABSTRACT

The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modeled shape tuning in visual AIP neurons and its relationship with curvature and gradient information from the caudal intraparietal area (CIP). The main goal was to gain insight into the kinds of shape parameterizations that can account for AIP tuning and that are consistent with both the inputs to AIP and the role of AIP in grasping. We first experimented with superquadric shape parameters. We considered superquadrics because they occupy a role in robotics that is similar to AIP, in that superquadric fits are derived from visual input and used for grasp planning. We also experimented with an alternative shape parameterization that was based on an Isomap dimension reduction of spatial derivatives of depth (i.e., distance from the observer to the object surface). We considered an Isomap-based model because its parameters lacked discontinuities between similar shapes. When we matched the dimension of the Isomap to the number of superquadric parameters, the superquadric model fit the AIP data somewhat more closely. However, higher-dimensional Isomaps provided excellent fits. Also, we found that the Isomap parameters could be approximated much more accurately than superquadric parameters by feedforward neural networks with CIP-like inputs. We conclude that Isomaps, or perhaps alternative dimension reductions of visual inputs to AIP, provide a promising model of AIP electrophysiology data. Further work is needed to test whether such shape parameterizations actually provide an effective basis for grasp control.
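
For reference, the superquadric inside-outside function that such fits optimize is compact enough to state directly; the sketch below (a generic formulation, not the authors' fitting code) evaluates it for a set of 3D points.

```python
import numpy as np

def superquadric_F(points, a1, a2, a3, e1, e2):
    """Superquadric inside-outside function.

    F < 1 inside, F = 1 on the surface, F > 1 outside.
    a1, a2, a3: axis lengths; e1, e2: shape exponents
    (e.g., e1 = e2 = 1 gives an ellipsoid; small values give box-like shapes).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xy = (np.abs(x / a1) ** (2.0 / e2) + np.abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + np.abs(z / a3) ** (2.0 / e1)

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
print(superquadric_F(pts, a1=1.0, a2=1.0, a3=1.0, e1=1.0, e2=1.0))   # [0, 1, 3]
```

Fitting a superquadric to depth data amounts to minimizing a function of F over the axis lengths, exponents, and pose, which is where the parameter discontinuities mentioned above can arise.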

17.
Article in English | MEDLINE | ID: mdl-22586391

ABSTRACT

This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.
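
A minimal sketch of the kind of spatial connectivity the study starts from: neurons on a 2D sheet connected at random according to a distance-dependent probability. The exponential profile and its length constant are placeholders; the paper's point is that such purely probability-based instantiation was not, by itself, sufficient for high-fidelity coding.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 1000
xy = rng.uniform(0.0, 1.0, size=(n, 2))            # neuron positions on a 1 x 1 mm sheet

# Pairwise distances and distance-dependent connection probability.
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
p_connect = 0.6 * np.exp(-d / 0.1)                  # 0.1 mm length constant (placeholder)
np.fill_diagonal(p_connect, 0.0)                    # no self-connections

adjacency = rng.random((n, n)) < p_connect          # instantiate connections at random

print("mean connections per neuron:", adjacency.sum(axis=1).mean())
print("overall connection density: ", adjacency.mean())
```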

18.
Front Neuroinform; 3: 7, 2009.
Article in English | MEDLINE | ID: mdl-19352442

ABSTRACT

Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python scripting interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or pure-Python code library. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models.
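
The abstract describes the Java-based Nengo with its then-new Python scripting layer; the current pip-installable `nengo` package exposes the same NEF concepts through a somewhat different API. Assuming the modern package, a minimal population-coding model that decodes a nonlinear function looks like this:

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))        # 1 Hz input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)            # population representing x
    b = nengo.Ensemble(n_neurons=100, dimensions=1)            # population representing x**2
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)          # NEF-derived synaptic weights
    probe = nengo.Probe(b, synapse=0.01)                       # filtered decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)
    print(sim.data[probe][-5:].ravel())                        # last few decoded samples
```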

19.
Cereb Cortex; 17(8): 1830-40, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17043082

ABSTRACT

Fine temporal patterns of firing in much of the brain are highly irregular. In some circuits, the precise pattern of irregularity contains information beyond that contained in mean firing rates. However, the capacity of neural circuits to use this additional information for computational purposes is not well understood. Here we employ computational methods to show that an ensemble of neurons firing at a constant mean rate can induce arbitrarily chosen temporal current patterns in postsynaptic cells. If the presynaptic neurons fire with nearly uniform interspike intervals, then current patterns are sensitive to variations in spike timing. But irregular, Poisson-like firing can drive current patterns robustly, even if spike timing varies by tens of milliseconds from trial to trial. Notably, irregular firing patterns can drive useful patterns of current even if they are so variable that several hundred repeated experimental trials would be needed to distinguish them from random firing. Together, these results describe an unrestrictive set of conditions in which postsynaptic cells might exploit virtually any information contained in spike timing. We speculate as to how this capability may underlie an extension of population coding to the temporal domain.
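
A small simulation of the core claim (not the paper's code): presynaptic neurons fire Poisson spike trains at a constant mean rate, each spike is filtered by an exponential postsynaptic-current kernel, and a weighted sum of the filtered trains is fit to an arbitrary target current pattern. The rates, time constants, and population size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

dt, T = 0.001, 1.0
t = np.arange(0.0, T, dt)
n_pre, rate = 300, 40.0                          # presynaptic neurons, spikes/s

# Poisson spike trains at a constant mean rate.
spikes = rng.random((n_pre, len(t))) < rate * dt

# Exponential postsynaptic-current kernel (tau = 10 ms), applied to each train.
tau = 0.010
kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)
currents = np.array([np.convolve(s, kernel)[:len(t)] for s in spikes.astype(float)])

# Arbitrarily chosen target temporal current pattern.
target = 1.0 + 0.5 * np.sin(2 * np.pi * 3.0 * t)

# Least-squares synaptic weights that make the summed current follow the target.
w, *_ = np.linalg.lstsq(currents.T, target, rcond=None)
induced = currents.T @ w

err = np.sqrt(np.mean((induced - target) ** 2)) / target.std()
print("normalized RMS error of induced current:", err)
```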


Subject(s)
Excitatory Postsynaptic Potentials/physiology; Neurons/physiology; Algorithms; Computer Simulation; Electrophysiology; Models, Neurological; Nerve Net; Neural Conduction/physiology; Normal Distribution; Poisson Distribution; Software; Synapses/physiology