ABSTRACT
Motor learning involves a widespread brain network including the basal ganglia, cerebellum, motor cortex, and brainstem. Despite its importance, little is known about how this network learns motor tasks and what role the different parts of this network play. We designed a systems-level computational model of motor learning, comprising a cortex-basal ganglia motor loop and the cerebellum, which together determine the response of central pattern generators in the brainstem. First, we demonstrate its ability to learn arm movements toward different motor goals. Second, we test the model in a motor adaptation task with cognitive control, where the model replicates human data. We conclude that the cortex-basal ganglia loop learns via a novelty-based motor prediction error to determine concrete actions given a desired outcome, and that the cerebellum minimizes the remaining aiming error.
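The abstract does not spell out the learning rules; the snippet below is a purely illustrative one-dimensional reaching toy (hypothetical plant, learning rates and variable names such as w_bg and c_cb are invented, not taken from the published model) showing the proposed division of labor: a cortex-basal ganglia mapping adapts through a novelty-weighted motor prediction error, while a cerebellar term slowly cancels whatever aiming error remains.

```python
import numpy as np

rng = np.random.default_rng(0)
gain = 1.7           # unknown "plant": reach = gain * motor command (hypothetical)
w_bg = 0.5           # cortex-basal ganglia inverse mapping: command = w_bg * goal
c_cb = 0.0           # cerebellar corrective gain acting on the same goal signal

for trial in range(200):
    goal = rng.uniform(-1.0, 1.0)
    command = w_bg * goal + c_cb * goal      # basal ganglia action plus cerebellar correction
    reach = gain * command                   # outcome produced by the brainstem/plant
    error = goal - reach                     # aiming error

    # Cortex-basal ganglia loop: novelty-weighted motor prediction error (novelty decays with practice)
    novelty = 1.0 / (1.0 + trial / 20.0)
    w_bg += 0.1 * novelty * error * goal

    # Cerebellum: slowly minimizes the residual aiming error
    c_cb += 0.02 * error * goal

print(f"final gain of the combined controller: {gain * (w_bg + c_cb):.3f} (ideal: 1.0)")
```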
Subject(s)
Basal Ganglia, Cerebellum, Humans, Cerebellum/physiology, Basal Ganglia/physiology, Brain/physiology, Learning/physiology, Movement/physiology
ABSTRACT
[This corrects the article DOI: 10.1371/journal.pcbi.1011024.].
ABSTRACT
In addition to the prefrontal cortex (PFC), the basal ganglia (BG) have increasingly been reported to play a fundamental role in category learning, but the circuit mechanisms mediating their interaction remain to be explored. We developed a novel neurocomputational model of category learning that particularly addresses the BG-PFC interplay. We propose that the BG bias PFC activity by removing the inhibition of the cortico-thalamo-cortical loop and thereby provide a teaching signal to guide the acquisition of category representations in the corticocortical associations to the PFC. Our model replicates key behavioral and physiological data of macaque monkeys learning a prototype distortion task from Antzoulatos and Miller (2011). Our simulations allowed us to gain a deeper insight into the drop of category selectivity in striatal neurons seen in both the experimental data and the model. The simulation results and a new analysis of the experimental data based on the model's predictions show that the drop in striatal category selectivity emerges as the variability of striatal responses rises when the BG are confronted with an increasingly large number of stimuli to be classified. The neurocomputational model therefore provides new testable insights into systems-level brain circuits involved in category learning that may also be generalized to better understand other cortico-BG-cortical loops. SIGNIFICANCE STATEMENT Inspired by the idea that the basal ganglia (BG) teach the prefrontal cortex (PFC) to acquire category representations, we developed a novel neurocomputational model and tested it on a task that was recently applied in monkey experiments. As an advantage over previous models of category learning, our model allows simulation data to be compared with single-cell recordings in PFC and BG. We not only derived model predictions, but also verified one prediction that explains the observed drop in striatal category selectivity. When testing our model with a simple, real-world face categorization task, we observed that fast striatal learning, reaching 85% correct responses, can teach the slower PFC learning and push the model performance up to almost 100%.
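As a loose, hypothetical illustration of the teaching mechanism described above (not the published network; the task, dimensions and learning rates are invented), the toy script below lets a fast reward-modulated "striatal" classifier gate a teaching signal for a slower "PFC" delta rule on a prototype-distortion-like problem.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 20
protos = rng.normal(size=(2, d))                 # two category prototypes (toy stimuli)

def sample(n):
    cats = rng.integers(0, 2, size=n)
    return protos[cats] + 0.5 * rng.normal(size=(n, d)), cats

W_bg = np.zeros((2, d))      # fast, reward-modulated "striatal" pathway
W_pfc = np.zeros((2, d))     # slow cortico-cortical associations onto PFC

X, cats = sample(2000)
for x, cat in zip(X, cats):
    choice = int(np.argmax(W_bg @ x + 0.01 * rng.normal(size=2)))   # noisy response selection
    reward = 1.0 if choice == cat else -1.0
    W_bg[choice] += 0.05 * reward * x            # fast reward-modulated update

    if reward > 0:                               # disinhibited loop acts as a teaching signal
        target = np.eye(2)[choice]
        W_pfc += 0.01 * np.outer(target - W_pfc @ x, x)   # slow delta rule toward the gated choice

Xt, ct = sample(500)

def accuracy(W):
    return float(np.mean(np.argmax(Xt @ W.T, axis=1) == ct))

print(f"BG alone: {accuracy(W_bg):.2f}, PFC taught by BG: {accuracy(W_pfc):.2f}")
```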
Subject(s)
Basal Ganglia/physiology, Computer Simulation/classification, Learning/physiology, Theoretical Models, Photic Stimulation/methods, Prefrontal Cortex/physiology, Animals, Computer Simulation/trends, Female, Humans, Neural Pathways/physiology
ABSTRACT
In Parkinson's disease, a loss of dopamine neurons causes severe motor impairments. These motor impairments have long been thought to result exclusively from immediate effects of dopamine loss on neuronal firing in basal ganglia, causing imbalances of basal ganglia pathways. However, motor impairments and pathway imbalances may also result from dysfunctional synaptic plasticity - a novel concept of how Parkinsonian symptoms evolve. Here we built a neuro-computational model that allows us to simulate the effects of dopamine loss on synaptic plasticity in basal ganglia. Our simulations confirm that dysfunctional synaptic plasticity can indeed explain the emergence of both motor impairments and pathway imbalances in Parkinson's disease, thus corroborating the novel concept. By predicting that dysfunctional plasticity results not only in reduced activation of desired responses, but also in their active inhibition, our simulations provide novel testable predictions. When simulating dopamine replacement therapy (which is a standard treatment in clinical practice), we observe a new balance of pathway outputs, rather than a simple restoration of non-Parkinsonian states. In addition, high doses of replacement are shown to result in overshooting motor activity, in line with empirical evidence. Finally, our simulations provide an explanation for the intensely debated paradox that focused basal ganglia lesions alleviate Parkinsonian symptoms, but do not impair performance in healthy animals. Overall, our simulations suggest that the effects of dopamine loss on synaptic plasticity play an essential role in the development of Parkinsonian symptoms, thus arguing for a re-conceptualisation of Parkinsonian pathophysiology.
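A heavily simplified, hypothetical sketch of dopamine-dependent plasticity in the direct (D1) and indirect (D2) pathways follows; it is not the published network, and all parameters are arbitrary, but it illustrates how a loss of phasic dopamine could both weaken the "Go" weights for a desired response and strengthen its "NoGo" weights, i.e., actively inhibit it.

```python
import numpy as np

def go_nogo_weights(da_gain, trials=500, lr=0.05, seed=2):
    """Toy dopamine-modulated plasticity for one cortical input onto D1/D2 striatal neurons."""
    rng = np.random.default_rng(seed)
    w_d1, w_d2 = 0.5, 0.5                          # direct ("Go") and indirect ("NoGo") weights
    for _ in range(trials):
        rewarded = rng.random() < 0.8              # the selected response is usually the desired one
        da = da_gain * float(rewarded) - 0.5       # phasic dopamine relative to baseline
        w_d1 = float(np.clip(w_d1 + lr * da, 0.0, 1.0))   # D1 pathway: potentiated by bursts
        w_d2 = float(np.clip(w_d2 - lr * da, 0.0, 1.0))   # D2 pathway: potentiated by dips
    return round(w_d1, 2), round(w_d2, 2)

print("healthy  (Go, NoGo):", go_nogo_weights(da_gain=1.0))   # desired response facilitated
print("depleted (Go, NoGo):", go_nogo_weights(da_gain=0.3))   # weaker Go and strengthened NoGo
```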
Subject(s)
Neurological Models, Neuronal Plasticity, Parkinson Disease/physiopathology, Synaptic Transmission, Basal Ganglia/pathology, Basal Ganglia/physiopathology, Dopamine Agents/therapeutic use, Dopaminergic Neurons/physiology, Humans, Parkinson Disease/drug therapy
ABSTRACT
Modern parallel hardware such as multi-core processors (CPUs) and graphics processing units (GPUs) offers high computational power that can greatly benefit the simulation of large-scale neural networks. Over the past years, a number of efforts have focused on developing parallel algorithms and simulators best suited for the simulation of spiking neural models. In this article, we investigate the advantages and drawbacks of CPU and GPU parallelization of mean-firing rate neurons, which are widely used in systems-level computational neuroscience. By comparing OpenMP, CUDA and OpenCL implementations against a serial CPU implementation, we show that GPUs are better suited than CPUs for the simulation of very large networks, but that smaller networks benefit more from an OpenMP implementation. As this performance strongly depends on data organization, we analyze the impact of various factors such as data structure, memory alignment and floating-point precision. We then discuss the suitability of the different hardware depending on the network's size and connectivity, as random or sparse connectivity in mean-firing rate networks tends to break parallel performance on GPUs due to the violation of memory coalescence.
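The kernel under discussion is the weighted-sum update of mean-firing rate neurons. The plain-NumPy sketch below (illustrative leaky-integrator dynamics and parameters, not necessarily the article's neuron model) shows the dense and a CSR-style formulation of that kernel; the irregular column indices of the sparse formulation are what break memory coalescence on GPUs.

```python
import numpy as np

# Rate-coded update step, assuming leaky-integrator dynamics of the hypothetical form
#   tau * dr/dt = -r + sum_j w_ij * r_j
rng = np.random.default_rng(0)
N, tau, dt = 1024, 10.0, 1.0
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1) / N    # sparse random connectivity
r = rng.random(N)

# Dense formulation: one memory-bound matrix-vector product per time step.
inp_dense = W @ r

# CSR-style formulation: only the nonzero weights are stored and traversed; the
# scattered accesses to r[indices] are what defeat coalescence on GPUs.
indptr, indices, data = [0], [], []
for i in range(N):
    nz = np.nonzero(W[i])[0]
    indices.extend(nz); data.extend(W[i, nz]); indptr.append(len(indices))
indices, data, indptr = np.array(indices), np.array(data), np.array(indptr)

inp_sparse = np.array([data[indptr[i]:indptr[i + 1]] @ r[indices[indptr[i]:indptr[i + 1]]]
                       for i in range(N)])
assert np.allclose(inp_dense, inp_sparse)

r += dt / tau * (-r + inp_dense)     # explicit Euler step of the rate equation
```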
Subject(s)
Action Potentials/physiology, Computer Graphics/instrumentation, Computer Simulation, Neurological Models, Nerve Net/physiology, Computer-Assisted Signal Processing/instrumentation, Software, Algorithms, Animals, Equipment Design, Equipment Failure Analysis, Humans, Programming Languages
ABSTRACT
Devaluation protocols reveal that Tourette patients show an increased propensity to habitual behaviors, as they continue to respond to devalued outcomes in a cognitive stimulus-response-outcome association task. We use a neuro-computational model of hierarchically organized cortico-basal ganglia-thalamo-cortical loops to shed more light on habit formation and its alteration in Tourette patients. In our model, habitual behavior emerges from cortico-thalamic shortcut connections, where enhanced habit formation can be linked to faster plasticity in the shortcut or to a stronger feedback from the shortcut to the basal ganglia. We explore two major hypotheses of Tourette pathophysiology, local striatal disinhibition and increased dopaminergic modulation of striatal medium spiny neurons, as causes of altered shortcut activation. Both model changes altered shortcut functioning and resulted in higher rates of responses towards devalued outcomes, similar to what is observed in Tourette patients. We recommend that future experimental neuroscientific studies locate shortcuts between cortico-basal ganglia-thalamo-cortical loops in the human brain and study their potential role in health and disease.
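The role of the shortcut can be condensed into a toy devaluation experiment (hypothetical scalar quantities and learning rates, not the published model): the goal-directed basal ganglia route loses its drive once the outcome is devalued, while the habitual cortico-thalamic shortcut keeps pushing the response, more so with faster shortcut plasticity or a stronger shortcut gain.

```python
def residual_drive(shortcut_lr=0.01, shortcut_gain=1.0, training=100):
    """Toy competition between a goal-directed BG route and a cortico-thalamic shortcut."""
    q_bg, w_short = 0.0, 0.0
    for _ in range(training):                       # while the outcome is still valuable
        q_bg += 0.1 * (1.0 - q_bg)                  # goal-directed value learned via the BG
        w_short += shortcut_lr * (1.0 - w_short)    # habitual shortcut strengthens with repetition
    q_bg = 0.0                                      # devaluation: the outcome loses its value
    return q_bg + shortcut_gain * w_short           # drive that still reaches the motor response

# Faster shortcut plasticity or stronger shortcut feedback -> more responding to devalued outcomes
for lr, gain in [(0.01, 1.0), (0.03, 1.0), (0.01, 2.0)]:
    print(f"shortcut_lr={lr}, gain={gain}: residual drive = {residual_drive(lr, gain):.2f}")
```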
Subject(s)
Basal Ganglia, Thalamus, Basal Ganglia/physiology, Brain, Corpus Striatum, Habits, Humans, Thalamus/physiology
ABSTRACT
Modern neuro-simulators provide efficient implementations of simulation kernels on various parallel hardware (multi-core CPUs, distributed CPUs, GPUs), thereby supporting the simulation of increasingly large and complex biologically realistic networks. However, the optimal configuration of the parallel hardware and computational kernels depends on the exact structure of the network to be simulated. For example, the computation time of rate-coded neural networks is generally limited by the available memory bandwidth, and consequently, the organization of the data in memory strongly influences the performance for different connectivity matrices. We examine the impact of the sparse matrix formats implemented in the neuro-simulator ANNarchy on computation time. Rather than asking the user to identify the best data structure for a given network and platform, such a decision could also be made by the neuro-simulator itself. However, this requires heuristics that need to be adapted over time to the available hardware. The present study investigates how machine learning methods can be used to identify appropriate implementations for a specific network. We employ an artificial neural network to develop a predictive model that helps the developer select the optimal sparse matrix format. The model is first trained offline using a set of training examples on a particular hardware platform. The learned model can then predict the execution time of different matrix formats and decide on the best option for a specific network. Our experimental results show that, using up to 3,000 examples of random network configurations (i.e., different population sizes as well as variable connectivity), our approach effectively selects the appropriate configuration, providing over 93% accuracy in predicting the suitable format on three different NVIDIA devices.
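A minimal sketch of this selection scheme is given below, using synthetic execution times and a scikit-learn multilayer perceptron as a stand-in for the article's benchmark data and predictive model; the format names and the cost function are placeholders, not ANNarchy's actual measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
formats = ["lil", "csr", "ellpack"]                       # candidate sparse formats (illustrative)

def synthetic_time(n_post, n_pre, density, fmt):
    """Stand-in for measured execution times; the real model is trained on benchmarks."""
    nnz = n_post * n_pre * density
    base = {"lil": 1.5e-6, "csr": 1.0e-6, "ellpack": 0.8e-6}[fmt] * nnz
    penalty = 2.0e-7 * n_post * n_pre * density**0.5 if fmt == "ellpack" else 0.0
    return base + penalty + rng.normal(0.0, 1e-4)

# Training set: random network configurations x formats -> execution time
X, y = [], []
for _ in range(3000):
    n_post, n_pre = rng.integers(100, 5000, size=2)
    density = rng.uniform(0.01, 0.5)
    for k, fmt in enumerate(formats):
        X.append([n_post / 5000, n_pre / 5000, density] + list(np.eye(len(formats))[k]))
        y.append(synthetic_time(n_post, n_pre, density, fmt))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                     random_state=0).fit(np.array(X), np.array(y))

def best_format(n_post, n_pre, density):
    queries = [[n_post / 5000, n_pre / 5000, density] + list(np.eye(len(formats))[k])
               for k in range(len(formats))]
    return formats[int(np.argmin(model.predict(np.array(queries))))]

print(best_format(2000, 2000, 0.05))     # pick the format with the lowest predicted time
```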
ABSTRACT
Multi-scale network models that simultaneously simulate different measurable signals at different spatial and temporal scales, such as membrane potentials of single neurons, population firing rates, local field potentials, and blood-oxygen-level-dependent (BOLD) signals, are becoming increasingly popular in computational neuroscience. The transformation of the underlying simulated neuronal activity of these models into simulated non-invasive measurements, such as BOLD signals, is particularly relevant. The present work describes the implementation of a BOLD monitor within the neural simulator ANNarchy, allowing the on-line computation of simulated BOLD signals from neural network models. An active research topic regarding the simulation of BOLD signals is the coupling of neural processes to cerebral blood flow (CBF) and the cerebral metabolic rate of oxygen (CMRO2). The flexibility of ANNarchy allows users to define this coupling with a high degree of freedom and thus not only allows mesoscopic network models of populations of spiking neurons to be related to experimental BOLD data, but also allows different hypotheses regarding the coupling between neural processes, CBF and CMRO2 to be investigated with these models. In this study, we demonstrate how simulated BOLD signals can be obtained from a network model consisting of multiple spiking neuron populations. We first demonstrate the use of the Balloon model, the predominant model for simulating BOLD signals, as well as the possibility of using novel user-defined models, such as a variant of the Balloon model with separately driven CBF and CMRO2 signals. We emphasize how different hypotheses about the coupling between neural processes, CBF and CMRO2 can be implemented and how these different couplings affect the simulated BOLD signals. With the BOLD monitor presented here, ANNarchy provides a tool for modelers who want to relate their network models to experimental MRI data and for scientists who want to extend their studies of the coupling between neural processes and the BOLD signal by using modeling approaches. This facilitates the investigation and model-based analysis of experimental BOLD data and thus improves the multi-scale understanding of neural processes in humans.
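Independently of ANNarchy's monitor, the underlying transformation can be illustrated with the standard Balloon-Windkessel equations (Friston et al., 2000); the parameter values and the toy neural drive below are common textbook defaults, not necessarily those shipped with the article's implementation.

```python
import numpy as np

# Standard Balloon-Windkessel model; common default parameters.
kappa, gamma, tau, alpha, E0, V0 = 1 / 0.65, 1 / 0.41, 0.98, 0.32, 0.34, 0.02
k1, k2, k3 = 7 * E0, 2.0, 2 * E0 - 0.2

dt, T = 0.001, 30.0                                  # seconds
steps = int(T / dt)
z = np.zeros(steps)
z[int(2 / dt):int(4 / dt)] = 1.0                     # 2 s block of neural activity (toy drive)

s, f, v, q = 0.0, 1.0, 1.0, 1.0
bold = np.empty(steps)
for t in range(steps):
    s += dt * (z[t] - kappa * s - gamma * (f - 1.0))                 # vasodilatory signal
    f += dt * s                                                      # cerebral blood flow (CBF)
    v += dt / tau * (f - v**(1.0 / alpha))                           # venous blood volume
    q += dt / tau * (f * (1.0 - (1.0 - E0)**(1.0 / f)) / E0
                     - v**(1.0 / alpha) * q / v)                     # deoxyhemoglobin content
    bold[t] = V0 * (k1 * (1.0 - q) + k2 * (1.0 - q / v) + k3 * (1.0 - v))

print(f"peak BOLD response: {bold.max() * 100:.2f}% at t = {bold.argmax() * dt:.1f} s")
```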
ABSTRACT
Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions.
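The Bayesian decoding step mentioned above can be illustrated with the standard Poisson decoder applied to simulated place-cell spike counts; the tuning curves, number of cells and bin size below are invented for the example, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
positions = np.linspace(0.0, 1.0, 100)                    # candidate locations on a linear track
centers = np.linspace(0.0, 1.0, 30)                       # place-field centers of 30 cells
rates = 15.0 * np.exp(-(positions[None, :] - centers[:, None])**2 / (2 * 0.05**2)) + 0.1
dt = 0.02                                                 # 20 ms decoding window

true_pos = 0.37
spikes = rng.poisson(rates[:, np.argmin(np.abs(positions - true_pos))] * dt)

# Poisson Bayesian decoder with a flat spatial prior:
#   P(x | n) proportional to prod_i f_i(x)^{n_i} * exp(-dt * sum_i f_i(x))
log_post = spikes @ np.log(rates * dt) - dt * rates.sum(axis=0)
post = np.exp(log_post - log_post.max())
post /= post.sum()

print(f"true position {true_pos:.2f}, decoded {positions[post.argmax()]:.2f}")
```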
ABSTRACT
Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and they may feel confident their research is reproducible. But this is not exactly true. Jonathan Buckheit and David Donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship. The actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular in peer-reviews. Existing journals have been slow to adapt: source codes are rarely requested and are hardly ever actually executed to check that they produce the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from other traditional scientific journals. ReScience resides on GitHub where each new implementation of a computational study is made available together with comments, explanations, and software tests.
ABSTRACT
We present a dynamic model of attention based on the Continuum Neural Field Theory that explains attention as an emergent property of a neural population. The model is shown experimentally to be very robust, being able to track a static or moving target in the presence of very strong noise or of many distractors, even ones more salient than the target. This attentional property is not restricted to the visual case and can be considered a generic attentional process for any spatio-temporally continuous input.
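A minimal one-dimensional discretization in the spirit of the Continuum Neural Field Theory is sketched below (local excitation plus global inhibition; the kernel and all parameters are illustrative, not the article's): once a bump of activity is engaged on the target, it persists despite strong input noise and a more salient distractor.

```python
import numpy as np

rng = np.random.default_rng(0)
N, tau, dt = 100, 10.0, 1.0
x = np.arange(N)
d = np.abs(x[:, None] - x[None, :])
W_exc = 0.5 * np.exp(-d**2 / (2 * 3.0**2))        # local lateral excitation
g_inh = 0.3                                       # global lateral inhibition

target, distractor = 30, 70
u = np.exp(-(x - target)**2 / (2 * 3.0**2))       # attention already engaged on the target
for _ in range(400):
    inp = np.exp(-(x - target)**2 / 8.0) + 1.5 * np.exp(-(x - distractor)**2 / 8.0)
    inp += 1.0 * rng.standard_normal(N)           # very strong input noise
    r = np.clip(u, 0.0, 1.0)                      # firing-rate transfer function
    u += dt / tau * (-u + W_exc @ r - g_inh * r.sum() + inp)

print("field peak remains on the less salient target:", int(np.argmax(u)))   # ~30
```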
Subject(s)
Attention/physiology, Neurological Models, Motion Perception/physiology, Neurons/physiology, Animals, Field Dependence-Independence, Humans, Normal Distribution, Photic Stimulation/methods
ABSTRACT
Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows rate-coded and spiking networks, as well as combinations of both, to be easily defined and simulated. The Python interface has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core systems or graphics processing units). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions.
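A minimal rate-coded example in the spirit of ANNarchy's equation-oriented interface is sketched below; the exact syntax (neuron definitions, connector and monitor calls) may differ between ANNarchy versions, so treat this as an assumption-laden sketch rather than authoritative usage.

```python
from ANNarchy import Neuron, Population, Projection, Monitor, compile, simulate

# A leaky rate-coded neuron written with the equation-oriented syntax
LeakyNeuron = Neuron(
    parameters="tau = 10.0",
    equations="tau * dr/dt + r = sum(exc)"
)

inp = Population(geometry=100, neuron=Neuron(equations="r = Uniform(0.0, 1.0)"))  # noisy input rates
out = Population(geometry=100, neuron=LeakyNeuron)

proj = Projection(pre=inp, post=out, target='exc')
proj.connect_all_to_all(weights=0.01)

m = Monitor(out, 'r')      # record the firing rates of the output population
compile()                  # generate and compile the C++ code for the chosen hardware
simulate(100.0)            # simulate 100 ms
rates = m.get('r')
```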
ABSTRACT
Neural activity in dopaminergic areas such as the ventral tegmental area is influenced by timing processes, in particular by the temporal expectation of rewards during Pavlovian conditioning. Receipt of a reward at the expected time allows the computation of reward-prediction errors, which can drive learning in motor or cognitive structures. Reciprocally, dopamine plays an important role in the timing of external events. Several models of the dopaminergic system exist, but the substrate of temporal learning remains rather unclear. In this article, we propose a neuro-computational model of the afferent network to the ventral tegmental area, including the lateral hypothalamus, the pedunculopontine nucleus, the amygdala, the ventromedial prefrontal cortex, the ventral basal ganglia (including the nucleus accumbens and the ventral pallidum), as well as the lateral habenula and the rostromedial tegmental nucleus. Based on plausible connectivity and realistic learning rules, this neuro-computational model reproduces several experimental observations, such as the progressive cancellation of dopaminergic bursts at reward delivery, the appearance of bursts at the onset of reward-predicting cues, and the influence of reward magnitude on activity in the amygdala and ventral tegmental area. While associative learning occurs primarily in the amygdala, learning of the temporal relationship between the cue and the associated reward is implemented as a dopamine-modulated coincidence detection mechanism in the nucleus accumbens.
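For readers unfamiliar with reward-prediction errors, the generic temporal-difference sketch below reproduces the two dopaminergic signatures mentioned above (progressive cancellation of the burst at reward delivery and the appearance of a burst at the cue). It is a textbook illustration only, not the article's amygdala/accumbens architecture, and all constants are arbitrary.

```python
import numpy as np

gamma, alpha = 0.95, 0.2
n_states = 10                        # sub-states spanning the cue-reward interval
V = np.zeros(n_states)               # learned values; the inter-trial state is fixed at 0

for trial in range(1, 301):
    delta_cue = gamma * V[0]         # transition from the unpredictive ITI state to cue onset
    for s in range(n_states):
        nxt = V[s + 1] if s + 1 < n_states else 0.0
        r = 1.0 if s == n_states - 1 else 0.0       # reward delivered at the end of the interval
        delta = r + gamma * nxt - V[s]              # reward-prediction error ("phasic dopamine")
        V[s] += alpha * delta
        if s == n_states - 1:
            delta_reward = delta
    if trial in (1, 50, 300):
        print(f"trial {trial:3d}: DA at cue = {delta_cue:+.2f}, DA at reward = {delta_reward:+.2f}")
```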
ABSTRACT
Cortico-basalganglio-thalamic loops are involved in both cognitive processes and motor control. We present a biologically meaningful computational model of how these loops contribute to the organization of working memory and the development of response behavior. Via reinforcement learning in the basal ganglia, the model develops flexible control of working memory within prefrontal loops and achieves the selection of appropriate responses based on working memory content and visual stimulation within a motor loop. We show that both working memory control and response selection can evolve within parallel and interacting cortico-basalganglio-thalamic loops through Hebbian and three-factor learning rules. Furthermore, the model gives a coherent explanation of how complex strategies of working memory control and response selection can derive from basic cognitive operations that can be learned via trial and error.
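As a minimal illustration of the two rule types contrasted above (a hypothetical scalar example, not the article's equations), a plain Hebbian rule strengthens every coactive input-response pair, whereas a three-factor rule additionally gates the update by a dopaminergic reward signal, so only rewarded associations are reinforced.

```python
import numpy as np

rng = np.random.default_rng(0)
lr = 0.02
w_hebb = np.zeros(2)     # weights from one cortical input to two candidate responses
w_3f = np.zeros(2)

for _ in range(200):
    response = int(rng.integers(0, 2))            # the network tries one of two responses
    rewarded = (response == 0)                    # only response 0 is correct in this context
    pre, post = 1.0, 1.0                          # input and selected response are coactive

    w_hebb[response] += lr * pre * post                               # Hebbian: strengthens both over trials
    w_3f[response] += lr * pre * post * (1.0 if rewarded else -0.2)   # three-factor: dopamine gates the sign

print("Hebbian     :", np.round(w_hebb, 2))       # both responses end up strengthened
print("three-factor:", np.round(w_3f, 2))         # only the rewarded response is strengthened
```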
Subject(s)
Basal Ganglia/physiology, Cerebral Cortex/physiology, Computer Simulation, Short-Term Memory/physiology, Neurological Models, Neurons/physiology, Thalamus/physiology, Animals, Psychological Conditioning/physiology, Humans, Neural Pathways/physiology, Reaction Time, Reinforcement (Psychology)
ABSTRACT
Visual working memory (WM) tasks involve a network of cortical areas such as the inferotemporal, medial temporal and prefrontal cortices. We propose here to investigate the role of the basal ganglia (BG) in the learning of delayed rewarded tasks through the selective gating of thalamocortical loops. We designed a computational model of the visual loop linking the perirhinal cortex, the BG and the thalamus, biased by sustained representations in the prefrontal cortex. This model concurrently learns different delayed rewarded tasks that require maintaining a visual cue and associating it with itself or with another visual object to obtain reward. The retrieval of visual information is achieved through thalamic stimulation of the perirhinal cortex. The input structure of the BG, the striatum, learns to represent visual information based on its association with reward, while the output structure, the substantia nigra pars reticulata, learns to link striatal representations to the disinhibition of the correct thalamocortical loop. In parallel, a dopaminergic cell learns to associate striatal representations with reward and modulates learning of connections within the BG. The model provides testable predictions about the behavior of several areas during such tasks, while proposing a new functional organization of learning within the BG, with emphasis on the learning of the striatonigral connections as well as the lateral connections within the substantia nigra pars reticulata. It suggests that the learning of visual WM tasks is achieved rapidly in the BG and used as a teacher for feedback connections from the prefrontal cortex to posterior cortices.
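The disinhibition principle described above can be condensed into a few lines (illustrative scalar units and gains, not the article's network): a reward-associated striatal representation suppresses the substantia nigra pars reticulata, whose tonic inhibition of the thalamus is thereby removed, allowing thalamic stimulation of the perirhinal cortex.

```python
def loop_output(striatum_active):
    """Toy disinhibition gating of one thalamocortical loop by a striatal representation."""
    snr_baseline = 1.0                                      # tonic SNr firing inhibits the thalamus
    snr = max(0.0, snr_baseline - 1.0 * striatum_active)    # learned striatonigral suppression
    thalamus = max(0.0, 0.8 - 1.0 * snr)                    # thalamus fires only if SNr is silenced
    return thalamus                                         # drive available to retrieve the cortical memory

print("cue not associated with reward -> thalamic drive:", loop_output(0.0))   # 0.0, loop stays closed
print("reward-associated cue          -> thalamic drive:", loop_output(1.0))   # 0.8, loop disinhibited
```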
ABSTRACT
The perirhinal cortex is involved not only in object recognition and novelty detection but also in multimodal integration, reward association, and visual working memory. We propose a computational model that focuses on the role of the perirhinal cortex in working memory, particularly with respect to sustained activities and memory retrieval. This model describes how different pieces of partial information are integrated into assemblies of neurons that represent the identity of an object. Through dopaminergic modulation, the resulting clusters can retrieve the global information via recurrent interactions between neurons. Dopamine leads to sustained activity after stimulus disappearance, which forms the basis of the involvement of the perirhinal cortex in visual working memory processes. The information carried by a cluster can also be retrieved by a partial thalamic or prefrontal stimulation. Thus, we suggest that areas involved in planning and memory coordination encode a pointer to access the detailed information encoded in associative cortices such as the perirhinal cortex.
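The retrieval mechanism described above resembles attractor-style pattern completion. The toy Hopfield-like sketch below (invented patterns and sizes, not the article's model) shows how recurrent interactions within a cluster recover the full object representation from partial input.

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = (rng.random((3, 50)) < 0.5) * 2.0 - 1.0           # three stored object representations
W = patterns.T @ patterns / 50.0                             # Hebbian recurrent weights
np.fill_diagonal(W, 0.0)

cue = patterns[0].copy()
cue[25:] = 0.0                                               # only partial information is available
state = cue.copy()
for _ in range(10):
    state = np.sign(W @ state + cue)                         # recurrent interactions complete the pattern

print("overlap with the stored object:", float(state @ patterns[0]) / 50.0)   # close to 1.0
```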