Results 1 - 20 of 94
1.
Cell ; 173(4): 894-905.e13, 2018 05 03.
Article in English | MEDLINE | ID: mdl-29706545

ABSTRACT

Perceptual decisions require the accumulation of sensory information to a response criterion. Most accounts of how the brain performs this process of temporal integration have focused on evolving patterns of spiking activity. We report that subthreshold changes in membrane voltage can represent accumulating evidence before a choice. αβ core Kenyon cells (αβc KCs) in the mushroom bodies of fruit flies integrate odor-evoked synaptic inputs to action potential threshold at timescales matching the speed of olfactory discrimination. The forkhead box P transcription factor (FoxP) sets neuronal integration and behavioral decision times by controlling the abundance of the voltage-gated potassium channel Shal (KV4) in αβc KC dendrites. αβc KCs thus tailor, through a particular constellation of biophysical properties, the generic process of synaptic integration to the demands of sequential sampling.
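The abstract's core quantity, the time for membrane voltage to reach spike threshold under a dendritic potassium conductance, can be illustrated with a minimal leaky integrator. The single-leak simplification and all parameter values are assumptions for illustration, not the paper's biophysical model:

```python
def time_to_threshold(drive, leak, threshold=1.0, dt=0.001, t_max=5.0):
    """Integrate a constant synaptic drive in a leaky membrane and return the
    time (arbitrary units) at which voltage first reaches spike threshold.

    A larger leak conductance (e.g. more dendritic Shal/KV4 channels) slows
    the climb to threshold and lengthens the integration window.
    """
    v, t = 0.0, 0.0
    while v < threshold and t < t_max:
        v += dt * (drive - leak * v)   # Euler step of dV/dt = drive - leak * V
        t += dt
    return t if v >= threshold else float("inf")

t_low_k = time_to_threshold(drive=2.0, leak=0.5)   # few KV4 channels
t_high_k = time_to_threshold(drive=2.0, leak=1.5)  # more KV4 channels
```

Under these assumed parameters, raising the leak lengthens the time to threshold, qualitatively matching the idea that KV4 abundance sets integration and decision times.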


Subject(s)
Dendrites/metabolism; Drosophila Proteins/metabolism; Drosophila/physiology; Action Potentials/drug effects; Animals; Barium/pharmacology; Behavior, Animal/drug effects; Brain/metabolism; Brain/pathology; Cyclohexanols/pharmacology; Drosophila Proteins/genetics; Female; Forkhead Transcription Factors/genetics; Forkhead Transcription Factors/metabolism; Male; Neurons/cytology; Neurons/metabolism; Patch-Clamp Techniques; Receptors, Odorant/metabolism; Shal Potassium Channels/genetics; Shal Potassium Channels/metabolism; Smell; Synapses/metabolism
2.
PLoS Biol ; 21(6): e3002140, 2023 06.
Article in English | MEDLINE | ID: mdl-37262014

ABSTRACT

Adapting actions to changing goals and environments is central to intelligent behavior. There is evidence that the basal ganglia play a crucial role in reinforcing or adapting actions depending on their outcome. However, the corresponding electrophysiological correlates in the basal ganglia and the extent to which these causally contribute to action adaptation in humans remain unclear. Here, we recorded electrophysiological activity and applied bursts of electrical stimulation to the subthalamic nucleus, a core area of the basal ganglia, in 16 patients with Parkinson's disease (PD) on medication, using temporarily externalized deep brain stimulation (DBS) electrodes. Patients, as well as 16 age- and gender-matched healthy participants, attempted to produce forces as close as possible to a target force to collect a maximum number of points. The target force changed over trials without being explicitly shown on the screen, so participants had to infer it from the feedback they received after each movement. Patients and healthy participants were able to adapt their force according to the feedback they received (P < 0.001). At the neural level, decreases in subthalamic beta (13 to 30 Hz) activity reflected poorer outcomes and stronger action adaptation in 2 distinct time windows (P < 0.05, cluster-corrected). Stimulation of the subthalamic nucleus reduced beta activity and led to stronger action adaptation if applied within the time windows when subthalamic activity reflected action outcomes and adaptation (P < 0.05, cluster-corrected). The more the stimulation volume was connected to motor cortex, the stronger this behavioral effect (P = 0.037, corrected). These results suggest that dynamic modulation of the subthalamic nucleus and interconnected cortical areas facilitates adaptive behavior.


Subject(s)
Deep Brain Stimulation; Parkinson Disease; Subthalamic Nucleus; Humans; Subthalamic Nucleus/physiology; Deep Brain Stimulation/methods; Parkinson Disease/therapy; Basal Ganglia; Adaptation, Psychological
3.
PLoS Comput Biol ; 20(4): e1011516, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38626219

ABSTRACT

When facing an unfamiliar environment, animals need to explore to gain new knowledge about which actions provide reward, but also put the newly acquired knowledge to use as quickly as possible. Optimal reinforcement learning strategies should therefore assess the uncertainties of these action-reward associations and utilise them to inform decision making. We propose a novel model whereby direct and indirect striatal pathways act together to estimate both the mean and variance of reward distributions, and mesolimbic dopaminergic neurons provide transient novelty signals, facilitating effective uncertainty-driven exploration. We utilised electrophysiological recording data to verify our model of the basal ganglia, and we fitted exploration strategies derived from the neural model to data from behavioural experiments. We also compared the performance of directed exploration strategies inspired by our basal ganglia model with other exploration algorithms, including classic variants of the upper confidence bound (UCB) strategy, in simulation. The exploration strategies inspired by the basal ganglia model can achieve overall superior performance in simulation, and we found qualitatively similar results when fitting the model to behavioural data compared with fitting more idealised normative models with less implementation-level detail. Overall, our results suggest that transient dopamine levels in the basal ganglia that encode novelty could contribute to an uncertainty representation which efficiently drives exploration in reinforcement learning.
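The classic UCB baseline the authors compare against can be sketched in a few lines. The Gaussian reward model, arm means, and horizon below are illustrative assumptions, not the paper's simulation settings:

```python
import math
import random

def ucb1(means, horizon, c=2.0, seed=0):
    """Run the classic UCB1 strategy on a Gaussian bandit.

    `means` are the true arm means (unknown to the agent). Each arm's score is
    its value estimate plus an exploration bonus that shrinks as the arm is
    sampled, so uncertainty itself drives exploration.
    """
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n
    estimates = [0.0] * n
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n:                       # play each arm once first
            arm = t - 1
        else:                            # value estimate + uncertainty bonus
            arm = max(range(n), key=lambda a: estimates[a]
                      + math.sqrt(c * math.log(t) / counts[a]))
        reward = rng.gauss(means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total, counts

total, counts = ucb1([0.1, 0.5, 0.9], horizon=2000)
```

Over this horizon the exploration bonus decays on well-sampled arms, so pulls concentrate on the best arm while the others are still revisited occasionally.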


Subject(s)
Basal Ganglia; Dopamine; Models, Neurological; Reward; Dopamine/metabolism; Dopamine/physiology; Uncertainty; Animals; Basal Ganglia/physiology; Exploratory Behavior/physiology; Reinforcement, Psychology; Dopaminergic Neurons/physiology; Computational Biology; Computer Simulation; Male; Algorithms; Decision Making/physiology; Behavior, Animal/physiology; Rats
4.
PLoS Comput Biol ; 20(4): e1011183, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38557984

ABSTRACT

One of the key problems the brain faces is inferring the state of the world from a sequence of dynamically changing stimuli, and it is not yet clear how the sensory system achieves this task. A well-established computational framework for describing perceptual processes in the brain is provided by the theory of predictive coding. Although the original proposals of predictive coding discussed temporal prediction, later work developing this theory mostly focused on static stimuli, and key questions on the neural implementation and computational properties of temporal predictive coding networks remain open. Here, we address these questions and present a formulation of the temporal predictive coding model that can be naturally implemented in recurrent networks, in which activity dynamics rely only on local inputs to the neurons and learning only utilises local Hebbian plasticity. Additionally, we show that temporal predictive coding networks can approximate the performance of the Kalman filter in predicting the behaviour of linear systems, and behave as a variant of the Kalman filter that does not track its own subjective posterior variance. Importantly, temporal predictive coding networks can achieve accuracy similar to that of the Kalman filter without performing complex mathematical operations, employing only simple computations that can be implemented by biological networks. Moreover, we found that, when trained with natural dynamic inputs, temporal predictive coding can produce Gabor-like, motion-sensitive receptive fields resembling those observed in real neurons in visual areas. In addition, we demonstrate how the model can be effectively generalized to nonlinear systems. Overall, the models presented in this paper show how biologically plausible circuits can predict future stimuli and may guide research on understanding specific neural circuits in brain areas involved in temporal prediction.
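The Kalman filter baseline that these networks approximate can be sketched for a one-dimensional linear system. The dynamics and noise parameters below are illustrative choices, not the paper's:

```python
import random

def kalman_1d(observations, a=0.9, q=0.1, r=1.0):
    """One-dimensional Kalman filter for x_t = a*x_{t-1} + noise,
    y_t = x_t + noise, where q and r are the process and observation noise
    variances. Returns the posterior mean at each step.
    """
    x_hat, p = 0.0, 1.0                  # prior mean and variance
    means = []
    for y in observations:
        x_pred = a * x_hat               # predict the next state
        p_pred = a * a * p + q
        k = p_pred / (p_pred + r)        # Kalman gain
        x_hat = x_pred + k * (y - x_pred)
        p = (1 - k) * p_pred             # posterior variance (tracked here,
        means.append(x_hat)              # but not by the network variant)
    return means

# Simulate a latent linear system and noisy observations of it.
rng = random.Random(1)
x, xs, ys = 0.0, [], []
for _ in range(200):
    x = 0.9 * x + rng.gauss(0.0, 0.1 ** 0.5)
    xs.append(x)
    ys.append(x + rng.gauss(0.0, 1.0))
est = kalman_1d(ys)
```

The filtered estimates track the latent state more closely than the raw observations, which is the performance standard the recurrent networks are measured against.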


Subject(s)
Brain; Models, Neurological; Brain/physiology; Learning; Neurons/physiology
5.
PLoS Comput Biol ; 19(4): e1010719, 2023 04.
Article in English | MEDLINE | ID: mdl-37058541

ABSTRACT

The computational principles adopted by the hippocampus in associative memory (AM) tasks have been one of the most studied topics in computational and theoretical neuroscience. Recent theories suggested that AM and the predictive activities of the hippocampus could be described within a unitary account, and that predictive coding underlies the computations supporting AM in the hippocampus. Following this theory, a computational model based on classical hierarchical predictive networks was proposed and was shown to perform well in various AM tasks. However, this fully hierarchical model did not incorporate recurrent connections, an architectural component of the CA3 region of the hippocampus that is crucial for AM. This makes the structure of the model inconsistent with the known connectivity of CA3 and with classical recurrent models such as Hopfield networks, which learn the covariance of inputs through their recurrent connections to perform AM. Earlier predictive coding (PC) models that learn the covariance information of inputs explicitly via recurrent connections seem to offer a solution to these issues. Here, we show that although these models can perform AM, they do so in an implausible and numerically unstable way. Instead, we propose alternatives to these earlier covariance-learning predictive coding networks, which learn the covariance information implicitly and plausibly, and can use dendritic structures to encode prediction errors. We show analytically that our proposed models are perfectly equivalent to the earlier predictive coding model learning covariance explicitly, and encounter no numerical issues when performing AM tasks in practice. We further show that our models can be combined with hierarchical predictive coding networks to model hippocampo-neocortical interactions. Our models provide a biologically plausible approach to modelling the hippocampal network, pointing to a potential computational mechanism during hippocampal memory formation and recall, which employs both predictive coding and covariance learning based on the recurrent network structure of the hippocampus.
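The covariance-based recall that recurrent networks perform can be sketched with the classical Hopfield rule, which the abstract cites as the reference point; this is that textbook model, not the paper's predictive coding implementation, and the pattern count and size are arbitrary choices:

```python
import numpy as np

def train_hopfield(patterns):
    """Covariance/Hebbian learning: the weight matrix accumulates outer
    products of the stored ±1 patterns (self-connections removed)."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, probe, steps=20):
    """Iterate the recurrent dynamics so the state falls into the nearest
    stored attractor."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1.0
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))   # 3 memories, 64 units
w = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[:8] *= -1                                    # corrupt 8 of 64 bits
restored = recall(w, noisy)
```

With this low memory load the recurrent dynamics repair most of the corrupted bits, which is the associative-memory behaviour the proposed networks reproduce with implicit covariance learning.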


Subject(s)
Hippocampus; Learning; Mental Recall; Conditioning, Classical; Models, Neurological
6.
J Neurosci ; 42(23): 4681-4692, 2022 06 08.
Article in English | MEDLINE | ID: mdl-35501153

ABSTRACT

Making accurate decisions often involves the integration of current and past evidence. Here, we examine the neural correlates of conflict and evidence integration during sequential decision-making. Female and male human patients implanted with deep-brain stimulation (DBS) electrodes and age-matched and gender-matched healthy controls performed an expanded judgment task, in which they were free to choose how many cues to sample. Behaviorally, we found that while patients sampled numerically more cues, they were less able to integrate evidence and showed suboptimal performance. Using recordings of magnetoencephalography (MEG) and local field potentials (LFPs; in patients) in the subthalamic nucleus (STN), we found that β oscillations signaled conflict between cues within a sequence. Following cues that differed from previous cues, β power in the STN and cortex first decreased and then increased. Importantly, the conflict signal in the STN outlasted the cortical one, carrying over to the next cue in the sequence. Furthermore, after a conflict, there was an increase in coherence between the dorsal premotor cortex and STN in the β band. These results extend our understanding of cortico-subcortical dynamics of conflict processing, and do so in a context where evidence must be accumulated in discrete steps, much like in real life. Thus, the present work leads to a more nuanced picture of conflict monitoring systems in the brain and potential changes because of disease.

SIGNIFICANCE STATEMENT: Decision-making often involves the integration of multiple pieces of information over time to make accurate predictions. We simultaneously recorded whole-head magnetoencephalography (MEG) and local field potentials (LFPs) from the human subthalamic nucleus (STN) in a novel task which required integrating sequentially presented pieces of evidence. Our key finding is prolonged β oscillations in the STN, with a concurrent increase in communication with frontal cortex, when presented with conflicting information. These neural effects reflect the behavioral profile of a reduced tendency to respond after conflict, and relate to suboptimal cue integration in patients, which may be directly linked to clinically reported side effects of deep-brain stimulation (DBS) such as impaired decision-making and impulsivity.


Subject(s)
Deep Brain Stimulation; Motor Cortex; Parkinson Disease; Subthalamic Nucleus; Beta Rhythm; Deep Brain Stimulation/methods; Female; Humans; Magnetoencephalography; Male; Motor Cortex/physiology; Parkinson Disease/therapy; Subthalamic Nucleus/physiology
7.
PLoS Comput Biol ; 18(5): e1009816, 2022 05.
Article in English | MEDLINE | ID: mdl-35622863

ABSTRACT

To accurately predict rewards associated with states or actions, the variability of observations has to be taken into account. In particular, when the observations are noisy, the individual rewards should have less influence on the tracking of the average reward, and the estimate of the mean reward should be updated to a smaller extent after each observation. However, it is not known how the magnitude of the observation noise might be tracked and used to control prediction updates in the brain reward system. Here, we introduce a new model that uses simple, tractable learning rules that track the mean and standard deviation of reward, and leverages prediction errors scaled by uncertainty as the central feedback signal. We show that the new model has an advantage over conventional reinforcement learning models in a value tracking task, and approaches a theoretical limit of performance provided by the Kalman filter. Further, we propose a possible biological implementation of the model in the basal ganglia circuit. In the proposed network, dopaminergic neurons encode reward prediction errors scaled by the standard deviation of rewards. We show that such scaling may arise if the striatal neurons learn the standard deviation of rewards and modulate the activity of dopaminergic neurons. The model is consistent with experimental findings concerning dopamine prediction error scaling relative to reward magnitude, and with many features of striatal plasticity. Our results span the levels of implementation, algorithm, and computation, and might have important implications for understanding the dopaminergic prediction error signal and its relation to adaptive and effective learning.
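The idea of simple incremental rules that track both the mean and the spread of reward, and of prediction errors expressed in units of uncertainty, can be sketched as follows. These particular update rules and parameters are illustrative assumptions, not the paper's exact model:

```python
import random

def track_reward_stats(rewards, alpha=0.05):
    """Track the mean and spread of a reward stream with simple incremental
    rules, and record each prediction error scaled by the current spread.
    """
    mean, sd = 0.0, 1.0
    scaled_errors = []
    for r in rewards:
        delta = r - mean                   # reward prediction error
        scaled_errors.append(delta / sd)   # error in units of uncertainty
        mean += alpha * delta              # running estimate of the mean
        sd += alpha * (abs(delta) - sd)    # running estimate of the spread
    return mean, sd, scaled_errors

rng = random.Random(0)
rewards = [rng.gauss(5.0, 2.0) for _ in range(5000)]
mean, sd, scaled = track_reward_stats(rewards)
```

When the reward noise is larger, `sd` settles at a higher value, so the same raw prediction error produces a smaller scaled teaching signal, which is the uncertainty-controlled updating the abstract describes.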


Subject(s)
Basal Ganglia; Learning; Basal Ganglia/physiology; Dopamine/physiology; Learning/physiology; Reinforcement, Psychology; Reward; Uncertainty
8.
Neural Comput ; 34(2): 307-337, 2022 01 14.
Article in English | MEDLINE | ID: mdl-34758486

ABSTRACT

Reinforcement learning involves updating estimates of the value of states and actions on the basis of experience. Previous work has shown that in humans, reinforcement learning exhibits a confirmatory bias: when the value of a chosen option is being updated, estimates are revised more radically following positive than negative reward prediction errors, but the converse is observed when updating the unchosen option value estimate. Here, we simulate performance on a multi-arm bandit task to examine the consequences of a confirmatory bias for reward harvesting. We report a paradoxical finding: that confirmatory biases allow the agent to maximize reward relative to an unbiased updating rule. This principle holds over a wide range of experimental settings and is most influential when decisions are corrupted by noise. We show that this occurs because on average, confirmatory biases lead to overestimating the value of more valuable bandits and underestimating the value of less valuable bandits, rendering decisions overall more robust in the face of noise. Our results show how apparently suboptimal learning rules can in fact be reward maximizing if decisions are made with finite computational precision.
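One way to see the reported effect is to simulate a two-armed task with feedback for both options and asymmetric learning rates: the chosen option is updated faster after positive prediction errors, the unchosen option faster after negative ones. The reward probabilities, softmax temperature, and learning rates below are illustrative assumptions, not the paper's settings:

```python
import math
import random

def run_bandit(alpha_confirm, alpha_disconfirm, probs=(0.3, 0.7),
               trials=1000, beta=5.0, seed=0):
    """Two-armed Bernoulli bandit with full feedback and a confirmatory bias.
    Returns the total reward harvested.
    """
    rng = random.Random(seed)
    q = [0.5, 0.5]
    total = 0.0
    for _ in range(trials):
        # Softmax choice: beta controls how noisy decisions are.
        p0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
        choice = 0 if rng.random() < p0 else 1
        rewards = [1.0 if rng.random() < p else 0.0 for p in probs]
        total += rewards[choice]
        for arm in (0, 1):
            delta = rewards[arm] - q[arm]
            # Confirming news: good outcome for the chosen arm,
            # bad outcome for the unchosen arm.
            confirming = (delta > 0) if arm == choice else (delta < 0)
            alpha = alpha_confirm if confirming else alpha_disconfirm
            q[arm] += alpha * delta
    return total

# Average over runs: confirmatory updating versus an unbiased learner.
unbiased = sum(run_bandit(0.10, 0.10, seed=s) for s in range(20)) / 20
biased = sum(run_bandit(0.20, 0.05, seed=s) for s in range(20)) / 20
```

Under these assumptions the biased learner inflates the gap between the two value estimates, so the noisy softmax picks the better arm more reliably, reproducing the paradoxical advantage described in the abstract.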


Subject(s)
Learning; Reinforcement, Psychology; Bias; Decision Making; Humans; Reward
9.
J Neurooncol ; 160(3): 753-761, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36449256

ABSTRACT

PURPOSE: Despite the improvement in treatment and prognosis of primary central nervous system lymphoma (PCNSL) over the last decades, the 5-year survival rate is approximately 30%; thus, new therapeutic approaches are needed to improve patient survival. The study's aim was to evaluate the role of surgical resection of PCNSL. METHODS: Primary outcomes were the overall survival (OS) and progression-free survival (PFS) of patients with PCNSL who underwent surgical resection versus biopsy alone. The meta-analysis was conducted to calculate pooled hazard ratios (HRs) under a random-effects model for the time-to-event variables. The odds ratios (ORs) were calculated for binary, secondary outcome parameters. RESULTS: Seven studies (n = 1046) were included. We found that surgical resection was associated with significantly better OS (HR 0.63 [95% CI 0.51-0.77]) when compared with biopsy. PFS was also significantly improved (HR 0.64 [95% CI 0.49-0.85]) in patients who underwent resection compared with those who underwent biopsy. The heterogeneity for OS and PFS was low (I² = 7% and 24%, respectively). We also found that patients who underwent biopsy more often had multiple (OR 0.38 [95% CI 0.19-0.79]) or deep-seated (OR 0.20 [95% CI 0.12-0.34]) lesions compared with those who underwent surgical resection. There were no significant differences in chemotherapy or radiotherapy use or the occurrence of postoperative complications between the two groups. CONCLUSION: In selected patients, surgical resection of PCNSL is associated with significantly better overall survival and progression-free survival compared with biopsy alone.
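The random-effects pooling used for the hazard ratios can be sketched with the standard inverse-variance / DerSimonian-Laird calculation on the log scale. The three study values below are made-up numbers for illustration, not the studies in this meta-analysis:

```python
import math

def pool_log_hazard_ratios(log_hrs, variances):
    """Inverse-variance pooling with a DerSimonian-Laird random-effects model.

    Returns the pooled HR, its 95% CI, and the I^2 heterogeneity statistic.
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_hrs))
    df = len(log_hrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, log_hrs)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci, i2

# Hypothetical per-study log(HR) values and variances for illustration.
hr, ci, i2 = pool_log_hazard_ratios(
    [math.log(0.55), math.log(0.70), math.log(0.62)],
    [0.02, 0.03, 0.025])
```

When Q does not exceed its degrees of freedom, tau² is truncated at zero and the random-effects pooled estimate coincides with the fixed-effect one, which is why low I² values accompany tight pooled intervals.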


Subject(s)
Central Nervous System Neoplasms; Lymphoma; Humans; Progression-Free Survival; Biopsy; Combined Modality Therapy; Lymphoma/surgery; Lymphoma/drug therapy; Central Nervous System
10.
PLoS Comput Biol ; 17(7): e1009213, 2021 07.
Article in English | MEDLINE | ID: mdl-34270552

ABSTRACT

Reward prediction errors (RPEs) and risk preferences have two things in common: both can shape decision making behavior, and both are commonly associated with dopamine. RPEs drive value learning and are thought to be represented in the phasic release of striatal dopamine. Risk preferences bias choices towards or away from uncertainty; they can be manipulated with drugs that target the dopaminergic system. Based on this common neural substrate, we hypothesize that RPEs and risk preferences are also linked at the level of behavior. Here, we develop this hypothesis theoretically and test it empirically. First, we apply a recent theory of learning in the basal ganglia to predict how RPEs influence risk preferences. We find that positive RPEs should cause increased risk-seeking, while negative RPEs should cause risk-aversion. We then test our behavioral predictions using a novel bandit task in which value and risk vary independently across options. Critically, conditions are included where options vary in risk but are matched for value. We find that our prediction was correct: participants become more risk-seeking if choices are preceded by positive RPEs, and more risk-averse if choices are preceded by negative RPEs. These findings cannot be explained by other known effects, such as nonlinear utility curves or dynamic learning rates.


Subject(s)
Models, Psychological; Reward; Risk-Taking; Adolescent; Adult; Association Learning/physiology; Basal Ganglia/physiology; Computational Biology; Computer Simulation; Corpus Striatum/physiology; Decision Making; Dopamine/physiology; Economics, Behavioral; Female; Humans; Learning/physiology; Likelihood Functions; Male; Memory/physiology; Reinforcement, Psychology; Uncertainty; Young Adult
11.
PLoS Comput Biol ; 17(8): e1009281, 2021 08.
Article in English | MEDLINE | ID: mdl-34358224

ABSTRACT

Deep brain stimulation (DBS) is a well-established treatment option for a variety of neurological disorders, including Parkinson's disease and essential tremor. The symptoms of these disorders are known to be associated with pathological synchronous neural activity in the basal ganglia and thalamus. It is hypothesised that DBS acts to desynchronise this activity, leading to an overall reduction in symptoms. Electrodes with multiple independently controllable contacts are a recent development in DBS technology which have the potential to target one or more pathological regions with greater precision, reducing side effects and potentially increasing both the efficacy and efficiency of the treatment. The increased complexity of these systems, however, motivates the need to understand the effects of DBS when applied to multiple regions or neural populations within the brain. On the basis of a theoretical model, our paper addresses the question of how to best apply DBS to multiple neural populations to maximally desynchronise brain activity. Central to this are analytical expressions, which we derive, that predict how the symptom severity should change when stimulation is applied. Using these expressions, we construct a closed-loop DBS strategy describing how stimulation should be delivered to individual contacts using the phases and amplitudes of feedback signals. We simulate our method and compare it against two others found in the literature: coordinated reset and phase-locked stimulation. We also investigate the conditions for which our strategy is expected to yield the most benefit.
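The pathological synchrony that DBS is meant to suppress is commonly modelled with coupled phase oscillators, and the abstract's framework builds on such models. A minimal mean-field Kuramoto simulation (population size, coupling strength, and frequency spread are arbitrary illustrative choices) shows the order parameter R that quantifies synchrony:

```python
import cmath
import math
import random

def simulate_kuramoto(n=100, k=2.0, dt=0.01, steps=2000, seed=0):
    """Simulate Kuramoto phase oscillators and return the final order
    parameter R (R near 1 = strongly synchronised, near 0 = desynchronised).
    """
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [rng.gauss(math.pi, 0.1) for _ in range(n)]   # natural frequencies
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n     # mean field
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + k * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    z = sum(cmath.exp(1j * t) for t in theta) / n
    return abs(z)

r_coupled = simulate_kuramoto(k=2.0)     # coupling above threshold: synchrony
r_uncoupled = simulate_kuramoto(k=0.0)   # no coupling: phases stay spread out
```

In a closed-loop scheme of the kind the abstract describes, a stimulation term would be added to the phase update and timed, per contact, using the phase and amplitude of the recorded feedback signal so as to drive R down.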


Subject(s)
Deep Brain Stimulation/methods; Essential Tremor/therapy; Parkinson Disease/therapy; Essential Tremor/physiopathology; Humans; Models, Theoretical; Parkinson Disease/physiopathology
12.
PLoS Comput Biol ; 17(7): e1009116, 2021 07.
Article in English | MEDLINE | ID: mdl-34233347

ABSTRACT

Parkinson's disease motor symptoms are associated with an increase in subthalamic nucleus beta band oscillatory power. However, these oscillations are phasic, and there is a growing body of evidence suggesting that beta burst duration may be of critical importance to motor symptoms. This makes insights into the dynamics of beta burst generation valuable, in particular to refine closed-loop deep brain stimulation in Parkinson's disease. In this study, we ask the question "Can average burst duration reveal how dynamics change between the ON and OFF medication states?". Our analysis of local field potentials from the subthalamic nucleus demonstrates, using linear surrogates, that the system generating beta oscillations is more likely to act in a non-linear regime OFF medication and that the change in a non-linearity measure is correlated with motor impairment. In addition, we pinpoint the simplest dynamical changes that could be responsible for changes in the temporal patterning of beta oscillations between medication states by fitting biologically inspired models, as well as simpler beta-envelope models, to the data. Finally, we show that the non-linearity can be directly extracted from average burst duration profiles under the assumption of constant noise in envelope models. This reveals that average burst duration profiles provide a window into burst dynamics, which may underlie the success of burst duration as a biomarker. In summary, we demonstrate a relationship between average burst duration profiles, dynamics of the system generating beta oscillations, and motor impairment, which puts us in a better position to understand the pathology and improve therapies such as deep brain stimulation.
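The basic quantity in this line of work, burst duration, is the length of a contiguous supra-threshold segment of the beta amplitude envelope. A sketch of that computation on a synthetic envelope (the smoothed-noise envelope and the 75th-percentile threshold are illustrative assumptions, though percentile thresholds are common in the burst literature):

```python
import random

def burst_durations(envelope, threshold, dt=0.001):
    """Durations (in seconds) of contiguous supra-threshold segments of an
    amplitude envelope: the usual operational definition of beta bursts."""
    durations, run = [], 0
    for v in envelope:
        if v >= threshold:
            run += 1
        elif run:
            durations.append(run * dt)
            run = 0
    if run:                          # flush a burst running to the end
        durations.append(run * dt)
    return durations

# Synthetic envelope: heavily smoothed positive noise (illustration only).
rng = random.Random(0)
env, x = [], 0.0
for _ in range(20000):
    x += 0.05 * (-x + rng.gauss(0.5, 0.5))   # first-order smoothing
    env.append(abs(x))
thr = sorted(env)[int(0.75 * len(env))]      # 75th-percentile threshold
durs = burst_durations(env, thr)
mean_duration = sum(durs) / len(durs)
```

Averaging such durations across thresholds gives the burst duration profile that the paper links to the non-linearity of the underlying envelope dynamics.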


Subject(s)
Beta Rhythm/physiology; Parkinson Disease/physiopathology; Subthalamic Nucleus/physiology; Subthalamic Nucleus/physiopathology; Antiparkinson Agents/pharmacology; Beta Rhythm/drug effects; Computational Biology; Humans; Models, Neurological; Subthalamic Nucleus/drug effects
13.
Cogn Affect Behav Neurosci ; 21(6): 1196-1206, 2021 12.
Article in English | MEDLINE | ID: mdl-34652602

ABSTRACT

Human decisions can be reflexive or planned, being governed respectively by model-free and model-based learning systems. These two systems might differ in their responsiveness to our needs. Hunger drives us to specifically seek food rewards, but here we ask whether it might have more general effects on these two decision systems. On one hand, the model-based system is often considered flexible and context-sensitive, and might therefore be modulated by metabolic needs. On the other hand, the model-free system's primitive reinforcement mechanisms may have closer ties to biological drives. Here, we tested participants on a well-established two-stage sequential decision-making task that dissociates the contribution of model-based and model-free control. Hunger enhanced overall performance by increasing model-free control, without affecting model-based control. These results demonstrate a generalized effect of hunger on decision-making that enhances reliance on primitive reinforcement learning, which in some situations translates into adaptive benefits.


Subject(s)
Decision Making; Hunger; Humans; Learning; Reinforcement, Psychology; Reward
14.
PLoS Comput Biol ; 16(5): e1007465, 2020 05.
Article in English | MEDLINE | ID: mdl-32453725

ABSTRACT

Decision making relies on adequately evaluating the consequences of actions on the basis of past experience and the current physiological state. A key role in this process is played by the basal ganglia, where neural activity and plasticity are modulated by dopaminergic input from the midbrain. Internal physiological factors, such as hunger, scale signals encoded by dopaminergic neurons and thus they alter the motivation for taking actions and learning. However, to our knowledge, no formal mathematical formulation exists for how a physiological state affects learning and action selection in the basal ganglia. We developed a framework for modelling the effect of motivation on choice and learning. The framework defines the motivation to obtain a particular resource as the difference between the desired and the current level of this resource, and proposes how the utility of reinforcements depends on the motivation. To account for dopaminergic activity previously recorded in different physiological states, the paper argues that the prediction error encoded in the dopaminergic activity needs to be redefined as the difference between utility and expected utility, which depends on both the objective reinforcement and the motivation. We also demonstrate a possible mechanism by which the evaluation and learning of utility of actions can be implemented in the basal ganglia network. The presented theory brings together models of learning in the basal ganglia with the incentive salience theory in a single simple framework, and it provides a mechanistic insight into how decision processes and learning in the basal ganglia are modulated by the motivation. Moreover, this theory is also consistent with data on neural underpinnings of overeating and obesity, and makes further experimental predictions.
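The framework's central definitions can be written down directly. The multiplicative form of utility and all numbers below are assumptions for illustration; the abstract only specifies that motivation is the desired level minus the current level and that the dopaminergic signal is utility minus expected utility:

```python
def motivation(desired, current):
    """Motivation to obtain a resource: desired level minus current level."""
    return desired - current

def utility(reinforcement, motiv):
    """Utility of an objective reinforcement, scaled by motivation
    (multiplicative scaling is an illustrative assumption)."""
    return motiv * reinforcement

def prediction_error(reinforcement, motiv, expected_utility):
    """Redefined dopaminergic teaching signal: utility minus expected
    utility, so it depends on both the reinforcement and the state."""
    return utility(reinforcement, motiv) - expected_utility

hungry = motivation(desired=10.0, current=2.0)   # large deficit
sated = motivation(desired=10.0, current=9.0)    # small deficit
pe_hungry = prediction_error(1.0, hungry, expected_utility=3.0)
pe_sated = prediction_error(1.0, sated, expected_utility=3.0)
```

The same objective reinforcement then yields a positive teaching signal in the deprived state and a negative one in the sated state, capturing how physiological state scales dopaminergic responses and the motivation to act.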


Subject(s)
Basal Ganglia/physiology; Choice Behavior; Learning; Models, Neurological; Motivation; Animals; Behavior, Animal; Computer Simulation; Dopamine/physiology; Dopaminergic Neurons/physiology; Humans; Mesencephalon/physiology; Mice; Neural Pathways/physiology; Reinforcement, Psychology; Reward
15.
PLoS Comput Biol ; 15(2): e1006285, 2019 02.
Article in English | MEDLINE | ID: mdl-30818357

ABSTRACT

A set of sub-cortical nuclei called the basal ganglia is critical for learning the values of actions. The basal ganglia include two pathways, which have been associated with approach and avoidance behavior, respectively, and are differentially modulated by dopamine projections from the midbrain. Inspired by the influential opponent actor learning model, we demonstrate that, under certain circumstances, these pathways may represent learned estimates of the positive and negative consequences (payoffs and costs) of individual actions. In the model, the level of dopamine activity encodes the motivational state and controls to what extent payoffs and costs enter the overall evaluation of actions. We show that a set of previously proposed plasticity rules is suitable to extract payoffs and costs from a prediction error signal if they occur at different moments in time. For those plasticity rules, successful learning requires differential effects of positive and negative outcome prediction errors on the two pathways and a weak decay of synaptic weights over trials. We also confirm through simulations that the model reproduces drug-induced changes of willingness to work, as observed in classical experiments with the D2-antagonist haloperidol.
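The key ingredients named in the abstract, asymmetric effects of positive and negative prediction errors on two weights plus weak decay, can be sketched as follows. Treating the raw outcome as the prediction error is a deliberate simplification, and the rates and outcome values are illustrative assumptions:

```python
def opponent_update(g, n, delta, alpha=0.1, decay=0.01):
    """One plasticity step for the two opposing pathway weights: positive
    prediction errors strengthen the payoff weight g, negative ones the cost
    weight n, and both weights decay weakly over trials."""
    if delta > 0:
        g += alpha * delta
    else:
        n += alpha * (-delta)
    return g * (1 - decay), n * (1 - decay)

# An action whose cost (0.5) and payoff (2.0) arrive at different moments:
g, n = 0.0, 0.0
for _ in range(500):
    for outcome in (-0.5, 2.0):            # cost first, payoff later
        g, n = opponent_update(g, n, outcome)   # simplistic PE = raw outcome
```

After convergence the ratio g/n approaches the payoff-to-cost ratio (2.0/0.5 = 4 in this toy setting), showing how temporally separated positive and negative consequences can end up stored in separate pathway weights.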


Subject(s)
Avoidance Learning/physiology; Choice Behavior/physiology; Computational Biology/methods; Animals; Basal Ganglia/physiology; Computer Simulation; Dopamine/metabolism; Humans; Learning; Models, Neurological; Motivation; Neural Pathways; Reinforcement, Psychology; Reward
16.
PLoS Comput Biol ; 15(8): e1006575, 2019 08.
Article in English | MEDLINE | ID: mdl-31393880

ABSTRACT

Deep brain stimulation (DBS) is known to be an effective treatment for a variety of neurological disorders, including Parkinson's disease and essential tremor (ET). At present, it involves administering a train of pulses with constant frequency via electrodes implanted into the brain. New 'closed-loop' approaches involve delivering stimulation according to the ongoing symptoms or brain activity and have the potential to provide improvements in terms of efficiency, efficacy and reduction of side effects. The success of closed-loop DBS depends on being able to devise a stimulation strategy that minimizes oscillations in neural activity associated with symptoms of motor disorders. A useful stepping stone towards this is to construct a mathematical model, which can describe how the brain oscillations should change when stimulation is applied at a particular state of the system. Our work focuses on the use of coupled oscillators to represent neurons in areas generating pathological oscillations. Using a reduced form of the Kuramoto model, we analyse how a patient should respond to stimulation when neural oscillations have a given phase and amplitude, provided a number of conditions are satisfied. For such patients, we predict that the best stimulation strategy should be phase specific but also that stimulation should have a greater effect if applied when the amplitude of brain oscillations is lower. We compare this surprising prediction with data obtained from ET patients. In light of our predictions, we also propose a new hybrid strategy which effectively combines two of the closed-loop strategies found in the literature, namely phase-locked and adaptive DBS.


Subject(s)
Deep Brain Stimulation; Models, Neurological; Brain/physiopathology; Computational Biology; Deep Brain Stimulation/methods; Deep Brain Stimulation/statistics & numerical data; Essential Tremor/physiopathology; Essential Tremor/therapy; Humans; Neurons/physiology; Parkinson Disease/physiopathology; Parkinson Disease/therapy
17.
J Cogn Neurosci ; 30(6): 876-884, 2018 06.
Article in English | MEDLINE | ID: mdl-29488846

ABSTRACT

During a decision process, the evidence supporting alternative options is integrated over time, and the choice is made when the accumulated evidence for one of the options reaches a decision threshold. Humans and animals can control the decision threshold, that is, the amount of evidence that needs to be gathered to commit to a choice, and it has been proposed that the subthalamic nucleus (STN) is important for this control. Recent behavioral and neurophysiological data suggest that, in some circumstances, the decision threshold decreases with time during choice trials, helping to overcome indecision during difficult choices. Here we asked whether this within-trial decrease of the decision threshold is mediated by the STN and whether it is affected by disrupting information processing in the STN through deep brain stimulation (DBS). We assessed 13 patients with Parkinson disease receiving bilateral STN DBS six or more months after surgery, 11 age-matched controls, and 12 young healthy controls. All participants completed a series of decision trials in which the evidence was presented at discrete time points, allowing a more direct estimation of the decision threshold. The participants differed widely in the slope of their decision threshold, ranging from a constant threshold within a trial to a steeply decreasing one. However, the slope of the decision threshold did not depend on whether STN DBS was switched on or off and did not differ between the patients and controls. Furthermore, there was no difference in accuracy and RT between the patients in the on and off stimulation conditions and healthy controls. Previous studies that reported modulation of the decision threshold by STN DBS or unilateral subthalamotomy in Parkinson disease involved fast decision-making under conflict, time pressure, or the anticipation of high reward. Our findings suggest that, in the absence of reward, decision conflict, or time pressure for decision-making, the STN does not play a critical role in modulating the within-trial decrease of decision thresholds during the choice process.
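The distinction between a constant and a collapsing decision threshold can be sketched numerically (this is an illustrative toy, not the authors' fitted model): evidence arrives in discrete samples, and the accumulator commits when it crosses a symmetric bound that either stays fixed or shrinks linearly with time. Drift, bound, slope, and floor values below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(drift, slope, a0=4.0, max_steps=50):
    """Accumulate discrete evidence samples until a (possibly collapsing)
    symmetric threshold is crossed; returns (choice, samples used)."""
    x = 0.0
    for t in range(1, max_steps + 1):
        x += rng.normal(drift, 1.0)          # one discrete evidence sample
        bound = max(a0 - slope * t, 0.5)     # threshold may decrease with time
        if abs(x) >= bound:
            return (1 if x > 0 else -1), t
    return (1 if x >= 0 else -1), max_steps  # forced guess at the deadline

def summary(drift, slope, n=3000):
    outcomes = [simulate_trial(drift, slope) for _ in range(n)]
    accuracy = np.mean([c == 1 for c, _ in outcomes])   # drift > 0, so +1 is correct
    mean_rt = np.mean([t for _, t in outcomes])
    return accuracy, mean_rt

acc_const, rt_const = summary(drift=0.1, slope=0.0)    # constant threshold
acc_coll,  rt_coll  = summary(drift=0.1, slope=0.15)   # collapsing threshold
print(f"constant:   accuracy={acc_const:.3f}, mean samples={rt_const:.1f}")
print(f"collapsing: accuracy={acc_coll:.3f}, mean samples={rt_coll:.1f}")
```

On this difficult (weak-drift) condition the collapsing threshold resolves indecision with fewer samples, which is the behavioral signature the slope estimates in the study are meant to capture.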


Subject(s)
Decision Making/physiology , Subthalamic Nucleus/physiology , Adult , Aged , Conflict, Psychological , Deep Brain Stimulation , Female , Humans , Male , Middle Aged , Models, Neurological , Neuropsychological Tests , Parkinson Disease/psychology , Parkinson Disease/therapy , Reward
18.
Neural Comput ; 29(2): 368-393, 2017 02.
Article in English | MEDLINE | ID: mdl-27870610

ABSTRACT

Much experimental evidence suggests that, during decision making, neural circuits accumulate evidence supporting alternative options. A computational model that describes this accumulation well for choices between two options assumes that the brain integrates the log ratios of the likelihoods of the sensory inputs given the two options. Several models have been proposed for how neural circuits can learn these log-likelihood ratios from experience, but all of them introduced novel and specially dedicated synaptic plasticity rules. Here we show that for a certain wide class of tasks, the log-likelihood ratios are approximately linearly proportional to the expected rewards for selecting actions. Therefore, a simple model based on standard reinforcement learning rules can estimate the log-likelihood ratios from experience and, on each trial, accumulate the log-likelihood ratios associated with the presented stimuli while selecting an action. Simulations of the model replicate experimental data on both behavior and neural activity in tasks requiring accumulation of probabilistic cues. Our results suggest that there is no need for the brain to support dedicated plasticity rules, as the standard mechanisms proposed to describe reinforcement learning can enable neural circuits to perform efficient probabilistic inference.
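The core idea can be sketched in a few lines (an illustrative toy, not the paper's exact model or task): a standard delta-rule learner estimates action values for single probabilistic cues, and the learned value contrasts turn out to be ordered and signed like the cues' log-likelihood ratios, so summing them across cues on a multi-cue trial approximates accumulating evidence. The cue probabilities, learning rate, and exploration rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Four cue types; likelihood of observing each cue under the two hidden states.
p_cue_s1 = np.array([0.4, 0.3, 0.2, 0.1])
p_cue_s0 = p_cue_s1[::-1]
llr = np.log(p_cue_s1 / p_cue_s0)        # log-likelihood ratio carried by each cue

# Standard delta-rule learning of action values on single-cue trials.
q = np.zeros((4, 2))                     # q[cue, action]
alpha, eps = 0.02, 0.2
for _ in range(20000):
    s = int(rng.integers(2))                             # hidden state
    c = rng.choice(4, p=(p_cue_s1 if s == 1 else p_cue_s0))
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q[c]))
    r = 1.0 if a == s else 0.0                           # reward for a correct guess
    q[c, a] += alpha * (r - q[c, a])                     # standard RL update

q_diff = q[:, 1] - q[:, 0]               # learned value contrast per cue
print("log-likelihood ratios:", np.round(llr, 3))
print("Q-value contrasts:    ", np.round(q_diff, 3))

# Multi-cue trials: summing the learned contrasts over the presented cues
# approximates accumulating their log-likelihood ratios.
n_test, correct = 5000, 0
for _ in range(n_test):
    s = int(rng.integers(2))
    cues = rng.choice(4, size=3, p=(p_cue_s1 if s == 1 else p_cue_s0))
    a = int(q_diff[cues].sum() > 0)
    correct += int(a == s)
acc = correct / n_test
print(f"accuracy after accumulating 3 cues: {acc:.3f}")
```

No dedicated plasticity rule appears anywhere here: the only learning is the ordinary reward-prediction update, yet the resulting values support cue accumulation.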

19.
Neural Comput ; 29(5): 1229-1262, 2017 05.
Article in English | MEDLINE | ID: mdl-28333583

ABSTRACT

To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output.
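A minimal sketch of this idea (illustrative equations and parameters, not necessarily the study's exact formulation): a two-weight-layer network relaxes its hidden value nodes with input and output clamped, then updates each weight matrix with a purely local Hebbian product of presynaptic activity and postsynaptic prediction error. That alone suffices to fit a small supervised task generated by a random teacher network.

```python
import numpy as np

rng = np.random.default_rng(3)
f = np.tanh
df = lambda u: 1.0 - np.tanh(u) ** 2

# A random "teacher" network generates a realizable supervised task.
n0, n1, n2 = 3, 8, 2
T1 = rng.normal(0, 1.0, (n1, n0))
T2 = rng.normal(0, 0.3, (n2, n1))
X = rng.normal(0, 1.0, (20, n0))
Y = np.array([T2 @ f(T1 @ f(x)) for x in X])

# Student predictive-coding network, same architecture, random weights.
W1 = rng.normal(0, 0.3, (n1, n0))
W2 = rng.normal(0, 0.3, (n2, n1))

def loss(W1, W2):
    pred = np.array([W2 @ f(W1 @ f(x)) for x in X])
    return np.mean((pred - Y) ** 2)

def pc_step(x0, t, W1, W2, lr_w=0.05, lr_x=0.1, n_relax=50):
    # Clamp input and output, let the hidden value nodes relax toward
    # equilibrium, then apply purely local Hebbian weight updates.
    x1 = W1 @ f(x0)                  # initialize hidden at its prediction
    for _ in range(n_relax):
        e1 = x1 - W1 @ f(x0)         # prediction error below the hidden layer
        e2 = t - W2 @ f(x1)          # prediction error at the clamped output
        x1 += lr_x * (-e1 + df(x1) * (W2.T @ e2))
    e1 = x1 - W1 @ f(x0)
    e2 = t - W2 @ f(x1)
    W1 += lr_w * np.outer(e1, f(x0))   # local: postsynaptic error x presynaptic activity
    W2 += lr_w * np.outer(e2, f(x1))

loss_before = loss(W1, W2)
for epoch in range(300):
    for i in rng.permutation(len(X)):
        pc_step(X[i], Y[i], W1, W2)
loss_after = loss(W1, W2)
print(f"MSE before: {loss_before:.4f}, after: {loss_after:.4f}")
```

Each weight change depends only on the activity of the two neurons the synapse connects, yet errors at the output still shape the first-layer weights, because relaxation propagates the output error into the hidden-layer error nodes.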


Subject(s)
Algorithms , Cerebral Cortex/cytology , Feedback, Physiological/physiology , Models, Neurological , Neuronal Plasticity/physiology , Neurons/physiology , Humans , Learning , Neural Networks, Computer
20.
PLoS Comput Biol ; 12(9): e1005062, 2016 09.
Article in English | MEDLINE | ID: mdl-27589489

ABSTRACT

Learning the reliability of different sources of rewards is critical for making optimal choices. However, despite the existence of detailed theory describing how the expected reward is learned in the basal ganglia, it is not known how reward uncertainty is estimated in these circuits. This paper presents a class of models that encode both the mean reward and the spread of the rewards, the former in the difference between the synaptic weights of D1 and D2 neurons, and the latter in their sum. In the models, the tendency to seek (or avoid) options with variable reward can be controlled by increasing (or decreasing) the tonic level of dopamine. The models are consistent with the physiology of and synaptic plasticity in the basal ganglia, they explain the effects of dopaminergic manipulations on choices involving risks, and they make multiple experimental predictions.
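One simple member of such a class can be sketched as follows (the update rules, decay term, and all parameters here are illustrative assumptions, not necessarily the paper's exact model): positive reward prediction errors potentiate "Go" (D1) weights, negative errors potentiate "No-Go" (D2) weights, and a weak decay balances them, so that G - N tracks the mean reward while G + N tracks its spread. A tonic-dopamine parameter then weights Go against No-Go at choice time, tilting the agent toward or away from the variable option.

```python
import numpy as np

rng = np.random.default_rng(4)

alpha, gamma = 0.05, 0.1
# Striatal weights for two options with equal mean reward but different spread.
G = np.zeros(2)   # "Go" (D1) weights
N = np.zeros(2)   # "No-Go" (D2) weights
means, sds = np.array([1.0, 1.0]), np.array([0.1, 0.5])

for _ in range(20000):
    c = int(rng.integers(2))                  # sample each option at random
    r = rng.normal(means[c], sds[c])
    delta = r - (G[c] - N[c])                 # reward prediction error
    G[c] += alpha * (max(delta, 0.0) - gamma * G[c])    # D1: positive errors potentiate
    N[c] += alpha * (max(-delta, 0.0) - gamma * N[c])   # D2: negative errors potentiate

print("G - N (mean estimate):  ", np.round(G - N, 3))
print("G + N (spread estimate):", np.round(G + N, 3))

for dopamine in (+0.3, -0.3):   # tonic dopamine biases Go against No-Go
    value = (1 + dopamine) * G - (1 - dopamine) * N
    pick = int(np.argmax(value))
    print(f"tonic dopamine {dopamine:+.1f}: prefers option {pick} "
          f"({'risky' if pick == 1 else 'safe'})")
```

Both options end up with similar G - N (same mean reward), but the variable option accumulates a much larger G + N, so raising the tonic-dopamine parameter makes the risky option win the comparison and lowering it makes the safe option win.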


Subject(s)
Basal Ganglia/physiology , Choice Behavior/physiology , Learning/physiology , Computational Biology , Humans , Models, Neurological , Reward , Uncertainty