Results 1 - 20 of 54

1.
PLoS Biol ; 21(7): e3002201, 2023 07.
Article in English | MEDLINE | ID: mdl-37459394

ABSTRACT

When observing the outcome of a choice, people are sensitive to the choice's context, such that the experienced value of an option depends on the alternatives: getting $1 when the possibilities were 0 or 1 feels much better than when the possibilities were 1 or 10. Context-sensitive valuation has been documented within reinforcement learning (RL) tasks, in which values are learned from experience through trial and error. Range adaptation, wherein options are rescaled according to the range of values yielded by available options, has been proposed to account for this phenomenon. However, we propose that other mechanisms, reflecting a different theoretical viewpoint, may also explain this phenomenon. Specifically, we theorize that internally defined goals play a crucial role in shaping the subjective value attributed to any given option. Motivated by this theory, we develop a new "intrinsically enhanced" RL model, which combines extrinsically provided rewards with internally generated signals of goal achievement as a teaching signal. Across 7 different studies (including previously published data sets as well as a novel, preregistered experiment with replication and control studies), we show that the intrinsically enhanced model can explain context-sensitive valuation as well as, or better than, range adaptation. Our findings indicate a more prominent role of intrinsic, goal-dependent rewards than previously recognized within formal models of human RL. By integrating internally generated signals of reward, standard RL theories should better account for human behavior, including context-sensitive valuation and beyond.
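The range-adaptation account that this abstract argues against can be sketched with a toy delta-rule simulation (illustrative only, not the paper's model; all values hypothetical): rescaling each reward by its context's outcome range makes the same $1 outcome feel near-best in a {$0, $1} context and near-worst in a {$1, $10} context.

```python
# Minimal sketch of range-adapted value learning (illustrative toy, not the
# paper's model). The obtained reward is rescaled by the range of outcomes
# available in that context before the Q-value update:
#     r_norm = (r - r_min) / (r_max - r_min)

def range_adapted_values(contexts, alpha=0.2, n_trials=100):
    """Learn a value for the $1 option in each context via a delta rule
    on range-normalized rewards."""
    q = {name: 0.0 for name in contexts}
    for _ in range(n_trials):
        for name, (r_min, r_max) in contexts.items():
            r = 1.0                                  # the option always pays $1
            r_norm = (r - r_min) / (r_max - r_min)   # context rescaling
            q[name] += alpha * (r_norm - q[name])    # delta-rule update
    return q

# Same objective outcome ($1), different contexts.
q = range_adapted_values({"low": (0.0, 1.0),     # alternatives: $0 or $1
                          "high": (1.0, 10.0)})  # alternatives: $1 or $10
print(q)  # $1 is valued near the top of "low" and near the bottom of "high"
```

The intrinsically enhanced model discussed in the abstract would instead mix the raw reward with a goal-achievement bonus rather than normalizing by the context range.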


Subject(s)
Reinforcement, Psychology; Reward; Humans; Learning; Motivation
2.
Nat Rev Neurosci ; 21(10): 576-586, 2020 10.
Article in English | MEDLINE | ID: mdl-32873936

ABSTRACT

Reinforcement learning (RL) is a framework of particular importance to psychology, neuroscience and machine learning. Interactions between these fields, as promoted through the common hub of RL, have facilitated paradigm shifts that relate multiple levels of analysis in a singular framework (for example, relating dopamine function to a computationally defined RL signal). Recently, more sophisticated RL algorithms have been proposed to better account for human learning, and in particular its oft-documented reliance on two separable systems: a model-based (MB) system and a model-free (MF) system. However, along with many benefits, this dichotomous lens can distort questions, and may contribute to an unnecessarily narrow perspective on learning and decision-making. Here, we outline some of the consequences that come from overconfidently mapping algorithms, such as MB versus MF RL, onto putative cognitive processes. We argue that the field is well positioned to move beyond simplistic dichotomies, and we propose a means of refocusing research questions towards the rich and complex components that comprise learning and decision-making.


Subject(s)
Brain/physiology; Decision Making/physiology; Models, Neurological; Reinforcement, Psychology; Algorithms; Animals; Dopamine/physiology; Humans; Memory/physiology; Reward
3.
PLoS Comput Biol ; 20(5): e1012119, 2024 May.
Article in English | MEDLINE | ID: mdl-38748770

ABSTRACT

Computational cognitive models have been used extensively to formalize cognitive processes. Model parameters offer a simple way to quantify individual differences in how humans process information. Similarly, model comparison allows researchers to identify which theories, embedded in different models, provide the best accounts of the data. Cognitive modeling uses statistical tools to quantitatively relate models to data; these tools often rely on computing or estimating the likelihood of the data under the model. However, this likelihood is computationally intractable for a substantial number of models. These relevant models may embody reasonable theories of cognition, but are often under-explored due to the limited range of tools available to relate them to data. We contribute to filling this gap in a simple way using artificial neural networks (ANNs) to map data directly onto model identity and parameters, bypassing the likelihood estimation. We test our instantiation of an ANN as a cognitive model fitting tool on classes of cognitive models with strong inter-trial dependencies (such as reinforcement learning models), which offer unique challenges to most methods. We show that we can adequately perform both parameter estimation and model identification using our ANN approach, including for models that cannot be fit using traditional likelihood-based methods. We further discuss our work in the context of the ongoing research leveraging simulation-based approaches to parameter estimation and model identification, and how these approaches broaden the class of cognitive models researchers can quantitatively investigate.
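The general recipe behind such likelihood-free estimation can be sketched as follows (a toy stand-in: a linear regressor replaces the paper's ANN, and the summary features and task settings are invented for illustration; none of this is the authors' code). Agents with known parameters are simulated, behavioral summaries are extracted, and a regressor is fit to map summaries back to parameters, bypassing the likelihood entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(alpha, n_trials=200):
    """Q-learning agent on a two-armed bandit (reward prob. 0.8 vs 0.2);
    returns the sequence of choices. alpha is the learning rate."""
    q = np.zeros(2)
    p_reward = np.array([0.8, 0.2])
    choices = np.empty(n_trials, dtype=int)
    for t in range(n_trials):
        logits = 5.0 * q                          # fixed inverse temperature
        pr = np.exp(logits - logits.max())
        pr /= pr.sum()
        c = rng.choice(2, p=pr)                   # softmax choice
        r = float(rng.random() < p_reward[c])
        q[c] += alpha * (r - q[c])                # RL value update
        choices[t] = c
    return choices

def features(choices):
    """Cheap behavioral summaries: early accuracy, late accuracy,
    switch rate, plus a bias term."""
    correct = (choices == 0).astype(float)        # arm 0 is the better arm
    half = len(choices) // 2
    switch_rate = np.mean(choices[1:] != choices[:-1])
    return np.array([correct[:half].mean(), correct[half:].mean(),
                     switch_rate, 1.0])

# Simulate a training set with known learning rates, then fit a linear map
# from features to alpha (a stand-in for the paper's neural network).
alphas = rng.uniform(0.05, 0.95, 400)
X = np.stack([features(simulate(a)) for a in alphas])
w, *_ = np.linalg.lstsq(X[:300], alphas[:300], rcond=None)

pred = X[300:] @ w                                # recover held-out parameters
mae = np.mean(np.abs(pred - alphas[300:]))
baseline = np.mean(np.abs(alphas[300:] - alphas[:300].mean()))
print(mae, baseline)
```

The point of the sketch is only the pipeline shape (simulate with known parameters, summarize, regress); the paper's approach feeds trial-level data to a trained ANN rather than hand-picked summaries.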


Subject(s)
Cognition; Computational Biology; Computer Simulation; Neural Networks, Computer; Humans; Cognition/physiology; Computational Biology/methods; Likelihood Functions; Algorithms; Models, Neurological
4.
J Neurosci ; 43(17): 3131-3143, 2023 04 26.
Article in English | MEDLINE | ID: mdl-36931706

ABSTRACT

Human learning and decision-making are supported by multiple systems operating in parallel. Recent studies isolating the contributions of reinforcement learning (RL) and working memory (WM) have revealed a trade-off between the two. An interactive WM/RL computational model predicts that although high WM load slows behavioral acquisition, it also induces larger prediction errors in the RL system that enhance robustness and retention of learned behaviors. Here, we tested this account by parametrically manipulating WM load during RL in conjunction with EEG in both male and female participants and administered two surprise memory tests. We further leveraged single-trial decoding of EEG signatures of RL and WM to determine whether their interaction predicted robust retention. Consistent with the model, behavioral learning was slower for associations acquired under higher load but showed parametrically improved future retention. This paradoxical result was mirrored by EEG indices of RL, which were strengthened under higher WM loads and predictive of more robust future behavioral retention of learned stimulus-response contingencies. We further tested whether stress alters the ability to shift between the two systems strategically to maximize immediate learning versus retention of information and found that induced stress had only a limited effect on this trade-off. The present results offer a deeper understanding of the cooperative interaction between WM and RL and show that relying on WM can benefit the rapid acquisition of choice behavior during learning but impairs retention.

SIGNIFICANCE STATEMENT Successful learning is achieved by the joint contribution of the dopaminergic RL system and WM. The cooperative WM/RL model was productive in improving our understanding of the interplay between the two systems during learning, demonstrating that reliance on RL computations is modulated by WM load. However, the role of WM/RL systems in the retention of learned stimulus-response associations remained unestablished. Our results show that increased neural signatures of learning, indicative of greater RL computation, under high WM load also predicted better stimulus-response retention. This result supports a trade-off between the two systems, where degraded WM increases RL processing, which improves retention. Notably, we show that this cooperative interplay remains largely unaffected by acute stress.


Subject(s)
Learning; Memory, Short-Term; Male; Humans; Female; Memory, Short-Term/physiology; Learning/physiology; Reinforcement, Psychology; Choice Behavior; Cognition
5.
Cogn Affect Behav Neurosci ; 23(5): 1346-1364, 2023 10.
Article in English | MEDLINE | ID: mdl-37656373

ABSTRACT

How does the similarity between stimuli affect our ability to learn appropriate response associations for them? In typical laboratory experiments, learning is investigated under somewhat ideal circumstances, where stimuli are easily discriminable. This is not representative of most real-life learning, where overlapping "stimuli" can result in different "rewards" and may be learned simultaneously (e.g., you may learn over repeated interactions that a specific dog is friendly, but that a very similar looking one isn't). With two experiments, we test how humans learn in three stimulus conditions: one "best case" condition in which stimuli have idealized and highly discriminable visual and semantic representations, and two in which stimuli have overlapping representations, making them less discriminable. We find that, unsurprisingly, decreasing stimulus discriminability decreases performance. We develop computational models to test different hypotheses about how reinforcement learning (RL) and working memory (WM) processes are affected by different stimulus conditions. Our results replicate earlier studies demonstrating the importance of both processes to capture behavior. However, our results extend previous studies by demonstrating that RL, and not WM, is affected by stimulus distinctness: people learn slower and have higher across-stimulus value confusion at decision when stimuli are more similar to each other. These results illustrate strong effects of stimulus type on learning and demonstrate the importance of considering parallel contributions of different cognitive processes when studying behavior.


Subject(s)
Learning; Reinforcement, Psychology; Humans; Animals; Dogs; Learning/physiology; Reward; Memory
6.
Proc Natl Acad Sci U S A ; 117(47): 29381-29389, 2020 11 24.
Article in English | MEDLINE | ID: mdl-33229518

ABSTRACT

Humans have the fascinating ability to achieve goals in a complex and constantly changing world, still surpassing modern machine-learning algorithms in terms of flexibility and learning speed. It is generally accepted that a crucial factor for this ability is the use of abstract, hierarchical representations, which employ structure in the environment to guide learning and decision making. Nevertheless, how we create and use these hierarchical representations is poorly understood. This study presents evidence that human behavior can be characterized as hierarchical reinforcement learning (RL). We designed an experiment to test specific predictions of hierarchical RL using a series of subtasks in the realm of context-based learning and observed several behavioral markers of hierarchical RL, such as asymmetric switch costs between changes in higher-level versus lower-level features, faster learning in higher-valued compared to lower-valued contexts, and preference for higher-valued compared to lower-valued contexts. We replicated these results across three independent samples. We simulated three models, a classic RL, a hierarchical RL, and a hierarchical Bayesian model, and compared their behavior to human results. While the flat RL model captured some aspects of participants' sensitivity to outcome values, and the hierarchical Bayesian model captured some markers of transfer, only hierarchical RL accounted for all patterns observed in human behavior. This work shows that hierarchical RL, a biologically inspired and computationally simple algorithm, can capture human behavior in complex, hierarchical environments and opens avenues for future research in this field.


Subject(s)
Machine Learning; Models, Psychological; Reinforcement, Psychology; Adolescent; Adult; Bayes Theorem; Female; Humans; Learning Curve; Male; Young Adult
7.
J Cogn Neurosci ; : 1-17, 2022 Nov 28.
Article in English | MEDLINE | ID: mdl-36473098

ABSTRACT

In reinforcement learning (RL) experiments, participants learn to make rewarding choices in response to different stimuli; RL models use outcomes to estimate stimulus-response values that change incrementally. RL models consider any response type indiscriminately, ranging from more concretely defined motor choices (pressing a key with the index finger), to more general choices that can be executed in a number of ways (selecting dinner at the restaurant). However, does the learning process vary as a function of the choice type? In Experiment 1, we show that it does: Participants were slower and less accurate in learning correct choices of a general format compared with learning more concrete motor actions. Using computational modeling, we show that two mechanisms contribute to this. First, there was evidence of irrelevant credit assignment: The values of motor actions interfered with the values of other choice dimensions, resulting in more incorrect choices when the correct response was not defined by a single motor action; second, information integration for relevant general choices was slower. In Experiment 2, we replicated and further extended the findings from Experiment 1 by showing that slowed learning was attributable to weaker working memory use, rather than slowed RL. In both experiments, we ruled out the explanation that the difference in performance between the two condition types was driven by differences in difficulty or complexity. We conclude that defining a more abstract choice space used by multiple learning systems for credit assignment recruits executive resources, limiting how much such processes then contribute to fast learning.

8.
J Cogn Neurosci ; 34(4): 551-568, 2022 03 05.
Article in English | MEDLINE | ID: mdl-34942642

ABSTRACT

Reinforcement learning and working memory are two core processes of human cognition and are often considered cognitively, neuroscientifically, and algorithmically distinct. Here, we show that the brain networks that support them actually overlap significantly and that they are less distinct cognitive processes than often assumed. We review literature demonstrating the benefits of considering each process to explain properties of the other and highlight recent work investigating their more complex interactions. We discuss how future research in both computational and cognitive sciences can benefit from one another, suggesting that a key missing piece for artificial agents to learn to behave with more human-like efficiency is taking working memory's role in learning seriously. This review highlights the risks of neglecting the interplay between different processes when studying human behavior (in particular when considering individual differences). We emphasize the importance of investigating these dynamics to build a comprehensive understanding of human cognition.


Subject(s)
Memory, Short-Term; Reinforcement, Psychology; Brain; Cognition; Humans; Learning
9.
Nature ; 600(7889): 387-388, 2021 12.
Article in English | MEDLINE | ID: mdl-34789883
10.
Cereb Cortex ; 32(1): 231-247, 2021 11 23.
Article in English | MEDLINE | ID: mdl-34231854

ABSTRACT

People often learn from the outcomes of their actions, even when these outcomes do not involve material rewards or punishments. How does our brain provide this flexibility? We combined behavior, computational modeling, and functional neuroimaging to probe whether learning from abstract novel outcomes harnesses the same circuitry that supports learning from familiar secondary reinforcers. Behavior and neuroimaging revealed that novel images can act as a substitute for rewards during instrumental learning, producing reliable reward-like signals in dopaminergic circuits. Moreover, we found evidence that prefrontal correlates of executive control may play a role in shaping flexible responses in reward circuits. These results suggest that learning from novel outcomes is supported by an interplay between high-level representations in prefrontal cortex and low-level responses in subcortical reward circuits. This interaction may allow for human reinforcement learning over arbitrarily abstract reward functions.


Subject(s)
Executive Function; Goals; Humans; Motivation; Prefrontal Cortex/diagnostic imaging; Prefrontal Cortex/physiology; Reinforcement, Psychology; Reward
11.
Proc Natl Acad Sci U S A ; 115(10): 2502-2507, 2018 03 06.
Article in English | MEDLINE | ID: mdl-29463751

ABSTRACT

Learning from rewards and punishments is essential to survival and facilitates flexible human behavior. It is widely appreciated that multiple cognitive and reinforcement learning systems contribute to decision-making, but the nature of their interactions is elusive. Here, we leverage methods for extracting trial-by-trial indices of reinforcement learning (RL) and working memory (WM) in human electroencephalography to reveal single-trial computations beyond that afforded by behavior alone. Neural dynamics confirmed that increases in neural expectation were predictive of reduced neural surprise in the following feedback period, supporting central tenets of RL models. Within- and cross-trial dynamics revealed a cooperative interplay between systems for learning, in which WM contributes expectations to guide RL, despite competition between systems during choice. Together, these results provide a deeper understanding of how multiple neural systems interact for learning and decision-making and facilitate analysis of their disruption in clinical populations.


Subject(s)
Electroencephalography; Learning/physiology; Memory, Short-Term/physiology; Models, Neurological; Reinforcement, Psychology; Adolescent; Adult; Algorithms; Computer Simulation; Female; Humans; Male; Reward; Young Adult
12.
J Neurosci ; 39(8): 1471-1483, 2019 02 20.
Article in English | MEDLINE | ID: mdl-30578340

ABSTRACT

An essential human skill is our capacity to monitor and execute a sequence of tasks in the service of an overarching goal. Such a sequence can be as mundane as making a cup of coffee or as complex as flying a fighter plane. Previously, we showed that, during sequential control, the rostrolateral prefrontal cortex (RLPFC) exhibits activation that ramps steadily through the sequence and is necessary for sequential task execution using fMRI in humans (Desrochers et al., 2015). It remains unknown what computations may underlie this ramping dynamic. Across two independent fMRI experiments, we manipulated three features that were unique to the sequential control task to determine whether and how they modulated ramping activity in the RLPFC: (1) sequence position uncertainty, (2) sequential monitoring without external position cues (i.e., from memory), and (3) sequential monitoring without multilevel decision making (i.e., task execution). We replicated the ramping activation in RLPFC and found it to be remarkably robust regardless of the level of task abstraction or engagement of memory functions. Therefore, these results both replicate and extend previous findings regarding the function of the RLPFC. They suggest that sequential control processes are integral to the dynamics of RLPFC activity. Advancing knowledge of the neural bases of sequential control is crucial for our understanding of the sequential processes that are necessary for daily living.

SIGNIFICANCE STATEMENT We perform sequences of tasks every day, but little is known about how they are controlled in the brain. Previously we found that ramping activity in the rostrolateral prefrontal cortex (RLPFC) was necessary to perform a sequence of tasks. We designed two independent fMRI experiments in human participants to determine which features of the previous sequential task potentially engaged ramping in the RLPFC. We found that any demand to monitor a sequence of state transitions consistently elicited ramping in the RLPFC, regardless of the level of the decisions made at each step in the sequence or engagement of memory functions. These results provide a framework for understanding RLPFC function during sequential control, and consequently, daily life.


Subject(s)
Goals; Prefrontal Cortex/physiology; Psychomotor Performance/physiology; Brain Mapping; Color Perception; Female; Form Perception; Humans; Magnetic Resonance Imaging; Male; Memory/physiology; Models, Neurological; Models, Psychological; Prefrontal Cortex/diagnostic imaging; Reaction Time/physiology; Transcranial Magnetic Stimulation; Young Adult
13.
Cereb Cortex ; 29(5): 1969-1983, 2019 05 01.
Article in English | MEDLINE | ID: mdl-29912363

ABSTRACT

Why are we so slow in choosing the lesser of two evils? We considered whether such slowing relates to uncertainty about the value of these options, which arises from the tendency to avoid them during learning, and whether such slowing relates to frontosubthalamic inhibitory control mechanisms. In total, 49 participants performed a reinforcement-learning task and a stop-signal task while fMRI was recorded. A reinforcement-learning model was used to quantify learning strategies. Individual differences in lose-lose slowing related to information uncertainty due to sampling, and independently, to less efficient response inhibition in the stop-signal task. Neuroimaging analysis revealed an analogous dissociation: subthalamic nucleus (STN) BOLD activity related to variability in stopping latencies, whereas weaker frontosubthalamic connectivity related to slowing and information sampling. Across tasks, fast inhibitors increased STN activity for successfully canceled responses in the stop task, but decreased activity for lose-lose choices. These data support the notion that fronto-STN communication implements a rapid but transient brake on response execution, and that slowing due to decision uncertainty could result from an inefficient release of this "hold your horses" mechanism.


Subject(s)
Basal Ganglia/physiology; Conflict, Psychological; Decision Making/physiology; Frontal Lobe/physiology; Inhibition, Psychological; Reinforcement, Psychology; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Neural Pathways/physiology; Psychomotor Performance; Reaction Time; Subthalamic Nucleus/physiology; Uncertainty; Young Adult
14.
J Neurosci ; 37(16): 4332-4342, 2017 04 19.
Article in English | MEDLINE | ID: mdl-28320846

ABSTRACT

Reinforcement learning (RL) in simple instrumental tasks is usually modeled as a monolithic process in which reward prediction errors (RPEs) are used to update expected values of choice options. This modeling ignores the different contributions of different memory and decision-making systems thought to contribute even to simple learning. In an fMRI experiment, we investigated how working memory (WM) and incremental RL processes interact to guide human learning. WM load was manipulated by varying the number of stimuli to be learned across blocks. Behavioral results and computational modeling confirmed that learning was best explained as a mixture of two mechanisms: a fast, capacity-limited, and delay-sensitive WM process together with slower RL. Model-based analysis of fMRI data showed that striatum and lateral prefrontal cortex were sensitive to RPE, as shown previously, but, critically, these signals were reduced when the learning problem was within capacity of WM. The degree of this neural interaction related to individual differences in the use of WM to guide behavioral learning. These results indicate that the two systems do not process information independently, but rather interact during learning.

SIGNIFICANCE STATEMENT Reinforcement learning (RL) theory has been remarkably productive at improving our understanding of instrumental learning as well as dopaminergic and striatal network function across many mammalian species. However, this neural network is only one contributor to human learning and other mechanisms such as prefrontal cortex working memory also play a key role. Our results also show that these other players interact with the dopaminergic RL system, interfering with its key computation of reward prediction errors.


Subject(s)
Memory, Short-Term; Reward; Adolescent; Adult; Anticipation, Psychological; Corpus Striatum/physiology; Female; Frontal Lobe/physiology; Humans; Male; Reaction Time
15.
J Cogn Neurosci ; 30(10): 1422-1432, 2018 10.
Article in English | MEDLINE | ID: mdl-29346018

ABSTRACT

Learning to make rewarding choices in response to stimuli depends on a slow but steady process, reinforcement learning, and a fast and flexible, but capacity-limited process, working memory. Using both systems in parallel, with their contributions weighted based on performance, should allow us to leverage the best of each system: rapid early learning, supplemented by long-term robust acquisition. However, this assumes that using one process does not interfere with the other. We use computational modeling to investigate the interactions between the two processes in a behavioral experiment and show that working memory interferes with reinforcement learning. Previous research showed that neural representations of reward prediction errors, a key marker of reinforcement learning, were blunted when working memory was used for learning. We thus predicted that arbitrating in favor of working memory to learn faster in simple problems would weaken the reinforcement learning process. We tested this by measuring performance in a delayed testing phase where the use of working memory was impossible, and thus participant choices depended on reinforcement learning. Counterintuitively, but confirming our predictions, we observed that associations learned most easily were retained worse than associations learned slower: Using working memory to learn quickly came at the cost of long-term retention. Computational modeling confirmed that this could only be accounted for by working memory interference in reinforcement learning computations. These results further our understanding of how multiple systems contribute in parallel to human learning and may have important applications for education and computational psychiatry.
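The WM/RL interplay described above can be caricatured in a short simulation (illustrative assumptions throughout: a perfect but capacity-limited WM buffer, slow softmax RL, round-robin stimulus presentation, and invented parameter values; this is not the authors' model): when the set of stimuli fits within WM capacity, recall drives fast, accurate choices, while larger sets force reliance on the slower RL process.

```python
import math
import random

random.seed(1)

def run_block(set_size, wm_capacity=3, alpha=0.1, beta=5.0, n_reps=15):
    """Mean accuracy when a perfect but capacity-limited WM buffer is mixed
    with slow softmax RL (3 possible actions, one correct per stimulus)."""
    correct = {s: s % 3 for s in range(set_size)}
    q = {s: [0.0] * 3 for s in range(set_size)}
    wm = {}                       # stimulus -> last rewarded action
    hits = total = 0
    for _ in range(n_reps):
        for s in range(set_size):                 # round-robin presentation
            if s in wm:
                a = wm[s]                         # WM recall (always correct here)
            else:
                weights = [math.exp(beta * v) for v in q[s]]
                a = random.choices(range(3), weights=weights)[0]  # softmax RL
            r = 1.0 if a == correct[s] else 0.0
            q[s][a] += alpha * (r - q[s][a])      # slow incremental RL update
            if r:
                wm[s] = a                         # store rewarded response
                if len(wm) > wm_capacity:
                    wm.pop(next(iter(wm)))        # evict the oldest entry
            hits += r
            total += 1
    return hits / total

acc_small = run_block(set_size=3)   # fits within WM capacity
acc_large = run_block(set_size=6)   # exceeds WM capacity: leans on slow RL
print(acc_small, acc_large)
```

In this caricature the large set learns more slowly during training, mirroring the set-size effect; capturing the abstract's retention reversal would additionally require blunting the RL update on WM-driven trials.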


Subject(s)
Association Learning/physiology; Computer Simulation; Memory, Short-Term/physiology; Reinforcement, Psychology; Adolescent; Adult; Female; Humans; Male; Young Adult
16.
J Cogn Neurosci ; 30(8): 1061-1065, 2018 08.
Article in English | MEDLINE | ID: mdl-28562208

ABSTRACT

Sometime in the past two decades, neuroimaging and behavioral research converged on pFC as an important locus of cognitive control and decision-making, and that seems to be the last thing anyone has agreed on since. Every year sees an increase in the number of roles and functions attributed to distinct subregions within pFC, roles that may explain behavior and neural activity in one context but might fail to generalize across the many behaviors in which each region is implicated. Emblematic of this ongoing proliferation of functions is dorsal ACC (dACC). Novel tasks that activate dACC are followed by novel interpretations of dACC function, and each new interpretation adds to the number of functionally specific processes contained within the region. This state of affairs, a recurrent and persistent behavior followed by an illusory and transient relief, can be likened to behavioral pathology. In Journal of Cognitive Neuroscience, 29:10 we collect contributed articles that seek to move the conversation beyond specific functions of subregions of pFC, focusing instead on general roles that support pFC involvement in a wide variety of behaviors and across a variety of experimental paradigms.


Subject(s)
Decision Making/physiology; Gyrus Cinguli/physiology; Learning/physiology; Prefrontal Cortex/physiology; Humans; Models, Neurological; Neural Pathways/physiology
17.
J Neurosci ; 36(40): 10314-10322, 2016 10 05.
Article in English | MEDLINE | ID: mdl-27707968

ABSTRACT

Recent research indicates that adults and infants spontaneously create and generalize hierarchical rule sets during incidental learning. Computational models and empirical data suggest that, in adults, this process is supported by circuits linking prefrontal cortex (PFC) with striatum and their modulation by dopamine, but the neural circuits supporting this form of learning in infants are largely unknown. We used near-infrared spectroscopy to record PFC activity in 8-month-old human infants during a simple audiovisual hierarchical-rule-learning task. Behavioral results confirmed that infants adopted hierarchical rule sets to learn and generalize spoken object-label mappings across different speaker contexts. Infants had increased activity over right dorsal lateral PFC when rule sets switched from one trial to the next, a neural marker related to updating rule sets into working memory in the adult literature. Infants' eye blink rate, a possible physiological correlate of striatal dopamine activity, also increased when rule sets switched from one trial to the next. Moreover, the increase in right dorsolateral PFC activity in conjunction with eye blink rate also predicted infants' generalization ability, providing exploratory evidence for frontostriatal involvement during learning. These findings provide evidence that PFC is involved in rudimentary hierarchical rule learning in 8-month-old infants, an ability that was previously thought to emerge later in life in concert with PFC maturation.

SIGNIFICANCE STATEMENT: Hierarchical rule learning is a powerful learning mechanism that allows rules to be selected in a context-appropriate fashion and transferred or reused in novel contexts. Data from computational models and adults suggest that this learning mechanism is supported by dopamine-innervated interactions between prefrontal cortex (PFC) and striatum. Here, we provide evidence that PFC also supports hierarchical rule learning during infancy, challenging the current dogma that PFC is an underdeveloped brain system until adolescence. These results add new insights into the neurobiological mechanisms available to support learning and generalization in very early postnatal life, providing evidence that PFC and the frontostriatal circuitry are involved in organizing learning and behavior earlier in life than previously known.


Subject(s)
Generalization, Psychological/physiology; Learning/physiology; Prefrontal Cortex/physiology; Blinking/physiology; Brain Chemistry; Brain Mapping; Female; Frontal Lobe/physiology; Functional Laterality/physiology; Humans; Infant; Male; Neostriatum/physiology; Neural Pathways/physiology; Prefrontal Cortex/chemistry; Psychomotor Performance; Spectroscopy, Near-Infrared
18.
J Cogn Neurosci ; 29(10): 1646-1655, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28358657

ABSTRACT

Human learning is highly efficient and flexible. A key contributor to this learning flexibility is our ability to generalize new information across contexts that we know require the same behavior and to transfer rules to new contexts we encounter. To do this, we structure the information we learn and represent it hierarchically as abstract, context-dependent rules that constrain lower-level stimulus-action-outcome contingencies. Previous research showed that humans create such structure even when it is not needed, presumably because it usually affords long-term generalization benefits. However, computational models predict that creating structure is costly, with slower learning and slower RTs. We tested this prediction in a new behavioral experiment. Participants learned to select correct actions for four visual patterns, in a setting that either afforded (but did not promote) structure learning or enforced nonhierarchical learning, while controlling for the difficulty of the learning problem. Results replicated our previous finding that healthy young adults create structure even when unneeded and that this structure affords later generalization. Furthermore, they supported our prediction that structure learning incurred a major learning cost and that this cost was specifically tied to the effort in selecting abstract rules, leading to more errors when applying those rules. These findings confirm our theory that humans pay a high short-term cost in learning structure to enable longer-term benefits in learning flexibility.


Subject(s)
Learning , Models, Psychological , Analysis of Variance , Humans , Neural Networks, Computer , Psychological Tests , Reaction Time , Visual Perception
19.
J Neurosci ; 34(13): 4677-85, 2014 Mar 26.
Article in English | MEDLINE | ID: mdl-24672013

ABSTRACT

Human cognition is flexible and adaptive, affording the ability to detect and leverage complex structure inherent in the environment and generalize this structure to novel situations. Behavioral studies show that humans impute structure into simple learning problems, even when this tendency affords no behavioral advantage. Here we used electroencephalography to investigate the neural dynamics indicative of such incidental latent structure. Event-related potentials over lateral prefrontal cortex, typically observed for instructed task rules, were stratified according to individual participants' constructed rule sets. Moreover, this individualized latent rule structure could be independently decoded from multielectrode pattern classification. Both neural markers were predictive of participants' ability to subsequently generalize rule structure to new contexts. These EEG dynamics reveal that the human brain spontaneously constructs hierarchically structured representations during learning of simple task rules.


Subject(s)
Brain Mapping , Evoked Potentials/physiology , Learning/physiology , Prefrontal Cortex/physiology , Color Perception , Computer Simulation , Cues , Electroencephalography , Female , Humans , Male , Models, Biological , Pattern Recognition, Visual , Photic Stimulation , Reaction Time/physiology
20.
J Neurosci ; 34(41): 13747-56, 2014 Oct 08.
Article in English | MEDLINE | ID: mdl-25297101

ABSTRACT

Previous research has shown that patients with schizophrenia are impaired in reinforcement learning tasks. However, behavioral learning curves in such tasks originate from the interaction of multiple neural processes, including the basal ganglia- and dopamine-dependent reinforcement learning (RL) system, but also prefrontal cortex-dependent cognitive strategies involving working memory (WM). Thus, it is unclear which specific system induces impairments in schizophrenia. We recently developed a task and computational model allowing us to separately assess the roles of RL (slow, cumulative learning) mechanisms versus WM (fast but capacity-limited) mechanisms in healthy adult human subjects. Here, we used this task to assess patients' specific sources of impairments in learning. In 15 separate blocks, subjects learned to pick one of three actions for stimuli. The number of stimuli to learn in each block varied from two to six, allowing us to separate influences of capacity-limited WM from the incremental RL system. As expected, both patients (n = 49) and healthy controls (n = 36) showed effects of set size and delay between stimulus repetitions, confirming the presence of working memory effects. Patients performed significantly worse than controls overall, but computational model fits and behavioral analyses indicate that these deficits could be entirely accounted for by changes in WM parameters (capacity and reliability), whereas RL processes were spared. These results suggest that the working memory system contributes strongly to learning impairments in schizophrenia.


Subject(s)
Learning Disabilities/psychology , Memory, Short-Term/physiology , Schizophrenic Psychology , Adult , Antipsychotic Agents/administration & dosage , Antipsychotic Agents/therapeutic use , Dose-Response Relationship, Drug , Female , Humans , Learning Curve , Learning Disabilities/etiology , Male , Models, Psychological , Photic Stimulation , Psychomotor Performance/physiology , Reaction Time/physiology , Reinforcement, Psychology , Schizophrenia/complications , Schizophrenia/drug therapy
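The RL-versus-WM dissociation described in entry 20 can be illustrated with a minimal mixture-model sketch: a slow delta-rule RL learner combined with a fast but capacity-limited working-memory store, with the WM weight shrinking as set size exceeds capacity. All parameter names (`alpha`, `K`, `rho`, `beta`) and update rules here are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def simulate_block(n_stimuli, n_actions=3, n_trials=60,
                   alpha=0.1, K=3, rho=0.9, beta=8.0, seed=0):
    """Toy RL + working-memory (RLWM) mixture learner for one block.

    Hypothetical sketch: the published model is more detailed (e.g. WM decay,
    forgetting, perseveration); this keeps only the two core ingredients.
    """
    rng = np.random.default_rng(seed)
    correct = rng.integers(n_actions, size=n_stimuli)   # correct action per stimulus
    Q = np.full((n_stimuli, n_actions), 1.0 / n_actions)  # slow incremental RL values
    W = np.full((n_stimuli, n_actions), 1.0 / n_actions)  # fast one-shot WM store
    w = rho * min(1.0, K / n_stimuli)                   # WM weight shrinks with set size
    acc = []
    for t in range(n_trials):
        s = t % n_stimuli
        # softmax policy from each module, then a weighted mixture
        p_rl = np.exp(beta * Q[s]); p_rl /= p_rl.sum()
        p_wm = np.exp(beta * W[s]); p_wm /= p_wm.sum()
        p = w * p_wm + (1 - w) * p_rl
        a = rng.choice(n_actions, p=p)
        r = 1.0 if a == correct[s] else 0.0
        Q[s, a] += alpha * (r - Q[s, a])                # slow, cumulative RL update
        W[s] = 1.0 / n_actions; W[s, a] = r             # fast one-shot WM update (simplified)
        acc.append(r)
    return float(np.mean(acc))
```

Run with a small set size (well within WM capacity) versus a large one and the model reproduces the qualitative set-size effect the abstract describes: accuracy is higher when WM can cover the whole stimulus set.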