Results 1 - 20 of 106
1.
Addict Neurosci ; 10, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38524664

ABSTRACT

Computational models of addiction often rely on a model-free reinforcement learning (RL) formulation, owing to the close associations between model-free RL, habitual behavior and the dopaminergic system. However, such formulations typically do not capture key recurrent features of addiction phenomena such as craving and relapse. Moreover, they cannot account for goal-directed aspects of addiction that necessitate contrasting, model-based formulations. Here we synthesize a growing body of evidence and propose that a latent-cause framework can help unify our understanding of several recurrent phenomena in addiction, by viewing them as the inferred return of previous, persistent "latent causes". We demonstrate that applying this framework to Pavlovian and instrumental settings can help account for defining features of craving and relapse such as outcome-specificity, generalization, and cyclical dynamics. Finally, we argue that this framework can bridge model-free and model-based formulations, and account for individual variability in phenomenology by accommodating the memories, beliefs, and goals of those living with addiction, motivating a centering of the individual, subjective experience of addiction and recovery.

2.
PLoS Comput Biol ; 19(12): e1011707, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38127874

ABSTRACT

Positive and negative affective states are respectively associated with optimistic and pessimistic expectations regarding future reward. One mechanism that might underlie these affect-related expectation biases is attention to positive- versus negative-valence features (e.g., attending to the positive reviews of a restaurant versus its expensive price). Here we tested the effects of experimentally induced positive and negative affect on feature-based attention in 120 participants completing a compound-generalization task with eye-tracking. We found that participants' reward expectations for novel compound stimuli were modulated in an affect-congruent way: positive affect induction increased reward expectations for compounds, whereas negative affect induction decreased reward expectations. Computational modelling and eye-tracking analyses each revealed that these effects were driven by affect-congruent changes in participants' allocation of attention to high- versus low-value features of compounds. These results provide mechanistic insight into a process by which affect produces biases in generalized reward expectations.
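
A minimal sketch of the kind of attention-weighted valuation this abstract describes, assuming a softmax attention bias over feature values; the function name, bias parameterization, and numbers are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

def compound_value(feature_values, attention_bias=0.0):
    """Value of a compound stimulus as an attention-weighted average of its
    features' learned values. attention_bias > 0 weights high-value features
    more strongly (one way positive affect could act); attention_bias < 0
    weights low-value features more (one way negative affect could act)."""
    feature_values = np.asarray(feature_values, dtype=float)
    weights = np.exp(attention_bias * feature_values)  # value-biased attention
    weights /= weights.sum()                           # normalize to sum to 1
    return float(weights @ feature_values)

# A compound pairing one previously rewarded (1.0) and one unrewarded (0.0) feature:
print(compound_value([1.0, 0.0], attention_bias=2.0))   # ~0.88: optimistic expectation
print(compound_value([1.0, 0.0], attention_bias=-2.0))  # ~0.12: pessimistic expectation
print(compound_value([1.0, 0.0], attention_bias=0.0))   # 0.50: unbiased
```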


Subject(s)
Motivation , Pessimism , Humans , Emotions , Generalization, Psychological , Reward
3.
Trends Cogn Sci ; 27(9): 867-882, 2023 09.
Article in English | MEDLINE | ID: mdl-37479601

ABSTRACT

Events associated with aversive or rewarding outcomes are prioritized in memory. This memory boost is commonly attributed to the elicited affective response, closely linked to noradrenergic and dopaminergic modulation of hippocampal plasticity. Herein we review and compare this 'affect' mechanism to an additional, recently discovered, 'prediction' mechanism whereby memories are strengthened by the extent to which outcomes deviate from expectations, that is, by prediction errors (PEs). The mnemonic impact of PEs is separate from the affective outcome itself and has a distinct neural signature. While both routes enhance memory, these mechanisms are linked to different - and sometimes opposing - predictions for memory integration. We discuss new findings that highlight mechanisms by which emotional events strengthen, integrate, and segment memory.


Subject(s)
Emotions , Memory , Humans , Memory/physiology , Reward , Hippocampus/physiology , Affect
4.
Nat Hum Behav ; 7(10): 1667-1681, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37414886

ABSTRACT

Although online samples have many advantages for psychiatric research, some potential pitfalls of this approach are not widely understood. Here we detail circumstances in which spurious correlations may arise between task behaviour and symptom scores. The problem arises because many psychiatric symptom surveys have asymmetric score distributions in the general population, meaning that careless responders on these surveys will show apparently elevated symptom levels. If these participants are similarly careless in their task performance, this may result in a spurious association between symptom scores and task behaviour. We demonstrate this pattern of results in two samples of participants recruited online (total N = 779) who performed one of two common cognitive tasks. False-positive rates for these spurious correlations increase with sample size, contrary to common assumptions. Excluding participants flagged for careless responding on surveys abolished the spurious correlations, but exclusion based on task performance alone was less effective.
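
The mechanism is easy to reproduce in a toy simulation: if careless responders sit high on a right-skewed symptom survey and near chance on the task, a spurious negative correlation appears and vanishes once they are excluded. All numbers below are arbitrary assumptions for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, careless_rate = 800, 0.10          # hypothetical sample size and careless fraction
careless = rng.random(n) < careless_rate

# Attentive responders: right-skewed (mostly low) symptom scores and
# above-chance task accuracy; the two are generated independently.
symptoms = rng.exponential(scale=1.0, size=n)
accuracy = rng.normal(loc=0.85, scale=0.05, size=n)

# Careless responders: random survey answers land near the scale midpoint,
# which is high relative to the skewed population, plus chance-level accuracy.
symptoms[careless] = rng.normal(loc=3.0, scale=0.5, size=careless.sum())
accuracy[careless] = rng.normal(loc=0.50, scale=0.05, size=careless.sum())

print(np.corrcoef(symptoms, accuracy)[0, 1])                        # spurious negative correlation
print(np.corrcoef(symptoms[~careless], accuracy[~careless])[0, 1])  # near zero after exclusion
```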

5.
Article in English | MEDLINE | ID: mdl-36842498

ABSTRACT

Cognitive tasks are capable of providing researchers with crucial insights into the relationship between cognitive processing and psychiatric phenomena. However, many recent studies have found that task measures exhibit poor reliability, which hampers their usefulness for individual differences research. Here, we provide a narrative review of approaches to improve the reliability of cognitive task measures. Specifically, we introduce a taxonomy of experiment design and analysis strategies for improving task reliability. Where appropriate, we highlight studies that are exemplary for improving the reliability of specific task measures. We hope that this article can serve as a helpful guide for experimenters who wish to design a new task, or improve an existing one, to achieve sufficient reliability for use in individual differences research.
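
As one concrete example of the kind of measure at stake, a permutation-based split-half estimate with Spearman-Brown correction is a common way to quantify the reliability of a per-trial task score; the sketch below is illustrative and not drawn from the review itself:

```python
import numpy as np

def split_half_reliability(trial_scores, n_splits=1000, seed=0):
    """Permutation-based split-half reliability of a per-trial task measure.

    trial_scores: array of shape (n_subjects, n_trials).
    Returns the mean Spearman-Brown-corrected correlation between
    subject scores computed on random half-splits of trials."""
    rng = np.random.default_rng(seed)
    n_subj, n_trials = trial_scores.shape
    estimates = []
    for _ in range(n_splits):
        perm = rng.permutation(n_trials)
        half1 = trial_scores[:, perm[: n_trials // 2]].mean(axis=1)
        half2 = trial_scores[:, perm[n_trials // 2 :]].mean(axis=1)
        r = np.corrcoef(half1, half2)[0, 1]
        estimates.append(2 * r / (1 + r))  # Spearman-Brown correction
    return float(np.mean(estimates))

# Example: 50 simulated subjects, 200 trials, subject-level signal plus trial noise.
rng = np.random.default_rng(1)
true_ability = rng.normal(size=(50, 1))
data = true_ability + rng.normal(scale=2.0, size=(50, 200))
print(split_half_reliability(data))
```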


Subject(s)
Cognition , Humans , Reproducibility of Results , Individuality
6.
Behav Res Methods ; 55(1): 58-76, 2023 01.
Article in English | MEDLINE | ID: mdl-35262897

ABSTRACT

In the last few decades, the field of neuroscience has witnessed major technological advances that have allowed researchers to measure and control neural activity in great detail. Yet, behavioral experiments in humans remain an essential approach for investigating the mysteries of the mind. Their relatively modest technological and economic requirements make behavioral research an attractive and accessible experimental avenue for neuroscientists with very diverse backgrounds. However, like any experimental enterprise, behavioral research has its own inherent challenges that may pose practical hurdles, especially to less experienced behavioral researchers. Here, we provide a practical guide to the workflow of a typical behavioral experiment with human subjects. This primer covers the design of an experimental protocol, research ethics, and subject care, as well as best practices for data collection, analysis, and sharing. The goal is to provide clear instructions for both beginners and experienced researchers from diverse backgrounds in planning behavioral experiments.


Subject(s)
Ethics, Research , Research Personnel , Humans , Data Collection
7.
PLoS Comput Biol ; 18(11): e1010699, 2022 11.
Article in English | MEDLINE | ID: mdl-36417419

ABSTRACT

Realistic and complex decision tasks often allow for many possible solutions. How do we find the correct one? Introspection suggests a process of trying out solutions one after the other until success. However, such methodical serial testing may be too slow, especially in environments with noisy feedback. Alternatively, the underlying learning process may involve implicit reinforcement learning that learns about many possibilities in parallel. Here we designed a multi-dimensional probabilistic active-learning task tailored to study how people learn to solve such complex problems. Participants configured three-dimensional stimuli by selecting features for each dimension and received probabilistic reward feedback. We manipulated task complexity by changing how many feature dimensions were relevant to maximizing reward, as well as whether this information was provided to the participants. To investigate how participants learn the task, we examined models of serial hypothesis testing, feature-based reinforcement learning, and combinations of the two strategies. Model comparison revealed evidence for hypothesis testing that relies on reinforcement learning when selecting which hypothesis to test. The extent to which participants engaged in hypothesis testing depended on the instructed task complexity: people tended to serially test hypotheses when instructed that there were fewer relevant dimensions, and relied more on gradual and parallel learning of feature values when the task was more complex. This demonstrates a strategic use of task information to balance the costs and benefits of the two methods of learning.
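
A minimal sketch of the feature-based reinforcement-learning component described here, in which a configured stimulus's value is the sum of its features' weights and all chosen features are updated in parallel by a shared prediction error; dimension names, sizes, and the learning rate are illustrative assumptions rather than the fitted model:

```python
import numpy as np

def feature_rl_update(weights, chosen_features, reward, lr=0.1):
    """The configured stimulus's value is the sum of its chosen features'
    weights; every chosen feature is then updated in parallel by the
    shared reward prediction error."""
    value = sum(weights[dim][idx] for dim, idx in chosen_features.items())
    rpe = reward - value                       # shared prediction error
    for dim, idx in chosen_features.items():
        weights[dim][idx] += lr * rpe          # parallel update of all chosen features
    return rpe

# Three dimensions with three candidate features each (illustrative sizes).
weights = {d: np.zeros(3) for d in ("color", "shape", "texture")}
rpe = feature_rl_update(weights, {"color": 0, "shape": 2, "texture": 1}, reward=1.0)
print(rpe, weights["color"][0])  # 1.0 0.1
```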


Subject(s)
Learning , Reward , Humans , Reinforcement, Psychology
8.
Trends Cogn Sci ; 26(12): 1051-1053, 2022 12.
Article in English | MEDLINE | ID: mdl-36335012

ABSTRACT

How do biological systems learn continuously throughout their lifespans, adapting to change while retaining old knowledge, and how can these principles be applied to artificial learning systems? In this Forum article we outline challenges and strategies of 'lifelong learning' in biological and artificial systems, and argue that a collaborative study of each system's failure modes can benefit both.


Subject(s)
Cognitive Science , Learning , Humans
9.
Clin Psychol Sci ; 10(4): 714-733, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35935262

ABSTRACT

How does rumination affect reinforcement learning-the ubiquitous process by which we adjust behavior after error in order to behave more effectively in the future? In a within-subject design (n=49), we tested whether experimentally manipulated rumination disrupts reinforcement learning in a multidimensional learning task previously shown to rely on selective attention. Rumination impaired performance, yet unexpectedly this impairment could not be attributed to decreased attentional breadth (quantified using a "decay" parameter in a computational model). Instead, trait rumination (between subjects) was associated with higher decay rates (implying narrower attention), yet not with impaired performance. Our task-performance results accord with the possibility that state rumination promotes stress-generating behavior in part by disrupting reinforcement learning. The trait-rumination finding accords with the predictions of a prominent model of trait rumination (the attentional-scope model). More work is needed to understand the specific mechanisms by which state rumination disrupts reinforcement learning.
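
A hedged sketch of how a "decay" parameter can index attentional breadth in such models: weights of features outside the chosen stimulus decay toward zero, so a larger decay implies narrower attention. The rule and parameter values below are illustrative, not the exact model fitted in the study:

```python
import numpy as np

def update_with_decay(weights, chosen, reward, lr=0.3, decay=0.5):
    """One learning step: features of the chosen stimulus are updated by the
    prediction error, while all other features' weights decay toward zero.

    weights: 1-D array of feature weights; chosen: indices of the features
    belonging to the chosen stimulus."""
    value = weights[chosen].sum()
    rpe = reward - value
    mask = np.zeros_like(weights, dtype=bool)
    mask[chosen] = True
    weights[mask] += lr * rpe          # update features of the chosen stimulus
    weights[~mask] *= (1.0 - decay)    # decay all other features' weights
    return weights

w = np.array([0.2, 0.0, 0.4, 0.1])
print(update_with_decay(w, chosen=[0, 2], reward=1.0))  # [0.32, 0.0, 0.52, 0.05]
```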

10.
Cogn Emot ; 36(7): 1343-1360, 2022 11.
Article in English | MEDLINE | ID: mdl-35929878

ABSTRACT

Across species, animals have an intrinsic drive to approach appetitive stimuli and to withdraw from aversive stimuli. In affective science, influential theories of emotion link positive affect with strengthened behavioural approach and negative affect with avoidance. Based on these theories, we predicted that individuals' positive and negative affect levels should particularly influence their behaviour when innate Pavlovian approach/avoidance tendencies conflict with learned instrumental behaviours. Here, across two experiments - exploratory Experiment 1 (N = 91) and a preregistered confirmatory Experiment 2 (N = 335) - we assessed how induced positive and negative affect influenced Pavlovian-instrumental interactions in a reward/punishment Go/No-Go task. Contrary to our hypotheses, we found no evidence for a main effect of positive/negative affect on either approach/avoidance behaviour or Pavlovian-instrumental interactions. However, we did find evidence that the effects of induced affect on behaviour were moderated by individual differences in self-reported behavioural inhibition and gender. Exploratory computational modelling analyses explained these demographic moderating effects as arising from positive correlations between demographic factors and individual differences in the strength of Pavlovian-instrumental interactions. These findings serve to sharpen our understanding of the effects of positive and negative affect on instrumental behaviour.
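
One common way to formalize the predicted Pavlovian-instrumental interaction is to let stimulus value add to the propensity to emit a Go response, so appetitive cues push toward approach and aversive cues toward withholding even when instrumental learning says otherwise. The sketch below is in that spirit; the parameter names and values are assumptions, not the study's model:

```python
import numpy as np

def p_go(q_go, q_nogo, stim_value, pavlovian_weight=0.5, go_bias=0.2):
    """Probability of a Go response when a Pavlovian term adds stimulus
    value to the Go propensity: appetitive cues promote approach and
    aversive cues promote withholding, which can conflict with what was
    learned instrumentally."""
    w_go = q_go + pavlovian_weight * stim_value + go_bias
    return 1.0 / (1.0 + np.exp(-(w_go - q_nogo)))   # logistic choice between Go and No-Go

# An aversive cue (negative value) for which "Go" is instrumentally correct:
print(p_go(q_go=0.5, q_nogo=0.0, stim_value=-1.0, pavlovian_weight=1.5))  # ~0.31: avoidance wins
print(p_go(q_go=0.5, q_nogo=0.0, stim_value=-1.0, pavlovian_weight=0.0))  # ~0.67: instrumental value wins
```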


Subject(s)
Emotions , Learning , Animals , Humans , Learning/physiology , Reward , Inhibition, Psychological , Affect
11.
PLoS Comput Biol ; 18(3): e1009897, 2022 03.
Article in English | MEDLINE | ID: mdl-35333867

ABSTRACT

There is no single way to represent a task. Indeed, despite experiencing the same task events and contingencies, different subjects may form distinct task representations. As experimenters, we often assume that subjects represent the task as we envision it. However, such a representation cannot be taken for granted, especially in animal experiments where we cannot deliver explicit instruction regarding the structure of the task. Here, we tested how rats represent an odor-guided choice task in which two odor cues indicated which of two responses would lead to reward, whereas a third odor indicated free choice between the two responses. A parsimonious task representation would allow animals to learn from the forced trials which option is better to choose on the free-choice trials. However, animals may not necessarily generalize across odors in this way. We fit reinforcement-learning models that use different task representations to trial-by-trial choice behavior of individual rats performing this task, and quantified the degree to which each animal used the more parsimonious representation, generalizing across trial types. Model comparison revealed that most rats did not acquire this representation despite extensive experience. Our results demonstrate the importance of formally testing possible task representations that can afford the observed behavior, rather than assuming that animals' task representations abide by the generative task structure that governs the experimental design.
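
A toy contrast between the two candidate representations makes the point concrete: under a generalizing representation, values learned on forced trials transfer to free-choice trials, whereas a representation that treats free choice as a separate state learns nothing transferable. The code below is illustrative and not the rats' fitted models:

```python
import numpy as np

def update(q, action, reward, lr=0.1):
    q[action] += lr * (reward - q[action])   # simple delta-rule update

# Generalizing representation: one Q-value per response, shared across
# forced and free-choice trials.
q_shared = np.zeros(2)                       # [left, right]

# Non-generalizing representation: the free-choice "state" keeps its own
# values, so forced-trial learning does not transfer.
q_separate = {"forced": np.zeros(2), "free": np.zeros(2)}

# Forced trials in which "right" (index 1) is rewarded update both models...
for _ in range(20):
    update(q_shared, 1, 1.0)
    update(q_separate["forced"], 1, 1.0)

# ...but only the generalizing representation prefers "right" on free trials.
print(q_shared)             # approx [0.0, 0.88]
print(q_separate["free"])   # still [0.0, 0.0]
```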


Subject(s)
Odorants , Reward , Animals , Cues , Generalization, Psychological , Humans , Rats , Reinforcement, Psychology
12.
Psychol Rev ; 129(3): 513-541, 2022 04.
Article in English | MEDLINE | ID: mdl-34516150

ABSTRACT

Mood is an integrative and diffuse affective state that is thought to exert a pervasive effect on cognition and behavior. At the same time, mood itself is thought to fluctuate slowly as a product of feedback from interactions with the environment. Here we present a new computational theory of the valence of mood-the Integrated Advantage model-that seeks to account for this bidirectional interaction. Adopting theoretical formalisms from reinforcement learning, we propose to conceptualize the valence of mood as a leaky integral of an agent's appraisals of the Advantage of its actions. This model generalizes and extends previous models of mood wherein affective valence was conceptualized as a moving average of reward prediction errors. We give a full theoretical derivation of the Integrated Advantage model and provide a functional explanation of how an integrated-Advantage variable could be deployed adaptively by a biological agent to accelerate learning in complex and/or stochastic environments. Specifically, drawing on stochastic optimization theory, we propose that an agent can utilize our hypothesized form of mood to approximate a momentum-based update to its behavioral policy, thereby facilitating rapid learning of optimal actions. We then show how this model of mood provides a principled and parsimonious explanation for a number of contextual effects on mood from the affective science literature, including expectation- and surprise-related effects, counterfactual effects from information about foregone alternatives, action-typicality effects, and action/inaction asymmetry. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
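
A hedged reconstruction of the quantities the abstract names, using standard reinforcement-learning notation; the leak parameter and the paper's exact parameterization are assumptions and may differ from the published model:

```latex
\begin{align*}
  A_t &= Q(s_t, a_t) - V(s_t)
      && \text{Advantage of the chosen action}\\
  M_{t+1} &= \lambda\, M_t + (1 - \lambda)\, A_t
      && \text{mood valence as a leaky integral of Advantage}\\
  M_{t+1} &= \lambda\, M_t + (1 - \lambda)\,\bigl(r_t - V(s_t)\bigr)
      && \text{earlier models: a moving average of reward prediction errors}
\end{align*}
```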


Subject(s)
Affect , Reward , Cognition , Humans , Learning , Reinforcement, Psychology
13.
Front Behav Neurosci ; 15: 786900, 2021.
Article in English | MEDLINE | ID: mdl-34912199

ABSTRACT

[This corrects the article DOI: 10.3389/fnbeh.2013.00164.].

14.
Behav Neurosci ; 135(4): 487-497, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34291969

ABSTRACT

The orbitofrontal cortex (OFC) has been implicated in goal-directed planning and model-based decision-making. One key prerequisite for model-based decision-making is learning the transition structure of the environment-the probabilities of transitioning from one environmental state to another. In this work, we investigated how the OFC might be involved in learning this transition structure, by using fMRI to assess OFC activity while humans experienced probabilistic cue-outcome transitions. We found that OFC activity was indeed correlated with behavioral measures of learning about transition structure. On a trial-by-trial basis, OFC activity was associated with subsequently increased expectation of the more probable outcome; that is, with subsequently more optimal cue-outcome predictions. Interestingly, this relationship was observed no matter what outcome occurred at the time of the OFC activity, and thus is inconsistent with an interpretation of the OFC activity as representing a "state prediction error" that would facilitate learning transitions via error-correcting mechanisms. Finally, OFC activity was related to more optimal predictions only for subsequent trials involving the same cue that was observed at the time of OFC activity-this relationship was not observed for subsequent trials involving a different cue. All together, these results indicate that the OFC is involved in updating or reinforcing a learned transition model on a trial-by-trial basis, specifically for the currently observed cue-outcome associations. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
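
To fix ideas about what "learning the transition structure" means, here is one standard error-correcting formalization of learning P(outcome | cue); the abstract's point is that the OFC signal does not behave like the error term in such a rule. The code is an illustrative sketch, not the study's analysis:

```python
import numpy as np

def update_transition_row(T, cue, outcome, lr=0.1):
    """Error-correcting update of a learned transition model P(outcome | cue):
    the observed outcome's probability is nudged toward 1 and the others
    toward 0, so the more frequent outcome comes to be expected more strongly."""
    target = np.zeros(T.shape[1])
    target[outcome] = 1.0
    T[cue] += lr * (target - T[cue])   # the row stays normalized because it starts normalized
    return T

rng = np.random.default_rng(0)
T = np.full((2, 2), 0.5)               # two cues, two outcomes, uniform prior
for _ in range(200):                   # cue 0 leads to outcome 1 on 80% of trials
    update_transition_row(T, 0, rng.choice(2, p=[0.2, 0.8]))
print(T[0])                            # approaches [0.2, 0.8]
```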


Subject(s)
Learning , Prefrontal Cortex , Humans , Motivation
15.
Behav Neurosci ; 135(5): 601-609, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34096743

ABSTRACT

Understanding the brain requires us to answer both what the brain does, and how it does it. Using a series of examples, I make the case that behavior is often more useful than neuroscientific measurements for answering the first question. Moreover, I show that even for "how" questions that pertain to neural mechanism, a well-crafted behavioral paradigm can offer deeper insight and stronger constraints on computational and mechanistic models than do many highly challenging (and very expensive) neural studies. I conclude that purely behavioral research is essential for understanding the brain-especially its cognitive functions-contrary to the opinion of prominent funding bodies and some scientific journals, who erroneously place neural data on a pedestal and consider behavior to be subsidiary. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Behavioral Research , Brain , Cognition
16.
Behav Neurosci ; 135(2): 192-201, 2021 Apr.
Article in English | MEDLINE | ID: mdl-34060875

ABSTRACT

Much of traditional neuroeconomics proceeds from the hypothesis that value is reified in the brain, that is, that there are neurons or brain regions whose responses serve the discrete purpose of encoding value. This hypothesis is supported by the finding that the activity of many neurons covaries with subjective value as estimated in specific tasks, and has led to the idea that the primary function of the orbitofrontal cortex is to compute and signal economic value. Here we consider an alternative: That economic value, in the cardinal, common-currency sense, is not represented in the brain and used for choice by default. This idea is motivated by consideration of the economic concept of value, which places important epistemic constraints on our ability to identify its neural basis. It is also motivated by the behavioral economics literature, especially work on heuristics, which proposes value-free process models for much if not all of choice. Finally, it is buoyed by recent neural and behavioral findings regarding how animals and humans learn to choose between options. In light of our hypothesis, we critically reevaluate putative neural evidence for the representation of value and explore an alternative: direct learning of action policies. We delineate how this alternative can provide a robust account of behavior that concords with existing empirical data. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Choice Behavior , Prefrontal Cortex , Animals , Brain , Humans , Neurons
17.
Nat Hum Behav ; 5(9): 1180-1189, 2021 09.
Article in English | MEDLINE | ID: mdl-33686201

ABSTRACT

How do we evaluate a group of people after a few negative experiences with some members but mostly positive experiences otherwise? How do rare experiences influence our overall impression? We show that rare events may be overweighted due to normative inference of the hidden causes that are believed to generate the observed events. We propose a Bayesian inference model that organizes environmental statistics by combining similar events and separating outlying observations. Relying on the model's inferred latent causes for group evaluation overweights rare or variable events. We tested the model's predictions in eight experiments where participants observed a sequence of social or non-social behaviours and estimated their average. As predicted, estimates were biased toward sparse events when estimating after seeing all observations, but not when tracking a summary value as observations accrued. Our results suggest that biases in evaluation may arise from inferring the hidden causes of group members' behaviours.


Subject(s)
Interpersonal Relations , Motivation , Social Perception , Humans , Psychological Theory
18.
Elife ; 10, 2021 03 04.
Article in English | MEDLINE | ID: mdl-33661094

ABSTRACT

Memory helps guide behavior, but which experiences from the past are prioritized? Classic models of learning posit that events associated with unpredictable outcomes, as well as, paradoxically, predictable outcomes, recruit more attention and learning. Here, we test reinforcement learning and subsequent memory for those events, and treat signed and unsigned reward prediction errors (RPEs), experienced at the reward-predictive cue or reward outcome, as drivers of these two seemingly contradictory signals. By fitting reinforcement learning models to behavior, we find that both RPEs contribute to learning by modulating a dynamically changing learning rate. We further characterize the effects of these RPE signals on memory and show that both signed and unsigned RPEs enhance memory, in line with midbrain dopamine and locus-coeruleus modulation of hippocampal plasticity, thereby reconciling separate findings in the literature.
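
A hedged sketch of a dynamic-learning-rate scheme in the spirit the abstract describes (a hybrid delta-rule / Pearce-Hall update), in which the signed RPE drives the value change and the unsigned RPE sets the effective learning rate; parameter names and values are assumptions, not the fitted model:

```python
def rl_step(value, associability, reward, kappa=0.3, eta=0.5):
    """Hybrid delta-rule / Pearce-Hall step: the signed RPE drives the value
    update, scaled by an associability term that itself tracks recent
    unsigned RPEs (i.e., a dynamically changing learning rate)."""
    rpe = reward - value                                        # signed RPE
    value = value + kappa * associability * rpe                 # value update with dynamic rate
    associability = (1 - eta) * associability + eta * abs(rpe)  # unsigned RPE updates the rate
    return value, associability, rpe

v, alpha = 0.0, 1.0
for r in [1, 1, 0, 1, 1, 1]:   # a mostly rewarded cue with one surprising omission
    v, alpha, rpe = rl_step(v, alpha, r)
    print(f"value={v:.2f}  associability={alpha:.2f}  rpe={rpe:+.2f}")
```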


Subject(s)
Learning , Memory , Reinforcement, Psychology , Reward , Dopamine/metabolism , Humans
19.
Annu Rev Neurosci ; 44: 253-273, 2021 07 08.
Article in English | MEDLINE | ID: mdl-33730510

ABSTRACT

The central theme of this review is the dynamic interaction between information selection and learning. We pose a fundamental question about this interaction: How do we learn what features of our experiences are worth learning about? In humans, this process depends on attention and memory, two cognitive functions that together constrain representations of the world to features that are relevant for goal attainment. Recent evidence suggests that the representations shaped by attention and memory are themselves inferred from experience with each task. We review this evidence and place it in the context of work that has explicitly characterized representation learning as statistical inference. We discuss how inference can be scaled to real-world decisions by approximating beliefs based on a small number of experiences. Finally, we highlight some implications of this inference process for human decision-making in social environments.


Subject(s)
Cognition , Learning , Attention , Humans