1.
medRxiv ; 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38947009

ABSTRACT

Individuals with major depressive disorder (MDD) can experience reduced motivation and cognitive function, leading to challenges with goal-directed behavior. When selecting goals, people maximize 'expected value' by selecting actions that maximize potential reward while minimizing associated costs, including effort 'costs' and the opportunity cost of time. In MDD, differential weighing of costs and benefits is a theorized mechanism underlying changes in goal-directed cognition and may contribute to symptom heterogeneity. We used the Effort Foraging Task to quantify cognitive and physical effort costs, and patch-leaving thresholds in low-effort conditions (hypothesized to reflect the perceived opportunity cost of time), and investigated their shared versus distinct relationships to clinical features in participants with MDD (N=52, 43 in-episode) and comparison participants (N=27). Contrary to our predictions, none of the decision-making measures differed with MDD diagnosis. However, each of the measures was related to symptom severity, over and above effects of ability (i.e., performance). Greater anxiety symptoms were selectively associated with lower cognitive effort cost (i.e., greater willingness to exert effort). Anhedonia symptoms were associated with increased physical effort costs. Finally, greater physical anergia was related to decreased patch-leaving thresholds. Markers of effort-based decision-making may inform understanding of MDD heterogeneity. Increased willingness to exert cognitive effort may contribute to anxiety symptoms such as rumination and worry. The association of decreased leaving thresholds with symptom severity is consistent with reward-rate-based accounts of reduced vigor in MDD. Future research should address subtypes of depression with or without anxiety, which may relate differentially to cognitive effort decisions.

2.
Open Mind (Camb) ; 8: 688-722, 2024.
Article in English | MEDLINE | ID: mdl-38828434

ABSTRACT

Human cognition is unique in its ability to perform a wide range of tasks and to learn new tasks quickly. Both abilities have long been associated with the acquisition of knowledge that can generalize across tasks and the flexible use of that knowledge to execute goal-directed behavior. We investigate how this emerges in a neural network by describing and testing the Episodic Generalization and Optimization (EGO) framework. The framework consists of an episodic memory module, which rapidly learns relationships between stimuli; a semantic pathway, which more slowly learns how stimuli map to responses; and a recurrent context module, which maintains a representation of task-relevant context information, integrates this over time, and uses it both to recall context-relevant memories (in episodic memory) and to bias processing in favor of context-relevant features and responses (in the semantic pathway). We use the framework to address empirical phenomena across reinforcement learning, event segmentation, and category learning, showing in simulations that the same set of underlying mechanisms accounts for human performance in all three domains. The results demonstrate how the components of the EGO framework can efficiently learn knowledge that can be flexibly generalized across tasks, furthering our understanding of how humans can quickly learn how to perform a wide range of tasks, a capability that is fundamental to human intelligence.

3.
Trends Cogn Sci ; 2024 May 09.
Article in English | MEDLINE | ID: mdl-38729852

ABSTRACT

A central challenge for cognitive science is to explain how abstract concepts are acquired from limited experience. This has often been framed in terms of a dichotomy between connectionist and symbolic cognitive models. Here, we highlight a recently emerging line of work that suggests a novel reconciliation of these approaches, by exploiting an inductive bias that we term the relational bottleneck. In that approach, neural networks are constrained via their architecture to focus on relations between perceptual inputs, rather than the attributes of individual inputs. We review a family of models that employ this approach to induce abstractions in a data-efficient manner, emphasizing their potential as candidate models for the acquisition of abstract concepts in the human mind and brain.

4.
Psychol Rev ; 131(2): 563-577, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37956060

ABSTRACT

The N-back task is often considered to be a canonical example of a task that relies on working memory (WM), requiring both maintenance of representations of previously presented stimuli and also processing of these representations. In particular, the set-size effect in this task (e.g., poorer performance on three-back than two-back judgments), as in others, is often interpreted as indicating that the task relies on retention and processing of information in a limited-capacity WM system. Here, we consider an alternative possibility: that retention in episodic memory (EM) rather than WM can account for both set-size and lure effects in the N-back task. Accordingly, performance in the N-back task may reflect engagement of the processing ("working") function of WM but not necessarily limits in either that processing ability or in retention ("memory"). To demonstrate this point, we constructed a neural network model that was augmented with an EM component, but lacked any capacity to retain information across trials in WM, and trained it to perform the N-back task. We show that this model can account for the set-size and lure effects obtained in an N-back study by M. J. Kane et al. (2007), and that it does so as a result of the well-understood effects of temporal distinctiveness on EM retrieval, and the processing of this information in WM. These findings help illuminate the ways in which WM may interact with EM in the service of cognitive function and add to a growing body of evidence that tasks commonly assumed to rely on WM may alternatively (or additionally) rely on EM. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Memory, Episodic , Memory, Short-Term , Humans , Cognition , Judgment
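
The temporal-distinctiveness mechanism invoked in the abstract above can be illustrated with a generic formulation, not the authors' specific network model: the confusability of a studied item with the nominal n-back position falls off with the distance between their temporal positions on a log scale (a SIMPLE-style kernel). A minimal sketch, with an arbitrary sharpness parameter c:

```python
import numpy as np

def lag_confusability(lag, n_back, c=3.0):
    """Similarity between an item studied `lag` trials ago and the nominal
    n-back position, using a log-distance distinctiveness kernel.
    Larger c = sharper temporal discrimination. Illustrative only."""
    return np.exp(-c * abs(np.log(lag) - np.log(n_back)))

# Lures adjacent to the target (n-1 and n+1 back) are less temporally
# distinct from the target at 3-back than at 2-back, so distinctiveness
# alone predicts poorer lure rejection as set size grows.
for n in (2, 3):
    lures = [n - 1, n + 1]
    print(n, [round(lag_confusability(lag, n), 3) for lag in lures])
```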
5.
Proc Natl Acad Sci U S A ; 120(50): e2221510120, 2023 Dec 12.
Article in English | MEDLINE | ID: mdl-38064507

ABSTRACT

Effort-based decisions, in which people weigh potential future rewards against the effort costs required to achieve those rewards, involve both cognitive and physical effort, though the mechanistic relationship between them is not yet understood. Here, we use an individual differences approach to isolate and measure the computational processes underlying effort-based decisions and test the association between cognitive and physical domains. Patch foraging is an ecologically valid reward rate maximization problem with well-developed theoretical tools. We developed the Effort Foraging Task, which embedded cognitive or physical effort into patch foraging, to quantify the cost of both cognitive and physical effort indirectly, by their effects on foraging choices. Participants chose between harvesting a depleting patch, or traveling to a new patch that was costly in time and effort. Participants' exit thresholds (reflecting the reward they expected to receive by harvesting when they chose to travel to a new patch) were sensitive to cognitive and physical effort demands, allowing us to quantify the perceived effort cost in monetary terms. The indirect sequential choice style revealed effort-seeking behavior in a minority of participants (a preference for high over low effort) that has apparently been missed by many previous approaches. Individual differences in cognitive and physical effort costs were positively correlated, suggesting that these are perceived and processed in common. We used canonical correlation analysis to probe the relationship of task measures to self-reported affect and motivation, and found correlations of cognitive effort with anxiety, cognitive function, behavioral activation, and self-efficacy, but no similar correlations with physical effort.


Subject(s)
Decision Making , Physical Exertion , Humans , Decision Making/physiology , Physical Exertion/physiology , Individuality , Cognition/physiology , Reward , Motivation
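
The exit-threshold logic in the abstract above follows the marginal value theorem: leave a patch once the reward expected from one more harvest falls below what the environment yields on average per unit time. A minimal sketch of that rule under simplified assumptions (multiplicative patch depletion, fixed harvest and travel times, made-up parameter values); this is not the authors' estimation procedure:

```python
def mvt_exit_threshold(initial_reward, decay, harvest_time, travel_time,
                       tol=1e-6):
    """Reward level at which an MVT-optimal forager leaves a patch.

    Patch rewards deplete multiplicatively (reward *= decay per harvest).
    The forager harvests while the next reward exceeds the overall reward
    rate times the harvest duration; the environment-wide rate is found by
    fixed-point iteration. Illustrative parameters only.
    """
    rate = initial_reward / (harvest_time + travel_time)  # initial guess
    for _ in range(1000):
        reward, total_reward, total_time = initial_reward, 0.0, travel_time
        while reward > rate * harvest_time:        # MVT leaving rule
            total_reward += reward
            total_time += harvest_time
            reward *= decay
        new_rate = total_reward / total_time
        if abs(new_rate - rate) < tol:
            break
        rate = new_rate
    return rate * harvest_time  # reward at which leaving becomes optimal

# e.g., patches starting at 10 units, depleting 10% per 1-s harvest,
# with a 6-s travel time between patches
print(round(mvt_exit_threshold(10.0, 0.9, 1.0, 6.0), 2))
```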
6.
Psychol Sci ; 34(11): 1281-1292, 2023 11.
Article in English | MEDLINE | ID: mdl-37878525

ABSTRACT

Planning underpins the impressive flexibility of goal-directed behavior. However, even when planning, people can display surprising rigidity in how they think about problems (e.g., "functional fixedness") that leads them astray. How can our capacity for behavioral flexibility be reconciled with our susceptibility to conceptual inflexibility? We propose that these tendencies reflect avoidance of two cognitive costs: the cost of representing task details and the cost of switching between representations. To test this hypothesis, we developed a novel paradigm that affords participants opportunities to choose different families of simplified representations to plan. In two preregistered, online studies (Ns = 377 and 294 adults), we found that participants' optimal behavior, suboptimal behavior, and reaction time were explained by a computational model that formalized people's avoidance of representational complexity and switching. These results demonstrate how the selection of simplified, rigid representations leads to the otherwise puzzling combination of flexibility and inflexibility observed in problem solving.


Subject(s)
Cognition , Problem Solving , Adult , Humans , Reaction Time
7.
PLoS Comput Biol ; 19(8): e1011316, 2023 08.
Article in English | MEDLINE | ID: mdl-37624841

ABSTRACT

The ability to acquire abstract knowledge is a hallmark of human intelligence and is believed by many to be one of the core differences between humans and neural network models. Agents can be endowed with an inductive bias towards abstraction through meta-learning, where they are trained on a distribution of tasks that share some abstract structure that can be learned and applied. However, because neural networks are hard to interpret, it can be difficult to tell whether agents have learned the underlying abstraction, or alternatively statistical patterns that are characteristic of that abstraction. In this work, we compare the performance of humans and agents in a meta-reinforcement learning paradigm in which tasks are generated from abstract rules. We define a novel methodology for building "task metamers" that closely match the statistics of the abstract tasks but use a different underlying generative process, and evaluate performance on both abstract and metamer tasks. We find that humans perform better at abstract tasks than metamer tasks whereas common neural network architectures typically perform worse on the abstract tasks than the matched metamers. This work provides a foundation for characterizing differences between humans and machine learning that can be used in future work towards developing machines with more human-like behavior.


Subject(s)
Concept Formation , Machine Learning , Humans , Intelligence , Knowledge , Neural Networks, Computer
8.
Proc Natl Acad Sci U S A ; 120(28): e2221180120, 2023 07 11.
Article in English | MEDLINE | ID: mdl-37399387

ABSTRACT

Satisfying a variety of conflicting needs in a changing environment is a fundamental challenge for any adaptive agent. Here, we show that designing an agent in a modular fashion as a collection of subagents, each dedicated to a separate need, powerfully enhanced the agent's capacity to satisfy its overall needs. We used the formalism of deep reinforcement learning to investigate a biologically relevant multiobjective task: continually maintaining homeostasis of a set of physiologic variables. We then conducted simulations in a variety of environments and compared how modular agents performed relative to standard monolithic agents (i.e., agents that aimed to satisfy all needs in an integrated manner using a single aggregate measure of success). Simulations revealed that modular agents a) exhibited a form of exploration that was intrinsic and emergent rather than extrinsically imposed; b) were robust to changes in nonstationary environments; and c) scaled gracefully in their ability to maintain homeostasis as the number of conflicting objectives increased. Supporting analysis suggested that the robustness to changing environments and to increasing numbers of needs was due to intrinsic exploration and efficiency of representation afforded by the modular architecture. These results suggest that the normative principles by which agents have adapted to complex changing environments may also explain why humans have long been described as consisting of "multiple selves."


Subject(s)
Learning , Reinforcement, Psychology , Humans , Learning/physiology , Homeostasis
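
The modular-versus-monolithic contrast in the abstract above can be sketched in a few lines. The arbitration scheme below (weighting each subagent's action preferences by its current homeostatic deficit) is one simple, hypothetical choice for illustration, not the architecture evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_needs, n_actions = 3, 4

# Stand-in action values: a monolithic agent keeps one value per action for
# an aggregate objective; a modular agent keeps one row of values per need.
monolithic_q = rng.normal(size=n_actions)
modular_q = rng.normal(size=(n_needs, n_actions))
deficits = np.array([0.1, 0.9, 0.3])     # current homeostatic deficits

monolithic_action = int(np.argmax(monolithic_q))
# Hypothetical arbitration: weight each subagent's preferences by how
# deficient its need is, then pick the action with greatest weighted support.
modular_action = int(np.argmax(deficits @ modular_q))
print(monolithic_action, modular_action)
```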
9.
Isr Med Assoc J ; 25(6): 434-437, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37381940

ABSTRACT

BACKGROUND: A limited program for kidney donation from uncontrolled donation after cardiocirculatory determination of death (uDCDD) was implemented at four hospitals in Israel in close cooperation with Magen David Adom (MDA), the national emergency medical service. OBJECTIVES: To assess the outcome of transplantations performed between January 2017 and June 2022. METHODS: Donor data included age, sex, and cause of death. Recipient data included age, sex, and yearly serum creatinine levels. Out-of-hospital cardiac arrest cases treated by MDA during 2021 were analyzed retrospectively to assess their compatibility as potential uDCDD donors. RESULTS: In total, 49 potential donors were referred to hospitals by MDA. Consent was obtained in 40 cases (83%), organ retrieval was performed in 28 cases, and 40 kidneys were transplanted from 21 donors (75% retrieval rate). At 1-year follow-up, 36 recipients had a functioning graft (4 had returned to dialysis), with a mean serum creatinine of 1.59 ± 0.92 mg% (90% graft survival). Outcomes after transplantation showed serum creatinine levels (mg%) of 1.41 ± 0.83 at 2 years (n=26), 1.48 ± 0.99 at 3 years (n=16), 1.07 ± 1.06 at 4 years (n=7), and 1.12 ± 0.31 at 5 years (n=5). One patient died of multiple myeloma at 3 years. The MDA audit revealed an unutilized pool of 125 potential cases, 90 of whom were transported to hospitals and 35 were declared dead at the scene. CONCLUSIONS: Transplant outcomes were encouraging, suggesting that more intensive implementation of the program may increase the number of kidneys transplanted, thus shortening recipient waiting lists.


Subject(s)
Kidney Transplantation , Humans , Israel/epidemiology , Creatinine , Retrospective Studies , Death
10.
Cogn Affect Behav Neurosci ; 23(3): 645-665, 2023 06.
Article in English | MEDLINE | ID: mdl-37316611

ABSTRACT

Expectations can inform fast, accurate decisions. But what informs expectations? Here we test the hypothesis that expectations are set by dynamic inference from memory. Participants performed a cue-guided perceptual decision task with independently varying memory and sensory evidence. Cues established expectations by reminding participants of past stimulus-stimulus pairings, which predicted the likely target in a subsequent noisy image stream. Participants' responses used both memory and sensory information, in accordance with their relative reliability. Formal model comparison showed that the sensory inference was best explained when its parameters were set dynamically at each trial by evidence sampled from memory. Supporting this model, neural pattern analysis revealed that responses to the probe were modulated by the specific content and fidelity of memory reinstatement that occurred before the probe appeared. Together, these results suggest that perceptual decisions arise from the continuous sampling of memory and sensory evidence.


Subject(s)
Cues , Memory , Humans , Reproducibility of Results
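
The claim above that responses used memory and sensory information "in accordance with their relative reliability" corresponds to standard precision-weighted evidence combination. A minimal sketch with made-up numbers (not the paper's fitted model), treating each source as a Gaussian estimate of the decision variable:

```python
def precision_weighted_combination(mu_memory, var_memory,
                                   mu_sensory, var_sensory):
    """Combine two noisy estimates by inverse-variance (precision) weighting."""
    w_mem = (1.0 / var_memory) / (1.0 / var_memory + 1.0 / var_sensory)
    mu_combined = w_mem * mu_memory + (1.0 - w_mem) * mu_sensory
    var_combined = 1.0 / (1.0 / var_memory + 1.0 / var_sensory)
    return mu_combined, var_combined

# A reliable memory cue and a noisy sensory stream: the combined estimate
# leans toward memory.
print(precision_weighted_combination(mu_memory=1.0, var_memory=0.5,
                                     mu_sensory=-0.2, var_sensory=2.0))
```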
11.
Neuron ; 111(10): 1526-1530, 2023 05 17.
Article in English | MEDLINE | ID: mdl-37100054

ABSTRACT

Neuroscience, cognitive science, and computer science are increasingly benefiting from their interactions with one another. This could be accelerated by directly sharing computational models across the disparate modeling software used in each field. We describe a Model Description Format designed to meet this challenge.


Subject(s)
Cognitive Neuroscience , Neurosciences , Software , Machine Learning
12.
J Exp Psychol Gen ; 152(9): 2695-2702, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37079827

ABSTRACT

Delayed gratification is an important focus of research, given its potential relationship to forms of behavior, such as savings, susceptibility to addiction, and pro-social behaviors. The COVID-19 pandemic may be one of the most consequential recent examples of this phenomenon, with people's willingness to delay gratification affecting their willingness to socially distance themselves. COVID-19 also provides a naturalistic context by which to evaluate the ecological validity of delayed gratification. This article outlines four large-scale online experiments (total N = 12,906) where we ask participants to perform Money Earlier or Later (MEL) decisions (e.g., $5 today vs. $10 tomorrow) and to also report stress measures and pandemic mitigation behaviors. We found that stress increases impulsivity and that less stressed and more patient individuals socially distanced more throughout the pandemic. These results help resolve longstanding theoretical debates in the MEL literature as well as provide policymakers with scientific evidence that can help inform response strategies in the future. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
COVID-19 , Humans , Pandemics , Impulsive Behavior , Social Behavior , Forecasting , Choice Behavior/physiology
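
A Money Earlier or Later choice of the kind described above (e.g., $5 today vs. $10 tomorrow) is commonly modeled with hyperbolic discounting, where larger discount rates k correspond to more impulsive choices. A minimal sketch with illustrative discount rates, not the values estimated in these studies:

```python
def discounted_value(amount, delay_days, k):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay_days)

def choose_later(amount_now, amount_later, delay_days, k):
    """True if the delayed option has the higher subjective value."""
    return discounted_value(amount_later, delay_days, k) > amount_now

# A patient decision-maker (k = 0.05) waits for $10 tomorrow;
# a more impulsive one (k = 1.5) takes $5 today.
print(choose_later(5, 10, 1, k=0.05))  # True
print(choose_later(5, 10, 1, k=1.5))   # False
```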
13.
Isr Med Assoc J ; 24(8): 524-528, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35972013

ABSTRACT

BACKGROUND: Changes accommodating requirements of religious authorities in Israel resulted in the Brain and Respiratory Death Determination Law (BRDDL), which came into effect in 2009. These included considering patient wishes regarding the brain respiratory death determination (BRDD), mandatory performance of apnea and ancillary testing, establishment of an accreditation committee, and accreditation required for physicians performing BRDD. OBJECTIVES: To assess the impact of the legislation from 2010-2019. METHODS: Data collected included the number of formal BRDDs and accredited physicians. Obstacles to declaring brain death and interventions applied were identified. RESULTS: Obstacles included lack of trained physicians to perform BRDD and interpret ancillary test results, inability to perform apnea or ancillary testing, and non-approach to next-of-kin objecting to BRDD. Interventions included physician training courses, additional ancillary test options, and legal interpretation of patient wishes for non-determination of BRD. As a result, the number of non-determinations related to next-of-kin objecting decreased (26 in 2010 to 5 in 2019), inability to perform apnea or ancillary testing decreased (33 in 2010 to 2 in 2019), and number of physicians receiving accreditation increased (210 in 2010 to 456 in 2019). Last, the consent rate for organ donation increased from 49% to 60% in 2019. CONCLUSIONS: The initial decrease in BRDDs has reversed, thus enabling more approaches for organ donation. The increased consent rate may reflect in part the support of the rabbinate and confidence of the general public that BRDD is performed and monitored according to strict criteria.


Subject(s)
Brain Death , Tissue and Organ Procurement , Apnea/diagnosis , Brain , Brain Death/diagnosis , Humans , Israel
14.
J Neurosci ; 42(29): 5730-5744, 2022 07 20.
Article in English | MEDLINE | ID: mdl-35688627

ABSTRACT

In patch foraging tasks, animals must decide whether to remain with a depleting resource or to leave it in search of a potentially better source of reward. In such tasks, animals consistently follow the general predictions of optimal foraging theory (the marginal value theorem; MVT): to leave a patch when the reward rate in the current patch depletes to the average reward rate across patches. Prior studies implicate an important role for the anterior cingulate cortex (ACC) in foraging decisions based on MVT: within single trials, ACC activity increases immediately preceding foraging decisions, and across trials, these dynamics are modulated as the value of staying in the patch depletes to the average reward rate. Here, we test whether these activity patterns reflect dynamic encoding of decision variables and whether these signals are directly involved in decision-making. We developed a leaky accumulator model based on the MVT that generates estimates of decision variables within and across trials, and tested model predictions against ACC activity recorded from male rats performing a patch foraging task. Model-predicted changes in MVT decision variables closely matched rat ACC activity. Next, we pharmacologically inactivated ACC in male rats to test the contribution of these signals to decision-making. ACC inactivation had a profound effect on rats' foraging decisions and response times (RTs), yet rats still followed the MVT decision rule. These findings indicate that the ACC encodes foraging-related variables for reasons unrelated to patch-leaving decisions. SIGNIFICANCE STATEMENT The ability to make adaptive patch-foraging decisions, to remain with a depleting resource or search for better alternatives, is critical to animal well-being. Previous studies have found that anterior cingulate cortex (ACC) activity is modulated at different points in the foraging decision process, raising questions about whether the ACC guides ongoing decisions or serves a more general purpose of regulating cognitive control. To investigate the function of the ACC in foraging, the present study developed a dynamic model of behavior and neural activity, and tested model predictions using recordings and inactivation of ACC. Findings revealed that ACC continuously signals decision variables but that these signals are more likely used to monitor and regulate ongoing processes than to guide foraging decisions.


Subject(s)
Decision Making , Gyrus Cinguli , Animals , Decision Making/physiology , Gyrus Cinguli/physiology , Male , Rats , Reward
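
A leaky accumulator of the general kind described above, which integrates the gap between the environment's average reward rate and the current patch's reward and triggers leaving at a threshold, can be sketched as follows. This is a generic illustration with arbitrary parameters, not the authors' fitted model:

```python
import numpy as np

def simulate_patch_leaving(patch_rewards, avg_rate, leak=0.2, gain=1.0,
                           threshold=3.0):
    """Leaky integration of (average reward rate - current reward).

    Activity rises as the patch depletes below the environment average;
    crossing `threshold` triggers a leave decision. Returns the harvest
    index at which the agent leaves (or None if it never leaves).
    """
    activity = 0.0
    for t, reward in enumerate(patch_rewards):
        drive = gain * (avg_rate - reward)
        activity = (1.0 - leak) * activity + drive
        activity = max(activity, 0.0)      # keep activity non-negative
        if activity >= threshold:
            return t
    return None

depleting_patch = 10.0 * 0.8 ** np.arange(20)   # rewards shrink each harvest
print(simulate_patch_leaving(depleting_patch, avg_rate=4.0))
```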
15.
Nature ; 606(7912): 129-136, 2022 06.
Article in English | MEDLINE | ID: mdl-35589843

ABSTRACT

One of the most striking features of human cognition is the ability to plan. Two aspects of human planning stand out: its efficiency and flexibility. Efficiency is especially impressive because plans must often be made in complex environments, and yet people successfully plan solutions to many everyday problems despite having limited cognitive resources [1-3]. Standard accounts in psychology, economics and artificial intelligence have suggested that human planning succeeds because people have a complete representation of a task and then use heuristics to plan future actions in that representation [4-11]. However, this approach generally assumes that task representations are fixed. Here we propose that task representations can be controlled and that such control provides opportunities to quickly simplify problems and more easily reason about them. We propose a computational account of this simplification process and, in a series of preregistered behavioural experiments, show that it is subject to online cognitive control [12-14] and that people optimally balance the complexity of a task representation and its utility for planning and acting. These results demonstrate how strategically perceiving and conceiving problems facilitates the effective use of limited cognitive resources.


Subject(s)
Cognition , Executive Function , Efficiency , Heuristics , Humans , Models, Psychological
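
The proposed balance between a representation's complexity and its utility can be summarized in schematic form; the notation below is illustrative, not the paper's exact formalism:

```latex
% Schematic value-of-representation objective (illustrative notation):
% the agent selects a simplified construal c of the task, trading off the
% utility of planning with c against the cost of representing it.
\[
  c^{*} \;=\; \arg\max_{c}\; \Big[\, U\!\big(\pi_{c}\big) \;-\; \lambda\, C(c) \,\Big]
\]
% U(pi_c): expected utility of acting on the plan derived under construal c
% C(c):    representational complexity (e.g., number of task details included)
% lambda:  cost per unit of complexity
```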
16.
Neuroimage ; 257: 119295, 2022 08 15.
Article in English | MEDLINE | ID: mdl-35580808

ABSTRACT

Real-time fMRI (RT-fMRI) neurofeedback has been shown to be effective in treating neuropsychiatric disorders and holds tremendous promise for future breakthroughs, both with regard to basic science and clinical applications. However, the prevalence of its use has been hampered by computing hardware requirements, the complexity of setting up and running an experiment, and a lack of standards that would foster collaboration. To address these issues, we have developed RT-Cloud (https://github.com/brainiak/rt-cloud), a flexible, cloud-based, open-source Python software package for the execution of RT-fMRI experiments. RT-Cloud uses standardized data formats and adaptable processing streams to support and expand open science in RT-fMRI research and applications. Cloud computing is a key enabling technology for advancing RT-fMRI because it eliminates the need for on-premise technical expertise and high-performance computing; this allows installation, configuration, and maintenance to be automated and done remotely. Furthermore, the scalability of cloud computing makes it easier to deploy computationally-demanding multivariate analyses in real time. In this paper, we describe how RT-Cloud has been integrated with open standards, including the Brain Imaging Data Structure (BIDS) standard and the OpenNeuro database, how it has been applied thus far, and our plans for further development and deployment of RT-Cloud in the coming years.


Subject(s)
Cloud Computing , Neurofeedback , Humans , Magnetic Resonance Imaging , Software
17.
Cogn Sci ; 46(2): e13085, 2022 02.
Article in English | MEDLINE | ID: mdl-35146779

ABSTRACT

Applying machine learning algorithms to automatically infer relationships between concepts from large-scale collections of documents presents a unique opportunity to investigate at scale how human semantic knowledge is organized, how people use it to make fundamental judgments ("How similar are cats and bears?"), and how these judgments depend on the features that describe concepts (e.g., size, furriness). However, efforts to date have exhibited a substantial discrepancy between algorithm predictions and human empirical judgments. Here, we introduce a novel approach to generating embeddings for this purpose motivated by the idea that semantic context plays a critical role in human judgment. We leverage this idea by constraining the topic or domain from which documents used for generating embeddings are drawn (e.g., referring to the natural world vs. transportation apparatus). Specifically, we trained state-of-the-art machine learning algorithms using contextually-constrained text corpora (domain-specific subsets of Wikipedia articles, 50+ million words each) and showed that this procedure greatly improved predictions of empirical similarity judgments and feature ratings of contextually relevant concepts. Furthermore, we describe a novel, computationally tractable method for improving predictions of contextually-unconstrained embedding models based on dimensionality reduction of their internal representation to a small number of contextually relevant semantic features. By improving the correspondence between predictions derived automatically by machine learning methods using vast amounts of data and more limited, but direct empirical measurements of human judgments, our approach may help leverage the availability of online corpora to better understand the structure of human semantic representations and how people make judgments based on those.


Subject(s)
Machine Learning , Semantics , Algorithms , Humans
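
The two ingredients described above, predicting similarity judgments from concept embeddings and reducing embeddings to a small number of relevant dimensions, can be sketched generically as below. The embeddings and component counts are random stand-ins; the paper's pipeline trained embedding models on domain-specific Wikipedia subsets:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in embeddings: rows are concepts (e.g., "cat", "bear"), columns are
# dimensions from some pretrained embedding model.
embeddings = rng.normal(size=(50, 300))

def cosine_similarity(u, v):
    """Predicted similarity judgment between two concept embeddings."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(round(cosine_similarity(embeddings[0], embeddings[1]), 3))

# Dimensionality reduction to a handful of components, analogous in spirit
# to projecting general-purpose embeddings onto a small number of
# contextually relevant semantic features.
reduced = PCA(n_components=5).fit_transform(embeddings)
print(reduced.shape)  # (50, 5)
```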
18.
Behav Res Methods ; 54(2): 805-829, 2022 04.
Article in English | MEDLINE | ID: mdl-34357537

ABSTRACT

Experimental design is a key ingredient of reproducible empirical research. Yet, given the increasing complexity of experimental designs, researchers often struggle to implement ones that allow them to measure their variables of interest without confounds. SweetPea ( https://sweetpea-org.github.io/ ) is an open-source declarative language in Python, in which researchers can describe their desired experiment as a set of factors and constraints. The language leverages advances in areas of computer science to sample experiment sequences in an unbiased way. In this article, we provide an overview of SweetPea's capabilities, and demonstrate its application to the design of psychological experiments. Finally, we discuss current limitations of SweetPea, as well as potential applications to other domains of empirical research, such as neuroscience and machine learning.


Subject(s)
Language , Research Design , Computers , Humans , Machine Learning
19.
Psychol Rev ; 129(3): 564-585, 2022 04.
Article in English | MEDLINE | ID: mdl-34383523

ABSTRACT

Cognitive fatigue and boredom are two phenomenological states that reflect overt task disengagement. In this article, we present a rational analysis of the temporal structure of controlled behavior, which provides a formal account of these phenomena. We suggest that in controlling behavior, the brain faces competing behavioral and computational imperatives, and must balance them by tracking their opportunity costs over time. We use this analysis to flesh out previous suggestions that feelings associated with subjective effort, like cognitive fatigue and boredom, are the phenomenological counterparts of these opportunity cost measures, instead of reflecting the depletion of resources as has often been assumed. Specifically, we propose that both fatigue and boredom reflect the competing value of particular options that require foregoing immediate reward but can improve future performance: Fatigue reflects the value of offline computation (internal to the organism) to improve future decisions, while boredom signals the value of exploration (external in the world). We demonstrate that these accounts provide a mechanistically explicit and parsimonious account for a wide array of findings related to cognitive control, integrating and reimagining them under a single, formally rigorous framework. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Boredom , Reward , Brain , Cognition , Emotions , Humans
20.
Psychol Rev ; 128(5): 879-912, 2021 10.
Article in English | MEDLINE | ID: mdl-34516148

ABSTRACT

To make informed decisions in natural environments that change over time, humans must update their beliefs as new observations are gathered. Studies exploring human inference as a dynamical process that unfolds in time have focused on situations in which the statistics of observations are history-independent. Yet, temporal structure is everywhere in nature and yields history-dependent observations. Do humans modify their inference processes depending on the latent temporal statistics of their observations? We investigate this question experimentally and theoretically using a change-point inference task. We show that humans adapt their inference process to fine aspects of the temporal structure in the statistics of stimuli. As such, humans behave qualitatively in a Bayesian fashion but, quantitatively, deviate from optimality. Perhaps more importantly, humans behave suboptimally in that their responses are not deterministic, but variable. We show that this variability itself is modulated by the temporal statistics of stimuli. To elucidate the cognitive algorithm that yields this behavior, we investigate a broad array of existing and new models that characterize different sources of suboptimal deviations from Bayesian inference. While models with "output noise" that corrupts the response-selection process are natural candidates, human behavior is best described by sampling-based inference models, in which the main ingredient is a compressed approximation of the posterior, represented through a modest set of random samples and updated over time. This result complements a growing literature on sample-based representation and learning in humans. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Adaptation, Physiological , Learning , Algorithms , Bayes Theorem , Humans
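
A sample-based posterior approximation of the kind favored above can be illustrated with a minimal bootstrap particle filter for a change-point environment: a modest set of samples over the latent mean is propagated through a hazard-rate transition and reweighted by each observation's likelihood. The hazard rate, noise levels, and prior below are arbitrary choices for illustration, not the fitted models from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, observation, hazard=0.1, obs_sd=1.0,
                         prior_sd=5.0):
    """One update of a compressed, sample-based posterior over the latent mean.

    With probability `hazard`, a particle's latent mean is redrawn from the
    prior (a change-point); particles are then reweighted by the likelihood
    of the new observation and resampled.
    """
    n = len(particles)
    change = rng.random(n) < hazard
    particles = np.where(change, rng.normal(0.0, prior_sd, n), particles)
    log_w = -0.5 * ((observation - particles) / obs_sd) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return rng.choice(particles, size=n, p=w)      # resample

particles = rng.normal(0.0, 5.0, size=20)          # modest set of samples
for obs in [0.2, 0.1, 3.9, 4.2, 4.0]:              # change-point mid-stream
    particles = particle_filter_step(particles, obs)
print(round(particles.mean(), 2))                  # tracks the new mean
```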