Results 1 - 20 of 167
1.
Psychol Rev ; 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38635156

ABSTRACT

Perfectly rational decision making is almost always out of reach for people because their computational resources are limited. Instead, people may rely on computationally frugal heuristics that usually yield good outcomes. Although previous research has identified many such heuristics, discovering good heuristics and predicting when they will be used remain challenging. Here, we present a theoretical framework that allows us to use methods from machine learning to automatically derive the best heuristic to use in any given situation by considering how to make the best use of limited cognitive resources. To demonstrate the generalizability and accuracy of our method, we compare the heuristics it discovers against those used by people across a wide range of multi-attribute risky choice environments in a behavioral experiment that is an order of magnitude larger than any previous experiment of its type. Our method rediscovered known heuristics, identifying them as rational strategies for specific environments, and discovered novel heuristics that had been previously overlooked. Our results show that people adapt their decision strategies to the structure of the environment and generally make good use of their limited cognitive resources, although their strategy choices do not always fully exploit the structure of the environment. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
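The abstract does not spell out the discovered strategies, but the family of frugal heuristics it studies can be illustrated with a simple lexicographic rule for multi-attribute choice. The Python sketch below is a hypothetical example of such a strategy, not the paper's method; the options, attributes, and inspection order are invented.

```python
# A minimal sketch (not the paper's method): a lexicographic heuristic
# for multi-attribute choice. Attributes are inspected in order of
# importance; the first attribute that discriminates decides.

def lexicographic_choice(options, attribute_order, threshold=0.0):
    """Pick an option index by comparing attributes one at a time.

    options: list of dicts mapping attribute name -> value.
    attribute_order: attribute names, most important first (assumed known).
    threshold: minimum difference needed to count as discriminating.
    """
    candidates = list(range(len(options)))
    for attr in attribute_order:
        values = [options[i][attr] for i in candidates]
        best = max(values)
        # Keep only options within `threshold` of the best on this attribute.
        candidates = [i for i, v in zip(candidates, values) if best - v <= threshold]
        if len(candidates) == 1:
            return candidates[0]
    return candidates[0]  # tie: return the first surviving option

# Hypothetical example: two risky options described by two attributes.
options = [{"payoff": 10, "probability": 0.3},
           {"payoff": 6, "probability": 0.8}]
print(lexicographic_choice(options, ["probability", "payoff"]))  # -> 1
```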

2.
Psychon Bull Rev ; 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38366264

ABSTRACT

How people represent categories and how those representations change over time is a basic question about human cognition. Previous research has demonstrated that people categorize objects by comparing them to category prototypes in early stages of learning but consider the individual exemplars within each category in later stages. However, these results do not seem consistent with findings in the memory literature showing that it becomes progressively easier to access representations of general knowledge than representations of specific items over time. Why would one rely more on exemplar-based representations in later stages of categorization when it is more difficult to access these exemplars in memory? To reconcile these incongruities, our study proposed that previous findings on categorization are a result of human participants adapting to a specific experimental environment, in which the probability of encountering an object stays uniform over time. In a more realistic environment, however, one would be less likely to encounter the same object after a long time has passed. Confirming our hypothesis, we demonstrated that under environmental statistics identical to those of typical categorization experiments, the advantage of exemplar-based categorization over prototype-based categorization increases over time, replicating previous research in categorization. In contrast, under realistic environmental statistics simulated by our experiments, the advantage of exemplar-based categorization over prototype-based categorization decreases over time. A second set of experiments replicated our results, while additionally demonstrating that human categorization is sensitive to the category structure presented to the participants. These results provide converging evidence that human categorization adapts appropriately to environmental statistics.
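To make the prototype/exemplar contrast concrete, here is a minimal sketch in the spirit of standard models such as the generalized context model; the stimuli and the sensitivity parameter are hypothetical, and the paper's actual models may differ.

```python
import math

# A minimal sketch (not the paper's model): exemplar- vs. prototype-based
# similarity for a probe stimulus. Stimuli and the sensitivity c are
# hypothetical.

def similarity(x, y, c=1.0):
    """Exponentially decaying similarity in Euclidean distance."""
    return math.exp(-c * math.dist(x, y))

def exemplar_score(probe, exemplars, c=1.0):
    """Summed similarity to every stored exemplar of a category."""
    return sum(similarity(probe, e, c) for e in exemplars)

def prototype_score(probe, exemplars, c=1.0):
    """Similarity to the category prototype (the exemplar mean)."""
    proto = [sum(dim) / len(exemplars) for dim in zip(*exemplars)]
    return similarity(probe, proto, c)

category_a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
probe = (0.2, 0.2)
print(exemplar_score(probe, category_a))
print(prototype_score(probe, category_a))
```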

3.
Behav Brain Sci ; 47: e65, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38311457

ABSTRACT

Commentaries on the target article offer diverse perspectives on integrative experiment design. Our responses engage three themes: (1) Disputes of our characterization of the problem, (2) skepticism toward our proposed solution, and (3) endorsement of the solution, with accompanying discussions of its implementation in existing work and its potential for other domains. Collectively, the commentaries enhance our confidence in the promise and viability of integrative experiment design, while highlighting important considerations about how it is used.


Subject(s)
Dissent and Disputes
4.
J Exp Psychol Gen ; 153(3): 573-589, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38386385

ABSTRACT

Shepard's universal law of generalization is a remarkable hypothesis about how intelligent organisms should perceive similarity. In its broadest form, the universal law states that the level of perceived similarity between a pair of stimuli should decay as a concave function of their distance when embedded in an appropriate psychological space. While extensively studied, evidence in support of the universal law has relied on low-dimensional stimuli and small stimulus sets that are very different from their real-world counterparts. This is largely because pairwise comparisons, as required for similarity judgments, scale quadratically in the number of stimuli. We provide strong evidence for the universal law in a naturalistic high-dimensional regime by analyzing an existing data set of 214,200 human similarity judgments and a newly collected data set of 390,819 human generalization judgments (N = 2,406 U.S. participants) across three sets of natural images. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Generalization, Psychological; Intelligence; Humans; Judgment
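The concave decay the universal law describes is classically given an exponential form. A minimal sketch, with a hypothetical decay rate and distances:

```python
import math

# A minimal sketch of Shepard's exponential generalization gradient:
# similarity decays as a concave function of psychological distance.
# The decay rate k and the example distances are hypothetical.

def generalization(distance, k=1.0):
    """Exponential form of the universal law: s(d) = exp(-k * d)."""
    return math.exp(-k * distance)

for d in (0.0, 0.5, 1.0, 2.0):
    print(f"distance {d:.1f} -> similarity {generalization(d):.3f}")
```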
5.
Psychol Sci ; 35(1): 55-71, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38175943

ABSTRACT

We often use cues from our environment when we get stuck searching our memories, but prior research has failed to show benefits of cuing with other, randomly selected list items during memory search. What accounts for this discrepancy? We proposed that cues' content critically determines their effectiveness and sought to select the right cues by building a computational model of how cues affect memory search. Participants (N = 195 young adults from the United States) recalled significantly more items when receiving our model's best (vs. worst) cue. Our model provides an account of why some cues better aid recall: Effective cues activate contexts most similar to the remaining items' contexts, facilitating recall in an unsearched area of memory. We discuss our contributions in relation to prominent theories about the effect of external cues.


Subject(s)
Cues; Mental Recall; Young Adult; Humans; Mental Recall/physiology
6.
Psychol Rev ; 131(1): 194-230, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37589706

ABSTRACT

People use language to influence others' beliefs and actions. Yet models of communication have diverged along these lines, formalizing the speaker's objective in terms of either the listener's beliefs or actions. We argue that this divergence lies at the root of a longstanding controversy over the Gricean maxims of truthfulness and relevance. We first bridge the divide by introducing a speaker model which considers both the listener's beliefs (epistemic utility) and their actions (decision-theoretic utility). We show that formalizing truthfulness as an epistemic utility and relevance as a decision-theoretic utility reconciles the tension between them, readily explaining puzzles such as context-dependent standards of truthfulness. We then test a set of novel predictions generated by our model. We introduce a new signaling game which decouples utterances' truthfulness and relevance, then use it to conduct a pair of experiments. Our first experiment demonstrates that participants jointly maximize epistemic and decision-theoretic utility, rather than either alone. Our second experiment shows that when the two conflict, participants make a graded tradeoff rather than prioritizing one over the other. These results demonstrate that human communication cannot be reduced to influencing beliefs or actions alone. Taken together, our work provides a new foundation for grounding rational communication not only in what we believe, but in what those beliefs lead us to do. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Communication; Language; Humans
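A rough sketch of the combined objective described above: utterances are scored by a weighted sum of an epistemic term (the probability a literal listener assigns to the true state) and a decision-theoretic term (the payoff of the action that listener would take). The states, semantics, payoffs, and weighting below are hypothetical, not the paper's experimental materials.

```python
import math

# A minimal sketch (assumptions, not the paper's exact model) of a speaker
# that jointly maximizes epistemic and decision-theoretic utility.

STATES = ["rain", "sun"]
# Literal semantics: which states each utterance is true of (hypothetical).
MEANINGS = {"it may rain": {"rain"}, "looks clear": {"sun"}}

def listener_belief(utterance):
    """Uniform prior renormalized over states consistent with the utterance."""
    consistent = [s for s in STATES if s in MEANINGS[utterance]]
    return {s: (1 / len(consistent) if s in consistent else 0.0) for s in STATES}

# Payoff of each action in each state (hypothetical decision problem).
PAYOFF = {("umbrella", "rain"): 1.0, ("umbrella", "sun"): 0.4,
          ("no umbrella", "rain"): 0.0, ("no umbrella", "sun"): 1.0}

def speaker_utility(utterance, true_state, lam=0.5):
    belief = listener_belief(utterance)
    # Epistemic utility: log-probability the listener assigns to the truth.
    epistemic = math.log(belief[true_state]) if belief[true_state] > 0 else -1e9
    # Decision-theoretic utility: payoff of the listener's best action.
    action = max(["umbrella", "no umbrella"],
                 key=lambda a: sum(belief[s] * PAYOFF[(a, s)] for s in STATES))
    return (1 - lam) * epistemic + lam * PAYOFF[(action, true_state)]

for u in MEANINGS:
    print(u, speaker_utility(u, "rain"))
```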
7.
Psychol Rev ; 131(3): 781-811, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37732967

ABSTRACT

Most of us have experienced moments when we could not recall some piece of information but felt that it was just out of reach. Research in metamemory has established that such judgments are often accurate; but what adaptive purpose do they serve? Here, we present an optimal model of how metacognitive monitoring (feeling of knowing) could dynamically inform metacognitive control of memory (the direction of retrieval efforts). In two experiments, we find that, consistent with the optimal model, people report having a stronger memory for targets they are likely to recall and direct their search efforts accordingly, cutting off the search when it is unlikely to succeed and prioritizing the search for stronger memories. Our results suggest that metamemory is indeed adaptive and motivate the development of process-level theories that account for the dynamic interplay between monitoring and control. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Metacognition; Humans; Memory; Mental Recall; Judgment; Emotions
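The stopping logic in the abstract can be caricatured in a few lines: continue retrieval while the expected benefit of one more attempt exceeds its cost. This toy sketch is an editorial illustration with hypothetical numbers, not the paper's optimal model.

```python
# A toy sketch (my illustration, not the paper's model) of the
# optimal-stopping intuition behind feeling-of-knowing judgments:
# keep searching memory while the expected payoff of one more retrieval
# attempt exceeds its time cost. All values are hypothetical.

def should_keep_searching(p_recall, reward, cost_per_attempt):
    return p_recall * reward > cost_per_attempt

print(should_keep_searching(0.6, 1.0, 0.2))   # strong memory: keep going
print(should_keep_searching(0.05, 1.0, 0.2))  # weak memory: cut off search
```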
8.
Psychol Rev ; 130(6): 1457-1491, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37917444

ABSTRACT

People's decisions often deviate from classical notions of rationality, incurring costs to themselves and society. One way to reduce the costs of poor decisions is to redesign the decision problems people face to encourage better choices. While often subtle, these nudges can have dramatic effects on behavior and are increasingly popular in public policy, health care, and marketing. Although nudges are often designed with psychological theories in mind, they are typically not formalized in computational terms and their effects can be hard to predict. As a result, designing nudges can be difficult and time-consuming. To address this challenge, we propose a computational framework for understanding and predicting the effects of nudges. Our approach builds on recent work modeling human decision making as adaptive use of limited cognitive resources, an approach called resource-rational analysis. In our framework, nudges change the metalevel problem the agent faces, that is, the problem of how to make a decision. This changes the optimal sequence of cognitive operations an agent should execute, which in turn influences their behavior. We show that models based on this framework can account for known effects of nudges based on default options, suggested alternatives, and information highlighting. In each case, we validate the model's predictions in an experimental process-tracing paradigm. We then show how the framework can be used to automatically construct optimal nudges, and demonstrate that these nudges improve people's decisions more than intuitive heuristic approaches. Overall, our results show that resource-rational analysis is a promising framework for formally characterizing and constructing nudges. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Choice Behavior; Decision Making; Humans; Heuristics
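One way to picture a nudge changing the metalevel problem: a default option lets the agent trade the cost of deliberating against the value of simply accepting the default. The sketch below is a toy illustration with hypothetical values, far simpler than the paper's resource-rational models.

```python
# A toy sketch (not the paper's framework) of how a default option changes
# the metalevel problem: compare accepting the default without thinking
# against deliberating at a cognitive cost. All numbers are hypothetical.

def best_metalevel_action(default_value, option_values, thinking_cost):
    deliberate = max(option_values) - thinking_cost
    accept_default = default_value
    return ("accept default" if accept_default >= deliberate else "deliberate",
            max(accept_default, deliberate))

# A good default makes skipping deliberation optimal...
print(best_metalevel_action(0.9, [0.2, 0.9, 1.0], thinking_cost=0.3))
# ...while a poor default makes deliberation worth its cost.
print(best_metalevel_action(0.1, [0.2, 0.9, 1.0], thinking_cost=0.3))
```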
9.
Nat Hum Behav ; 7(11): 1855-1868, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37985914

ABSTRACT

The ability of humans to create and disseminate culture is often credited as the single most important factor of our success as a species. In this Perspective, we explore the notion of 'machine culture', culture mediated or generated by machines. We argue that intelligent machines simultaneously transform the cultural evolutionary processes of variation, transmission and selection. Recommender algorithms are altering social learning dynamics. Chatbots are forming a new mode of cultural transmission, serving as cultural models. Furthermore, intelligent machines are evolving as contributors in generating cultural traits, from game strategies and visual art to scientific results. We provide a conceptual framework for studying the present and anticipated future impact of machines on cultural evolution, and present a research agenda for the study of machine culture.


Subject(s)
Cultural Evolution; Hominidae; Humans; Animals; Culture; Learning
10.
Nat Hum Behav ; 7(12): 2084-2098, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37845518

ABSTRACT

Large-scale social networks are thought to contribute to polarization by amplifying people's biases. However, the complexity of these technologies makes it difficult to identify the mechanisms responsible and evaluate mitigation strategies. Here we show under controlled laboratory conditions that transmission through social networks amplifies motivational biases on a simple artificial decision-making task. Participants in a large behavioural experiment showed increased rates of biased decision-making when part of a social network relative to asocial participants in 40 independently evolving populations. Drawing on ideas from Bayesian statistics, we identify a simple adjustment to content-selection algorithms that is predicted to mitigate bias amplification by generating samples of perspectives from within an individual's network that are more representative of the wider population. In two large experiments, this strategy was effective at reducing bias amplification while maintaining the benefits of information sharing. Simulations show that this algorithm can also be effective in more complex networks.


Subject(s)
Algorithms; Social Networking; Humans; Bayes Theorem; Bias; Motivation
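The mitigation described above, generating samples of perspectives that better represent the wider population, resembles importance-style reweighting. A minimal sketch under that assumption, with hypothetical proportions:

```python
import random

# A minimal sketch (an assumption-laden illustration, not the paper's
# algorithm): resample perspectives from a user's biased ego network with
# weights population_share / network_share, so the sample shown to the
# user better matches the wider population. Proportions are hypothetical.

random.seed(0)

population_share = {"A": 0.5, "B": 0.5}   # true mix of perspectives
network_share = {"A": 0.8, "B": 0.2}      # biased mix in the ego network
network_posts = ["A"] * 80 + ["B"] * 20   # posts available to sample

weights = [population_share[p] / network_share[p] for p in network_posts]
sample = random.choices(network_posts, weights=weights, k=1000)
print(sample.count("A") / len(sample))  # close to 0.5 rather than 0.8
```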
11.
Psychol Sci ; 34(11): 1281-1292, 2023 11.
Article in English | MEDLINE | ID: mdl-37878525

ABSTRACT

Planning underpins the impressive flexibility of goal-directed behavior. However, even when planning, people can display surprising rigidity in how they think about problems (e.g., "functional fixedness") that leads them astray. How can our capacity for behavioral flexibility be reconciled with our susceptibility to conceptual inflexibility? We propose that these tendencies reflect avoidance of two cognitive costs: the cost of representing task details and the cost of switching between representations. To test this hypothesis, we developed a novel paradigm that affords participants opportunities to choose different families of simplified representations to plan. In two preregistered, online studies (Ns = 377 and 294 adults), we found that participants' optimal behavior, suboptimal behavior, and reaction time were explained by a computational model that formalized people's avoidance of representational complexity and switching. These results demonstrate how the selection of simplified, rigid representations leads to the otherwise puzzling combination of flexibility and inflexibility observed in problem solving.


Subject(s)
Cognition; Problem Solving; Adult; Humans; Reaction Time
12.
Behav Brain Sci ; 46: e275, 2023 09 28.
Article in English | MEDLINE | ID: mdl-37766644

ABSTRACT

The success of models of human behavior based on Bayesian inference over logical formulas or programs is taken as evidence that people employ a "language-of-thought" that has similarly discrete and compositional structure. We argue that this conclusion problematically crosses levels of analysis, identifying representations at the algorithmic level based on inductive biases at the computational level.


Subject(s)
Language; Humans; Bayes Theorem; Bias
13.
PLoS Comput Biol ; 19(8): e1011316, 2023 08.
Article in English | MEDLINE | ID: mdl-37624841

ABSTRACT

The ability to acquire abstract knowledge is a hallmark of human intelligence and is believed by many to be one of the core differences between humans and neural network models. Agents can be endowed with an inductive bias towards abstraction through meta-learning, where they are trained on a distribution of tasks that share some abstract structure that can be learned and applied. However, because neural networks are hard to interpret, it can be difficult to tell whether agents have learned the underlying abstraction, or alternatively statistical patterns that are characteristic of that abstraction. In this work, we compare the performance of humans and agents in a meta-reinforcement learning paradigm in which tasks are generated from abstract rules. We define a novel methodology for building "task metamers" that closely match the statistics of the abstract tasks but use a different underlying generative process, and evaluate performance on both abstract and metamer tasks. We find that humans perform better at abstract tasks than metamer tasks whereas common neural network architectures typically perform worse on the abstract tasks than the matched metamers. This work provides a foundation for characterizing differences between humans and machine learning that can be used in future work towards developing machines with more human-like behavior.


Subject(s)
Concept Formation; Machine Learning; Humans; Intelligence; Knowledge; Neural Networks, Computer
14.
Cogn Sci ; 47(8): e13330, 2023 08.
Article in English | MEDLINE | ID: mdl-37641424

ABSTRACT

We study human performance in two classical NP-hard optimization problems: Set Cover and Maximum Coverage. We suggest that Set Cover and Max Coverage are related to means selection problems that arise in human problem-solving and in pursuing multiple goals: The relationship between goals and means is expressed as a bipartite graph where edges between means and goals indicate which means can be used to achieve which goals. While these problems are believed to be computationally intractable in general, they become more tractable when the structure of the network resembles a tree. Thus, our main prediction is that people should perform better with goal systems that are more tree-like. We report three behavioral experiments which confirm this prediction. Our results suggest that combinatorial parameters that are instrumental to algorithm design can also be useful for understanding when and why people struggle to choose between multiple means to achieve multiple goals.


Subject(s)
Algorithms; Goals; Humans; Problem Solving
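For reference, the classic greedy approximation for Maximum Coverage, which repeatedly picks the means covering the most uncovered goals, can be stated in a few lines. The goal/means structure below is a hypothetical example; the paper studies human rather than algorithmic performance.

```python
# A minimal sketch of the classic greedy approximation for Maximum
# Coverage: repeatedly choose the means (set) that covers the most
# still-uncovered goals. The goal/means structure here is hypothetical.

def greedy_max_coverage(means_to_goals, k):
    """Pick up to k means, greedily maximizing newly covered goals."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(means_to_goals, key=lambda m: len(means_to_goals[m] - covered))
        if not means_to_goals[best] - covered:
            break  # nothing new can be covered
        chosen.append(best)
        covered |= means_to_goals[best]
    return chosen, covered

means = {"m1": {"g1", "g2", "g3"}, "m2": {"g3", "g4"}, "m3": {"g4", "g5"}}
print(greedy_max_coverage(means, k=2))  # -> (['m1', 'm3'], all five goals)
```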
15.
PLoS Comput Biol ; 19(6): e1011087, 2023 06.
Article in English | MEDLINE | ID: mdl-37262023

ABSTRACT

Human behavior emerges from planning over elaborate decompositions of tasks into goals, subgoals, and low-level actions. How are these decompositions created and used? Here, we propose and evaluate a normative framework for task decomposition based on the simple idea that people decompose tasks to reduce the overall cost of planning while maintaining task performance. Analyzing 11,117 distinct graph-structured planning tasks, we find that our framework justifies several existing heuristics for task decomposition and makes predictions that can be distinguished from two alternative normative accounts. We report a behavioral study of task decomposition (N = 806) that uses 30 randomly sampled graphs, a larger and more diverse set than that of any previous behavioral study on this topic. We find that human responses are more consistent with our framework for task decomposition than alternative normative accounts and are most consistent with a heuristic (betweenness centrality) that is justified by our approach. Taken together, our results suggest the computational cost of planning is a key principle guiding the intelligent structuring of goal-directed behavior.


Subject(s)
Heuristics; Humans; Goals; Behavior
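Betweenness centrality, the heuristic the study finds most consistent with human responses, is easy to compute with standard tools. A minimal sketch on a hypothetical task graph:

```python
import networkx as nx

# A minimal sketch (the graph is hypothetical): betweenness centrality
# ranks states by how often they lie on shortest paths, so high-scoring
# states are natural subgoal candidates for task decomposition.

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 5), (3, 6), (6, 7)])

centrality = nx.betweenness_centrality(G)
print(max(centrality, key=centrality.get))  # node 3, the graph's bottleneck
```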
16.
Cognition ; 237: 105452, 2023 08.
Article in English | MEDLINE | ID: mdl-37054490

ABSTRACT

When we look at someone's face, we rapidly and automatically form robust impressions of how trustworthy they appear. Yet while people's impressions of trustworthiness show a high degree of reliability and agreement with one another, evidence for the accuracy of these impressions is weak. How do such appearance-based biases survive in the face of weak evidence? We explored this question using an iterated learning paradigm, in which memories relating (perceived) facial and behavioral trustworthiness were passed through many generations of participants. Stimuli consisted of pairs of computer-generated people's faces and exact dollar amounts that those fictional people shared with partners in a trust game. Importantly, the faces were designed to vary considerably along a dimension of perceived facial trustworthiness. Each participant learned (and then reproduced from memory) some mapping between the faces and the dollar amounts shared (i.e., between perceived facial and behavioral trustworthiness). Much like in the game of 'telephone', their reproductions then became the training stimuli initially presented to the next participant, and so on for each transmission chain. Critically, the first participant in each chain observed some mapping between perceived facial and behavioral trustworthiness, including positive linear, negative linear, nonlinear, and completely random relationships. Strikingly, participants' reproductions of these relationships showed a pattern of convergence in which more trustworthy looks were associated with more trustworthy behavior, even when there was no relationship between looks and behavior at the start of the chain. These results demonstrate the power of facial stereotypes, and the ease with which they can be propagated to others, even in the absence of any reliable origin of these stereotypes.


Subject(s)
Facial Expression; Trust; Humans; Reproducibility of Results; Learning; Conditioning, Operant
17.
Cogn Sci ; 47(4): e13262, 2023 04.
Article in English | MEDLINE | ID: mdl-37051879

ABSTRACT

Humans can learn complex functional relationships between variables from small amounts of data. In doing so, they draw on prior expectations about the form of these relationships. In three experiments, we show that people learn to adjust these expectations through experience, learning about the likely forms of the functions they will encounter. Previous work has used Gaussian processes (a statistical framework that extends Bayesian nonparametric approaches to regression) to model human function learning. We build on this work, modeling the process of learning to learn functions as a form of hierarchical Bayesian inference about the Gaussian process hyperparameters.


Subject(s)
Learning; Models, Psychological; Humans; Bayes Theorem; Normal Distribution
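As a point of reference for the modeling approach, here is a minimal sketch of fitting Gaussian process hyperparameters by maximizing the marginal likelihood with scikit-learn. The data are synthetic, and this point-estimate fit is a stand-in for, not a reproduction of, the paper's hierarchical Bayesian treatment.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A minimal sketch (synthetic data; a point-estimate stand-in for the
# paper's hierarchical Bayesian inference): fit GP hyperparameters by
# maximizing the marginal likelihood. The learned kernel parameters play
# the role of "expectations about function form".

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(20, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(20)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
gp.fit(X, y)  # optimizes the length scale via marginal likelihood
print(gp.kernel_)  # the fitted expectation about function smoothness
```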
18.
J Exp Psychol Gen ; 152(9): 2695-2702, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37079827

ABSTRACT

Delayed gratification is an important focus of research, given its potential relationship to forms of behavior, such as savings, susceptibility to addiction, and pro-social behaviors. The COVID-19 pandemic may be one of the most consequential recent examples of this phenomenon, with people's willingness to delay gratification affecting their willingness to socially distance themselves. COVID-19 also provides a naturalistic context by which to evaluate the ecological validity of delayed gratification. This article outlines four large-scale online experiments (total N = 12,906) where we ask participants to perform Money Earlier or Later (MEL) decisions (e.g., $5 today vs. $10 tomorrow) and to also report stress measures and pandemic mitigation behaviors. We found that stress increases impulsivity and that less stressed and more patient individuals socially distanced more throughout the pandemic. These results help resolve longstanding theoretical debates in the MEL literature as well as provide policymakers with scientific evidence that can help inform response strategies in the future. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
COVID-19; Humans; Pandemics; Impulsive Behavior; Social Behavior; Forecasting; Choice Behavior/physiology
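MEL choices are conventionally modeled with hyperbolic discounting, V = A / (1 + kD); the abstract does not commit to a specific model, so the discount rate in this sketch is hypothetical.

```python
# A minimal sketch of the hyperbolic discounting model conventionally fit
# to Money Earlier or Later choices: V = A / (1 + k * D), where A is the
# amount, D the delay in days, and k an impulsivity parameter. The paper's
# abstract does not commit to this model; the k values are hypothetical.

def discounted_value(amount, delay_days, k):
    return amount / (1 + k * delay_days)

def choose(early, late, k):
    """Each option is (amount, delay_days); return the higher-valued one."""
    return early if discounted_value(*early, k) >= discounted_value(*late, k) else late

print(choose((5, 0), (10, 1), k=0.1))  # patient: takes $10 tomorrow
print(choose((5, 0), (10, 1), k=2.0))  # impulsive: takes $5 today
```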
19.
Proc Natl Acad Sci U S A ; 120(12): e2214840120, 2023 03 21.
Article in English | MEDLINE | ID: mdl-36913582

ABSTRACT

How will superhuman artificial intelligence (AI) affect human decision-making? And what will be the mechanisms behind this effect? We address these questions in a domain where AI already exceeds human performance, analyzing more than 5.8 million move decisions made by professional Go players over the past 71 years (1950 to 2021). To address the first question, we use a superhuman AI program to estimate the quality of human decisions across time, generating 58 billion counterfactual game patterns and comparing the win rates of actual human decisions with those of counterfactual AI decisions. We find that humans began to make significantly better decisions following the advent of superhuman AI. We then examine human players' strategies across time and find that novel decisions (i.e., previously unobserved moves) occurred more frequently and became associated with higher decision quality after the advent of superhuman AI. Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making.


Subject(s)
Artificial Intelligence; Decision Making; Humans
20.
Cogn Sci ; 47(1): e13232, 2023 01.
Article in English | MEDLINE | ID: mdl-36655981

ABSTRACT

Since the cognitive revolution, psychologists have developed formal theories of cognition by thinking about the mind as a computer. However, this metaphor is typically applied to individual minds. Humans rarely think alone; compared to other animals, humans are curiously dependent on stores of culturally transmitted skills and knowledge, and we are particularly good at collaborating with others. Rather than picturing the human mind as an isolated computer, we can imagine each mind as a node in a vast distributed system. Viewing human cognition through the lens of distributed systems motivates new questions about how humans share computation, when it makes sense to do so, and how we can build institutions to facilitate collaboration.


Subject(s)
Cognition; Metaphor; Animals; Humans