Results 1 - 3 of 3
1.
Cognition; 229: 105254, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36029552

ABSTRACT

The desirability bias (or wishful thinking effect) refers to cases in which a person's desire regarding an event's occurrence has an unwarranted, optimistic influence on expectations about that event. Past experimental tests of this effect have been dominated by paradigms in which uncertainty about the target event is purely stochastic, i.e., involving only aleatory uncertainty. In six studies, we detected desirability biases using two new paradigms in which people made predictions about events for which their uncertainty was both aleatory and epistemic. We tested and meta-analyzed the impact of two potential moderators: the strength of evidence and the level of stochasticity. In support of the first moderator hypothesis, desirability biases were larger when people were making predictions about events for which the evidence for the possible outcomes was of similar strength (vs. not of similar strength). Regarding the second moderator hypothesis, the overall results did not support the notion that the desirability bias would be larger when the target event was higher rather than lower in stochasticity, although there was some significant evidence for moderation in one of the two paradigms. The findings broaden the generalizability of the desirability bias in predictions, yet they also reveal boundaries to an account of how stochasticity might provide affordances for optimistically biased predictions.


Subject(s)
Uncertainty, Bias, Humans
2.
PLoS One; 16(2): e0245969, 2021.
Article in English | MEDLINE | ID: mdl-33571207

ABSTRACT

When making decisions involving risk, people may learn about the risk from descriptions or from experience. The description-experience gap refers to the difference in decision patterns driven by this discrepancy in learning format. Across two experiments, we investigated whether learning from description versus experience differentially affects the direction and the magnitude of a context effect in risky decision making. In Studies 1 and 2, a computerized game called the Decisions about Risk Task (DART) was used to measure people's risk-taking tendencies toward hazard stimuli that exploded probabilistically. The rate at which a context hazard caused harm was manipulated, while the rate at which a focal hazard caused harm was held constant. The format by which this information was learned was also manipulated: it was learned primarily by experience or by description. The results revealed that participants' behavior toward the focal hazard varied depending on what they had learned about the context hazard. Specifically, there were contrast effects in which participants were more likely to choose a risky behavior toward the focal hazard when the harm rate posed by the context hazard was high rather than low. Critically, these contrast effects were of similar strength irrespective of whether the risk information was learned from experience or from description. Participants' verbal assessments of risk likelihood also showed contrast effects, irrespective of learning format. Although risk information about a context hazard in DART does nothing to affect the objective expected value of risky versus safe behaviors toward focal hazards, it did affect participants' perceptions and behaviors, regardless of whether the information was learned from description or experience. Our findings suggest that context has a broad-based role in how people assess and make decisions about hazards.


Subject(s)
Decision Making, Risk-Taking, Female, Humans, Male, Probability, Young Adult
3.
Acta Psychol (Amst); 176: 39-46, 2017 May.
Article in English | MEDLINE | ID: mdl-28351001

ABSTRACT

People often estimate the average duration of several events (e.g., on average, how long it takes to drive from home to the office). While there is a great deal of research investigating estimates of duration for a single event, few studies have examined estimates when people must average across numerous stimuli or events. The current studies were designed to fill this gap by examining how people's estimates of average duration were influenced by the number of stimuli being averaged (i.e., the sample size). Based on research investigating the sample size bias, we predicted that participants' judgments of average duration would increase as the sample size increased. Across four studies, we demonstrated a sample size bias for estimates of average duration with different judgment types (numeric estimates and comparisons), study designs (between- and within-subjects), and paradigms (observing images and performing tasks). The results are consistent with the more general notion that psychological representations of magnitudes in one dimension (e.g., quantity) can influence representations of magnitudes in another dimension (e.g., duration).


Subject(s)
Judgment, Mental Processes, Sample Size, Statistics as Topic/methods, Bias, Female, Humans, Male, Retrospective Studies, Young Adult