ABSTRACT
It is now well established that decision making can be susceptible to cognitive bias in a broad range of fields, and forensic science is no exception. Previously published research has revealed a bias blind spot in forensic science, whereby examiners do not recognise bias within their own domain. A survey of 101 forensic anthropology practitioners (n = 52) and students (n = 38) was undertaken to assess their level of awareness of cognitive bias and to investigate their attitudes towards cognitive bias within forensic anthropology. The results revealed that the forensic anthropology community had a high level of awareness of cognitive bias (~90%). Overall, ~89% expressed concerns about cognitive bias in the broad discipline of forensic science, in their own domain of forensic anthropology, and in the evaluative judgments they made in reconstruction activities, indicating a significant reduction in the bias blind spot. However, more than half of the participants believed that bias can be reduced by sheer force of will, and there was a lack of consensus about implementing blinding procedures or context management. These findings highlight the need to investigate empirically the feasibility of proposed mitigating strategies within the workflow of forensic anthropologists, and their capability to increase transparency in decision making.
Subject(s)
Attitude, Forensic Anthropology, Humans, Forensic Anthropology/methods, Surveys and Questionnaires, Male, Female, Bias, Cognition, Decision Making, Adult
ABSTRACT
ACADEMIC ABSTRACT: Prominent theories of belief and metacognition make different predictions about how people evaluate their biased beliefs. These predictions reflect different assumptions about (a) people's conscious belief regulation goals and (b) the mechanisms and constraints underlying belief change. I argue that people exhibit heterogeneity in how they evaluate their biased beliefs. Sometimes people are blind to their biases, sometimes people acknowledge and condone them, and sometimes people resent them. The observation that people adopt a variety of "metacognitive positions" toward their beliefs provides insight into people's belief regulation goals as well as into the ways that belief formation is free and constrained. The way that people relate to their beliefs illuminates why they hold those beliefs. Identifying how someone thinks about their belief is useful for changing their mind.
PUBLIC ABSTRACT: The same belief can be alternatively thought of as rational, careful, unfortunate, or an act of faith. These beliefs about one's beliefs are called "metacognitive positions." I review evidence that people hold at least four different metacognitive positions. For each position, I discuss what kinds of cognitive processes generated the belief and what role people's values and preferences played in belief formation. We can learn a lot about someone's belief from how they relate to that belief. Learning how someone relates to their belief is useful for identifying the best ways to try to change their mind.
ABSTRACT
Algorithmic bias occurs when algorithms incorporate biases from the human decisions on which they are trained. We find that people see more of their biases (e.g., age, gender, race) in the decisions of algorithms than in their own decisions. Research participants saw more bias in the decisions of algorithms trained on their decisions than in their own decisions, even when those decisions were the same and participants were incentivized to reveal their true beliefs. By contrast, participants saw as much bias in the decisions of algorithms trained on their decisions as in the decisions of other participants and of algorithms trained on the decisions of other participants. Cognitive psychological processes and motivated reasoning help explain why people see more of their biases in algorithms. Research participants most susceptible to the bias blind spot were most likely to see more bias in algorithms than in themselves. Participants were also more likely to perceive algorithms than themselves as having been influenced by irrelevant biasing attributes (e.g., race) but not by relevant attributes (e.g., user reviews). Because participants saw more of their biases in algorithms than in themselves, they were more likely to make debiasing corrections to decisions attributed to an algorithm than to themselves. Our findings show that bias is more readily perceived in algorithms than in the self and suggest how algorithms might be used to reveal and correct biased human decisions.
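The training-inheritance mechanism described above is easy to demonstrate in miniature. The following is a minimal, self-contained sketch, not the study's materials: all data are synthetic and the feature names ("relevant", "irrelevant") are hypothetical stand-ins for attributes like user reviews and demographics. It shows how a model fitted to human decisions that lean on an irrelevant attribute ends up encoding that same lean:

# Minimal sketch, assuming synthetic data and hypothetical feature names:
# an algorithm trained on biased human decisions inherits those biases.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

relevant = rng.normal(size=n)            # e.g., user reviews (legitimate signal)
irrelevant = rng.integers(0, 2, size=n)  # e.g., a demographic attribute

# Simulated human decisions: driven mostly by the relevant attribute,
# but nudged by the irrelevant one (the human bias).
logits = 1.5 * relevant + 0.8 * irrelevant
decisions = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

X = np.column_stack([relevant, irrelevant])
model = LogisticRegression().fit(X, decisions)

# The fitted weights recover both the legitimate signal and the bias:
# the algorithm now relies on the irrelevant cue just as the human did.
print(dict(zip(["relevant", "irrelevant"], model.coef_[0].round(2))))

Printing the fitted coefficients shows the model recovering both the legitimate signal and the bias, which is the sense in which an algorithm trained on biased decisions can make those biases visible and correctable.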
Subject(s)
Motivation, Problem Solving, Humans, Bias, Algorithms
ABSTRACT
People judge repeated statements as more truthful than new statements: a truth effect. In three pre-registered experiments (N = 463), we examined whether people expect repetition to influence truth judgments more for others than for themselves: a bias blind spot in the truth effect. In Experiments 1 and 2, using moderately plausible and implausible statements, respectively, the test for the bias blind spot did not pass the significance threshold set for a two-step sequential analysis. Experiment 3 again used moderately plausible statements but with a larger sample of participants. Additionally, it compared actual performance after a two-day delay with participants' predictions for themselves and others. This time, we found clear evidence for a bias blind spot in the truth effect. Experiment 3 also showed that participants underestimated the magnitude of the truth effect, especially for themselves, and that predictions and actual truth-effect scores were not significantly related. Finally, an integrative analysis focusing on a more conservative between-participant approach found clear frequentist and Bayesian evidence for a bias blind spot. Overall, the results indicate that people (1) hold beliefs about the effect of repetition on truth judgments, (2) believe that this effect is larger for others than for themselves, (3) underestimate the effect's magnitude, and (4) do so particularly for themselves.
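For readers unfamiliar with the two-step sequential analysis mentioned above, the sketch below illustrates the logic on simulated data. The variable names, effect sizes, sample sizes, and the Pocock-style per-look threshold (~.0294 per look for two planned analyses at an overall two-sided alpha of .05) are illustrative assumptions, not the paper's preregistered values:

# Minimal sketch of a two-look sequential test on simulated prediction data.
# Effect sizes, sample sizes, and the threshold are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ALPHA_PER_LOOK = 0.0294  # Pocock boundary for two planned analyses

def blind_spot_test(pred_others, pred_self):
    """Paired t-test: is the predicted truth effect larger for others?"""
    return stats.ttest_rel(pred_others, pred_self)

# Simulated predicted truth-effect scores (the self/other gap is small here).
n1 = 80
others = rng.normal(0.40, 0.9, n1)
self_ = rng.normal(0.25, 0.9, n1)

t, p = blind_spot_test(others, self_)
if p < ALPHA_PER_LOOK:
    print(f"Stop at look 1: t={t:.2f}, p={p:.4f}")
else:
    # Recruit a second batch and re-test the full sample at the same threshold.
    others = np.concatenate([others, rng.normal(0.40, 0.9, n1)])
    self_ = np.concatenate([self_, rng.normal(0.25, 0.9, n1)])
    t, p = blind_spot_test(others, self_)
    verdict = "significant" if p < ALPHA_PER_LOOK else "inconclusive"
    print(f"Look 2: t={t:.2f}, p={p:.4f} ({verdict})")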
Subject(s)
Judgment, Humans, Bayes Theorem, Bias
ABSTRACT
People often engage in biased reasoning, favoring some beliefs over others even when the result is a departure from impartial or evidence-based reasoning. Psychologists have long assumed that people are unaware of these biases and operate under an "illusion of objectivity." We identify an important domain of life in which people harbor little illusion about their biases: when they are biased for moral reasons. For instance, people endorse and feel justified believing morally desirable propositions even when they think they lack evidence for them (Study 1a/1b). Moreover, when people engage in morally desirable motivated reasoning, they recognize the influence of moral biases on their judgment, but nevertheless evaluate their reasoning as ideal (Studies 2-4). These findings overturn longstanding assumptions about motivated reasoning and identify a boundary condition on Naïve Realism and the Bias Blind Spot. People's tendency to be aware and proud of their biases provides both new opportunities and new challenges for resolving ideological conflict and improving reasoning.
Subject(s)
Illusions, Humans, Problem Solving, Judgment, Morals, Emotions
ABSTRACT
BACKGROUND: Cognitive bias can lead to systematic errors in judgment. OBJECTIVE: We sought to assess cognitive bias in emergency physicians and compare the results to a sample of nonphysicians. METHODS: Selected emergency physicians were invited to take the Rationality Quotient (RQ) test, which measures cognitive biases. Control subjects were nonphysicians selected randomly from individuals who had taken the RQ test contemporaneously. We compared RQ scores overall and by bias, and assessed the relationship between self-reported statistical knowledge, familiarity with decision-making biases, and RQ scores. RESULTS: Of 150 physicians invited, 95 (63%) completed the RQ test. Physicians showed less bias than control subjects (mean RQ scores of 51.1 for physicians vs. 43.3 for control subjects, p < 0.001). Physicians also showed less bias for both the bias blind spot (15.0 vs. 14.3, p < 0.001) and representative bias (10.4 vs. 5.2, p < 0.001). Anchoring bias, confirmation bias, projection bias, and attribution error did not differ significantly. Emergency physicians reporting greater statistical familiarity (6 or 7 on a 7-point Likert scale) had RQ scores 7.7 points higher (95% confidence interval 3.1-12.3); that is, they were less biased. There was no association between self-reported knowledge of decision biases and RQ scores. CONCLUSION: Cognitive biases were common in this sample of emergency physicians, although physicians demonstrated less bias than control subjects. Variability was mostly attributable to two biases: the bias blind spot and representative bias.
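The group comparison reported in the RESULTS can be illustrated with a Welch t-test and a 95% confidence interval for the difference in means. In this sketch only the group means (51.1 vs. 43.3) and the physician sample size (95) come from the abstract; the control sample size and the standard deviations are assumed placeholders, so the simulated interval will not match the published statistics:

# Minimal sketch: Welch t-test and 95% CI for a difference in mean RQ scores.
# Only the group means and physician n come from the abstract; the control n
# and standard deviations (12) are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
physicians = rng.normal(51.1, 12, 95)  # reported physician mean, assumed SD
controls = rng.normal(43.3, 12, 95)    # reported control mean, assumed n and SD

t, p = stats.ttest_ind(physicians, controls, equal_var=False)
diff = physicians.mean() - controls.mean()

# Welch-Satterthwaite standard error and degrees of freedom
v1 = physicians.var(ddof=1) / len(physicians)
v2 = controls.var(ddof=1) / len(controls)
se = np.sqrt(v1 + v2)
df = (v1 + v2) ** 2 / (v1 ** 2 / (len(physicians) - 1) + v2 ** 2 / (len(controls) - 1))

lo, hi = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se
print(f"diff = {diff:.1f}, 95% CI [{lo:.1f}, {hi:.1f}], p = {p:.2g}")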
Subject(s)
Bias, Physicians/psychology, Adult, Emergency Medicine/standards, Emergency Medicine/trends, Female, Humans, Linear Models, Male, Middle Aged, Physicians/standards, Physicians/statistics & numerical data, Pilot Projects, Psychometrics/instrumentation, Psychometrics/methods, Self Report, Surveys and Questionnaires
ABSTRACT
Overestimation of one's ability to argue one's position on socio-political issues may partially underlie the current climate of political extremism in the U.S. Yet very little is known about the factors that influence such overestimation. Across three experiments, emotional investment substantially increased participants' overestimation. Potential confounding factors such as topic complexity and familiarity were ruled out as alternative explanations (Experiments 1-3). Belief-based cues were established as a mechanism underlying the relationship between emotional investment and overestimation in a measurement-of-mediation design (Experiment 2) and a manipulation-of-mediator design (Experiment 3). Representing a new bias blind spot, participants believed that emotional investment helps them argue better than it helps others (Experiments 2 and 3), when in reality emotional investment harmed, or had no effect on, argument quality. These studies highlight misguided beliefs about emotional investment as a factor underlying metacognitive miscalibration in the context of socio-political issues.
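The measurement-of-mediation logic of Experiment 2 can be sketched as a simple indirect-effect analysis. Everything below is simulated, with hypothetical variable names and path strengths; it only illustrates the a*b decomposition with a percentile bootstrap, not the authors' actual model or data:

# Minimal sketch of a measurement-of-mediation analysis on simulated data:
# emotional investment (X) -> belief-based cues (M) -> overestimation (Y).
# Variable names and path strengths are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n = 300

investment = rng.normal(size=n)                   # X: emotional investment
cues = 0.5 * investment + rng.normal(size=n)      # M: belief-based cues
overestimation = 0.4 * cues + rng.normal(size=n)  # Y: miscalibration

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # a path: X -> M
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # b path: M -> Y given X
    return a * b

# Percentile bootstrap for the a*b indirect effect
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)
    boot[i] = indirect_effect(investment[idx], cues[idx], overestimation[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(investment, cues, overestimation)
print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")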
Subject(s)
Psychological Conflict, Emotions/physiology, Metacognition/physiology, Self Concept, Adolescent, Adult, Aged, Cues (Psychology), Female, Humans, Male, Middle Aged, Motivation, Pilot Projects, Politics, Social Conditions, Young Adult
ABSTRACT
We surveyed evaluators who conduct sexually violent predator evaluations (N = 95) regarding the frequency with which they use the Psychopathy Checklist-Revised (PCL-R), their rationale for its use, and their scoring practices. Findings suggest that evaluators use the PCL-R in sexually violent predator cases because of its perceived versatility, providing information about both mental disorder and risk. Several findings suggested gaps between research and routine practice. For example, relatively few evaluators reported providing the factor and facet scores that may be the strongest predictors of future offending, and many assessed the combination of PCL-R scores and sexual deviance using deviance measures (e.g., paraphilia diagnoses) that have not been examined in available studies. There was evidence of adversarial allegiance in PCL-R score interpretation, as well as a "bias blind spot" in the scoring of the PCL-R and another risk measure (the Static-99R): evaluators tended to acknowledge the possibility of bias in other evaluators but not in themselves. Findings suggest the need for evaluators to consider carefully the extent to which their practices are consistent with emerging research, and to be attuned to the possibility that working in adversarial settings may influence their scoring and interpretation practices.
Subject(s)
Antisocial Personality Disorder/diagnosis, Checklist, Criminals/psychology, Sex Offenses/psychology, Violence/psychology, Antisocial Personality Disorder/psychology, Forensic Psychiatry, Humans, Male, Psychometrics, Reproducibility of Results
ABSTRACT
People tend not to recognize bias in their judgments. Such "bias blindness" persists, we show, even when people acknowledge that the judgmental strategies preceding their judgments are biased. In Experiment 1, participants took a test, received failure feedback, and then were led to assess the test's quality via an explicitly biased strategy (focusing on the test's weaknesses), an explicitly objective strategy, or a strategy of their choice. In Experiments 2 and 3, participants rated paintings using an explicitly biased or an explicitly objective strategy. Across the three experiments, participants who used a biased strategy rated it as relatively biased, provided biased judgments, and then claimed to be relatively objective. Participants in Experiment 3 also assessed, before using their strategy, how biased they expected it to make them. These pre-ratings revealed that not only did participants' sense of personal objectivity survive the use of a biased strategy, but it grew stronger.