Results 1 - 20 of 82
1.
PLoS One ; 19(1): e0294815, 2024.
Article in English | MEDLINE | ID: mdl-38170696

ABSTRACT

This paper examines the fundamental problem of testimony. Much of what we take ourselves to know, we know in good part, or even entirely, through the testimony of others. The problem with testimony is that we often have very little on which to base estimates of the accuracy of our sources. Simulations with otherwise optimal agents examine the impact of this on the accuracy of our beliefs about the world. It is demonstrated both where social networks of information dissemination help and where they hinder. Most importantly, it is shown that both social networks and a common strategy for gauging the accuracy of our sources give rise to polarisation even for entirely accuracy-motivated agents. Crucially, these two factors interact, amplifying one another's negative consequences, and this side effect of communication in a social network increases with network size. This suggests a new causal mechanism by which social media may have fostered the increase in polarisation currently observed in many parts of the world.


Subject(s)
Motivation, Social Networking, Humans, Communication, Knowledge, Information Dissemination
2.
Perspect Psychol Sci ; 19(2): 418-431, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38010950

ABSTRACT

Our beliefs are inextricably shaped through communication with others. Furthermore, even conversation we conduct in pairs may itself be taking place across a wider, connected social network. Our communications, and with that our thoughts, are consequently typically those of individuals in collectives. This has fundamental consequences with respect to how our beliefs are shaped. This article examines the role of dependence on our beliefs and seeks to demonstrate its importance with respect to key phenomena involving collectives that have been taken to indicate irrationality. It is argued that (with the benefit of hindsight) these phenomena no longer seem surprising when one considers the multiple dependencies that govern information acquisition and the evaluation of cognitive agents in their normal (i.e., social) context.


Subject(s)
Communication, Humans
3.
Sci Commun ; 45(4): 539-554, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37994373

ABSTRACT

Effective science communication is challenging when scientific messages are informed by a continually updating evidence base and must often compete against misinformation. We argue that we need a new program of science communication as collective intelligence-a collaborative approach, supported by technology. This would have four key advantages over the typical model where scientists communicate as individuals: scientific messages would be informed by (a) a wider base of aggregated knowledge, (b) contributions from a diverse scientific community, (c) participatory input from stakeholders, and (d) better responsiveness to ongoing changes in the state of knowledge.

4.
Cognition ; 240: 105586, 2023 11.
Article in English | MEDLINE | ID: mdl-37595514

ABSTRACT

Providing an explanation is a communicative act. It involves an explainee, a person who receives an explanation, and an explainer, a person (or sometimes a machine) who provides an explanation. The majority of research on explanation has focused on how explanations alter explainees' beliefs. However, one general feature of communicative acts is that they also provide information about the speaker (explainer). Work on argumentation suggests that the speaker's reliability interacts with the content of the speaker's message and has a significant impact on argument strength. In five experiments we explore the interplay between explanation, the explainee's confidence in what is being explained, and the explainer's reliability. Experiment 1 replicates results from previous literature on the impact of explanations on an explainee's confidence in what is being explained using real-world explanations. Experiments 2 and 3 show that providing an explanation not only impacts the explainee's confidence about what is being explained but also influences beliefs about the reliability of the explainer. Additionally, the two experiments demonstrate that the impact of explanation on the explainee's confidence is mediated by the reliability of the explainer. In Experiment 4, we experimentally manipulated the explainer's reliability and found that both the explainer's reliability and whether or not an explanation was provided have a significant effect on the explainee's confidence in what is being explained. In Experiment 5, we observed an interaction between providing an explanation and the explainer's reliability. Specifically, we found that providing an explanation has a significantly greater impact on the explainee's confidence in what is being explained when the explainer's reliability is low compared to when that reliability is high. Throughout the study we point to the important impact of background knowledge, warranting further studies on this matter.


Subject(s)
Communication, Knowledge, Humans, Reproducibility of Results
5.
Philos Trans A Math Phys Eng Sci ; 381(2251): 20220043, 2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37271178

ABSTRACT

In this paper, we bring together two closely related, but distinct, notions: argument and explanation. We clarify their relationship. We then provide an integrative review of relevant research on these notions, drawn both from the cognitive science and the artificial intelligence (AI) literatures. We then use this material to identify key directions for future research, indicating areas where bringing together cognitive science and AI perspectives would be mutually beneficial. This article is part of a discussion meeting issue 'Cognitive artificial intelligence'.

6.
Cognition ; 236: 105419, 2023 07.
Article in English | MEDLINE | ID: mdl-37104894

ABSTRACT

How we judge the similarity between objects in the world is connected ultimately to how we represent those objects. It has been argued extensively that object representations in humans are 'structured' in nature, meaning that both individual features and the relations between them can influence similarity. In contrast, popular models within comparative psychology assume that nonhuman species appreciate only surface-level, featural similarities. By applying psychological models of structural and featural similarity (from conjunctive feature models to Tversky's Contrast Model) to visual similarity judgements from adult humans, chimpanzees, and gorillas, we demonstrate a cross-species sensitivity to complex structural information, particularly for stimuli that combine colour and shape. These results shed new light on the representational complexity of nonhuman apes, and the fundamental limits of featural coding in explaining object representation and similarity, which emerge strikingly across both human and nonhuman species.


Subject(s)
Hominidae, Adult, Animals, Humans, Judgment, Pan troglodytes/psychology, Psychological Models, Visual Pattern Recognition
7.
Patterns (N Y) ; 3(12): 100635, 2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36569554

ABSTRACT

Counterfactual (CF) explanations have been employed as one of the modes of explainability in explainable artificial intelligence (AI)-both to increase the transparency of AI systems and to provide recourse. Cognitive science and psychology have pointed out that people regularly use CFs to express causal relationships. Most AI systems, however, are only able to capture associations or correlations in data, so interpreting them as causal would not be justified. In this perspective, we present two experiments (total n = 364) exploring the effects of CF explanations of AI systems' predictions on lay people's causal beliefs about the real world. In Experiment 1, we found that providing CF explanations of an AI system's predictions does indeed (unjustifiably) affect people's causal beliefs regarding factors/features the AI uses and that people are more likely to view them as causal factors in the real world. Inspired by the literature on misinformation and health warning messaging, Experiment 2 tested whether we can correct for the unjustified change in causal beliefs. We found that pointing out that AI systems capture correlations and not necessarily causal relationships can attenuate the effects of CF explanations on people's causal beliefs.

8.
Ann Am Acad Pol Soc Sci ; 700(1): 26-40, 2022 Mar.
Article in English | MEDLINE | ID: mdl-36338265

ABSTRACT

Most democracies seek input from scientists to inform policies. This can put scientists in a position of intense scrutiny. Here we focus on situations in which scientific evidence conflicts with people's worldviews, preferences, or vested interests. These conflicts frequently play out through systematic dissemination of disinformation or the spreading of conspiracy theories, which may undermine the public's trust in the work of scientists, muddy the waters of what constitutes truth, and may prevent policy from being informed by the best available evidence. However, there are also instances in which public opposition arises from legitimate value judgments and lived experiences. In this article, we analyze the differences between politically-motivated science denial on the one hand, and justifiable public opposition on the other. We conclude with a set of recommendations on tackling misinformation and understanding the public's lived experiences to preserve legitimate democratic debate of policy.

9.
Cognition ; 226: 105160, 2022 09.
Article in English | MEDLINE | ID: mdl-35660344

ABSTRACT

Base rate neglect refers to people's apparent tendency to underweight or even ignore base rate information when estimating posterior probabilities for events, such as the probability that a person with a positive cancer-test outcome actually does have cancer. While often replicated, almost all evidence for the phenomenon comes from studies that used problems with extremely low base rates, high hit rates, and low false alarm rates. It is currently unclear whether the effect generalizes to reasoning problems outside this "corner" of the entire problem space. Another limitation of previous studies is that they have focused on describing empirical patterns of the effect at the group level and not so much on the underlying strategies and individual differences. Here, we address these two limitations by testing participants on a broader problem space and modeling their responses at a single-participant level. We find that the empirical patterns that have served as evidence for base-rate neglect generalize to a larger problem space, albeit with large individual differences in the extent to which participants "neglect" base rates. In particular, we find a bimodal distribution consisting of one group of participants who almost entirely ignore the base rate and another group who almost entirely account for it. This heterogeneity is reflected in the cognitive modeling results: participants in the former group were best captured by a linear-additive model, while participants in the latter group were best captured by a Bayesian model. We find little evidence for heuristic models. Altogether, these results suggest that the effect known as "base-rate neglect" generalizes to a large set of reasoning problems, but varies considerably across participants and may need a reinterpretation in terms of the underlying cognitive mechanisms.


Subject(s)
Cognition, Problem Solving, Bayes Theorem, Humans, Probability
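The normative benchmark against which "neglect" is measured in such tasks is Bayes' rule. As a minimal sketch with illustrative numbers (not the study's actual stimuli), the posterior can be computed from a base rate, hit rate, and false alarm rate:

```python
def posterior(base_rate, hit_rate, false_alarm_rate):
    """P(condition | positive test) via Bayes' rule."""
    p_positive = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
    return base_rate * hit_rate / p_positive

# The much-studied "corner" of the problem space: rare condition, strong test.
print(posterior(0.001, 0.95, 0.05))  # about 0.019: the base rate dominates
# A point outside that corner: common condition, mediocre test.
print(posterior(0.30, 0.80, 0.20))   # about 0.63
```

Participants who "neglect" the base rate in the first case typically answer near the hit rate (0.95) rather than near the normative 0.019.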
10.
Top Cogn Sci ; 14(3): 602-620, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35285151

ABSTRACT

Consideration of collectives raises important questions about human rationality. This has long been known for questions about preferences, but it holds also with respect to beliefs. For one, there are contexts (such as voting) where we might care as much, or more, about the rationality of a collective than about the rationality of the individuals it comprises. Here, a given standard may yield competing assessments at the individual and the collective level, thus giving rise to important normative questions. At the same time, seemingly rational strategies of individuals may have surprising consequences, or even fail, when exercised by individuals within collectives. This paper illustrates these considerations with examples, provides an overview of different formal frameworks for understanding and assessing the beliefs of collectives, and shows how such frameworks can combine with simulations to elucidate epistemic norms.

11.
Nat Commun ; 13(1): 1029, 2022 02 24.
Article in English | MEDLINE | ID: mdl-35210420

ABSTRACT

Cytotoxic T lymphocytes (CTL) kill malignant and infected cells through the directed release of cytotoxic proteins into the immunological synapse (IS). The cytotoxic protein granzyme B (GzmB) is released in its soluble form or in supramolecular attack particles (SMAP). We utilize synaptobrevin2-mRFP knock-in mice to isolate fusogenic cytotoxic granules in an unbiased manner and visualize them alone or in degranulating CTLs. We identified two classes of fusion-competent granules, single core granules (SCG) and multi core granules (MCG), with different diameters, morphology and protein composition. Functional analyses demonstrate that both classes of granules fuse with the plasma membrane at the IS. SCG fusion releases soluble GzmB. MCGs can be labelled with the SMAP marker thrombospondin-1 and their fusion releases intact SMAPs. We propose that CTLs use SCG fusion to fill the synaptic cleft with active cytotoxic proteins instantly and, in parallel, MCG fusion to deliver latent SMAPs for delayed killing of refractory targets.


Subject(s)
Immunological Synapses, Cytotoxic T-Lymphocytes, Animals, Cell Membrane, Cytoplasmic Granules/metabolism, Immunological Synapses/metabolism, Mice
12.
Cognition ; 220: 104990, 2022 03.
Article in English | MEDLINE | ID: mdl-35026693

ABSTRACT

Most of the claims we encounter in real life can be assigned some degree of plausibility, even if they are new to us. On Gilbert's (1991) influential account of belief formation, whereby understanding a sentence implies representing it as true, all new propositions are initially accepted, before any assessment of their veracity. As a result, plausibility cannot have any role in initial belief formation on this account. In order to isolate belief formation experimentally, Gilbert, Krull, and Malone (1990) employed a dual-task design: if a secondary task disrupts participants' evaluation of novel claims presented to them, then the initial encoding should be all there is, and if that initial encoding consistently renders claims 'true' (even where participants were told in the learning phase that the claims they had seen were false), then Gilbert's account is confirmed. In this pre-registered study, we replicate one of Gilbert et al.'s (1990) seminal studies ("The Hopi Language Experiment") while additionally introducing a plausibility variable. Our results show that Gilbert's 'truth bias' does not hold for implausible statements - instead, initial encoding seemingly renders implausible statements 'false'. As alternative explanations of this finding that would be compatible with Gilbert's account can be ruled out, this result calls Gilbert's account into question.


Subject(s)
Gilbert Disease, Bilirubin, Glucuronosyltransferase, Humans
13.
Cognition ; 218: 104939, 2022 01.
Article in English | MEDLINE | ID: mdl-34717257

ABSTRACT

How people update their beliefs when faced with new information is integral to everyday life. A sizeable body of literature suggests that people's belief updating is optimistically biased, such that their beliefs are updated more in response to good news than bad news. However, recent research demonstrates that findings previously interpreted as evidence of optimistic belief updating may be the result of flaws in experimental design, rather than motivated reasoning. In light of this controversy, we conduct three pre-registered variations of the standard belief updating paradigm (combined N = 300) in which we test for asymmetric belief updating with neutral, non-valenced stimuli using analytic approaches found in previous research. We find evidence of seemingly biased belief updating with neutral stimuli - results that cannot be attributed to a motivational, valence-based, optimism account - and further show that there is uninterpretable variability across samples and analytic techniques. Jointly, these results serve to highlight the methodological flaws in current optimistic belief updating research.


Subject(s)
Motivation, Optimism, Humans
14.
Risk Anal ; 42(6): 1155-1178, 2022 06.
Article in English | MEDLINE | ID: mdl-34146433

ABSTRACT

In many complex, real-world situations, problem solving and decision making require effective reasoning about causation and uncertainty. However, human reasoning in these cases is prone to confusion and error. Bayesian networks (BNs) are an artificial intelligence technology that models uncertain situations, supporting better probabilistic and causal reasoning and decision making. However, to date, BN methodologies and software require (but do not include) substantial upfront training, do not provide much guidance on either the model building process or on using the model for reasoning and reporting, and provide no support for building BNs collaboratively. Here, we contribute a detailed description and motivation for our new methodology and application, Bayesian ARgumentation via Delphi (BARD). BARD utilizes BNs and addresses these shortcomings by integrating (1) short, high-quality e-courses, tips, and help on demand; (2) a stepwise, iterative, and incremental BN construction process; (3) report templates and an automated explanation tool; and (4) a multiuser web-based software platform and Delphi-style social processes. The result is an end-to-end online platform, with associated online training, for groups without prior BN expertise to understand and analyze a problem, build a model of its underlying probabilistic causal structure, validate and reason with the causal model, and (optionally) use it to produce a written analytic report. Initial experiments demonstrate that, for suitable problems, BARD aids in reasoning and reporting. Comparing their effect sizes also suggests BARD's BN-building and collaboration combine beneficially and cumulatively.


Subject(s)
Artificial Intelligence, Software, Bayes Theorem, Humans, Problem Solving, Uncertainty
15.
Nat Hum Behav ; 5(12): 1629-1635, 2021 12.
Article in English | MEDLINE | ID: mdl-34112981

ABSTRACT

The ubiquity of social media use and the digital data traces it produces have triggered a potential methodological shift in the psychological sciences away from traditional, laboratory-based experimentation. The hope is that, by using computational social science methods to analyse large-scale observational data from social media, human behaviour can be studied with greater statistical power and ecological validity. However, current standards of null hypothesis significance testing and correlational statistics seem ill-suited to markedly noisy, high-dimensional social media datasets. We explore this point by probing the moral contagion phenomenon, whereby the use of moral-emotional language increases the probability of message spread. Through out-of-sample prediction, model comparisons and specification curve analyses, we find that the moral contagion model performs no better than an implausible XYZ contagion model. This highlights the risks of using purely correlational evidence from large observational datasets and sounds a cautionary note for psychology's merge with big data.


Subject(s)
Morals, Social Media, Social Networking, Emotions, Humans, Language
16.
Front Psychol ; 11: 502751, 2020.
Article in English | MEDLINE | ID: mdl-33224043

ABSTRACT

In reasoning about situations in which several causes lead to a common effect, a much studied and yet still not well-understood inference is that of explaining away. Assuming that the causes contribute independently to the effect, if we learn that the effect is present, then this increases the probability that one or more of the causes are present. But if we then learn that a particular cause is present, this cause "explains" the presence of the effect, and the probabilities of the other causes decrease again. People tend to show this explaining away effect in their probability judgments, but to a lesser extent than predicted by the causal structure of the situation. We investigated further the conditions under which explaining away is observed. Participants estimated the probability of a cause, given the presence or the absence of another cause, for situations in which the effect was either present or absent, and the evidence about the effect was either certain or uncertain. Responses were compared to predictions obtained using Bayesian network modeling as well as a sensitivity analysis of the size of normative changes in probability under different information conditions. One of the conditions investigated, certainty that the effect is absent, is special because under the assumption of causal independence the probabilities of the causes remain invariant; that is, there is no normative explaining away or augmentation. This condition is therefore especially diagnostic of people's reasoning about common-effect structures. The findings suggest that, alongside earlier explanations brought forward in the literature, explaining away may occur less often when the causes are assumed to interact in their contribution to the effect, and when the normative size of the probability change is not large enough to be subjectively meaningful. Further, people struggled when given evidence against negative evidence, resembling a double negation effect.
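The normative pattern described above can be reproduced with a small common-effect model. The following is a minimal sketch under assumed noisy-OR parameters (one common parametrization of independent causal contributions; the priors, causal strengths, and leak value are hypothetical, not the study's materials):

```python
from itertools import product

# Hypothetical parameters (illustrative only): cause priors, noisy-OR
# causal strengths, and a small leak probability for the effect.
P_A, P_B = 0.3, 0.3
W_A, W_B, LEAK = 0.8, 0.8, 0.05

def p_effect(a, b):
    """Noisy-OR: the effect fails only if the leak and every present cause fail."""
    q = 1 - LEAK
    if a:
        q *= 1 - W_A
    if b:
        q *= 1 - W_B
    return 1 - q

def p_cause_a(given_b=None, effect=True):
    """P(A = 1 | effect state, and optionally B) by enumerating the joint."""
    num = den = 0.0
    for a, b in product((0, 1), repeat=2):
        if given_b is not None and b != given_b:
            continue
        joint = (P_A if a else 1 - P_A) * (P_B if b else 1 - P_B)
        joint *= p_effect(a, b) if effect else 1 - p_effect(a, b)
        den += joint
        if a:
            num += joint
    return num / den

print(p_cause_a())                         # effect present raises P(A) above its 0.3 prior
print(p_cause_a(given_b=1))                # learning B is present lowers it: explaining away
print(p_cause_a(effect=False))             # effect certainly absent lowers P(A), and ...
print(p_cause_a(given_b=1, effect=False))  # ... learning B then leaves P(A) unchanged
```

The last two lines illustrate the special condition the abstract highlights: given that the effect is certainly absent, learning about the other cause produces no normative change at all.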

17.
Cogn Psychol ; 122: 101329, 2020 11.
Article in English | MEDLINE | ID: mdl-32805584

ABSTRACT

Conditionals and conditional reasoning have been a long-standing focus of research across a number of disciplines, ranging from psychology through linguistics to philosophy. But almost no work has concerned itself with the question of how hearing or reading a conditional changes our beliefs. Given that we acquire much-perhaps most-of what we believe through the testimony of others, the simple matter of acquiring conditionals via others' assertion of a conditional seems integral to any full understanding of the conditional and conditional reasoning. In this paper we detail a number of basic intuitions about how beliefs might change in response to a conditional being uttered, and show how these are backed by behavioral data. In the remainder of the paper, we then show how these deceptively simple phenomena pose a fundamental challenge to present theoretical accounts of the conditional and conditional reasoning - a challenge which no account presently fully meets.


Subject(s)
Decision Making/physiology, Logic, Statistical Models, Probability Theory, Bayes Theorem, Comprehension, Humans
18.
Cognition ; 204: 104343, 2020 11.
Article in English | MEDLINE | ID: mdl-32599310

ABSTRACT

Whether assessing the accuracy of expert forecasting, the pros and cons of group communication, or the value of evidence in diagnostic or predictive reasoning, dependencies between experts, group members, or evidence have traditionally been seen as a form of redundancy. We demonstrate that this conception of dependence conflates the structure of a dependency network, and the observations across this network. By disentangling these two elements we show, via mathematical proof and specific examples, that there are cases where dependencies yield an informational advantage over independence. More precisely, when a structural dependency exists, but observations are either partial or contradicting, these observations provide more support to a hypothesis than when this structural dependency does not exist, ceteris paribus. Furthermore, we show that lay reasoners endorse sufficient assumptions underpinning these advantageous structures yet fail to appreciate their implications for probability judgments and belief revision.


Subject(s)
Judgment, Problem Solving, Communication, Humans, Probability, Social Networking
19.
J Exp Psychol Learn Mem Cogn ; 46(9): 1795-1805, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32437188

ABSTRACT

In this article, we explore how people revise their belief in a hypothesis and the reliability of sources in circumstances where those sources are either independent or are partially dependent because of their shared, common background. Specifically, we examine people's revision of perceived source reliability by comparison with a formal model of reliability revision proposed by Bovens and Hartmann (2003). This model predicts a U-shaped trajectory for revision in certain circumstances: If a source provides a positive report for an unlikely hypothesis, perceived source reliability should decrease; as additional positive reports emerge, however, estimates of reliability should increase. Participants' updates in our experiment show this U-shaped pattern. Furthermore, participants' responses also respect a second feature of the model, namely that perceived reliability should once again decrease when it becomes known that the sources are partially dependent. Participants revise appropriately both when a specific shared reliability is observed (e.g., sources went to the same, low quality school) and when integrating the possibility of shared reliability. These findings shed light on how people gauge source reliability and integrate reports when multiple sources weigh in on an issue as seen in public debates. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Psychological Models, Thinking/physiology, Adult, Female, Humans, Male, Young Adult
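The U-shaped trajectory described above can be reproduced with a toy Bayesian calculation. This is a simplified sketch, not Bovens and Hartmann's (2003) exact model: assume a hypothesis with prior 0.2, sources that are reliable (truthful) with prior probability 0.7, and unreliable sources that endorse the hypothesis with probability 0.5 regardless of the facts:

```python
def reliability_after_reports(n, h=0.2, r=0.7, a=0.5):
    """P(source 1 is reliable | n independent sources all endorse the hypothesis).

    h: prior probability of the hypothesis; r: prior reliability of each source;
    a: probability an unreliable source endorses it regardless of truth.
    """
    p_pos = r + (1 - r) * a              # P(endorsement | hypothesis true)
    num = h * r * p_pos ** (n - 1)       # a reliable source 1 endorses only if true
    den = h * p_pos ** n + (1 - h) * ((1 - r) * a) ** n
    return num / den

print(reliability_after_reports(1))  # below the 0.7 prior: a lone surprising report
print(reliability_after_reports(3))  # above the prior again: corroboration restores trust
```

A single positive report on an unlikely hypothesis makes the source look less reliable, while several converging reports push perceived reliability back above its prior, which is the qualitative pattern participants' updates followed.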
20.
Front Psychol ; 11: 660, 2020.
Article in English | MEDLINE | ID: mdl-32328015

ABSTRACT

Bayesian reasoning and decision making is widely considered normative because it minimizes prediction error in a coherent way. However, it is often difficult to apply Bayesian principles to complex real world problems, which typically have many unknowns and interconnected variables. Bayesian network modeling techniques make it possible to model such problems and obtain precise predictions about the causal impact that changing the value of one variable may have on the values of other variables connected to it. But Bayesian modeling is itself complex, and has until now remained largely inaccessible to lay people. In a large scale lab experiment, we provide proof of principle that a Bayesian network modeling tool, adapted to provide basic training and guidance on the modeling process to beginners without requiring knowledge of the mathematical machinery working behind the scenes, significantly helps lay people find normative Bayesian solutions to complex problems, compared to generic training on probabilistic reasoning. We discuss the implications of this finding for the use of Bayesian network software tools in applied contexts such as security, medical, forensic, economic or environmental decision making.
