1.
Cognition; 240: 105586, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37595514

ABSTRACT

Providing an explanation is a communicative act. It involves an explainee, the person who receives the explanation, and an explainer, the person (or sometimes a machine) who provides it. The majority of research on explanation has focused on how explanations alter explainees' beliefs. However, one general feature of communicative acts is that they also provide information about the speaker, here the explainer. Work on argumentation suggests that a speaker's reliability interacts with the content of the speaker's message and has a significant impact on argument strength. In five experiments, we explore the interplay between explanation, the explainee's confidence in what is being explained, and the explainer's reliability. Experiment 1 uses real-world explanations to replicate previous findings on the impact of explanations on an explainee's confidence in what is being explained. Experiments 2 and 3 show that providing an explanation not only affects the explainee's confidence in what is being explained but also influences beliefs about the reliability of the explainer; these experiments also demonstrate that the impact of explanation on the explainee's confidence is mediated by the explainer's reliability. In Experiment 4, we experimentally manipulated the explainer's reliability and found that both the explainer's reliability and whether an explanation was provided significantly affect the explainee's confidence in what is being explained. In Experiment 5, we observed an interaction between providing an explanation and the explainer's reliability: providing an explanation has a significantly greater impact on the explainee's confidence in what is being explained when the explainer's reliability is low than when it is high. Throughout the study, we point to the important role of background knowledge, which warrants further investigation.


Subject(s)
Communication; Knowledge; Humans; Reproducibility of Results
2.
Philos Trans A Math Phys Eng Sci; 381(2251): 20220043, 2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37271178

ABSTRACT

In this paper, we bring together two closely related but distinct notions, argument and explanation, and clarify their relationship. We then provide an integrative review of relevant research on these notions, drawn from both the cognitive science and artificial intelligence (AI) literatures, and use this material to identify key directions for future research, indicating areas where bringing together cognitive science and AI perspectives would be mutually beneficial. This article is part of the discussion meeting issue 'Cognitive artificial intelligence'.

3.
Patterns (N Y); 3(12): 100635, 2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36569554

ABSTRACT

Counterfactual (CF) explanations have been employed as one of the modes of explainability in explainable artificial intelligence (AI), both to increase the transparency of AI systems and to provide recourse. Cognitive science and psychology have pointed out that people regularly use CFs to express causal relationships. Most AI systems, however, are only able to capture associations or correlations in data, so interpreting them causally would not be justified. In this perspective, we present two experiments (total n = 364) exploring the effects of CF explanations of AI systems' predictions on lay people's causal beliefs about the real world. In Experiment 1, we found that providing CF explanations of an AI system's predictions does indeed (unjustifiably) affect people's causal beliefs about the features the AI uses, making people more likely to view those features as causal factors in the real world. Inspired by the literature on misinformation and health warning messaging, Experiment 2 tested whether this unjustified change in causal beliefs can be corrected. We found that pointing out that AI systems capture correlations, not necessarily causal relationships, can attenuate the effects of CF explanations on people's causal beliefs.
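
For readers unfamiliar with the technique, here is a minimal sketch of what a CF explanation is computationally: the smallest change to an input that flips a model's prediction ("had feature X been x' instead of x, the prediction would have differed"). The toy linear classifier, features, and greedy search below are illustrative assumptions, not the systems studied in the paper.

```python
# A minimal, illustrative counterfactual-explanation search.
# All model parameters here are assumptions for demonstration.
import numpy as np

def predict(x, w=np.array([1.5, -2.0, 0.5]), b=-0.2):
    # Toy linear classifier standing in for an arbitrary AI system.
    return int(x @ w + b > 0)

def counterfactual(x, step=0.1, max_steps=100):
    # Greedy search: perturb one feature at a time, at growing
    # magnitudes, until the predicted label flips.
    y0 = predict(x)
    for k in range(1, max_steps + 1):
        for i in range(len(x)):
            for sign in (+1, -1):
                x_cf = x.copy()
                x_cf[i] += sign * step * k
                if predict(x_cf) != y0:
                    return i, x_cf  # changed feature, counterfactual input
    return None

x = np.array([0.2, 0.4, 0.1])
result = counterfactual(x)
if result:
    i, x_cf = result
    print(f"Prediction flips if feature {i} changes "
          f"from {x[i]:.2f} to {x_cf[i]:.2f}")
```

Note that the search exploits the model's decision boundary only; nothing in it establishes that the changed feature is causal in the real world, which is precisely the gap the paper's experiments probe.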

4.
Cogn Psychol; 121: 101293, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32388007

ABSTRACT

Causal judgements in explaining-away situations, where multiple independent causes compete to account for a common effect, are ubiquitous in both everyday and specialised contexts. Despite this ubiquity, cognitive psychologists still struggle to understand how people reason in these contexts. Empirical studies have repeatedly found that people tend to 'insufficiently' explain away: that is, when one cause explains the presence of an effect, people do not sufficiently reduce the probability of the competing causes. However, the diversity of accounts that researchers have proposed for this insufficiency suggests that a compelling explanation of these results has yet to be found. In the current research, we explored the novel possibility that insufficiency in explaining away is driven by (i) some people interpreting probabilities as propensities, i.e. as tendencies of a physical system to produce an outcome, and (ii) some people splitting the probability space among the causes in diagnostic reasoning, following a strategy we call 'the diagnostic split'. We tested these two hypotheses by manipulating (a) the characteristics of the cover stories, to vary how pronounced the propensity interpretation of probability was, and (b) the prior probabilities of the causes, which entailed different normative amounts of explaining away. In line with the extant literature, we found insufficient explaining away; however, we also found empirical support for our two hypotheses, suggesting that they are a driving force behind the reported insufficiency.
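
For concreteness, here is a minimal sketch of the normative benchmark at issue: in a common-effect network, learning that one cause is present should lower the posterior probability of the competing cause. The two-cause structure, noisy-OR parameterization, and all probabilities below are illustrative assumptions, not the paper's materials.

```python
# Normative explaining away in a two-cause common-effect network,
# computed by enumeration. All numbers are illustrative assumptions.
from itertools import product

p_c1, p_c2 = 0.3, 0.3  # independent priors on the two causes

def p_e_given(c1, c2):
    # Noisy-OR: each present cause produces E with strength 0.9;
    # 0.05 is a "leak" probability of E with no cause present.
    return 1 - (1 - 0.05) * (1 - 0.9) ** c1 * (1 - 0.9) ** c2

def joint(c1, c2, e=1):
    p = (p_c1 if c1 else 1 - p_c1) * (p_c2 if c2 else 1 - p_c2)
    pe = p_e_given(c1, c2)
    return p * (pe if e else 1 - pe)

def posterior_c1(c2=None):
    # P(C1 = 1 | E = 1 [, C2 = c2]) by summing the joint.
    num = den = 0.0
    for v1, v2 in product([0, 1], repeat=2):
        if c2 is not None and v2 != c2:
            continue
        p = joint(v1, v2)
        den += p
        if v1:
            num += p
    return num / den

print(f"P(C1 | E)       = {posterior_c1():.3f}")      # ~0.57
print(f"P(C1 | E, C2=1) = {posterior_c1(c2=1):.3f}")  # ~0.32: C2 explains E away
```

The drop from the first posterior to the second is the normative amount of explaining away; the empirical finding is that people's judgements do not fall this far.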


Subject(s)
Judgment; Probability; Adult; Bayes Theorem; Female; Humans; Male; Models, Psychological
5.
Front Psychol; 11: 660, 2020.
Article in English | MEDLINE | ID: mdl-32328015

ABSTRACT

Bayesian reasoning and decision making are widely considered normative because they minimize prediction error in a coherent way. However, it is often difficult to apply Bayesian principles to complex real-world problems, which typically have many unknowns and interconnected variables. Bayesian network modeling techniques make it possible to model such problems and to obtain precise predictions about the causal impact that changing the value of one variable may have on the values of other variables connected to it. But Bayesian modeling is itself complex and has until now remained largely inaccessible to lay people. In a large-scale lab experiment, we provide proof of principle that a Bayesian network modeling tool, adapted to give beginners basic training and guidance on the modeling process without requiring knowledge of the mathematical machinery working behind the scenes, significantly helps lay people find normative Bayesian solutions to complex problems, compared with generic training on probabilistic reasoning. We discuss the implications of this finding for the use of Bayesian network software tools in applied contexts such as security, medical, forensic, economic, or environmental decision making.
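
To illustrate the kind of computation such modeling tools automate behind the scenes, here is a minimal sketch of Bayesian network inference by enumeration. It uses the textbook cloudy/sprinkler/rain/wet-grass network with standard illustrative probabilities, not the study's materials or software.

```python
# Inference by enumeration in a small Bayesian network (illustrative).
from itertools import product

# Conditional probability tables; each entry gives P(variable = True).
p_c = 0.5                                    # P(Cloudy)
p_s = {True: 0.1, False: 0.5}                # P(Sprinkler | Cloudy)
p_r = {True: 0.8, False: 0.2}                # P(Rain | Cloudy)
p_w = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}  # P(WetGrass | S, R)

def joint(c, s, r, w):
    def f(p, v):  # probability that a variable takes value v
        return p if v else 1 - p
    return f(p_c, c) * f(p_s[c], s) * f(p_r[c], r) * f(p_w[(s, r)], w)

def query(target, evidence):
    # P(target = True | evidence), by enumerating the full joint.
    num = den = 0.0
    for vals in product([True, False], repeat=4):
        world = dict(zip("CSRW", vals))
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = joint(*vals)
        den += p
        if world[target]:
            num += p
    return num / den

# Predicted impact of one variable's value on a connected variable:
print(f"P(WetGrass | Cloudy)     = {query('W', {'C': True}):.3f}")   # ~0.745
print(f"P(WetGrass | not Cloudy) = {query('W', {'C': False}):.3f}")  # ~0.549
```

The modeling tools the paper evaluates let users build such networks graphically and run these queries without seeing this machinery, which is what makes the approach accessible to beginners.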
