Results 1 - 5 of 5
1.
Stud Hist Philos Sci ; 105: 85-98, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38754361

ABSTRACT

We argue that the societal consequences of the scientific realism debate, in the context of science-to-public communication, are often overlooked, and that careful theorizing about them requires further empirical groundwork. To that end, we conducted a survey experiment with 130 academics (from physics, chemistry, and biology) and 137 science communicators. We provided them with an 11-item questionnaire probing their views of scientific realism and related concepts. Contrary to theoretical expectations, we find that (a) science communicators are generally more inclined towards scientific antirealism than scientists in the same academic fields, though both groups lean towards realism, and (b) academics who engage in more theoretical work are not less (or more) realist than experimentalists. Lastly, (c), we fail to find differences with respect to selective realism but find that science communicators are significantly less epistemically voluntarist than their academic counterparts. Overall, our results provide the first empirical evidence on the views of scientists and science communicators on scientific realism, with some results running contrary to theoretical expectations and opening up new empirical and theoretical research directions.


Subjects
Communication, Science, Surveys and Questionnaires, Humans
2.
Behav Res Methods ; 2024 Jan 09.
Article in English | MEDLINE | ID: mdl-38194165

ABSTRACT

We test whether large language models (LLMs) can be used to simulate human participants in social-science studies. To do this, we ran replications of 14 studies from the Many Labs 2 replication project with OpenAI's text-davinci-003 model, colloquially known as GPT-3.5. Based on our pre-registered analyses, we find that among the eight studies we could analyse, our GPT sample replicated 37.5% of the original results and 37.5% of the Many Labs 2 results. However, we were unable to analyse the remaining six studies due to an unexpected phenomenon we call the "correct answer" effect: different runs of GPT-3.5 answered nuanced questions probing political orientation, economic preference, judgement, and moral philosophy with zero or near-zero variation in responses, converging on a supposedly "correct" answer. In one exploratory follow-up study, we found that a "correct answer" was robust to changing the demographic details that precede the prompt. In another, we found that most but not all "correct answers" were robust to changing the order of answer choices. One of our most striking findings occurred in our replication of the Moral Foundations Theory survey, where GPT-3.5 identified as a political conservative in 99.6% of cases and as a liberal in 99.3% of cases in the reverse-order condition. However, both self-reported 'GPT conservatives' and 'GPT liberals' showed right-leaning moral foundations. Our results cast doubt on the validity of using LLMs as a general replacement for human participants in the social sciences. They also raise concerns that a hypothetical AI-led future may be subject to a diminished diversity of thought.
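As an illustration of the general approach (not the authors' actual code or prompts), a minimal sketch of eliciting one simulated response from text-davinci-003 via OpenAI's legacy completions endpoint might look as follows; the demographic preamble and survey item are hypothetical.

```python
# Minimal sketch (not the authors' code) of simulating one survey response
# with OpenAI's legacy completions endpoint, as used for text-davinci-003.
# Requires the pre-1.0 `openai` Python SDK; the prompt wording is hypothetical.
import openai

openai.api_key = "YOUR_API_KEY"

demographics = "You are a 34-year-old woman from the United Kingdom."
item = (
    "On a scale from 1 (strongly disagree) to 7 (strongly agree), "
    "how much do you agree with the following statement?\n"
    "'People should always tell the truth.'\n"
    "Answer with a single number."
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=f"{demographics}\n\n{item}\nAnswer:",
    max_tokens=5,
    temperature=1.0,  # sampling temperature; near-zero variance across repeated
                      # runs is what the paper labels the "correct answer" effect
)

print(response["choices"][0]["text"].strip())
```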

3.
Exp Psychol ; 69(4): 226-239, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36475834

ABSTRACT

Over the past few decades, psychology and its cognate disciplines have undergone substantial scientific reform, ranging from advances in statistical methodology to significant changes in academic norms. One aspect of experimental design that has received comparatively little attention is incentivization, i.e., the way participants are monetarily rewarded for their participation in experiments and surveys. While incentive-compatible designs are the norm in disciplines like economics, most studies in psychology and experimental philosophy are constructed such that individuals' incentives to maximize their payoffs often stand opposed to their incentives to state their true preferences honestly. This is partly because the subject matter is often self-report data about subjective topics, and partly because samples are drawn from online platforms like Prolific or MTurk, where many participants are out to make a quick buck. One mechanism that allows for an incentive-compatible design in such circumstances is the Bayesian Truth Serum (BTS; Prelec, 2004), which rewards participants based on how surprisingly common their answers are. Recently, Schoenegger (2021) applied this mechanism to Likert-scale self-reports, finding that it significantly altered response behavior. In this registered report, we further investigate this mechanism by (1) attempting to directly replicate the previous result and (2) analysing whether the Bayesian Truth Serum's effect is distinct from the effects of its constituent parts (an increase in expected earnings and the addition of prediction tasks). We fail to find significant differences in response behavior between participants who were simply paid for completing the study and participants who were incentivized with the BTS. Per our pre-registration, we regard this as evidence in favour of a null effect of up to V = .1 and as a failure to replicate, but we reserve judgment as to whether the BTS mechanism should be adopted in social-science fields that rely heavily on Likert-scale items reporting subjective data, given that smaller effect sizes might still be of practical interest and results may differ for items other than the ones we studied. Further, we provide weak evidence that the prediction task itself influences response distributions and that its effect is distinct from an increase in expected earnings, suggesting a complex interaction between the BTS' constituent parts and its truth-telling instructions.
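For readers unfamiliar with the mechanism, the following is a minimal sketch of the BTS score as described in Prelec (2004), not the authors' implementation; the response data below are invented. Each respondent both answers the item and predicts how others will answer, and is rewarded for answers that turn out to be more common than collectively predicted, plus for accurate predictions.

```python
# Minimal sketch of the Bayesian Truth Serum score (Prelec, 2004).
# Not the authors' implementation; example data are invented.
import numpy as np

answers = np.array([0, 1, 1, 2, 1])      # each respondent's chosen option (3 options)
predictions = np.array([                  # each respondent's predicted answer shares
    [0.20, 0.60, 0.20],
    [0.10, 0.70, 0.20],
    [0.30, 0.50, 0.20],
    [0.20, 0.40, 0.40],
    [0.25, 0.50, 0.25],
])
alpha = 1.0                               # weight on the prediction score

n, k = predictions.shape
x_bar = np.bincount(answers, minlength=k) / n        # empirical answer frequencies
y_bar = np.exp(np.log(predictions).mean(axis=0))     # geometric mean of predictions

# Information score: log-ratio of actual to (collectively) predicted frequency
# of one's own answer -- "surprisingly common" answers score highly.
info = np.log(x_bar[answers] / y_bar[answers])

# Prediction score: penalises inaccurate predictions of the empirical distribution.
pred = (x_bar * np.log(predictions / x_bar)).sum(axis=1)

bts_score = info + alpha * pred
print(bts_score)
```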


Subjects
Research Design, Humans, Bayes Theorem
4.
PLoS One ; 17(9): e0273971, 2022.
Article in English | MEDLINE | ID: mdl-36137160

ABSTRACT

Charities differ, among other things, in the likelihood that their interventions succeed and produce the desired outcomes, and in the extent to which that likelihood can even be articulated numerically. In this paper, we investigate what best explains charitable giving behaviour towards charities whose interventions will succeed with a high and quantifiable probability (sure-thing charities) and charities whose interventions have only a small and hard-to-quantify probability of bringing about the desired end (probabilistic charities). We study individual differences in risk/ambiguity attitudes, empathy, numeracy, optimism, and donor type (warm-glow vs. purely altruistic) as potential predictors of this choice. We conduct a monetarily incentivised, pre-registered experiment on Prolific with a representative UK sample (n = 1,506) to investigate participant choices (i) between these two types of charities and (ii) about one randomly selected charity. Overall, we find little to no evidence that individual differences predict decisions about sure-thing and probabilistic charities, with the exception that a purely altruistic donor type predicts donations to probabilistic charities when participants were presented with a randomly selected charity in (ii). Conducting exploratory equivalence tests, we find that the data provide robust evidence in favour of the absence of an effect (or a negligibly small effect) where we fail to reject the null. This is corroborated by exploratory Bayesian analyses. We take this comprehensive null result to contribute to the literature on charitable giving and to the pursuit of a cumulative science.
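As a hedged illustration of what an equivalence test of this kind can look like (not the authors' actual analysis; the data, groups, and equivalence bound below are invented), a two one-sided tests (TOST) procedure on mean donation amounts might be sketched as follows.

```python
# Minimal TOST equivalence-test sketch on two hypothetical donation samples.
# Not the authors' analysis; data and the equivalence bound are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(5.0, 2.0, 200)   # invented donations, sure-thing charity
group_b = rng.normal(5.1, 2.0, 200)   # invented donations, probabilistic charity
bound = 0.5                           # smallest effect of interest (same units)

n_a, n_b = len(group_a), len(group_b)
diff = group_a.mean() - group_b.mean()
sp2 = ((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
se = np.sqrt(sp2 * (1 / n_a + 1 / n_b))   # pooled-variance standard error
df = n_a + n_b - 2

# Two one-sided tests: the true difference is above -bound AND below +bound.
p_lower = 1 - stats.t.cdf((diff + bound) / se, df)
p_upper = stats.t.cdf((diff - bound) / se, df)
p_tost = max(p_lower, p_upper)
print(f"TOST p-value: {p_tost:.4f} (p < .05 supports equivalence within ±{bound})")
```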


Subjects
Charities, Individuality, Altruism, Bayes Theorem, Empathy, Humans
5.
Am J Epidemiol ; 191(12): 2084-2097, 2022 11 19.
Article in English | MEDLINE | ID: mdl-35925053

ABSTRACT

We estimated the degree to which language used in the high-profile medical, public health, and epidemiology literature implied causality, based on both the language linking exposures to outcomes and the accompanying action recommendations; examined disconnects between linking language and recommendations; identified the most common linking phrases; and estimated how strongly linking phrases imply causality. We searched for and screened 1,170 articles from 18 high-profile journals (65 per journal) published from 2010 to 2019. Based on written framing and systematic guidance, 3 reviewers rated the degree of causality implied in abstracts and full text for exposure/outcome linking language and for action recommendations. Reviewers rated the causal implication of exposure/outcome linking language as none (no causal implication) in 13.8%, weak in 34.2%, moderate in 33.2%, and strong in 18.7% of abstracts. The implied causality of action recommendations was stronger than that of the linking sentences in 44.5% of articles and commensurate in 40.3%. The most common linking word in abstracts was "associate" (45.7%). Reviewers' ratings of linking word roots were highly heterogeneous; over half of reviewers rated "association" as having at least some causal implication. This research undercuts the assumption that avoiding "causal" words leads to clarity of interpretation in medical research.


Subjects
Biomedical Research, Language, Humans, Causality