Results 1 - 15 of 15
1.
Child Dev ; 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38925560

ABSTRACT

The current study is the first to document the real-time association between phone use and speech to infants in extended real-world interactions. N = 16 predominantly White (75%) mother-infant dyads (infants aged M = 4.1 months, SD = 2.3; 63% female) shared 16,673 min of synchronized real-world phone use and Language Environment Analysis audio data over the course of 1 week (collected 2017-2020) for our analyses. Maternal phone use was associated with a 16% decrease in infants' speech input, with shorter intervals of phone use (1-2 min) associated with a larger decrease in speech input (26%) relative to longer periods. This work highlights the value of multimodal sensing to access dynamic, within-person, and context-specific predictors of speech to infants in real-world settings.

2.
Psychol Methods ; 2024 May 23.
Article in English | MEDLINE | ID: mdl-38780591

ABSTRACT

The Bayesian highest-density interval plus region of practical equivalence (HDI + ROPE) decision rule is an increasingly common approach to testing null parameter values. The decision procedure involves a comparison between a posterior highest-density interval (HDI) and a prespecified region of practical equivalence. One then accepts or rejects the null parameter value depending on the overlap (or lack thereof) between these intervals. Here, we demonstrate, both theoretically and through examples, that this procedure is logically incoherent. Because the HDI is not transformation invariant, the ultimate inferential decision depends on statistically arbitrary and scientifically irrelevant properties of the statistical model. The incoherence arises from a common confusion between probability density and probability proper. The HDI + ROPE procedure relies on characterizing posterior densities as opposed to being based directly on probability. We conclude with recommendations for alternative Bayesian testing procedures that do not exhibit this pathology and provide a "quick fix" in the form of quantile intervals. This article is the work of the authors and is reformatted from the original, which was published under a CC-BY Attribution 4.0 International license and is available at https://psyarxiv.com/5p2qt/. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
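As a rough illustration of the decision rule and the transformation-invariance problem discussed in this abstract, the following Python sketch implements a sample-based HDI and the HDI + ROPE rule; the posterior, ROPE limits, and parameter values are hypothetical and are not taken from the article.

```python
# A minimal sketch (not the authors' code) of the HDI + ROPE decision rule and
# of why it is not transformation invariant. All numbers are made up.
import numpy as np

def hdi(samples, mass=0.95):
    """Shortest interval containing `mass` of the posterior samples."""
    x = np.sort(samples)
    n_keep = int(np.ceil(mass * len(x)))
    widths = x[n_keep - 1:] - x[:len(x) - n_keep + 1]
    i = int(np.argmin(widths))
    return x[i], x[i + n_keep - 1]

def hdi_rope_decision(samples, rope, mass=0.95):
    lo, hi = hdi(samples, mass)
    if hi < rope[0] or lo > rope[1]:
        return "reject null"        # HDI entirely outside the ROPE
    if rope[0] <= lo and hi <= rope[1]:
        return "accept null"        # HDI entirely inside the ROPE
    return "withhold judgement"     # partial overlap

rng = np.random.default_rng(1)
# Hypothetical right-skewed posterior for a ratio parameter theta (null value 1),
# and the same posterior expressed on the log scale.
theta = rng.lognormal(mean=1.12, sigma=0.5, size=200_000)
rope = (0.9, 1.1)
print(hdi_rope_decision(theta, rope))                                        # withhold judgement
print(hdi_rope_decision(np.log(theta), (np.log(rope[0]), np.log(rope[1]))))  # reject null
# Same posterior, same question, different decision: the density-based HDI is
# not invariant under a one-to-one transformation, which is the incoherence
# the article analyzes.
```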

3.
Psychol Rev ; 130(4): 853-872, 2023 07.
Article in English | MEDLINE | ID: mdl-36892901

ABSTRACT

Understanding model complexity is important for developing useful psychological models. One way to think about model complexity is in terms of the predictions a model makes and the ability of empirical evidence to falsify those predictions. We argue that existing measures of falsifiability have important limitations and develop a new measure. KL-delta uses Kullback-Leibler divergence to compare the prior predictive distributions of models to the data prior that formalizes knowledge about the plausibility of different experimental outcomes. Using introductory conceptual examples and applications with existing models and experiments, we show that KL-delta challenges widely held scientific intuitions about model complexity and falsifiability. In a psychophysics application, we show that hierarchical models with more parameters are often more falsifiable than the original nonhierarchical model. This counters the intuition that adding parameters always makes a model more complex. In a decision-making application, we show that a choice model incorporating response determinism can be harder to falsify than its special case of probability matching. This counters the intuition that if one model is a special case of another, the special case must be less complex. In a memory recall application, we show that using informative data priors based on the serial position curve allows KL-delta to distinguish models that otherwise would be indistinguishable. This shows the value in model evaluation of extending the notion of possible falsifiability, in which all data are considered equally likely, to the more general notion of plausible falsifiability, in which some data are more likely than others. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Memory, Psychological Models, Humans, Probability
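As a loose sketch of the ingredients behind the measure described in entry 3, the Python snippet below computes Kullback-Leibler divergences between a hypothetical data prior over experimental outcomes and the prior predictive distributions of two hypothetical models; the exact definition of KL-delta is given in the article, and none of the distributions below come from it.

```python
# Building blocks of a prior-predictive falsifiability measure: a model's prior
# predictive distribution over possible outcomes and a "data prior" over the
# same outcomes, compared with the Kullback-Leibler divergence.
import numpy as np
from scipy import stats, special

n_trials = 20
outcomes = np.arange(n_trials + 1)          # possible numbers of successes

def prior_predictive(a, b):
    """Beta-binomial prior predictive for a model with a Beta(a, b) prior on the rate."""
    return stats.betabinom.pmf(outcomes, n_trials, a, b)

def kl(p, q):
    """KL divergence D(p || q) for discrete distributions on the same support."""
    return float(np.sum(special.rel_entr(p, q)))

# Hypothetical data prior: outcomes near chance (10/20) are deemed most plausible.
data_prior = stats.betabinom.pmf(outcomes, n_trials, 30, 30)

flexible_model = prior_predictive(1, 1)      # vague prior: spreads bets over all outcomes
constrained_model = prior_predictive(12, 4)  # sharp prior: most mass on high success counts

# Quantities of this kind are the building blocks the article combines into its
# falsifiability measure.
print(kl(data_prior, flexible_model), kl(data_prior, constrained_model))
```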
4.
Stat Med ; 41(8): 1319-1333, 2022 04 15.
Article in English | MEDLINE | ID: mdl-34897784

ABSTRACT

Testing the equality of two proportions is a common procedure in science, especially in medicine and public health. In these domains, it is crucial to be able to quantify evidence for the absence of a treatment effect. Bayesian hypothesis testing by means of the Bayes factor provides one avenue to do so, requiring the specification of prior distributions for parameters. The most popular analysis approach views the comparison of proportions from a contingency table perspective, assigning prior distributions directly to the two proportions. Another, less popular approach views the problem from a logistic regression perspective, assigning prior distributions to logit-transformed parameters. Reanalyzing 39 null results from the New England Journal of Medicine with both approaches, we find that they can lead to markedly different conclusions, especially when the observed proportions are at the extremes (i.e., very low or very high). We explain these stark differences and provide recommendations for researchers interested in testing the equality of two proportions and users of Bayes factors more generally. The test that assigns prior distributions to logit-transformed parameters creates prior dependence between the two proportions and yields weaker evidence when the observations are at the extremes. When comparing two proportions, we argue that this test should become the new default.


Subjects
Research Design, Bayes Theorem, Humans, Logistic Models
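To make the contingency-table style of test described in entry 4 concrete, here is a minimal Python sketch that computes a Bayes factor for H0: p1 = p2 using independent Beta(1, 1) priors assigned directly to the two proportions; this is one simple variant of that approach, not the exact priors analyzed in the article, and the counts are invented.

```python
# Bayes factor for equal versus unequal proportions with Beta priors placed
# directly on the proportions (closed-form beta-binomial marginal likelihoods).
import numpy as np
from scipy.special import betaln, comb

def log_marglik_h0(y1, n1, y2, n2, a=1.0, b=1.0):
    # Under H0 a single proportion p ~ Beta(a, b) generates both samples.
    return (np.log(comb(n1, y1)) + np.log(comb(n2, y2))
            + betaln(a + y1 + y2, b + n1 + n2 - y1 - y2) - betaln(a, b))

def log_marglik_h1(y1, n1, y2, n2, a=1.0, b=1.0):
    # Under H1 each proportion has its own independent Beta(a, b) prior.
    return (np.log(comb(n1, y1)) + betaln(a + y1, b + n1 - y1) - betaln(a, b)
            + np.log(comb(n2, y2)) + betaln(a + y2, b + n2 - y2) - betaln(a, b))

# Hypothetical null result: 3/120 events under treatment, 5/115 under control.
y1, n1, y2, n2 = 3, 120, 5, 115
log_bf01 = log_marglik_h0(y1, n1, y2, n2) - log_marglik_h1(y1, n1, y2, n2)
print(f"BF01 = {np.exp(log_bf01):.2f}")   # evidence for equal over unequal proportions
```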
5.
Psychon Bull Rev ; 28(3): 813-826, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33037582

ABSTRACT

Despite the increasing popularity of Bayesian inference in empirical research, few practical guidelines provide detailed recommendations for how to apply Bayesian procedures and interpret the results. Here we offer specific guidelines for four different stages of Bayesian statistical reasoning in a research setting: planning the analysis, executing the analysis, interpreting the results, and reporting the results. The guidelines for each stage are illustrated with a running example. Although the guidelines are geared towards analyses performed with the open-source statistical software JASP, most guidelines extend to Bayesian inference in general.


Subjects
Statistical Data Interpretation, Guidelines as Topic, Statistical Models, Research Design, Bayes Theorem, Humans
6.
Behav Res Methods ; 51(6): 2498-2508, 2019 12.
Article in English | MEDLINE | ID: mdl-30105445

ABSTRACT

We describe a general method that allows experimenters to quantify the evidence from the data of a direct replication attempt given data already acquired from an original study. These so-called replication Bayes factors are a reconceptualization of the ones introduced by Verhagen and Wagenmakers (Journal of Experimental Psychology: General, 143(4), 1457-1475, 2014) for the common t test. This reconceptualization is computationally simpler and generalizes easily to most common experimental designs for which Bayes factors are available.


Subjects
Bayes Theorem, Research Design/statistics & numerical data, Statistical Data Interpretation, Humans
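As a toy illustration of the idea behind entry 6, the sketch below computes a replication Bayes factor for a simple binomial task: the posterior from a hypothetical original study serves as the prior for the replication data under H1 and is pitted against H0: p = 0.5. The article's general method covers other designs; all counts here are made up.

```python
# Replication Bayes factor for a binomial task (performance against chance).
import numpy as np
from scipy.special import betaln, gammaln

def log_binom(n, y):
    return gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)

# Original study: 70 successes in 100 trials, analysed with a Beta(1, 1) prior.
y0, n0 = 70, 100
a_post, b_post = 1 + y0, 1 + (n0 - y0)      # posterior Beta(71, 31)

# Replication attempt: 33 successes in 60 trials.
y1, n1 = 33, 60

# Marginal likelihood of the replication data under H1 (beta-binomial with the
# original posterior as prior) and under H0 (p fixed at 0.5).
log_m1 = log_binom(n1, y1) + betaln(a_post + y1, b_post + n1 - y1) - betaln(a_post, b_post)
log_m0 = log_binom(n1, y1) + n1 * np.log(0.5)

bf_rep = np.exp(log_m1 - log_m0)   # > 1 favours the original effect, < 1 favours H0
print(f"Replication Bayes factor = {bf_rep:.2f}")
```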
7.
Psychon Bull Rev ; 25(1): 5-34, 2018 02.
Article in English | MEDLINE | ID: mdl-28378250

ABSTRACT

We introduce the fundamental tenets of Bayesian inference, which derive from two basic laws of probability theory. We cover the interpretation of probabilities, discrete and continuous versions of Bayes' rule, parameter estimation, and model comparison. Using seven worked examples, we illustrate these principles and set up some of the technical background for the rest of this special issue of Psychonomic Bulletin & Review. Supplemental material is available via https://osf.io/wskex/.


Subjects
Bayes Theorem, Probability Theory, Psychology, Humans
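In the spirit of the tutorial in entry 7, here is a short worked example of discrete Bayes' rule with two simple hypotheses about a coin's bias; the numbers are illustrative and not taken from the article.

```python
# Discrete Bayes' rule: posterior is proportional to prior times likelihood.
import numpy as np
from scipy.stats import binom

# Hypotheses: the coin is fair (theta = 0.5) or biased (theta = 0.75),
# with equal prior probability.
theta = np.array([0.5, 0.75])
prior = np.array([0.5, 0.5])

# Data: 8 heads in 10 flips.
likelihood = binom.pmf(8, 10, theta)

posterior = prior * likelihood / np.sum(prior * likelihood)
bayes_factor = likelihood[1] / likelihood[0]   # evidence for biased vs. fair

print(posterior)       # approx. [0.14, 0.86]
print(bayes_factor)    # approx. 6.4: how much the data shift belief between hypotheses
```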
8.
Behav Brain Sci ; 41: e157, 2018 01.
Article in English | MEDLINE | ID: mdl-31064530

ABSTRACT

The commentaries on our target article are insightful and constructive. There were some critical notes, but many commentaries agreed with, or even amplified, our message. The first section of our response addresses comments pertaining to specific parts of the target article. The second section provides a response to the commentaries' suggestions to make replication mainstream. The final section contains concluding remarks.


Subjects
Behavioral Sciences, Problem Solving
9.
Psychon Bull Rev ; 25(1): 58-76, 2018 02.
Article in English | MEDLINE | ID: mdl-28685272

ABSTRACT

Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.


Subjects
Bayes Theorem, Psychology, Software, Humans, Research Design
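The default Bayesian t-tests in JASP and the BayesFactor R package follow the JZS approach of Rouder et al. (2009). As a from-scratch illustration of the kind of computation behind the point-and-click analysis in entry 9, the sketch below evaluates that Bayes factor for a one-sample t-test; it is not the packages' own code, and the t value and sample size are invented.

```python
# Default (JZS) Bayes factor BF10 for a one-sample t-test, with a Cauchy(0, r)
# prior on the standardized effect size (r = sqrt(2)/2 is the common default).
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    nu = n - 1

    def integrand(g):
        # Marginal likelihood of t for a given prior variance g, times the
        # inverse-gamma(1/2, r^2/2) prior density of g.
        marginal = (1 + n * g) ** -0.5 * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
        prior_g = (r / np.sqrt(2 * np.pi)) * g ** -1.5 * np.exp(-(r**2) / (2 * g))
        return marginal * prior_g

    m1, _ = integrate.quad(integrand, 0, np.inf)   # marginal likelihood under H1
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)        # likelihood under the point null
    return m1 / m0

print(jzs_bf10(t=2.5, n=30))   # modest evidence for an effect over the point null
```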
10.
Psychon Bull Rev ; 25(1): 219-234, 2018 02.
Article in English | MEDLINE | ID: mdl-28660424

ABSTRACT

In this guide, we present a reading list to serve as a concise introduction to Bayesian data analysis. The introduction is geared toward reviewers, editors, and interested researchers who are new to Bayesian statistics. We provide commentary for eight recommended sources, which together cover the theoretical and practical cornerstones of Bayesian statistics in psychology and related sciences. The resources are presented in an incremental order, starting with theoretical foundations and moving on to applied issues. In addition, we outline an additional 32 articles and books that can be consulted to gain background knowledge about various theoretical specifics and Bayesian approaches to frequently used models. Our goal is to offer researchers a starting point for understanding the core tenets of Bayesian analysis, while requiring a low level of time commitment. After consulting our guide, the reader should understand how and why Bayesian methods work, and feel able to evaluate their use in the behavioral and social sciences.


Subjects
Bayes Theorem, Statistical Data Interpretation, Humans, Researchers
11.
Soc Psychol Personal Sci ; 8(8): 875-881, 2017 Nov.
Article in English | MEDLINE | ID: mdl-29276574

ABSTRACT

Psychology journals rarely publish nonsignificant results. At the same time, it is often very unlikely (or "too good to be true") that a set of studies yields exclusively significant results. Here, we use likelihood ratios to explain when sets of studies that contain a mix of significant and nonsignificant results are likely to be true or "too true to be bad." As we show, mixed results are not only likely to be observed in lines of research but also, when observed, often provide evidence for the alternative hypothesis, given reasonable levels of statistical power and an adequately controlled low Type I error rate. Researchers should feel comfortable submitting such lines of research with an internal meta-analysis for publication. A better understanding of probabilities, accompanied by more realistic expectations of what real sets of studies look like, might be an important step in mitigating publication bias in the scientific literature.
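A minimal Python sketch of the likelihood-ratio reasoning in entry 11: how probable is a given mix of significant and nonsignificant results if a real effect is studied with a given power, versus if the null is true and only the Type I error rate produces significant results? The power, alpha, and study counts below are illustrative only.

```python
# Likelihood ratio for observing k significant results out of n studies.
from scipy.stats import binom

def mixed_results_lr(k_significant, n_studies, power=0.8, alpha=0.05):
    p_h1 = binom.pmf(k_significant, n_studies, power)   # probability under a true effect
    p_h0 = binom.pmf(k_significant, n_studies, alpha)   # probability under the null
    return p_h1 / p_h0

# Two out of three significant results: far more likely under H1 than under H0.
print(mixed_results_lr(2, 3))

# All three significant: even more likely under H1 here, although with many
# studies an all-significant record becomes improbable under realistic power
# (the "too good to be true" pattern).
print(mixed_results_lr(3, 3))
```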

12.
Behav Brain Sci ; 41: e120, 2017 Oct 25.
Article in English | MEDLINE | ID: mdl-29065933

ABSTRACT

Many philosophers of science and methodologists have argued that the ability to repeat studies and obtain similar results is an essential component of science. A finding is elevated from single observation to scientific evidence when the procedures that were used to obtain it can be reproduced and the finding itself can be replicated. Recent replication attempts show that some high-profile results, most notably in psychology but in many other disciplines as well, cannot be replicated consistently. These replication attempts have generated a considerable amount of controversy, and the issue of whether direct replications have value has, in particular, proven to be contentious. However, much of this discussion has occurred in published commentaries and social media outlets, resulting in a fragmented discourse. To address the need for an integrative summary, we review various types of replication studies and then discuss the most commonly voiced concerns about direct replication. We provide detailed responses to these concerns and consider different statistical ways to evaluate replications. We conclude there are no theoretical or statistical obstacles to making direct replication a routine aspect of psychological science.

13.
F1000Res ; 5: 1778, 2016.
Article in English | MEDLINE | ID: mdl-27606051

ABSTRACT

In their 2015 paper, Thorstenson, Pazda, and Elliot offered evidence from two experiments that perception of colors on the blue-yellow axis was impaired if the participants had watched a sad movie clip, compared to participants who watched clips designed to induce a happy or neutral mood. Subsequently, these authors retracted their article, citing a mistake in their statistical analyses and a problem with the data in one of their experiments. Here, we discuss a number of other methodological problems with Thorstenson et al.'s experimental design, and also demonstrate that the problems with the data go beyond what these authors reported. We conclude that repeating one of the two experiments, with the minor revisions proposed by Thorstenson et al., will not be sufficient to address the problems with this work.

14.
PLoS One ; 11(2): e0149794, 2016.
Article in English | MEDLINE | ID: mdl-26919473

ABSTRACT

We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors, a quantity that can be used to express comparative evidence not only for the alternative hypothesis but also for the null hypothesis, for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.


Subjects
Bayes Theorem, Psychology, Publication Bias, Research Design, Humans, Reproducibility of Results, Sample Size
15.
Front Psychol ; 6: 1410, 2015.
Article in English | MEDLINE | ID: mdl-26441782

ABSTRACT

Children are exceptional, even 'super,' imitators but comparatively poor independent problem-solvers or innovators. Yet, imitation and innovation are both necessary components of cumulative cultural evolution. Here, we explored the relationship between imitation and innovation by assessing children's ability to generate a solution to a novel problem by imitating two different action sequences demonstrated by two different models, an example of imitation by combination, which we refer to as "summative imitation." Children (N = 181) from 3 to 5 years of age and across three experiments were tested in a baseline condition or in one of six demonstration conditions, varying in the number of models and opening techniques demonstrated. Across experiments, more than 75% of children evidenced summative imitation, opening both compartments of the problem box and retrieving the reward hidden in each. Generally, learning different actions from two different models was as good as (and in some cases better than) learning from one model, but the underlying representations appear to be the same in both demonstration conditions. These results show that summative imitation not only facilitates imitation learning but can also result in new solutions to problems, an essential feature of innovation and cumulative culture.
