Results 1 - 20 of 124
3.
PLoS One ; 10(6): e0127872, 2015.
Article in English | MEDLINE | ID: mdl-26061881

ABSTRACT

In this project I investigate the use and possible misuse of p values in papers published in five high-ranked journals in experimental psychology, using a data set of over 135,000 p values from more than five thousand papers. I inspect (1) the way in which the p values are reported and (2) their distribution. The main findings are as follows. First, some authors appear to choose the mode of reporting their results arbitrarily, and they often do so in a way that makes their findings seem more statistically significant than they really are (which is well known to improve the chances of publication). Specifically, they frequently report p values "just above" significance thresholds directly, whereas other values are reported by means of inequalities (e.g., "p < .1"); they round p values down more eagerly than up; and they appear to choose between significance thresholds, and between one- and two-sided tests, only after seeing the data. Further, about 9.2% of reported p values are inconsistent with their underlying statistics (e.g., F or t), and there appear to be "too many" "just significant" values. One interpretation is that researchers tend to choose the model, or include/discard observations, so as to bring the p value to the right side of the threshold.
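The consistency check described above can be sketched in a few lines: recompute the p value from a reported test statistic and its degrees of freedom, then compare it against the stated threshold. The reported result below is invented for illustration; only the recomputation logic is the point.

```python
from scipy import stats

def recompute_p(stat_type, value, df1, df2=None, two_sided=True):
    """Recompute a p value from a reported test statistic and its degrees of freedom."""
    if stat_type == "t":
        p = stats.t.sf(abs(value), df1)
        return 2 * p if two_sided else p
    if stat_type == "F":
        return stats.f.sf(value, df1, df2)
    raise ValueError("unsupported statistic")

# Hypothetical reported result: t(28) = 2.05, "p < .05"
p = recompute_p("t", 2.05, 28)
print(round(p, 4), "consistent with 'p < .05'?", p < 0.05)
```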


Subjects
Psychology, Experimental/statistics & numerical data; Publication Bias/statistics & numerical data; Data Interpretation, Statistical; Humans; Probability; Psychology, Experimental/methods; Publications/statistics & numerical data
4.
Behav Res Methods ; 47(2): 355-60, 2015 Jun.
Article in English | MEDLINE | ID: mdl-24788325

ABSTRACT

Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.


Subjects
Models, Psychological; Psychology, Experimental/methods; Psychology, Experimental/statistics & numerical data; Reaction Time; Humans
5.
Br J Math Stat Psychol ; 68(2): 220-45, 2015 May.
Article in English | MEDLINE | ID: mdl-24975402

ABSTRACT

In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number).
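As a rough illustration of the exploration-versus-confirmation contrast, the toy sketch below compares an equality-constrained model with an unconstrained model by AIC and checks whether the sample means respect a hypothesized ordering. The data are made up, and genuine order-restricted model selection (the approach evaluated in the paper) uses adjusted penalties rather than this naive AIC.

```python
import numpy as np

def aic_equal_vs_free(groups):
    """Toy comparison of 'all means equal' vs. 'means free' by AIC (normal errors)."""
    y = np.concatenate(groups)
    n = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)          # model 0: one common mean
    aic0 = n * np.log(rss0 / n) + 2 * 2         # parameters: mean + variance
    rss1 = sum(np.sum((g - g.mean()) ** 2) for g in groups)   # model 1: separate means
    aic1 = n * np.log(rss1 / n) + 2 * (len(groups) + 1)
    return aic0, aic1

groups = [np.array([2.1, 2.5, 1.9]), np.array([2.8, 3.1, 2.6]), np.array([3.5, 3.9, 3.2])]
aic_eq, aic_free = aic_equal_vs_free(groups)
means = [g.mean() for g in groups]
order_ok = all(m1 <= m2 for m1, m2 in zip(means, means[1:]))  # does the data respect mu1<=mu2<=mu3?
print(round(aic_eq, 2), round(aic_free, 2), order_ok)
```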


Assuntos
Teorema de Bayes , Teoria Psicológica , Psicologia Experimental/estatística & dados numéricos , Análise de Variância , Análise Fatorial , Feminino , Humanos , Liderança , Masculino , Probabilidade
6.
Psychol Rep ; 115(3): 741-7, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25457093

ABSTRACT

Social scientists are often interested in computing the proportion of overlap and nonoverlap between two normal distributions that are separated by some magnitude. In his popular book, Statistical Power Analysis for the Behavioral Sciences (1988, 2nd ed.), Jacob Cohen provided a table (Table 2.2.1) for determining such proportions from common values of separation. Unfortunately, Cohen's proportions are inconsistent with his explication of the popular index of effect size, d; and his proportions are underestimates of distributional overlap and overestimates of nonoverlap. The authors explain how Cohen derived his values and then provide a revised, corrected table of proportions that also match values presented elsewhere.
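The standard overlapping-coefficient formula for two unit-variance normal distributions whose means differ by d is OVL = 2Φ(−|d|/2); the short sketch below computes overlap and nonoverlap for a few conventional effect sizes. These are the widely used textbook values, not necessarily the exact entries of the revised table described in the abstract.

```python
from scipy.stats import norm

def overlap(d):
    """Overlapping coefficient of two unit-variance normals whose means differ by d."""
    return 2 * norm.cdf(-abs(d) / 2)

for d in (0.2, 0.5, 0.8):
    ovl = overlap(d)
    print(f"d = {d}: overlap = {ovl:.3f}, nonoverlap = {1 - ovl:.3f}")
```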


Assuntos
Distribuição Normal , Psicologia Experimental/estatística & dados numéricos , Psicometria/estatística & dados numéricos , Animais , Comportamento Exploratório , Aprendizagem em Labirinto , Ratos , Retenção Psicológica
7.
J Appl Behav Anal ; 47(2): 380-403, 2014.
Article in English | MEDLINE | ID: mdl-24817436

ABSTRACT

To study the influences between basic and applied research in behavior analysis, we analyzed the coauthorship interactions of authors who published in JABA and JEAB from 1980 to 2010. We paid particular attention to authors who published in both JABA and JEAB (dual authors) as potential agents of cross-field interactions. We present a comprehensive analysis of dual authors' coauthorship interactions using social networks methodology and key word analysis. The number of dual authors more than doubled (26 to 67) and their productivity tripled (7% to 26% of JABA and JEAB articles) between 1980 and 2010. Dual authors stood out in terms of number of collaborators, number of publications, and ability to interact with multiple groups within the field. The steady increase in JEAB and JABA interactions through coauthors and the increasing range of topics covered by dual authors provide a basis for optimism regarding the progressive integration of basic and applied behavior analysis.
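A minimal sketch of the coauthorship-network idea: build an undirected graph from authorship records, flag authors who published in both journals, and count their collaborators. The author names and records below are hypothetical; only the graph construction and the dual-author test reflect the kind of analysis described.

```python
import networkx as nx

# toy records: (author list, journal) for each paper; names are hypothetical
papers = [
    (["A. Smith", "B. Jones"], "JEAB"),
    (["B. Jones", "C. Lee"], "JABA"),
    (["C. Lee", "D. Kim"], "JABA"),
]

G = nx.Graph()
journals = {}
for authors, journal in papers:
    for a in authors:
        journals.setdefault(a, set()).add(journal)
    for i in range(len(authors)):
        for j in range(i + 1, len(authors)):
            G.add_edge(authors[i], authors[j])   # coauthorship edge

dual_authors = {a for a, js in journals.items() if {"JABA", "JEAB"} <= js}
for a in dual_authors:
    print(a, "collaborators:", G.degree[a])
```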


Assuntos
Autoria , Pesquisa Comportamental/estatística & dados numéricos , Relações Interpessoais , Psicologia Experimental/estatística & dados numéricos , Editoração/estatística & dados numéricos , Humanos
8.
Psychon Bull Rev ; 21(6): 1415-30, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24841234

ABSTRACT

Loftus (Memory & Cognition 6:312-319, 1978) distinguished between interpretable and uninterpretable interactions. Uninterpretable interactions are ambiguous, because they may be due to two additive main effects (no interaction) combined with a nonlinear relationship between the (latent) outcome variable and its indicator. Interpretable interactions can only be due to the presence of a true interactive effect in the outcome variable, regardless of the relationship that it establishes with its indicator. In the present article, we first show that the same problem can arise when an unmeasured mediator has a nonlinear effect on the measured outcome variable. We then integrate Loftus's arguments with a seemingly contradictory approach to interactions suggested by Rosnow and Rosenthal (Psychological Bulletin 105:143-146, 1989). We show that entire data patterns, not just interaction effects alone, produce interpretable or uninterpretable interactions. Next, we show that the same problem of interpretability can apply to main effects. Lastly, we give concrete advice on what researchers can do to generate data patterns that provide unambiguous evidence for hypothesized interactions.
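The ambiguity Loftus described can be reproduced in a few lines: two factors with purely additive effects on a latent variable, passed through a nonlinear (bounded) measurement function, yield a nonzero interaction contrast on the observed scale. The simulation below is a minimal sketch with arbitrary effect sizes, not the authors' own demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# additive latent effects of two binary factors (no true interaction on the latent scale)
a_eff, b_eff = 1.0, 1.0
cells = {}
for a in (0, 1):
    for b in (0, 1):
        latent = a * a_eff + b * b_eff + rng.normal(0, 0.3, 10000)
        observed = 1 / (1 + np.exp(-latent))      # nonlinear (bounded) measurement scale
        cells[(a, b)] = observed.mean()

# interaction contrast on the observed scale
contrast = cells[(1, 1)] - cells[(1, 0)] - cells[(0, 1)] + cells[(0, 0)]
print(round(contrast, 3))   # nonzero despite purely additive latent effects
```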


Assuntos
Análise de Variância , Modelos Psicológicos , Modelos Estatísticos , Psicologia Experimental/estatística & dados numéricos , Interpretação Estatística de Dados , Humanos
9.
Psychon Bull Rev ; 21(5): 1180-7, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24638826

ABSTRACT

Recent controversies have questioned the quality of scientific practice in the field of psychology, but these concerns are often based on anecdotes and seemingly isolated cases. To gain a broader perspective, this article applies an objective test for excess success to a large set of articles published in the journal Psychological Science between 2009 and 2012. When empirical studies succeed at a rate much higher than is appropriate for the estimated effects and sample sizes, readers should suspect that unsuccessful findings have been suppressed, the experiments or analyses were improper, or the theory does not properly account for the data. In total, problems appeared for 82 % (36 out of 44) of the articles in Psychological Science that had four or more experiments and could be analyzed.
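The logic of an excess-success test can be sketched as follows: estimate the power of each experiment and multiply the powers to get the probability that every experiment in an article succeeds; a long run of successes despite modest power is suspicious. The effect sizes and sample sizes below are invented, and the published test estimates power from the reported statistics rather than assuming it.

```python
import numpy as np
from scipy import stats

def t_test_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t test for effect size d with n per group."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

# hypothetical article: four experiments, all reported as significant
powers = [t_test_power(d, n) for d, n in [(0.45, 20), (0.5, 25), (0.4, 30), (0.55, 18)]]
p_all_succeed = float(np.prod(powers))
print([round(p, 2) for p in powers], "P(all four significant) =", round(p_all_succeed, 3))
```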


Assuntos
Publicações Periódicas como Assunto/estatística & dados numéricos , Psicologia Experimental , Estatística como Assunto/normas , Viés , Interpretação Estatística de Dados , Humanos , Modelos Estatísticos , Psicologia Experimental/métodos , Psicologia Experimental/normas , Psicologia Experimental/estatística & dados numéricos , Tamanho da Amostra
10.
Br J Math Stat Psychol ; 67(3): 451-70, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24188158

ABSTRACT

Virtually all discussions and applications of statistical mediation analysis have been based on the condition that the independent variable is dichotomous or continuous, even though investigators frequently are interested in testing mediation hypotheses involving a multicategorical independent variable (such as two or more experimental conditions relative to a control group). We provide a tutorial illustrating an approach to estimation of and inference about direct, indirect, and total effects in statistical mediation analysis with a multicategorical independent variable. The approach is mathematically equivalent to analysis of (co)variance and reproduces the observed and adjusted group means while also generating effects having simple interpretations. Supplementary material available online includes extensions to this approach and Mplus, SPSS, and SAS code that implements it.
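A minimal sketch of mediation with a multicategorical independent variable: dummy-code the three conditions against the control group, estimate the relative a paths from the mediator model and the b path and relative direct effects from the outcome model, and multiply to obtain relative indirect effects. The simulated data and coefficient values are placeholders; the tutorial's own examples and inference procedures (e.g., bootstrapping) are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# three conditions (control + two treatments), dummy-coded against control
n = 300
group = rng.integers(0, 3, n)
d1, d2 = (group == 1).astype(float), (group == 2).astype(float)
m = 0.5 * d1 + 0.8 * d2 + rng.normal(size=n)
y = 0.6 * m + 0.2 * d1 + rng.normal(size=n)

Xm = sm.add_constant(np.column_stack([d1, d2]))
a = sm.OLS(m, Xm).fit().params[1:]                  # relative a paths (a1, a2)

Xy = sm.add_constant(np.column_stack([d1, d2, m]))
fit_y = sm.OLS(y, Xy).fit()
b = fit_y.params[3]                                 # b path
c_prime = fit_y.params[1:3]                         # relative direct effects

print("relative indirect effects:", a * b, "relative direct effects:", c_prime)
```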


Assuntos
Análise de Variância , Modificador do Efeito Epidemiológico , Modelos Estatísticos , Psicologia Experimental/estatística & dados numéricos , Causalidade , Humanos , Funções Verossimilhança , Computação Matemática , Estatística como Assunto
11.
Br J Math Stat Psychol ; 67(3): 430-50, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24016181

ABSTRACT

In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process.
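A rough sketch of the class of robust ability estimators described: the ML estimating equation for the two-parameter logistic model is reweighted by a function of a residual, and the root is found numerically. The particular weight function (Huber), residual measure, tuning constant, and item parameters below are assumptions chosen for illustration; no asymptotic standard errors are computed.

```python
import numpy as np
from scipy.optimize import brentq

def p_2pl(theta, a, b):
    """Item response probability under the two-parameter logistic model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def huber_weight(r, k=1.345):
    r = np.abs(r)
    return np.where(r <= k, 1.0, k / r)

def estimate_theta(x, a, b, robust=True):
    """Solve sum_i w(r_i) * a_i * (x_i - P_i(theta)) = 0; w = 1 gives the ML estimator."""
    def ee(theta):
        p = p_2pl(theta, a, b)
        w = huber_weight((x - p) / np.sqrt(p * (1 - p))) if robust else 1.0
        return np.sum(w * a * (x - p))
    return brentq(ee, -6.0, 6.0)

# ten hypothetical items; the pattern fits theta near 0 except a lucky guess on the hardest item
a = np.ones(10)
b = np.array([-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0])
x = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 1])
print("ML:", round(estimate_theta(x, a, b, robust=False), 2),
      "robust:", round(estimate_theta(x, a, b), 2))
```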


Assuntos
Aptidão , Modelos Psicológicos , Modelos Estatísticos , Testes Psicológicos/estatística & dados numéricos , Psicologia Experimental/estatística & dados numéricos , Psicometria/estatística & dados numéricos , Humanos , Funções Verossimilhança , Análise de Regressão , Reprodutibilidade dos Testes , Estatística como Assunto
12.
Br J Math Stat Psychol ; 67(3): 388-407, 2014 Nov.
Article in English | MEDLINE | ID: mdl-23992122

ABSTRACT

Latent trait models for responses and response times in tests often lack a substantial interpretation in terms of a cognitive process model. This is a drawback because process models are helpful in clarifying the meaning of the latent traits. In the present paper, a new model for responses and response times in tests is presented. The model is based on the proportional hazards model for competing risks. Two processes are assumed, one reflecting the increase in knowledge and the second the tendency to discontinue. The processes can be characterized by two proportional hazards models whose baseline hazard functions correspond to the temporary increase in knowledge and discouragement. The model can be calibrated with marginal maximum likelihood estimation and an application of the ECM algorithm. Two tests of model fit are proposed. The amenability of the proposed approaches to model calibration and model evaluation is demonstrated in a simulation study. Finally, the model is used for the analysis of two empirical data sets.
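The competing-risks idea behind the model can be illustrated with a toy simulation: one latent process for solving the item and one for giving up, with the observed response time being whichever finishes first and correctness determined by which process won. The sketch below uses constant hazards (exponential times) for simplicity, whereas the paper specifies proportional hazards models with flexible baseline hazards and ECM-based estimation.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_item(rate_solve=0.8, rate_quit=0.3, n=10000):
    """Two competing latent processes: solving the item vs. giving up."""
    t_solve = rng.exponential(1 / rate_solve, n)   # constant-hazard stand-in for the knowledge process
    t_quit = rng.exponential(1 / rate_quit, n)     # constant-hazard stand-in for discouragement
    rt = np.minimum(t_solve, t_quit)               # observed response time = first process to finish
    correct = t_solve < t_quit                     # correct only if solving wins the race
    return rt, correct

rt, correct = simulate_item()
print("P(correct) =", round(correct.mean(), 3), "mean RT =", round(rt.mean(), 2))
```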


Assuntos
Avaliação Educacional/estatística & dados numéricos , Modelos Psicológicos , Modelos Estatísticos , Modelos de Riscos Proporcionais , Testes Psicológicos/estatística & dados numéricos , Psicologia Experimental/estatística & dados numéricos , Psicometria/estatística & dados numéricos , Tempo de Reação , Algoritmos , Aprendizagem por Associação , Humanos , Funções Verossimilhança , Reconhecimento Visual de Modelos , Resolução de Problemas
13.
Br J Math Stat Psychol ; 67(3): 408-29, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24028625

ABSTRACT

The study explores the robustness to violations of normality and sphericity of linear mixed models when they are used with the Kenward-Roger procedure (KR) in split-plot designs in which the groups have different distributions and sample sizes are small. The focus is on examining the effect of skewness and kurtosis. To this end, a Monte Carlo simulation study was carried out, involving a split-plot design with three levels of the between-subjects grouping factor and four levels of the within-subjects factor. The results show that: (1) the violation of the sphericity assumption did not affect KR robustness when the assumption of normality was not fulfilled; (2) the robustness of the KR procedure decreased as skewness in the distributions increased, there being no strong effect of kurtosis; and (3) the type of pairing between kurtosis and group size was shown to be a relevant variable to consider when using this procedure, especially when pairing is positive (i.e., when the largest group is associated with the largest value of the kurtosis coefficient and the smallest group with its smallest value). The KR procedure can be a good option for analysing repeated-measures data when the groups have different distributions, provided the total sample sizes are 45 or larger and the data are not highly or extremely skewed.
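The Monte Carlo logic of such a robustness study can be sketched generically: generate groups from skewed distributions with unequal sizes under a true null, apply a test repeatedly, and tally the empirical Type I error. The skeleton below uses a one-way ANOVA on independent groups as a simple stand-in; it is not the Kenward-Roger mixed-model analysis of split-plot data examined in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def skewed_sample(n, shape):
    """Gamma deviates recentred to mean 0, sd 1 (smaller shape = more skew)."""
    g = rng.gamma(shape, 1.0, size=n)
    return (g - shape) / np.sqrt(shape)

def empirical_alpha(ns=(10, 15, 20), shapes=(1.0, 2.0, 8.0), reps=5000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        groups = [skewed_sample(n, s) for n, s in zip(ns, shapes)]
        if stats.f_oneway(*groups).pvalue < alpha:
            hits += 1
    return hits / reps   # should be near alpha if the test is robust

print(empirical_alpha())
```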


Assuntos
Modelos Lineares , Psicologia Experimental/estatística & dados numéricos , Psicometria/estatística & dados numéricos , Distribuições Estatísticas , Viés , Método de Monte Carlo , Distribuição Normal , Reprodutibilidade dos Testes , Tamanho da Amostra
14.
Behav Res Methods ; 46(2): 357-71, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24234337

ABSTRACT

Latent curve models (LCMs) have been used extensively to analyze longitudinal data. However, little is known about the power of LCMs to detect nonlinear trends when they are present in the data. For this study, we utilized simulated data to investigate the power of LCMs to detect the mean of the quadratic slope, Type I error rates, and rates of nonconvergence during the estimation of quadratic LCMs. Five factors were examined: the number of time points, growth magnitude, interindividual variability, sample size, and the R²s of the measured variables. The results showed that the empirical Type I error rates were close to the nominal value of 5%. The empirical power to detect the mean of the quadratic slope was affected by the simulation factors. Finally, a substantial proportion of samples failed to converge under conditions of no to small variation in the quadratic factor, small sample sizes, and small R²s of the repeated measures. In general, we recommend that quadratic LCMs be based on samples of (a) at least 250, but ideally 400, when four measurement points are available; (b) at least 100, but ideally 150, when six measurement points are available; and (c) at least 50, but ideally 100, when ten measurement points are available.
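A crude stand-in for the power question can be sketched with a two-stage procedure: simulate quadratic growth for each person, estimate each person's quadratic coefficient by OLS, and test the mean coefficient against zero across replications. The population values below are arbitrary, and this is not a latent curve (SEM) analysis, so it only illustrates the simulation logic, not the reported recommendations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def quadratic_power(n=250, times=4, beta2=0.05, sd_e=1.0, reps=2000, alpha=0.05):
    """Rough power to detect a mean quadratic trend via per-person OLS + one-sample t test."""
    t = np.arange(times)
    X = np.column_stack([np.ones(times), t, t ** 2])
    pinv = np.linalg.pinv(X)
    hits = 0
    for _ in range(reps):
        b0 = rng.normal(0.0, 1.0, n)
        b1 = rng.normal(0.3, 0.2, n)
        b2 = rng.normal(beta2, 0.05, n)
        Y = b0[:, None] + b1[:, None] * t + b2[:, None] * t ** 2 + rng.normal(0, sd_e, (n, times))
        b2_hat = (pinv @ Y.T)[2]                      # each person's estimated quadratic coefficient
        if stats.ttest_1samp(b2_hat, 0.0).pvalue < alpha:
            hits += 1
    return hits / reps

print(quadratic_power())
```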


Assuntos
Modelos Estatísticos , Psicologia Experimental/estatística & dados numéricos , Humanos , Método de Monte Carlo , Análise de Regressão , Projetos de Pesquisa , Tamanho da Amostra
15.
Br J Math Stat Psychol ; 67(3): 471-95, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24192201

ABSTRACT

The minimum-diameter partitioning problem (MDPP) seeks to produce compact clusters, as measured by an overall goodness-of-fit measure known as the partition diameter, which represents the maximum dissimilarity between any two objects placed in the same cluster. Complete-linkage hierarchical clustering is perhaps the best-known heuristic method for the MDPP and has an extensive history of applications in psychological research. Unfortunately, this method has several inherent shortcomings that impede the model selection process, such as: (1) sensitivity to the input order of the objects, (2) failure to obtain a globally optimal minimum-diameter partition when cutting the tree at K clusters, and (3) the propensity for a large number of alternative minimum-diameter partitions for a given K. We propose that each of these problems can be addressed by applying an algorithm that finds all of the minimum-diameter partitions for different values of K. Model selection is then facilitated by considering, for each value of K, the reduction in the partition diameter, the number of alternative optima, and the partition agreement among the alternative optima. Using five examples from the empirical literature, we show the practical value of the proposed process for facilitating model selection for the MDPP.
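The complete-linkage heuristic and the partition-diameter criterion can be sketched directly with SciPy: cut the complete-linkage tree at successive values of K and compute each partition's diameter (the largest within-cluster dissimilarity). This reproduces the heuristic the paper critiques, not the exhaustive algorithm it proposes; the data are random placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 2))

d = pdist(X)                                  # pairwise dissimilarities (condensed)
D = squareform(d)
Z = linkage(d, method="complete")             # complete-linkage heuristic for the MDPP

def partition_diameter(labels, D):
    """Largest dissimilarity between two objects assigned to the same cluster."""
    return max(D[np.ix_(labels == k, labels == k)].max() for k in np.unique(labels))

for K in range(2, 6):
    labels = fcluster(Z, t=K, criterion="maxclust")
    print(K, round(partition_diameter(labels, D), 3))
```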


Assuntos
Análise por Conglomerados , Interpretação Estatística de Dados , Modelos Estatísticos , Testes Psicológicos/estatística & dados numéricos , Psicologia Experimental/estatística & dados numéricos , Psicometria/estatística & dados numéricos , Algoritmos , Humanos , Testes de Personalidade/estatística & dados numéricos , Estatística como Assunto , Orientação Vocacional/estatística & dados numéricos
16.
J Exp Psychol Appl ; 19(4): 285-6, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24341316

ABSTRACT

In this introduction to the December 2013 issue of the Journal of Experimental Psychology: Applied, the editor discusses her goals to get the Journal back on track. She gives thanks for the research that continues to advance both science and practice in experimental psychology.


Assuntos
Publicações Periódicas como Assunto , Psicologia Experimental , Publicações Periódicas como Assunto/estatística & dados numéricos , Psicologia Experimental/estatística & dados numéricos
17.
Behav Res Methods ; 44(3): 644-55, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22806707

ABSTRACT

State-trace analysis (Bamber, Journal of Mathematical Psychology, 19, 137-181, 1979) is a graphical analysis that can determine whether one or more than one latent variable mediates an apparent dissociation between the effects of two experimental manipulations. State-trace analysis makes only ordinal assumptions and so is not confounded by the range effects that plague alternative methods, especially when performance is measured on a bounded scale (such as accuracy). We describe and illustrate the application of a freely available, GUI-driven package for the R language, StateTrace. StateTrace automates many aspects of a state-trace analysis of accuracy and other binary response data, including customizable graphics and the efficient management of computationally intensive Bayesian methods for quantifying evidence about the outcomes of a state-trace experiment, developed by Prince, Brown, and Heathcote (Psychological Methods, 17, 78-99, 2012).
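The ordinal core of a state-trace analysis can be sketched very simply: order the conditions by performance on one measure and check whether performance on the other measure is monotone in that order; a violation suggests more than one latent variable. The accuracies below are hypothetical, and the StateTrace package itself quantifies the evidence with Bayesian methods rather than this noise-free check.

```python
import numpy as np

# hypothetical accuracy on two measures across several experimental conditions ("states")
acc_A = np.array([0.62, 0.70, 0.75, 0.81, 0.88])
acc_B = np.array([0.55, 0.66, 0.64, 0.78, 0.85])

order = np.argsort(acc_A)                       # order states by performance on measure A
monotone = np.all(np.diff(acc_B[order]) >= 0)   # is measure B monotone in that order?
print("consistent with a single latent variable (monotone state trace)?", bool(monotone))
```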


Assuntos
Teorema de Bayes , Interpretação Estatística de Dados , Computação Matemática , Neurociências/estatística & dados numéricos , Linguagens de Programação , Psicologia Experimental/estatística & dados numéricos , Software , Gráficos por Computador , Humanos
18.
Psychon Bull Rev ; 19(3): 395-404, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22441956

ABSTRACT

Repeated measures designs are common in experimental psychology. Because of the correlational structure in these designs, the calculation and interpretation of confidence intervals is nontrivial. One solution was provided by Loftus and Masson (Psychonomic Bulletin & Review 1:476-490, 1994). This solution, although widely adopted, has the limitation of implying same-size confidence intervals for all factor levels, and therefore does not allow for the assessment of variance homogeneity assumptions (i.e., the circularity assumption, which is crucial for the repeated measures ANOVA). This limitation and the method's perceived complexity have sometimes led scientists to use a simplified variant, based on a per-subject normalization of the data (Bakeman & McArthur, Behavior Research Methods, Instruments, & Computers 28:584-589, 1996; Cousineau, Tutorials in Quantitative Methods for Psychology 1:42-45, 2005; Morey, Tutorials in Quantitative Methods for Psychology 4:61-64, 2008; Morrison & Weaver, Behavior Research Methods, Instruments, & Computers 27:52-56, 1995). We show that this normalization method leads to biased results and is uninformative with regard to circularity. Instead, we provide a simple, intuitive generalization of the Loftus and Masson method that allows for assessment of the circularity assumption.
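The per-subject normalization variant discussed here (with Morey's later correction factor) can be sketched as follows: subtract each subject's mean, add back the grand mean, and compute per-condition confidence intervals from the normalized scores. The data are simulated placeholders; note that the abstract argues this normalization approach is biased and uninformative about circularity, which is precisely its limitation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

n_subj, n_cond = 20, 4
data = rng.normal(0, 1, (n_subj, 1)) + rng.normal([0.0, 0.2, 0.4, 0.6], 0.5, (n_subj, n_cond))

# per-subject normalization: remove each subject's mean, add back the grand mean
norm_data = data - data.mean(axis=1, keepdims=True) + data.mean()

# Morey (2008) correction inflates the normalized variance by M/(M-1)
se = norm_data.std(axis=0, ddof=1) / np.sqrt(n_subj) * np.sqrt(n_cond / (n_cond - 1))
half_width = stats.t.ppf(0.975, n_subj - 1) * se
print(np.round(data.mean(axis=0), 2), np.round(half_width, 2))
```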


Assuntos
Viés , Intervalos de Confiança , Psicologia Experimental/métodos , Projetos de Pesquisa , Humanos , Psicologia Experimental/normas , Psicologia Experimental/estatística & dados numéricos
19.
Behav Res Methods ; 44(3): 675-705, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22351612

ABSTRACT

The characteristics of the stimuli used in an experiment critically determine the theoretical questions the experiment can address. Yet there is relatively little methodological support for selecting optimal sets of items, and most researchers still carry out this process by hand. In this research, we present SOS, an algorithm and software package for the stochastic optimization of stimuli. SOS takes its inspiration from a simple manual stimulus selection heuristic that has been formalized and refined as a stochastic relaxation search. The algorithm rapidly and reliably selects a subset of possible stimuli that optimally satisfy the constraints imposed by an experimenter. This allows the experimenter to focus on selecting an optimization problem that suits his or her theoretical question and to avoid the tedious task of manually selecting stimuli. We detail how this optimization algorithm, combined with a vocabulary of constraints that define optimal sets, allows for the quick and rigorous assessment and maximization of the internal and external validity of experimental items. In doing so, the algorithm facilitates research using factorial, multiple/mixed-effects regression, and other experimental designs. We demonstrate the use of SOS with a case study and discuss other research situations that could benefit from this tool. Support for the generality of the algorithm is demonstrated through Monte Carlo simulations on a range of optimization problems faced by psychologists. The software implementation of SOS and a user manual are provided free of charge for academic purposes as precompiled binaries and MATLAB source files at http://sos.cnbc.cmu.edu.
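The flavor of a stochastic relaxation search over stimulus subsets can be sketched in plain Python: start from a random subset, propose item swaps, and accept them when they reduce a cost function, occasionally accepting worse swaps to escape local minima. The single-property cost, pool, and temperature below are invented; the SOS package itself handles a rich vocabulary of constraints and is distributed as MATLAB code.

```python
import numpy as np

rng = np.random.default_rng(11)

# hypothetical pool: each candidate stimulus has one property (e.g., word frequency)
pool = rng.normal(5.0, 1.5, 500)
target_mean, subset_size, temperature = 4.0, 40, 0.05

subset = rng.choice(len(pool), subset_size, replace=False)
cost = abs(pool[subset].mean() - target_mean)

for _ in range(20000):                          # stochastic swap search
    out = rng.integers(subset_size)
    candidate = rng.integers(len(pool))
    if candidate in subset:
        continue
    new_subset = subset.copy()
    new_subset[out] = candidate
    new_cost = abs(pool[new_subset].mean() - target_mean)
    # accept improvements always; accept worse swaps occasionally (stochastic relaxation)
    if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temperature):
        subset, cost = new_subset, new_cost

print(round(pool[subset].mean(), 3), "target", target_mean)
```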


Assuntos
Algoritmos , Psicologia Experimental/estatística & dados numéricos , Software , Processos Estocásticos , Humanos , Método de Monte Carlo , Análise de Regressão , Reprodutibilidade dos Testes , Projetos de Pesquisa
20.
Behav Res Methods ; 44(3): 806-44, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22311738

ABSTRACT

Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
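One simple way to see how a permutation test for the indirect effect works: estimate a and b by least squares, then build a null distribution of ab by shuffling X (which breaks its links to both M and Y) and recomputing ab each time. This is a loose sketch of the general idea under one permutation scheme; the specific tests and permutation confidence intervals evaluated in the paper differ in their resampling details.

```python
import numpy as np

rng = np.random.default_rng(3)

def ab_hat(x, m, y):
    a = np.polyfit(x, m, 1)[0]                      # slope of M on X
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]     # partial slope of Y on M, controlling X
    return a * b

def perm_test_ab(x, m, y, n_perm=5000):
    obs = ab_hat(x, m, y)
    null = np.empty(n_perm)
    for i in range(n_perm):
        xp = rng.permutation(x)                     # break X's links, keeping (M, Y) intact
        null[i] = ab_hat(xp, m, y)
    return obs, np.mean(np.abs(null) >= abs(obs))   # two-sided permutation p value

n = 100
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)
y = 0.5 * m + rng.normal(size=n)
print(perm_test_ab(x, m, y))
```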


Assuntos
Intervalos de Confiança , Interpretação Estatística de Dados , Modelos Estatísticos , Método de Monte Carlo , Psicologia Experimental/estatística & dados numéricos , Viés , Humanos , Computação Matemática , Análise de Regressão , Software