Results 1 - 20 of 55
5.
J Exp Psychol Gen; 148(4): 688-712, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30973262

ABSTRACT

Research on money priming typically investigates whether exposure to money-related stimuli can affect people's thoughts, feelings, motivations, and behaviors (for a review, see Vohs, 2015). Our study answers the call for a comprehensive meta-analysis examining the available evidence on money priming (Vadillo, Hardwicke, & Shanks, 2016). By conducting a systematic search of the published and unpublished literature on money priming, we sought to achieve three key goals. First, we aimed to assess the presence of biases in the available published literature (e.g., publication bias). Second, in the case of such biases, we sought to derive a more accurate estimate of the effect size after correcting for them. Third, we aimed to investigate whether design factors such as prime type and study setting moderated the money priming effects. Our overall meta-analysis included 246 suitable experiments and showed a significant overall effect size estimate (Hedges' g = .31, 95% CI [0.26, 0.36]). However, asymmetric funnel plots, Egger's test, and two other tests for publication bias indicated that publication bias and related biases are likely. Moderator analyses offered insight into the variation of the money priming effect, indicating, for various types of study designs, whether the effect was present, absent, or biased. We found the largest money priming effect in lab studies that investigated a behavioral dependent measure using a priming technique in which participants actively handled money. Future research should use sufficiently powered preregistered studies to replicate these findings.
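To make the two diagnostics named here concrete, the following is a minimal Python sketch of a DerSimonian-Laird random-effects estimate and Egger's regression test. The effect sizes and standard errors are fabricated for illustration; they are not data from the 246 experiments.

```python
import numpy as np
import statsmodels.api as sm

# Fabricated Hedges' g values and standard errors (illustration only).
g = np.array([0.45, 0.38, 0.52, 0.10, 0.61, 0.33, 0.05, 0.48])
se = np.array([0.12, 0.15, 0.20, 0.25, 0.22, 0.10, 0.30, 0.18])

# DerSimonian-Laird random-effects pooling.
w = 1 / se**2
fixed = np.sum(w * g) / np.sum(w)
Q = np.sum(w * (g - fixed)**2)                  # heterogeneity statistic
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(g) - 1)) / c)         # between-study variance
w_re = 1 / (se**2 + tau2)
pooled = np.sum(w_re * g) / np.sum(w_re)
pooled_se = np.sqrt(1 / np.sum(w_re))
print(f"random-effects g = {pooled:.2f}, "
      f"95% CI [{pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f}]")

# Egger's test: regress the standard normal deviate on precision;
# an intercept far from zero signals funnel-plot asymmetry.
res = sm.OLS(g / se, sm.add_constant(1 / se)).fit()
print(f"Egger intercept = {res.params[0]:.2f}, p = {res.pvalues[0]:.3f}")
```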


Subject(s)
Emotions; Motivation; Humans; Publication Bias; Research Design
6.
PLoS One; 14(4): e0215052, 2019.
Article in English | MEDLINE | ID: mdl-30978228

ABSTRACT

Publication bias is a substantial problem for the credibility of research in general and of meta-analyses in particular, as it yields overestimated effects and may suggest the existence of non-existing effects. Although there is consensus that publication bias exists, how strongly it affects different scientific literatures is currently less well known. We examined evidence of publication bias in a large-scale data set of primary studies that were included in 83 meta-analyses published in Psychological Bulletin (representing meta-analyses from psychology) and 499 systematic reviews from the Cochrane Database of Systematic Reviews (CDSR; representing meta-analyses from medicine). Publication bias was assessed on all homogeneous subsets of primary studies included in the meta-analyses (3.8% of all subsets of meta-analyses published in Psychological Bulletin), because publication bias methods do not have good statistical properties if the true effect size is heterogeneous. Publication bias tests did not reveal evidence of bias in the homogeneous subsets. Overestimation was minimal but statistically significant and appeared to be similar in both fields. However, a Monte Carlo simulation study revealed that the creation of homogeneous subsets produced challenging conditions for publication bias methods, since the number of effect sizes in a subset was rather small (the median was 6). Our findings are consistent with anything from no publication bias up to, in the most extreme case, only 5% of statistically nonsignificant effect sizes being published. These and other findings, in combination with the small percentages of statistically significant primary effect sizes (28.9% and 18.9% for subsets published in Psychological Bulletin and the CDSR, respectively), led to the conclusion that evidence for publication bias in the studied homogeneous subsets is weak, but suggestive of mild publication bias in both psychology and medicine.
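The most extreme scenario mentioned above (only 5% of nonsignificant effect sizes published) is easy to simulate. A toy Monte Carlo sketch, with an arbitrary true effect and sample size, showing how such selective publication inflates the average published effect:

```python
import numpy as np

rng = np.random.default_rng(1)
true_d, n = 0.2, 30                    # arbitrary true effect, per-group n
se = np.sqrt(2 / n)                    # approximate SE of Cohen's d
k = 200_000                            # number of simulated studies

d_hat = rng.normal(true_d, se, k)                  # observed effect sizes
sig = np.abs(d_hat / se) > 1.96                    # two-sided test, alpha = .05
published = sig | (rng.random(k) < 0.05)           # keep only 5% of nonsignificant results

print(f"mean d, all studies:       {d_hat.mean():.3f}")             # ~ 0.20
print(f"mean d, published studies: {d_hat[published].mean():.3f}")  # clearly inflated
```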


Subject(s)
Data Interpretation, Statistical; Medicine; Psychology; Publication Bias; Databases, Factual; Humans; Monte Carlo Method; Quality Control; Selection Bias
7.
Multivariate Behav Res; 54(5): 637-665, 2019.
Article in English | MEDLINE | ID: mdl-30977400

ABSTRACT

Several approaches exist to model interactions between latent variables. However, it is unclear how these perform when item scores are skewed and ordinal. Research on Type D personality serves as a good case study in this respect. In Study 1, we fitted a multivariate interaction model to predict depression and anxiety with Type D personality, operationalized as an interaction between its two subcomponents, negative affectivity (NA) and social inhibition (SI). We constructed this interaction according to four approaches: (1) sum score product; (2) single product indicator; (3) matched product indicators; and (4) latent moderated structural equations (LMS). In Study 2, we compared these interaction models in a simulation study by assessing, for each method, the bias and precision of the estimated interaction effect under varying conditions. In Study 1, all methods showed a significant Type D effect on both depression and anxiety, although this effect diminished after including the NA and SI quadratic effects. Study 2 showed that the LMS approach performed best with respect to minimizing bias and maximizing power, even when item scores were ordinal and skewed. However, when the latent traits themselves were skewed, LMS produced more false-positive conclusions, whereas the matched product indicators approach adequately controlled the false-positive rate.
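Of the four approaches, only the sum-score product needs no SEM software, which makes it easy to sketch. A minimal illustration on simulated data (all coefficients are arbitrary; the real analyses used Type D questionnaire data, and LMS requires dedicated SEM software such as Mplus):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
na = rng.normal(size=n)            # negative affectivity sum score
si = rng.normal(size=n)            # social inhibition sum score
dep = 0.4 * na + 0.2 * si + 0.15 * na * si + rng.normal(size=n)  # depression

# Approach (1): mean-center the sum scores, form the product, and include
# the quadratic terms as in Study 1 to guard against a spurious interaction.
na_c, si_c = na - na.mean(), si - si.mean()
X = np.column_stack([na_c, si_c, na_c * si_c, na_c**2, si_c**2])
res = sm.OLS(dep, sm.add_constant(X)).fit()
print(res.params)  # const, NA, SI, NA x SI (the "Type D" effect), NA^2, SI^2
```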


Subject(s)
Anxiety/epidemiology; Depression/epidemiology; Latent Class Analysis; Type D Personality; Computer Simulation; Humans; Interpersonal Relations; Monte Carlo Method; Multivariate Analysis; Psychiatric Status Rating Scales; Social Behavior
8.
J Biosoc Sci; 50(6): 872-874, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30015605

ABSTRACT

In their response to my criticism of their recent article in Journal of Biosocial Science (te Nijenhuis et al., 2017), te Nijenhuis and van den Hoek (2018) raise four points, none of which concerns my main point that the method of correlated vectors (MCV) applied to item-level data is a flawed method. Here, I discuss te Nijenhuis and van den Hoek's four points. First, I argue that my previous application of MCV to item-level data showed that the method can yield nonsensical results. Second, I note that meta-analytic corrections for sampling error, imperfect measures, restriction of range, and unreliability of the vectors are futile and cannot fix the method. Third, I note that even with perfect data, the method can yield negative correlations. Fourth, I highlight the irrelevance of te Nijenhuis and van den Hoek's (2018) point that my comment had not been published in a peer-reviewed journal by referring to my 2009 and 2017 articles on MCV in peer-reviewed journals.


Subject(s)
Adolescent; Child; Humans; Saudi Arabia
9.
J Biosoc Sci; 50(6): 868-869, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30015606

ABSTRACT

In a recent study, te Nijenhuis et al. (2017) used a version of Jensen's method of correlated vectors to study the nature of ethnic group differences on Raven's Progressive Matrices test. In this comment, the author points out that this method has been shown to be psychometrically inappropriate for studying group differences in performance on dichotomous (correctly or incorrectly scored) items. Specifically, the method uses item statistics, such as the item-total correlation, that necessarily differ across groups differing in ability, and it employs a linear model to test inherently non-linear relations. Wicherts (2017) showed that this method can produce correlations far exceeding r = 0.44 in cases where the group differences cannot possibly be on g because the items measure different traits across the groups. The psychometric problems with their method cast serious doubt on te Nijenhuis et al.'s conclusions concerning the role of g in the studied group difference in cognitive test performance.
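The mechanics of the criticized method are simple to sketch: correlate, across items, the standardized group difference with an item-total correlation. The simulation below uses arbitrary 1PL-style parameters and generates both groups from the same item model, so any structure in the vector correlations reflects item statistics (difficulty, group ability level) rather than anything about the source of the gap:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_items = 2000, 20
b = np.linspace(-1.5, 1.5, n_items)                 # item difficulties

def simulate(mean_ability):
    """Dichotomous responses from a 1PL (Rasch-type) model."""
    theta = rng.normal(mean_ability, 1, n)
    p = 1 / (1 + np.exp(-(theta[:, None] - b)))
    return (rng.random((n, n_items)) < p).astype(float)

hi, lo = simulate(0.5), simulate(-0.5)              # same items, shifted ability

# Per-item standardized group difference and per-group item-total correlations.
d = (hi.mean(0) - lo.mean(0)) / np.sqrt(0.5 * (hi.var(0) + lo.var(0)))
r_hi = np.array([np.corrcoef(hi[:, j], hi.sum(1))[0, 1] for j in range(n_items)])
r_lo = np.array([np.corrcoef(lo[:, j], lo.sum(1))[0, 1] for j in range(n_items)])

# The MCV correlation depends on which group's item statistics are used,
# even though a single common model generated all responses.
print("MCV r, high-group item-total r:", round(np.corrcoef(d, r_hi)[0, 1], 2))
print("MCV r, low-group item-total r: ", round(np.corrcoef(d, r_lo)[0, 1], 2))
```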


Subject(s)
Cognition; Adolescent; Child; Humans; Intelligence Tests; Psychometrics; Saudi Arabia
10.
Behav Brain Sci; 41: e143, 2018 Jan.
Article in English | MEDLINE | ID: mdl-31064583

ABSTRACT

In determining the need to directly replicate, it is crucial to first verify the original results through independent reanalysis of the data. Original results that appear erroneous and that cannot be reproduced by reanalysis offer little evidence to begin with, thereby diminishing the need to replicate. Sharing data and scripts is essential to ensure reproducibility.


Subject(s)
Research Design; Reproducibility of Results
11.
Animals (Basel); 7(12), 2017 Nov 27.
Article in English | MEDLINE | ID: mdl-29186879

ABSTRACT

In this review, the author discusses several of the weak spots in contemporary science, including scientific misconduct, the problems of post hoc hypothesizing (HARKing), outcome switching, theoretical bloopers in formulating research questions and hypotheses, selective reading of the literature, selective citing of previous results, improper blinding and other design failures, p-hacking (researchers' tendency to analyze data in many different ways to find positive, typically significant, results), errors and biases in the reporting of results, and publication bias. The author presents some empirical results highlighting problems that lower the trustworthiness of reported results in scientific literatures, including that of animal welfare studies. Some of the underlying causes of these biases are discussed based on the notion that researchers are only human and hence not immune to confirmation bias, hindsight bias, and minor ethical transgressions. The author discusses solutions in the form of enhanced transparency, sharing of data and materials, (post-publication) peer review, pre-registration, registered reports, improved training, reporting guidelines, replication, dealing with publication bias, alternative inferential techniques, power, and other statistical tools.

12.
Account Res; 24(8): 458-468, 2017.
Article in English | MEDLINE | ID: mdl-29140742

ABSTRACT

The syntax or code used to fit structural equation models (SEMs) conveys valuable information on model specifications and the manner in which SEMs are estimated. We requested SEM syntaxes from a random sample of 229 articles (published in 1998-2013) that ran SEMs using LISREL, AMOS, or Mplus. After exchanging over 500 emails, we ended up obtaining a meagre 57 of the syntaxes used in these articles (24.9% of those we requested). Results for the 129 corresponding authors who replied to our request showed that the odds of a syntax being lost increased by 21% per year since publication of the article, while the odds of actually obtaining a syntax dropped by 13% per year. SEM syntaxes, which are crucial for reproducibility and for correcting errors in the running and reporting of SEMs, are thus often unavailable and are lost rapidly. The preferred solution is mandatory sharing of SEM syntaxes alongside articles or in data repositories.
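The reported percentages correspond to odds ratios from a logistic regression of syntax status on years since publication (exp(b) ≈ 1.21 for "lost"). A minimal sketch of that kind of model on simulated data, with all numbers invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
years = rng.integers(0, 16, 300).astype(float)    # years since publication
logit = -1.5 + np.log(1.21) * years               # true odds ratio 1.21 per year
lost = (rng.random(300) < 1 / (1 + np.exp(-logit))).astype(float)

res = sm.Logit(lost, sm.add_constant(years)).fit(disp=0)
print("odds ratio per year:", round(float(np.exp(res.params[1])), 2))  # ~ 1.21
```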


Subject(s)
Algorithms; Information Dissemination; Models, Theoretical; Cooperative Behavior; Humans; Reproducibility of Results
13.
PLoS One; 12(3): e0172792, 2017.
Article in English | MEDLINE | ID: mdl-28296929

ABSTRACT

A survey in the United States revealed that an alarmingly large percentage of university psychologists admitted having used questionable research practices (QRPs) that can contaminate the research literature with false positive and biased findings. We conducted a replication of this study among Italian research psychologists to investigate whether these findings generalize to other countries. All the original materials were translated into Italian, and members of the Italian Association of Psychology were invited to participate via an online survey. The percentages of Italian psychologists who admitted to having used ten QRPs were similar to the results obtained in the United States, although there were small but significant differences in self-admission rates for some QRPs. Nearly all researchers (88%) admitted using at least one of the practices, and researchers generally considered a practice possibly defensible if they admitted using it, but Italian researchers were much less likely than US researchers to consider a practice defensible. Participants' estimates of the percentage of researchers who have used these practices were greater than the self-admission rates, and participants estimated that researchers would be unlikely to admit using them. In written responses, participants argued that some of these practices are not questionable and that they had used some of them because reviewers and journals demand it. The similarity of the results obtained in the United States, in this study, and in a related study conducted in Germany suggests that the adoption of these practices is an international phenomenon, likely due to systemic features of the international research and publication processes.


Subject(s)
Psychology; Humans; Italy; Research Design; United States
14.
Account Res; 24(3): 127-151, 2017.
Article in English | MEDLINE | ID: mdl-28001440

ABSTRACT

Do lay people and scientists themselves recognize that scientists are human and therefore prone to human fallibilities such as error, bias, and even dishonesty? In a series of three experimental studies and one correlational study (total N = 3,278) we found that the "storybook image of the scientist" is pervasive: American lay people and scientists from over 60 countries attributed considerably more objectivity, rationality, open-mindedness, intelligence, integrity, and communality to scientists than to other highly educated people. Moreover, scientists perceived even larger differences than lay people did. Some groups of scientists also differentiated between categories of scientists: established scientists attributed higher levels of the scientific traits to established scientists than to early-career scientists and Ph.D. students, and higher levels to Ph.D. students than to early-career scientists. Female scientists attributed considerably higher levels of the scientific traits to female scientists than to male scientists. A strong belief in the storybook image and the (human) tendency to attribute higher levels of desirable traits to people in one's own group than to people in other groups may decrease scientists' willingness to adopt recently proposed practices to reduce error, bias, and dishonesty in science.


Subject(s)
Public Opinion; Science; Female; Humans; Male; Students; United States
15.
Front Psychol; 7: 1832, 2016.
Article in English | MEDLINE | ID: mdl-27933012

ABSTRACT

The design, data collection, analysis, and reporting of psychological studies entail many choices that are often arbitrary. The opportunistic use of these so-called researcher degrees of freedom aimed at obtaining statistically significant results is problematic because it increases the chance of false positive results and may inflate effect size estimates. In this review article, we present an extensive list of 34 researcher degrees of freedom in formulating hypotheses and in designing, running, analyzing, and reporting psychological research. The list can be used in research methods education, as a checklist to assess the quality of preregistrations, and to determine the potential for bias due to (arbitrary) choices in unregistered studies.

16.
Perspect Psychol Sci; 11(5): 713-729, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27694466

ABSTRACT

Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform.
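The core idea behind p-uniform can be sketched in a few lines: among statistically significant results, find the effect size under which the conditional p-values are uniform, using the fact that -ln(U) has mean 1 for U ~ Uniform(0, 1). This bare-bones version assumes one-sided z-tests with known standard errors and uses fabricated inputs; it is not the authors' implementation (they provide R code and a web application):

```python
import numpy as np
from scipy import stats, optimize

z_obs = np.array([2.1, 2.5, 1.8, 3.0, 2.2])    # significant z-values (fabricated)
se = np.array([0.20, 0.25, 0.15, 0.30, 0.22])  # standard errors of the effects
z_crit = stats.norm.ppf(0.95)                  # one-sided alpha = .05

def moment_gap(mu):
    """Sum of -ln(conditional p-values) minus its expectation under uniformity."""
    delta = mu / se                            # expected z under true effect mu
    q = stats.norm.sf(z_obs - delta) / stats.norm.sf(z_crit - delta)
    return np.sum(-np.log(q)) - len(z_obs)

mu_hat = optimize.brentq(moment_gap, -1.0, 2.0)   # root = p-uniform-style estimate
print("p-uniform effect size estimate:", round(mu_hat, 3))
```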


Subject(s)
Data Interpretation, Statistical; Meta-Analysis as Topic; Models, Statistical; Humans; Internet; Psychology/methods; Publication Bias; Software
17.
PLoS One; 11(9): e0163251, 2016.
Article in English | MEDLINE | ID: mdl-27684371

ABSTRACT

BACKGROUND: Personality influences decision making and ethical considerations, but its influence on the occurrence of research misbehavior has never been studied. This study aims to determine the association between personality traits and self-reported questionable research practices and research misconduct. We hypothesized that narcissistic, Machiavellian, and psychopathic traits, as well as self-esteem, are associated with research misbehavior. METHODS: This cross-sectional study included 535 Dutch biomedical scientists (response rate 65%) from all hierarchical layers of four university medical centers in the Netherlands. We used validated personality questionnaires covering the Dark Triad (narcissism, psychopathy, and Machiavellianism), Rosenberg's Self-Esteem Scale, and the Publication Pressure Questionnaire (PPQ), together with demographic and job-specific characteristics, to investigate the association of personality traits with a composite research misbehavior severity score. FINDINGS: Machiavellianism was positively associated with self-reported research misbehavior (beta 1.28, CI 1.06-1.53), while narcissism, psychopathy, and self-esteem were not. Exploratory analysis revealed that narcissism and research misconduct were more severe among persons in higher academic ranks (i.e., professors; p<0.01 and p<0.001, respectively), while self-esteem scores and publication pressure were lower (p<0.001 and p<0.01, respectively) than among postgraduate PhD fellows. CONCLUSIONS: Machiavellianism may be a risk factor for research misbehavior. Narcissism and research misbehavior were more prevalent among biomedical scientists in higher academic positions. These results suggest that personality affects research behavior and should be taken into account in fostering responsible conduct of research.

18.
Clin Neuropsychol; 30(7): 1006-16, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27356958

ABSTRACT

OBJECTIVE: Neurocognitive test batteries such as recent editions of the Wechsler Adult Intelligence Scale (WAIS-III/WAIS-IV) typically use nation-level population-based norms. The question is whether these batteries function in the same manner across subgroups based on gender, age, educational background, socioeconomic status, ethnicity, mother tongue, or race. Here, the author argues that measurement invariance is a core issue in determining whether population-based norms are valid for different subgroups. METHOD: The author introduces measurement invariance, argues why it is an important topic of study, discusses why invariance might fail in cognitive ability testing, and reviews a dozen studies of the invariance of commonly used neurocognitive test batteries. RESULTS: In over half of the reviewed studies, IQ batteries were not found to be measurement invariant across groups based on ethnicity, gender, educational background, cohort, or age. Apart from age and cohort, test manuals do not take such lack of invariance into account in computing full-scale IQ scores or normed domain scores. CONCLUSIONS: Measurement invariance is crucial for the valid use of neurocognitive tests in clinical, educational, and professional practice. Whether population-based norms are appropriate for a particular subgroup should also depend on whether measurement invariance holds with respect to that subgroup.
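The condition at issue can be stated compactly. A sketch of the standard definition and the multi-group factor-analytic hierarchy in which it is usually tested (generic notation, not taken from the article):

```latex
% Measurement invariance: the relation between observed scores X and the
% latent trait \eta is the same in every group g,
f(X \mid \eta, g) = f(X \mid \eta) \quad \text{for all groups } g.
% In multi-group factor analysis, with
% X^{(g)} = \tau^{(g)} + \Lambda^{(g)} \eta + \varepsilon^{(g)},
% this is tested hierarchically:
\text{metric invariance: } \Lambda^{(g)} = \Lambda, \qquad
\text{scalar invariance: } \Lambda^{(g)} = \Lambda \ \text{and}\ \tau^{(g)} = \tau.
% Scalar invariance is what licenses applying one set of population-based
% norms across subgroups.
```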


Subject(s)
Cognition; Intelligence Tests/standards; Neuropsychological Tests/standards; Adult; Ethnic Groups/psychology; Female; Humans; Male; Reproducibility of Results; Wechsler Scales/standards
19.
Psychol Sci; 27(8): 1069-77, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27354203

ABSTRACT

Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers' experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies.
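The arithmetic behind both findings follows from the usual normal approximation for a two-sample t-test. A short sketch with conventional values (d = 0.2, two-sided alpha = .05) and an assumed typical cell size of 30 per group:

```python
from scipy import stats

d, alpha = 0.2, 0.05
z_a = stats.norm.ppf(1 - alpha / 2)      # 1.96
z_b = stats.norm.ppf(0.80)               # 0.84

# Sample size per group for .80 power to detect a small effect:
n_per_group = 2 * (z_a + z_b) ** 2 / d ** 2
print(round(n_per_group))                # ~ 392 per group

# Power of a (hypothetical) typical design with 30 subjects per group:
se = (2 / 30) ** 0.5
print(round(float(stats.norm.sf(z_a - d / se)), 2))  # ~ 0.12
```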


Subject(s)
Intuition/physiology; Psychology; Research Personnel/psychology; Humans; Knowledge; Research; Research Design; Sample Size; Self Report; Surveys and Questionnaires
20.
PeerJ; 4: e1935, 2016.
Article in English | MEDLINE | ID: mdl-27077017

ABSTRACT

Previous studies provided mixed findings on peculiarities in p-value distributions in psychology. This paper examined 258,050 test results across 30,710 articles from eight high-impact journals to investigate the existence of a peculiar prevalence of p-values just below .05 (i.e., a bump) in the psychological literature, and a potential increase thereof over time. We indeed found evidence for a bump just below .05 in the distribution of exactly reported p-values in the journals Developmental Psychology, Journal of Applied Psychology, and Journal of Personality and Social Psychology, but the bump did not increase over the years and disappeared when using recalculated p-values. We found clear and direct evidence for the questionable research practice (QRP) of incorrect rounding of p-values (John, Loewenstein & Prelec, 2012) in all psychology journals. Finally, we also investigated monotonic excess of p-values, an effect of certain QRPs that has been neglected in previous research, and developed two measures to detect it by modeling the distributions of statistically significant p-values. Using simulations and applying the two measures to the retrieved test results, we argue that, although one of the measures suggests the use of QRPs in psychology, it is difficult to draw general conclusions concerning QRPs based on the modeling of p-value distributions.
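One simple way to probe for a bump, a caliper-style comparison of two narrow bins on either side of .05, can be sketched as follows (fabricated p-values with an injected bump). Note the caveat in the comments: because the p-curve declines smoothly, the lower bin is expected to be slightly fuller even without a bump, which is one reason the paper models full distributions instead:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
smooth = rng.beta(0.5, 4, 5000)              # right-skewed p-values, no bump
bump = rng.uniform(0.045, 0.050, 150)        # extra mass just below .05
p_values = np.concatenate([smooth, bump])

below = np.sum((p_values > 0.045) & (p_values <= 0.050))
above = np.sum((p_values > 0.050) & (p_values <= 0.055))

# One-sided binomial test for excess mass just below .05. Caveat: a smoothly
# declining p-curve already puts slightly more mass in the lower bin, so a
# raw caliper test is only a rough screen, not proof of QRPs.
print(stats.binomtest(int(below), int(below + above), 0.5,
                      alternative="greater").pvalue)
```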
