1.
PLoS Biol ; 18(12): e3000937, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33296358

ABSTRACT

Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of "researcher degrees of freedom" aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared 2 formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called "OSF Preregistration," http://osf.io/prereg/). The Prereg Challenge format was a "structured" workflow with detailed instructions and an independent review to confirm completeness; the "Standard" format was "unstructured" with minimal direct guidance to give researchers flexibility for what to prespecify. Results of comparing random samples of 53 preregistrations from each format indicate that the "structured" format restricted the opportunistic use of researcher degrees of freedom better (Cliff's Delta = 0.49) than the "unstructured" format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that they are often not clearly stated. We conclude that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.
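Cliff's Delta, the effect size reported above, is the probability that a value drawn from one group exceeds a value drawn from the other, minus the reverse probability. A minimal sketch of the computation in Python (the coder scores below are hypothetical, purely illustrative, and not data from the study):

```python
def cliffs_delta(xs, ys):
    """Cliff's Delta: P(X > Y) - P(X < Y) over all cross-group pairs."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))

# Hypothetical coder scores (e.g., remaining researcher degrees of freedom)
# for two registration formats -- illustrative data, not from the study:
unstructured = [4, 5, 5, 6, 7]
structured = [2, 3, 3, 4, 5]
print(cliffs_delta(unstructured, structured))  # 0.8
```

Delta ranges from -1 to 1, with 0 meaning no difference; a Delta of 0.49 indicates that a randomly drawn preregistration from one format outscores one from the other format about 49 percentage points more often than the reverse.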


Subjects
Data Collection/methods; Research Design/statistics & numerical data; Data Collection/standards; Data Collection/trends; Humans; Quality Control; Registries/statistics & numerical data; Research Design/trends
2.
Lancet ; 388(10039): 26, 2016 Jul 02.
Article in English | MEDLINE | ID: mdl-27397785
3.
Dev Cogn Neurosci ; 45: 100834, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32906086

ABSTRACT

The YOUth cohort study aims to be a trailblazer for open science. Being a large-scale, longitudinal cohort following children in their development from gestation until early adulthood, YOUth collects a vast amount of data through a variety of research techniques. Data are collected through multiple platforms, including facilities managed by Utrecht University and the University Medical Center Utrecht. In order to facilitate appropriate use of its data by research organizations and researchers, YOUth aims to produce high-quality, FAIR data while safeguarding the privacy of participants. This requires an extensive data infrastructure, set up by collaborative efforts of researchers, data managers, IT departments, and the Utrecht University Library. In the spirit of open science, YOUth will share its experience and expertise in setting up a high-quality research data infrastructure for sensitive cohort data. This paper describes the technical aspects of our data and data infrastructure, and the steps taken throughout the study to produce and safely store FAIR and high-quality data. Finally, we will reflect on the organizational aspects that are conducive to the success of setting up such an enterprise, and we consider the financial challenges posed by individual studies investing in sustainable science.


Subjects
Data Management/methods; Research Design/standards; Adolescent; Child; Child, Preschool; Cohort Studies; Female; Humans; Infant; Infant, Newborn; Longitudinal Studies; Male
4.
PLoS One ; 15(7): e0236079, 2020.
Article in English | MEDLINE | ID: mdl-32735597

ABSTRACT

In this preregistered study, we investigated whether the statistical power of a study is higher when researchers are asked to perform a formal power analysis before collecting data. We compared the sample size descriptions from two sources: (i) a sample of pre-registrations created according to the guidelines for the Center for Open Science Preregistration Challenge (PCRs) and a sample of institutional review board (IRB) proposals from the Tilburg School of Social and Behavioral Sciences, both of which include a recommendation to do a formal power analysis, and (ii) a sample of pre-registrations created according to the guidelines for Open Science Framework Standard Pre-Data Collection Registrations (SPRs), in which no guidance on sample size planning is given. We found that PCRs and IRB proposals (72%) more often included sample size decisions based on power analyses than SPRs did (45%). However, this did not result in larger planned sample sizes. The determined sample size of the PCRs and IRB proposals (Md = 90.50) was not higher than the determined sample size of the SPRs (Md = 126.00; W = 3389.5, p = 0.936). Typically, power analyses in the registrations were conducted with G*Power, assuming a medium effect size, α = .05, and a power of .80. Only 20% of the power analyses contained enough information to fully reproduce the results, and only 62% of these power analyses pertained to the main hypothesis test in the pre-registration. Therefore, we see ample room for improvement in the quality of the registrations, and we offer several recommendations to that end.
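The typical power analysis described above (two-group design, medium effect size d = 0.5, two-sided α = .05, target power .80) can be sketched in plain Python with the normal approximation to the power of a two-sample t-test. This is an illustration of the calculation, not the exact noncentral-t computation that G*Power performs, which gives 64 per group for these inputs; the function name is ours.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample t-test,
    using the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    z_power = z.inv_cdf(power)          # 0.84 for power = .80
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

print(n_per_group(0.5))  # 63 (exact noncentral-t, as in G*Power: 64)
```

The approximation slightly understates the exact answer because it ignores the extra uncertainty of estimating the variance, which matters most at small n.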


Subjects
Ethics Committees, Research; Sample Size; Statistics as Topic/methods
5.
Account Res ; 24(3): 127-151, 2017.
Article in English | MEDLINE | ID: mdl-28001440

ABSTRACT

Do lay people and scientists themselves recognize that scientists are human and therefore prone to human fallibilities such as error, bias, and even dishonesty? In a series of three experimental studies and one correlational study (total N = 3,278) we found that the "storybook image of the scientist" is pervasive: American lay people and scientists from over 60 countries attributed considerably more objectivity, rationality, open-mindedness, intelligence, integrity, and communality to scientists than to other highly educated people. Moreover, scientists perceived even larger differences than lay people did. Some groups of scientists also differentiated between different categories of scientists: established scientists attributed higher levels of the scientific traits to established scientists than to early-career scientists and Ph.D. students, and higher levels to Ph.D. students than to early-career scientists. Female scientists attributed considerably higher levels of the scientific traits to female scientists than to male scientists. A strong belief in the storybook image and the (human) tendency to attribute higher levels of desirable traits to people in one's own group than to people in other groups may decrease scientists' willingness to adopt recently proposed practices to reduce error, bias, and dishonesty in science.


Subjects
Public Opinion; Science; Female; Humans; Male; Students; United States
6.
PLoS One ; 12(3): e0172792, 2017.
Article in English | MEDLINE | ID: mdl-28296929

ABSTRACT

A survey in the United States revealed that an alarmingly large percentage of university psychologists admitted having used questionable research practices (QRPs) that can contaminate the research literature with false positive and biased findings. We conducted a replication of this study among Italian research psychologists to investigate whether these findings generalize to other countries. All the original materials were translated into Italian, and members of the Italian Association of Psychology were invited to participate via an online survey. The percentages of Italian psychologists who admitted to having used ten questionable research practices were similar to the results obtained in the United States, although there were small but significant differences in self-admission rates for some QRPs. Nearly all researchers (88%) admitted using at least one of the practices, and researchers generally considered a practice possibly defensible if they admitted using it, but Italian researchers were much less likely than US researchers to consider a practice defensible. Participants' estimates of the percentage of researchers who have used these practices were greater than the self-admission rates, and participants estimated that researchers would be unlikely to admit using them. In written responses, participants argued that some of these practices are not questionable and that they had used some practices because reviewers and journals demand it. The similarity of the results obtained in the United States, in this study, and in a related study conducted in Germany suggests that the adoption of these practices is an international phenomenon, likely due to systemic features of the international research and publication processes.


Subjects
Psychology; Humans; Italy; Research Design; United States
7.
Psychometrika ; 81(1): 33-8, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25820978

ABSTRACT

We respond to the commentaries that Waldman and Lilienfeld (Psychometrika, 2015) and Wigboldus and Dotsch (Psychometrika, 2015) provided on Sijtsma's (Psychometrika, 2015) discussion article on questionable research practices. Specifically, we discuss the fear of an increased dichotomy between the substantive and statistical aspects of research that may arise when the latter are placed entirely in the hands of a statistician, remedies for false positives and replication failure, and the status of data exploration, and we provide a redefinition of the concept of questionable research practices.


Subjects
Psychometrics; Research Design; Humans
8.
Front Psychol ; 7: 1832, 2016.
Article in English | MEDLINE | ID: mdl-27933012

ABSTRACT

The designing, collecting, analyzing, and reporting of psychological studies entail many choices that are often arbitrary. The opportunistic use of these so-called researcher degrees of freedom aimed at obtaining statistically significant results is problematic because it enhances the chances of false positive results and may inflate effect size estimates. In this review article, we present an extensive list of 34 degrees of freedom that researchers have in formulating hypotheses, and in designing, running, analyzing, and reporting of psychological research. The list can be used in research methods education, and as a checklist to assess the quality of preregistrations and to determine the potential for bias due to (arbitrary) choices in unregistered studies.

9.
PLoS One ; 11(9): e0163251, 2016.
Article in English | MEDLINE | ID: mdl-27684371

ABSTRACT

BACKGROUND: Personality influences decision making and ethical considerations. Its influence on the occurrence of research misbehavior has never been studied. This study aims to determine the association between personality traits and self-reported questionable research practices and research misconduct. We hypothesized that narcissistic, Machiavellian, and psychopathic traits, as well as self-esteem, are associated with research misbehavior. METHODS: Included in this cross-sectional study were 535 Dutch biomedical scientists (response rate 65%) from all hierarchical layers of 4 university medical centers in the Netherlands. We used validated personality questionnaires, namely the Dark Triad (narcissism, psychopathy, and Machiavellianism), Rosenberg's Self-Esteem Scale, and the Publication Pressure Questionnaire (PPQ), along with demographic and job-specific characteristics, to investigate the association of personality traits with a composite research misbehavior severity score. FINDINGS: Machiavellianism was positively associated (beta 1.28, CI 1.06-1.53) with self-reported research misbehavior, while narcissism, psychopathy, and self-esteem were not. Exploratory analysis revealed that narcissism and research misconduct were more severe among persons in higher academic ranks (i.e., professors) (p<0.01 and p<0.001, respectively), while self-esteem scores and publication pressure were lower (p<0.001 and p<0.01, respectively) as compared to postgraduate PhD fellows. CONCLUSIONS: Machiavellianism may be a risk factor for research misbehavior. Narcissism and research misbehavior were more prevalent among biomedical scientists in higher academic positions. These results suggest that personality has an impact on research behavior and should be taken into account in fostering responsible conduct of research.

10.
PLoS One ; 9(12): e114876, 2014.
Article in English | MEDLINE | ID: mdl-25493918

ABSTRACT

Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
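The automated consistency check described above can be sketched in plain Python: recompute the two-sided p-value from a reported t statistic and its degrees of freedom (via the regularized incomplete beta function) and flag reported p-values that disagree beyond rounding. This is a minimal illustration of the idea behind tools such as statcheck, not the authors' actual procedure, and the function names are ours.

```python
import math

def _betacf(a, b, x, eps=3e-12, max_iter=300):
    """Lentz-style continued fraction for the incomplete beta function."""
    tiny = 1e-30
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    d = 1.0 / (d if abs(d) >= tiny else tiny)
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        for aa in (m * (b - m) * x / ((qam + m2) * (a + m2)),
                   -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))):
            d = 1.0 + aa * d
            d = 1.0 / (d if abs(d) >= tiny else tiny)
            c = 1.0 + aa / c
            c = c if abs(c) >= tiny else tiny
            h *= d * c
        if abs(d * c - 1.0) < eps:
            break
    return h

def _reg_inc_beta(a, b, x):
    """Regularized incomplete beta function I_x(a, b)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    front = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                     + a * math.log(x) + b * math.log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return front * _betacf(a, b, x) / a
    return 1.0 - front * _betacf(b, a, 1.0 - x) / b

def p_from_t(t, df):
    """Two-sided p-value of a t statistic: I_{df/(df+t^2)}(df/2, 1/2)."""
    return _reg_inc_beta(df / 2.0, 0.5, df / (df + t * t))

def p_consistent(t, df, reported_p, decimals=3):
    """Does the reported p match the recomputed p at the reported precision?"""
    return abs(round(p_from_t(t, df), decimals) - reported_p) < 10.0 ** -decimals

print(round(p_from_t(2.0, 30), 4))   # 0.0546
print(p_consistent(2.0, 30, 0.055))  # consistent after rounding
print(p_consistent(2.0, 30, 0.03))   # flagged as a reporting inconsistency
```

A fuller check would also allow for rounding of the reported test statistic itself, which widens the band of p-values counted as consistent.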


Subjects
Cooperative Behavior; Data Interpretation, Statistical; Information Dissemination; Psychology; Data Accuracy; Humans; Periodicals as Topic/standards; Periodicals as Topic/statistics & numerical data; Psychology/standards; Psychology/statistics & numerical data; Research Design; Statistics as Topic/methods; Statistics as Topic/standards