Results 1 - 20 of 24
1.
PLoS Biol ; 22(8): e3002645, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39172747

ABSTRACT

Research articles published by the journal eLife are accompanied by short evaluation statements that use phrases from a prescribed vocabulary to evaluate research on two dimensions: importance and strength of support. Intuitively, the prescribed phrases appear to be highly synonymous (e.g., important/valuable, compelling/convincing) and the vocabulary's ordinal structure may not be obvious to readers. We conducted an online repeated-measures experiment to gauge whether the phrases were interpreted as intended. We also tested an alternative vocabulary with (in our view) a less ambiguous structure. A total of 301 participants with a doctoral or graduate degree used a 0% to 100% scale to rate the importance and strength of support of hypothetical studies described using phrases from both vocabularies. For the eLife vocabulary, most participants' implied ranking did not match the intended ranking on both the importance (n = 59, 20% matched, 95% confidence interval [15% to 24%]) and strength of support dimensions (n = 45, 15% matched [11% to 20%]). By contrast, for the alternative vocabulary, most participants' implied ranking did match the intended ranking on both the importance (n = 188, 62% matched [57% to 68%]) and strength of support dimensions (n = 201, 67% matched [62% to 72%]). eLife's vocabulary tended to produce less consistent between-person interpretations, though the alternative vocabulary still elicited some overlapping interpretations away from the middle of the scale. We speculate that explicit presentation of a vocabulary's intended ordinal structure could improve interpretation. Overall, these findings suggest that more structured and less ambiguous language can improve communication of research evaluations.
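To make the matching criterion concrete, the sketch below (our illustration: the ratings are invented, and the phrase list reflects our reading of eLife's importance scale, not wording confirmed by this abstract) checks whether one participant's implied ranking, obtained by sorting the phrases by that participant's ratings, reproduces the intended ordinal structure.

```python
# Illustrative sketch: ratings are invented; the phrase list is our assumed
# rendering of eLife's importance vocabulary, ordered from lowest to highest.
intended = ["useful", "valuable", "important", "fundamental", "landmark"]
ratings = {"useful": 30, "valuable": 55, "important": 48,
           "fundamental": 72, "landmark": 88}

# The implied ranking is the phrase order induced by this participant's ratings.
implied = sorted(intended, key=ratings.get)
print(implied == intended)  # False: 'important' was rated below 'valuable'
```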

2.
R Soc Open Sci ; 11(5): 240016, 2024 May.
Article in English | MEDLINE | ID: mdl-39076822

ABSTRACT

Access to scientific data can enable independent reuse and verification; however, most data are not available and become increasingly irrecoverable over time. This study aimed to retrieve and preserve important datasets from 160 of the most highly cited social science articles published between 2008-2013 and 2015-2018. We asked authors if they would share data in a public repository, the Data Ark, or provide reasons if data could not be shared. Of the 160 articles, data for 117 (73%, 95% CI [67%-80%]) were not available and data for 7 (4%, 95% CI [0%-12%]) were available with restrictions. Data for 36 (22%, 95% CI [16%-30%]) articles were available in unrestricted form: 29 of these datasets were already available and 7 datasets were made available in the Data Ark. Most authors did not respond to our data requests, and a minority shared reasons for not sharing, such as legal or ethical constraints. These findings highlight an unresolved need to preserve important scientific datasets and increase their accessibility to the scientific community.

4.
R Soc Open Sci ; 10(10): 230568, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37830032

ABSTRACT

Background. Although preregistration can reduce researcher bias and increase transparency in primary research settings, it is less applicable to secondary data analysis. An alternative method that affords additional protection from researcher bias, which cannot be gained from conventional forms of preregistration alone, is an Explore and Confirm Analysis Workflow (ECAW). In this workflow, a data management organization initially provides access to only a subset of its dataset to researchers who request it. The researchers then prepare an analysis script based on the subset of data, upload the analysis script to a registry, and then receive access to the full dataset. ECAWs aim to achieve similar goals to preregistration, but make access to the full dataset contingent on compliance. The present survey aimed to garner information from the research community where ECAWs could be applied, employing the Avon Longitudinal Study of Parents and Children (ALSPAC) as a case example. Methods. We emailed a Web-based survey to researchers who had previously applied for access to ALSPAC's transgenerational observational dataset. Results. We received 103 responses, for a 9% response rate. The results suggest that, at least among our sample of respondents, ECAWs hold the potential to serve their intended purpose and appear relatively acceptable. For example, only 10% of respondents disagreed that ALSPAC should run a study on ECAWs (versus 55% who agreed). However, as many as 26% of respondents agreed that they would be less willing to use ALSPAC data if they were required to use an ECAW (versus 45% who disagreed). Conclusion. Our data and findings provide information for organizations and individuals interested in implementing ECAWs and related interventions. Preregistration: https://osf.io/g2fw5. Deviations from the preregistration are outlined in electronic supplementary material A.

5.
Nat Hum Behav ; 7(1): 15-26, 2023 01.
Article in English | MEDLINE | ID: mdl-36707644

ABSTRACT

Flexibility in the design, analysis and interpretation of scientific studies creates a multiplicity of possible research outcomes. Scientists are granted considerable latitude to selectively use and report the hypotheses, variables and analyses that create the most positive, coherent and attractive story while suppressing those that are negative or inconvenient. This creates a risk of bias that can lead to scientists fooling themselves and fooling others. Preregistration involves declaring a research plan (for example, hypotheses, design and statistical analyses) in a public registry before the research outcomes are known. Preregistration (1) reduces the risk of bias by encouraging outcome-independent decision-making and (2) increases transparency, enabling others to assess the risk of bias and calibrate their confidence in research outcomes. In this Perspective, we briefly review the historical evolution of preregistration in medicine, psychology and other domains, clarify its pragmatic functions, discuss relevant meta-research, and provide recommendations for scientists and journal editors.


Subjects
Mental Processes, Research Design, Humans, Registries
6.
R Soc Open Sci ; 9(8): 220139, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36039285

ABSTRACT

Journals exert considerable control over letters, commentaries and online comments that criticize prior research (post-publication critique). We assessed policies (Study One) and practice (Study Two) related to post-publication critique at 15 top-ranked journals in each of 22 scientific disciplines (N = 330 journals). Two hundred and seven (63%) journals accepted post-publication critique and often imposed limits on length (median 1000, interquartile range (IQR) 500-1200 words) and time-to-submit (median 12, IQR 4-26 weeks). The most restrictive limits were 175 words and two weeks; some policies imposed no limits. Of 2066 randomly sampled research articles published in 2018 by journals accepting post-publication critique, 39 (1.9%, 95% confidence interval [1.4, 2.6]) were linked to at least one post-publication critique (there were 58 post-publication critiques in total). Of the 58 post-publication critiques, 44 received an author reply, of which 41 asserted that the original conclusions were unchanged. Clinical Medicine had the most active culture of post-publication critique: all of its journals accepted post-publication critique and published the most post-publication critiques overall, but they also imposed the strictest limits on length (median 400, IQR 400-550 words) and time-to-submit (median 4, IQR 4-6 weeks). Our findings suggest that top-ranked academic journals often pose serious barriers to the cultivation, documentation and dissemination of post-publication critique.
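The abstract does not state which interval method produced the bounds for 39/2066, but a Wilson score interval (a common default for binomial proportions) recovers them, as this sketch of ours shows; it is a verification aid, not the authors' code.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion (95% by default)."""
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

lo, hi = wilson_ci(39, 2066)
print(f"{39/2066:.1%} [{lo:.1%}, {hi:.1%}]")  # 1.9% [1.4%, 2.6%]
```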

7.
Annu Rev Psychol ; 73: 719-748, 2022 01 04.
Article in English | MEDLINE | ID: mdl-34665669

ABSTRACT

Replication, an important, uncommon, and misunderstood practice, is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understandings to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understandings and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges such as disincentives to conduct replications and a tendency to frame replication as a personal attack rather than a healthy scientific practice, and they raised awareness that replication contributes to self-correction. Nevertheless, innovation in doing and understanding replication and its cousins, reproducibility and robustness, has positioned psychology to improve research practices and accelerate progress.


Subjects
Research Design, Humans, Reproducibility of Results
8.
Perspect Psychol Sci ; 17(1): 239-251, 2022 01.
Article in English | MEDLINE | ID: mdl-33682488

ABSTRACT

Psychologists are navigating an unprecedented period of introspection about the credibility and utility of their discipline. Reform initiatives emphasize the benefits of transparency and reproducibility-related research practices; however, adoption across the psychology literature is unknown. Estimating the prevalence of such practices will help to gauge the collective impact of reform initiatives, track progress over time, and calibrate future efforts. To this end, we manually examined a random sample of 250 psychology articles published between 2014 and 2017. Over half of the articles were publicly available (154/237, 65%, 95% confidence interval [CI] = [59%, 71%]); however, sharing of research materials (26/183; 14%, 95% CI = [10%, 19%]), study protocols (0/188; 0%, 95% CI = [0%, 1%]), raw data (4/188; 2%, 95% CI = [1%, 4%]), and analysis scripts (1/188; 1%, 95% CI = [0%, 1%]) was rare. Preregistration was also uncommon (5/188; 3%, 95% CI = [1%, 5%]). Many articles included a funding disclosure statement (142/228; 62%, 95% CI = [56%, 69%]), but conflict-of-interest statements were less common (88/228; 39%, 95% CI = [32%, 45%]). Replication studies were rare (10/188; 5%, 95% CI = [3%, 8%]), and few studies were included in systematic reviews (21/183; 11%, 95% CI = [8%, 16%]) or meta-analyses (12/183; 7%, 95% CI = [4%, 10%]). Overall, the results suggest that transparency and reproducibility-related research practices were far from routine. These findings establish baseline prevalence estimates against which future progress toward increasing the credibility and utility of psychology research can be compared.


Subjects
Publications, Research Design, Humans, Prevalence, Reproducibility of Results, Systematic Reviews as Topic
9.
R Soc Open Sci ; 8(1): 201494, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33614084

ABSTRACT

For any scientific report, repeating the original analyses upon the original data should yield the original outcomes. We evaluated analytic reproducibility in 25 Psychological Science articles awarded open data badges between 2014 and 2015. Initially, 16 (64%, 95% confidence interval [43,81]) articles contained at least one 'major numerical discrepancy' (>10% difference) prompting us to request input from original authors. Ultimately, target values were reproducible without author involvement for 9 (36% [20,59]) articles; reproducible with author involvement for 6 (24% [8,47]) articles; not fully reproducible with no substantive author response for 3 (12% [0,35]) articles; and not fully reproducible despite author involvement for 7 (28% [12,51]) articles. Overall, 37 major numerical discrepancies remained out of 789 checked values (5% [3,6]), but original conclusions did not appear affected. Non-reproducibility was primarily caused by unclear reporting of analytic procedures. These results highlight that open data alone is not sufficient to ensure analytic reproducibility.
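A check like the one sketched below (a hypothetical example of ours; the function name and the reported/reproduced pairs are invented) captures the article's "major numerical discrepancy" criterion of a greater-than-10% difference between a reported value and its reproduction.

```python
def is_major_discrepancy(reported: float, reproduced: float, tol: float = 0.10) -> bool:
    """Flag a reproduced value differing from the reported value by more than tol."""
    if reported == 0:
        return reproduced != 0
    return abs(reproduced - reported) / abs(reported) > tol

# Hypothetical reported/reproduced pairs for two target values.
checked = {"mean_condition_A": (4.21, 4.20), "t_statistic": (2.45, 1.98)}
for name, (reported, reproduced) in checked.items():
    status = "major discrepancy" if is_major_discrepancy(reported, reproduced) else "ok"
    print(name, status)
```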

10.
PLoS One ; 15(10): e0239598, 2020.
Article in English | MEDLINE | ID: mdl-33002031

ABSTRACT

Scientific claims in biomedical research are typically derived from statistical analyses. However, misuse or misunderstanding of statistical procedures and results permeates the biomedical literature, affecting the validity of those claims. One approach journals have taken to address this issue is to enlist expert statistical reviewers. How many journals do this, how statistical review is incorporated, and how editors perceive its value are questions of interest. Here we report an expanded version of a survey conducted more than 20 years ago by Goodman and colleagues (1998), with the intention of characterizing contemporary statistical review policies at leading biomedical journals. We received eligible responses from 107 of 364 (28%) journals surveyed, across 57 fields, mostly from editors in chief. Thirty-four percent (36/107) rarely or never used specialized statistical review, 34% (36/107) used it for 10-50% of their articles, and 23% used it for all articles. These numbers have changed little since 1998, despite dramatically increased concern about research validity. The vast majority of editors regarded statistical review as having substantial incremental value beyond regular peer review and expressed comparatively little concern about the potential increase in reviewing time, cost, and difficulty of identifying suitable statistical reviewers. Improved statistical education of researchers and different ways of employing statistical expertise are needed. Several proposals are discussed.


Subjects
Periodicals as Topic, Statistics as Topic, Biomedical Research/methods, Biomedical Research/standards, Biomedical Research/statistics & numerical data, Data Interpretation, Statistical, Editorial Policies, Humans, Peer Review, Periodicals as Topic/standards, Periodicals as Topic/statistics & numerical data, Reproducibility of Results, Statistics as Topic/methods, Statistics as Topic/standards, Surveys and Questionnaires
11.
R Soc Open Sci ; 7(2): 190806, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32257301

ABSTRACT

Serious concerns about research quality have catalysed a number of reform initiatives intended to improve transparency and reproducibility and thus facilitate self-correction, increase efficiency and enhance research credibility. Meta-research has evaluated the merits of some individual initiatives; however, this may not capture broader trends reflecting the cumulative contribution of these efforts. In this study, we manually examined a random sample of 250 articles in order to estimate the prevalence of a range of transparency and reproducibility-related indicators in the social sciences literature published between 2014 and 2017. Few articles indicated availability of materials (16/151, 11% [95% confidence interval, 7% to 16%]), protocols (0/156, 0% [0% to 1%]), raw data (11/156, 7% [2% to 13%]) or analysis scripts (2/156, 1% [0% to 3%]), and no studies were preregistered (0/156, 0% [0% to 1%]). Some articles explicitly disclosed funding sources (or lack thereof; 74/236, 31% [25% to 37%]) and some declared no conflicts of interest (36/236, 15% [11% to 20%]). Replication studies were rare (2/156, 1% [0% to 3%]). Few studies were included in evidence synthesis via systematic review (17/151, 11% [7% to 16%]) or meta-analysis (2/151, 1% [0% to 3%]). Less than half of the articles were publicly available (101/250, 40% [34% to 47%]). Minimal adoption of transparency and reproducibility-related research practices could be undermining the credibility and efficiency of social science research. The present study establishes a baseline that can be revisited in the future to assess progress.

12.
J Exp Psychol Appl ; 26(3): 411-421, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31971418

ABSTRACT

Teachers around the world hold a considerable number of misconceptions about education. Consequently, schools can become epicenters for dubious practices that might jeopardize the quality of teaching and negatively influence students' wellbeing. The main objective of this study was to assess the efficacy of refutation texts in the correction of erroneous ideas among in-service teachers. The results of Experiment 1 indicate that refutation texts can be an effective means to correct false ideas among educators, even for strongly endorsed misconceptions. However, the results of Experiment 2 suggest that these effects may be short-lived. Furthermore, attempts to correct misconceptions seemed to have no beneficial effect on teachers' intention to implement educational practices that are based on those erroneous beliefs. The implications of these results for the training of preservice and in-service teachers are discussed.


Subjects
Communication, Evidence-Based Practice, Intention, School Teachers/psychology, Adult, Female, Humans, Male, Schools, Spain, Surveys and Questionnaires
14.
Trends Cogn Sci ; 23(10): 815-818, 2019 10.
Article in English | MEDLINE | ID: mdl-31421987

ABSTRACT

Preregistration clarifies the distinction between planned and unplanned research by reducing unnoticed flexibility. This improves the credibility of findings and the calibration of uncertainty. However, making decisions before conducting analyses requires practice. During report writing, respecting both what was planned and what actually happened requires good judgment and humility in making claims.


Subjects
Registries, Research, Humans, Reproducibility of Results, Research Design
15.
R Soc Open Sci ; 5(8): 180448, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30225032

ABSTRACT

Access to data is a critical feature of an efficient, progressive and ultimately self-correcting scientific ecosystem. But the extent to which the in-principle benefits of data sharing are realized in practice is unclear. Crucially, it is largely unknown whether published findings can be reproduced by repeating reported analyses upon shared data ('analytic reproducibility'). To investigate this, we conducted an observational evaluation of a mandatory open data policy introduced at the journal Cognition. Interrupted time-series analyses indicated a substantial post-policy increase in data availability statements (104/417, 25% pre-policy to 136/174, 78% post-policy), although not all data appeared reusable (23/104, 22% pre-policy to 85/136, 62% post-policy). For 35 of the articles determined to have reusable data, we attempted to reproduce 1324 target values. Ultimately, 64 values could not be reproduced within a 10% margin of error. For 22 articles all target values were reproduced, but 11 of these required author assistance. For 13 articles at least one value could not be reproduced despite author assistance. Importantly, there were no clear indications that original conclusions were seriously impacted. Mandatory open data policies can increase the frequency and quality of data sharing. However, suboptimal data curation, unclear analysis specification and reporting errors can impede analytic reproducibility, undermining the utility of data sharing and the credibility of scientific findings.
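For readers unfamiliar with the design, a minimal segmented-regression sketch of an interrupted time series on article-level outcomes might look like the following; the simulated data, coefficients, and variable names are ours, not the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
month = rng.integers(-24, 24, size=n)   # months relative to policy introduction
post = (month >= 0).astype(int)         # 1 = published under the open data policy
# Simulate a level jump at the policy date on top of a mild secular trend.
p = 1 / (1 + np.exp(-(-1.1 + 1.3 * post + 0.01 * month)))
df = pd.DataFrame({"month": month, "post": post,
                   "has_statement": rng.binomial(1, p)})

# Level change (post) and underlying trend (month); a post:month interaction
# would add a trend-change segment to the model.
model = smf.logit("has_statement ~ month + post", data=df).fit(disp=False)
print(model.params)
```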

16.
PLoS One ; 13(8): e0201856, 2018.
Article in English | MEDLINE | ID: mdl-30071110

ABSTRACT

The vast majority of scientific articles published to date have not been accompanied by concomitant publication of the underlying research data upon which they are based. This state of affairs precludes the routine re-use and re-analysis of research data, undermining the efficiency of the scientific enterprise, and compromising the credibility of claims that cannot be independently verified. It may be especially important to make data available for the most influential studies that have provided a foundation for subsequent research and theory development. Therefore, we launched an initiative, the Data Ark, to examine whether we could retrospectively enhance the preservation and accessibility of important scientific data. Here we report the outcome of our efforts to retrieve, preserve, and liberate data from 111 of the most highly cited articles published in psychology and psychiatry between 2006-2011 (n = 48) and 2014-2016 (n = 63). Most data sets were not made available (76/111, 68%, 95% CI [60, 77]), some were only made available with restrictions (20/111, 18%, 95% CI [10, 27]), and few were made available in a completely unrestricted form (15/111, 14%, 95% CI [5, 22]). Where extant data sharing systems were in place, they usually (17/22, 77%, 95% CI [54, 91]) did not allow unrestricted access. Authors reported several barriers to data sharing, including issues related to data ownership and ethical concerns. The Data Ark initiative could help preserve and liberate important scientific data, surface barriers to data sharing, and advance community discussions on data stewardship.


Subjects
Data Curation, Information Dissemination, Psychiatry, Psychology, Publishing, Scholarly Communication, Bibliometrics, Datasets as Topic, Humans, Ownership, Periodicals as Topic, Scholarly Communication/ethics
18.
Behav Brain Sci ; 41: e132, 2018 01.
Article in English | MEDLINE | ID: mdl-31064517

ABSTRACT

Replication is the cornerstone of science, but when and why? Not all studies need replication, especially when resources are limited. We propose that a decision-making framework based on Bayesian philosophy of science provides a basis for choosing which studies to replicate.


Subjects
Decision Making, Philosophy, Bayes Theorem, Research
20.
PLoS Biol ; 14(5): e1002456, 2016 05.
Article in English | MEDLINE | ID: mdl-27171007

ABSTRACT

Beginning in January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among comparison journals. Moreover, reporting openness does not guarantee openness. When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned. Open materials sharing also increased, though to a weaker degree, and there was more variability among comparison journals. Badges are simple, effective signals to promote open practices and improve preservation of data and materials by using independent repositories.


Subjects
Psychology, Publishing/organization & administration, Serial Publications/statistics & numerical data, Cost-Benefit Analysis, Information Dissemination, Internet, Publishing/trends, Serial Publications/economics