ABSTRACT
Early career researchers (ECRs) are important stakeholders leading efforts to catalyze systemic change in research culture and practice. Here, we summarize the outputs from a virtual unconventional conference (unconference), which brought together 54 invited experts from 20 countries with extensive experience in ECR initiatives designed to improve the culture and practice of science. Together, we drafted 2 sets of recommendations for (1) ECRs directly involved in initiatives or activities to change research culture and practice; and (2) stakeholders who wish to support ECRs in these efforts. Importantly, these points apply to ECRs working to promote change on a systemic level, not only those improving aspects of their own work. In both sets of recommendations, we underline the importance of incentivizing and providing time and resources for systems-level science improvement activities, including ECRs in organizational decision-making processes, and working to dismantle structural barriers to participation for marginalized groups. We further highlight obstacles that ECRs face when working to promote reform, as well as proposed solutions and examples of current best practices. The abstract and recommendations for stakeholders are available in Dutch, German, Greek (abstract only), Italian, Japanese, Polish, Portuguese, Spanish, and Serbian.
Subjects
Researchers, Research Report, Humans, Power (Psychological)
ABSTRACT
Memory extinction involves the formation of a new associative memory that inhibits a previously conditioned association. Nonetheless, it could also depend on weakening of the original memory trace if extinction is assumed to have multiple components. The phosphatase calcineurin (CaN) has been described as being involved in extinction but not in the initial consolidation of fear learning. With this in mind, we set out to study whether CaN could have different roles in distinct components of extinction. Systemic treatment with the CaN inhibitors cyclosporin A (CsA) or FK-506, as well as i.c.v. administration of CsA, blocked within-session, but not between-session extinction or initial learning of contextual fear conditioning. Similar effects were found in multiple-session extinction of contextual fear conditioning and in auditory fear conditioning, indicating that CaN is involved in different types of short-term extinction. Meanwhile, inhibition of protein synthesis by cycloheximide (CHX) treatment did not affect within-session extinction, but disrupted fear acquisition and slightly impaired between-session extinction. Our results point to a dissociation of within- and between-session extinction of fear conditioning, with the former being more dependent on CaN activity and the latter on protein synthesis. Moreover, the modulation of within-session extinction did not affect between-session extinction, suggesting that these components are at least partially independent.
Subjects
Calcineurin/physiology, Classical Conditioning/physiology, Psychological Extinction/physiology, Fear/physiology, Animals, Calcineurin/pharmacology, Calcineurin Inhibitors/pharmacology, Classical Conditioning/drug effects, Cyclosporine/pharmacology, Psychological Extinction/drug effects, Fear/drug effects, Intraventricular Infusions, Male, Mice, Motor Activity/drug effects, Tacrolimus/pharmacology
ABSTRACT
INTRODUCTION: Translation is about successfully bringing findings from preclinical contexts into the clinic. This transfer is challenging, as clinical trials frequently fail despite positive preclinical results. Limited robustness of preclinical research has been identified as one of the drivers of such failures. One suggested solution is to improve the external validity of in vitro and in vivo experiments via a suite of complementary strategies. AREAS COVERED: In this review, the authors summarize the literature available on different strategies to improve external validity in in vivo, in vitro, or ex vivo experiments: systematic heterogenization; generalizability tests; and multi-batch and multicenter experiments. Articles that tested or discussed sources of variability in systematically heterogenized experiments were identified, and the most prevalent sources of variability are reviewed further. Special considerations in sample size planning, analysis options, and practical feasibility associated with each strategy are also reviewed. EXPERT OPINION: The strategies reviewed differentially influence variation in experiments. Different research projects, with their unique goals, can leverage the strengths and limitations of each strategy. Applying a combination of these approaches in confirmatory stages of preclinical research putatively increases the chances of success in clinical studies.
Subjects
Translational Biomedical Research, Humans, Translational Biomedical Research/methods, Multicenter Studies as Topic
ABSTRACT
BACKGROUND: Preprint usage is growing rapidly in the life sciences; however, questions remain on the relative quality of preprints when compared to published articles. An objective dimension of quality that is readily measurable is completeness of reporting, as transparency can improve the reader's ability to independently interpret data and reproduce findings. METHODS: In this observational study, we initially compared independent samples of articles published in bioRxiv and in PubMed-indexed journals in 2016 using a quality of reporting questionnaire. After that, we performed paired comparisons between preprints from bioRxiv to their own peer-reviewed versions in journals. RESULTS: Peer-reviewed articles had, on average, higher quality of reporting than preprints, although the difference was small, with absolute differences of 5.0% [95% CI 1.4, 8.6] and 4.7% [95% CI 2.4, 7.0] of reported items in the independent samples and paired sample comparison, respectively. There were larger differences favoring peer-reviewed articles in subjective ratings of how clearly titles and abstracts presented the main findings and how easy it was to locate relevant reporting information. Changes in reporting from preprints to peer-reviewed versions did not correlate with the impact factor of the publication venue or with the time lag from bioRxiv to journal publication. CONCLUSIONS: Our results suggest that, on average, publication in a peer-reviewed journal is associated with improvement in quality of reporting. They also show that quality of reporting in preprints in the life sciences is within a similar range as that of peer-reviewed articles, albeit slightly lower on average, supporting the idea that preprints should be considered valid scientific contributions.
ABSTRACT
Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.
Subjects
Psychological Conditioning, Fear, Animals, False Positive Reactions, Journal Impact Factor, Mice, Statistical Models, Rats, Reproducibility of Results, Research Design, Rodents, Sample Size, Statistics as Topic
ABSTRACT
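The power and sample-size figures discussed in the fear conditioning review above can be illustrated with a short calculation. The sketch below computes the power of a two-sided, two-sample t-test from the noncentral t distribution; the standardized effect size d = 0.8 and the back-calculated d near 1 for the 15-animals-per-group scenario are illustrative assumptions, not values taken from the study's data.

```python
import numpy as np
from scipy import stats

def ttest_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t-test for standardized
    effect size d with n_per_group subjects in each group."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2)        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # critical value under H0
    # P(|T| > t_crit) when T follows a noncentral t under the alternative
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

def n_for_power(d, target=0.80, alpha=0.05):
    """Smallest per-group n reaching the target power."""
    n = 2
    while ttest_power(d, n, alpha) < target:
        n += 1
    return n

print(n_for_power(0.8))       # per-group n for a "large" effect (Cohen's d = 0.8)
print(ttest_power(1.05, 15))  # with 15 per group, ~80% power requires d near 1
```

With this setup, 80% power for d = 0.8 at α = 0.05 requires 26 animals per group, which makes clear how demanding adequate power is relative to the group sizes typically reported.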
The concepts of effect size and statistical power are often disregarded in basic neuroscience, and most articles in the field draw their conclusions solely based on the arbitrary significance thresholds of statistical inference tests. Moreover, studies are often underpowered, making conclusions from significance tests less reliable. With this in mind, we present the protocol of a systematic review to study the distribution of effect sizes and statistical power in the rodent fear conditioning literature, and to analyze how these factors influence the description and publication of results. To do this, we will search PubMed for "fear conditioning" AND ("mouse" OR "mice" OR "rat" OR "rats") and obtain all articles published online in 2013. Experiments will be included if they: (a) describe the effect(s) of a single intervention on fear conditioning acquisition or consolidation; (b) have a control group to which the experimental group is compared; (c) use freezing as a measure of conditioned fear; and (d) have available data on mean freezing, standard deviation and sample size of each group and on the statistical significance of the comparison. We will use the extracted data to calculate the distribution of effect sizes in these experiments, as well as the distribution of statistical power curves for detecting a range of differences at a threshold of α=0.05. We will assess correlations between these variables and (a) the chances of a result being statistically significant, (b) the way the result is described in the article text, (c) measures to reduce risk of bias in the article and (d) the impact factor of the journal and the number of citations of the article. We will also perform analyses to see whether effect sizes vary systematically across species, sex, conditioning protocols or intervention types.
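The effect-size computation the protocol describes, a standardized difference from the extracted summary statistics (mean freezing, standard deviation, and sample size per group), can be sketched as Cohen's d with a pooled standard deviation. The group numbers below are hypothetical, chosen only to illustrate the calculation.

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d from summary statistics: difference in mean freezing
    divided by the pooled standard deviation of the two groups."""
    pooled_var = ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Hypothetical experiment: treated group freezes 40% +/- 15 (n=10),
# control group freezes 60% +/- 15 (n=10)
d = cohens_d(40, 15, 10, 60, 15, 10)
print(round(d, 2))  # -1.33: a memory-impairing intervention with partial amnesia
```

A negative d here corresponds to an intervention that reduces freezing relative to controls, i.e. a memory-impairing effect in this task.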