Results 1 - 20 of 35
1.
Ber Wiss ; 46(2-3): 233-258, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37431677

ABSTRACT

For the last ten years, within molecular life sciences, the reproducibility crisis discourse has been embodied as a crisis of trust in scientific images. Beyond the contentious perception of "questionable research practices" associated with a digital turn in the production of images, this paper highlights the transformations of gel electrophoresis as a family of experimental techniques. Our aim is to analyze the evolving epistemic status of generated images and its connection with a crisis of trust in images within that field. From the 1980s to the 2000s, we identify two key innovations (precast gels and gel docs) leading to a "two-tiered" gel electrophoresis with different standardization procedures, different epistemic statuses of the produced images and different ways of generating (dis)trust in images. The first tier, exemplified by differential gel electrophoresis (DIGE), is characterized by specialized devices processing images as quantitative data. The second tier, exemplified by polyacrylamide gel electrophoresis (PAGE), is described as a routine technique making use of image as qualitative "virtual witnessing." The difference between these two tiers is particularly apparent in the ways images are processed, even though both tiers involve image digitization. Our account thus highlights different views on reproducibility within the two tiers. Comparability of images is insisted upon in the first tier while traceability is expected in the second tier. It is striking that these differences occur not only within the same scientific field, but even within the same family of experimental techniques. In the second tier, digitization entails distrust, whereas it implies a collective sentiment of trust in the first tier.


Subject(s)
Proteomics; Electrophoresis, Gel, Two-Dimensional/methods; Reproducibility of Results; Electrophoresis, Polyacrylamide Gel; Reference Standards
2.
J Med Internet Res ; 24(6): e37324, 2022 06 27.
Article in English | MEDLINE | ID: mdl-35759334

ABSTRACT

BACKGROUND: Improving rigor and transparency measures should lead to improvements in reproducibility across the scientific literature; however, the assessment of measures of transparency tends to be very difficult if performed manually. OBJECTIVE: This study addresses the enhancement of the Rigor and Transparency Index (RTI, version 2.0), which attempts to automatically assess the rigor and transparency of journals, institutions, and countries using manuscripts scored on criteria found in reproducibility guidelines (eg, Materials Design, Analysis, and Reporting checklist criteria). METHODS: The RTI tracks 27 entity types using natural language processing techniques such as Bidirectional Long Short-term Memory Conditional Random Field-based models and regular expressions; this allowed us to assess over 2 million papers accessed through PubMed Central. RESULTS: Between 1997 and 2020 (where data were readily available in our data set), rigor and transparency measures showed general improvement (RTI 2.29 to 4.13), suggesting that authors are taking the need for improved reporting seriously. The top-scoring journals in 2020 were the Journal of Neurochemistry (6.23), British Journal of Pharmacology (6.07), and Nature Neuroscience (5.93). We extracted the institution and country of origin from the author affiliations to expand our analysis beyond journals. Among institutions publishing >1000 papers in 2020 (in the PubMed Central open access set), Capital Medical University (4.75), Yonsei University (4.58), and University of Copenhagen (4.53) were the top performers in terms of RTI. In country-level performance, we found that Ethiopia and Norway consistently topped the RTI charts of countries with 100 or more papers per year. In addition, we tested our assumption that the RTI may serve as a reliable proxy for scientific replicability (ie, a high RTI represents papers containing sufficient information for replication efforts). Using work by the Reproducibility Project: Cancer Biology, we determined that replication papers (RTI 7.61, SD 0.78) scored significantly higher (P<.001) than the original papers (RTI 3.39, SD 1.12), which according to the project required additional information from authors to begin replication efforts. CONCLUSIONS: These results align with our view that RTI may serve as a reliable proxy for scientific replicability. Unfortunately, RTI measures for journals, institutions, and countries fall short of the replicated paper average. If we consider the RTI of these replication studies as a target for future manuscripts, more work will be needed to ensure that the average manuscript contains sufficient information for replication attempts.
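The RTI's named-entity models are not reproduced here, but the regular-expression side of such rigor-criterion detection is easy to sketch. The patterns, entity names, and toy scoring below are illustrative assumptions, not the published RTI implementation:

```python
import re

# Illustrative patterns for a few rigor/transparency criteria similar in spirit
# to the entities the RTI tracks; the real system also uses BiLSTM-CRF models.
RIGOR_PATTERNS = {
    "randomization": re.compile(r"\brandomi[sz]ed\b|\brandomly (assigned|allocated)", re.I),
    "blinding": re.compile(r"\bblind(ed|ing)\b|\bmasked\b", re.I),
    "sample_size_estimation": re.compile(r"\bpower (analysis|calculation)|sample size was (estimated|calculated)", re.I),
    "data_availability": re.compile(r"\bdata (are|is) (openly )?available\b", re.I),
}

def score_methods_text(text: str) -> dict:
    """Report which criteria are mentioned and a naive 0-10 score (toy metric)."""
    hits = {name: bool(pattern.search(text)) for name, pattern in RIGOR_PATTERNS.items()}
    score = 10 * sum(hits.values()) / len(hits)
    return {"criteria": hits, "score": round(score, 2)}

if __name__ == "__main__":
    methods = ("Animals were randomly allocated to treatment groups and the "
               "outcome assessor was blinded. Data are available on request.")
    print(score_methods_text(methods))
```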


Subject(s)
Checklist; Publishing; Humans; Norway; Reproducibility of Results; Research Design
3.
Altern Lab Anim ; 50(5): 330-338, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35983799

ABSTRACT

Cell culture techniques are strongly connected with modern scientific laboratories and production facilities. Thus, choosing the most suitable medium for the cells involved is vital, not only directly to optimise cell viability but also indirectly to maximise the reliability of the experiments performed with the cells. Fetal bovine or calf serum (FBS or FCS, respectively) is the most commonly used cell culture medium supplement, providing various nutritional factors and macromolecules essential for cell growth. Yet, the use of FBS encompasses a number of disadvantages. Scientifically, one of the most severe disadvantages is the lot-to-lot variability of animal sera that hampers reproducibility. Therefore, transitioning from the use of these ill-defined, component-variable, inconsistent, xenogenic, ethically questionable and even potentially infectious media supplements is key to achieving better data reproducibility and thus better science. To demonstrate that the transition to animal component-free cell culture is possible and achievable, we highlight three different scenarios and provide some case studies of each, namely: i) the adaptation of single cell lines to animal component-free culture conditions by the replacement of FBS and trypsin; ii) the adaptation of multicellular models to FBS-free conditions; and iii) the replacement of FBS with human platelet lysate (hPL) for the generation of primary stem/stromal cell cultures for clinical purposes. By highlighting these examples, we aim to foster and support the global movement towards more consistent science and provide evidence that it is indeed possible to step out of the currently smouldering scientific reproducibility crisis.


Subject(s)
Mesenchymal Stem Cells; Animals; Cattle; Cell Culture Techniques/methods; Cell Differentiation; Cell Proliferation; Cells, Cultured; Humans; Reproducibility of Results; Trypsin
4.
Curr Psychol ; : 1-12, 2022 Feb 28.
Article in English | MEDLINE | ID: mdl-35250242

ABSTRACT

Amidst a worldwide vaccination campaign, trust in science plays a significant role when addressing the COVID-19 pandemic. Given current concerns regarding research standards, we were interested in how Spanish scholars perceived COVID-19 research and the extent to which questionable research practices (QRPs) and potentially problematic academic incentives are commonplace. We asked researchers to evaluate the expected quality of their COVID-19 projects and other peers' research and compared these assessments with those from scholars not involved in COVID-19 research. We investigated self-admitted and estimated rates of questionable research practices and attitudes towards the current state of research. Responses from 131 researchers suggested that COVID-19 evaluations followed partisan lines, with scholars being more pessimistic about their colleagues' research than about their own. Additionally, researchers not involved in COVID-19 projects were more negative than their participating peers. These differences were particularly notable for areas such as the expected theoretical foundations or overall quality of the research, among others. Most Spanish scholars expected questionable research practices and inadequate incentives to be widespread. In these two aspects, researchers tended to agree regardless of their involvement in COVID-19 research. We provide specific recommendations for improving future meta-science studies, such as redefining QRPs as inadequate research practices (IRPs). This change could help avoid key controversies regarding QRPs' definition while highlighting their detrimental impact. Lastly, we join previous calls to improve transparency and academic career incentives as a cornerstone for generating trust in science. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s12144-022-02797-6.

5.
Stud Hist Philos Sci ; 93: 11-20, 2022 06.
Article in English | MEDLINE | ID: mdl-35247820

ABSTRACT

Epistemic trust among scientists is inevitable. There are two questions about this: (1) What is the content of this trust, what do scientists trust each other for? (2) Is such trust epistemically justified? I argue that if we assume a traditional answer to (1), namely that scientists trust each other to be reliable informants, then the answer to question (2) is negative, certainly for the biomedical and social sciences. This motivates a different construal of trust among scientists and therefore a different answer to (1): scientists trust each other to only testify to claims that are backed by evidence gathered in accordance with prevailing methodological standards. On this answer, trust among scientists is epistemically justified.


Subject(s)
Trust
6.
Am J Epidemiol ; 190(10): 2172-2177, 2021 10 01.
Article in English | MEDLINE | ID: mdl-33834188

ABSTRACT

Programming for data wrangling and statistical analysis is an essential technical tool of modern epidemiology, yet many epidemiologists receive limited formal training in strategies to optimize the quality of our code. In complex projects, coding mistakes are easy to make, even for skilled practitioners. Such mistakes can lead to invalid research claims that reduce the credibility of the field. Code review is a straightforward technique used by the software industry to reduce the likelihood of coding bugs. The systematic implementation of code review in epidemiologic research projects could not only improve science but also decrease stress, accelerate learning, contribute to team building, and codify best practices. In the present article, we argue for the importance of code review and provide some recommendations for successful implementation for 1) the research laboratory, 2) the code author (the initial programmer), and 3) the code reviewer. We outline a feasible strategy for implementation of code review, though other successful implementation processes are possible to accommodate the resources and workflows of different research groups, including other practices to improve code quality. Code review isn't always glamorous, but it is critically important for science and reproducibility. Humans are fallible; that's why we need code review.
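As a hedged illustration of the kind of silent data-wrangling bug a reviewer can catch (the data and variable names are hypothetical, not taken from the article), consider a many-to-one merge that quietly inflates the analytic sample:

```python
import pandas as pd

visits = pd.DataFrame({
    "id":  [1, 1, 2, 3],
    "bmi": [24.0, 25.5, 31.0, 28.0],   # repeated measures per participant
})
outcomes = pd.DataFrame({"id": [1, 2, 3], "event": [0, 1, 0]})

# Original submission: merging outcomes onto the visit-level table silently
# duplicates participant 1, inflating the analytic sample size.
buggy = outcomes.merge(visits, on="id")              # 4 rows, not 3

# Reviewer's fix: collapse to one row per participant before merging, and
# assert the expected structure so the mistake cannot recur silently.
baseline = visits.groupby("id", as_index=False)["bmi"].mean()
fixed = outcomes.merge(baseline, on="id", validate="one_to_one")
assert len(fixed) == outcomes["id"].nunique()

print(len(buggy), len(fixed))   # 4 3
```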


Subject(s)
Benchmarking/methods; Data Interpretation, Statistical; Epidemiologic Measurements; Epidemiology/standards; Software Validation; Epidemiologic Research Design; Epidemiology/education; Feasibility Studies; Humans; Implementation Science; Reproducibility of Results; Workflow
7.
Conserv Biol ; 35(5): 1615-1626, 2021 10.
Article in English | MEDLINE | ID: mdl-33751669

ABSTRACT

Arbitrary modeling choices are inevitable in scientific studies. Yet, few empirical studies in conservation science report the effects these arbitrary choices have on estimated results. I explored the effects of subjective modeling choices in the context of counterfactual impact evaluations. Over 5000 candidate models based on reasonable changes in the choice of statistical matching algorithms (e.g., genetic and nearest-distance Mahalanobis matching), the parametrization of these algorithms (e.g., number of matches), and the inclusion of specific covariates (e.g., distance to nearest city, slope, or rainfall) were valid for studying the effect of Virunga National Park in the Democratic Republic of the Congo on changes in tree cover loss and carbon storage over time. I randomly picked 2000 of the 5000 candidate models to determine how much and which subjective modeling choices affected the results the most. All valid models indicated that tree cover loss decreased and carbon storage increased in Virunga National Park from 2000 to 2019. Nonetheless, the order of magnitude of the estimates varied by a factor of 3 (from -4.78 to -13.12 percentage points decrease in tree cover loss and from 20 to 46 t Ce/ha for carbon storage). My results highlight that modeling choices, notably the choice of the matching algorithm, can have significant effects on point estimates and suggest that more structured robustness checks are a key step toward more credible findings in conservation science.
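The study's genetic and Mahalanobis matching estimators are not reproduced here; the sketch below only illustrates the general "many reasonable specifications" idea on synthetic data, using a naive nearest-neighbour match and treating the choice of matching covariates as the arbitrary modeling decision (all names and numbers are illustrative):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))            # e.g. distance to city, slope, rainfall
treated = rng.random(n) < 0.3
outcome = -2.0 * treated + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

def att_nearest_neighbour(cols):
    """Naive 1-nearest-neighbour matching on a covariate subset -> ATT estimate."""
    Xt, Xc = X[np.ix_(treated, cols)], X[np.ix_(~treated, cols)]
    yt, yc = outcome[treated], outcome[~treated]
    d = ((Xt[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)  # squared distances
    return float((yt - yc[d.argmin(axis=1)]).mean())

# One arbitrary choice explored here: which covariates enter the matching.
specs = [list(c) for r in (1, 2, 3) for c in combinations(range(3), r)]
estimates = [att_nearest_neighbour(s) for s in specs]
print(f"{len(specs)} specifications; ATT ranges from "
      f"{min(estimates):.2f} to {max(estimates):.2f}")
```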



Subject(s)
Conservation of Natural Resources; Trees; Parks, Recreational
8.
Bioessays ; 41(1): e1800206, 2019 01.
Article in English | MEDLINE | ID: mdl-30485465

ABSTRACT

The overflow of scientific literature stimulates poor reading habits which can aggravate science's reproducibility crisis. Thus, solving the reproducibility crisis demands not only methodological changes, but also changes in our relationship with the scientific literature, especially our reading habits. Importantly, this does not mean reading more, it means reading better.


Subject(s)
Reading; Reproducibility of Results; Research
9.
J Cell Sci ; 131(10)2018 05 15.
Article in English | MEDLINE | ID: mdl-29764917

ABSTRACT

Commercial research antibodies are crucial tools in modern cell biology and biochemistry. In the USA, some $2 billion a year is spent on them, but many are apparently not fit-for-purpose, and this may contribute to the 'reproducibility crisis' in biological sciences. Inadequate antibody validation and characterization, lack of user awareness, and occasional incompetence amongst suppliers have had immense scientific and personal costs. In this Opinion, I suggest some paths to make the use of these vital tools more successful. I have attempted to summarize and extend expert views from the literature to suggest that sustained routine efforts should be made in: (1) the validation of antibodies, (2) their identification, (3) communication and controls, (4) the training of potential users, (5) the transparency of original equipment manufacturer (OEM) marketing agreements, and (6) a more widespread use of recombinant antibodies (together denoted the 'VICTOR' approach).


Subject(s)
Antibodies/analysis; Antibodies/economics; Biomedical Research/education; Animals; Antibodies/genetics; Antibodies/immunology; Biomedical Research/economics; Communication; Humans; Recombinant Proteins/genetics; Recombinant Proteins/immunology; Reproducibility of Results; Teaching
10.
Gen Comp Endocrinol ; 285: 113226, 2020 01 01.
Article in English | MEDLINE | ID: mdl-31374286

ABSTRACT

A "reproducibility crisis" is widespread across scientific disciplines, where results and conclusions of studies are not supported by subsequent investigation. Here we provide a steroid immunoassay example where human errors generated unreproducible results and conclusions. Our study was triggered by a scientific report citing abnormally high concentrations (means of 4-79 ng L-1) of three natural sex steroids [11-ketotestosterone (11-KT), testosterone (T) and oestradiol (E2)] in water samples collected from two UK rivers over 4 years (2002-6). Furthermore, the data suggested that trout farms were a major source because reported steroid concentrations were 1.3-6 times higher downstream than upstream. We hypothesised that the reported levels were erroneous due to substances co-extracted from the water causing matrix effects (i.e. "false positives") during measurement by enzyme-linked immunoassay (EIA). Thus, in collaboration with three other groups (including the one that had conducted the 2002-6 study), we carried out field sampling and assaying to examine this hypothesis. Water samples were collected in 2010 from the same sites and prepared for assay using an analogous method [C18 solid phase extraction (SPE) followed by extract clean-up with aminopropyl SPE]. Additional quality control ("spiked" and "blank") samples were processed. Water extracts were assayed for steroids using radioimmunoassay (RIA) as well as EIA. Although there were statistically significant differences between EIA and RIA (and laboratories), there was no indication of matrix effects in the EIAs. Both the EIAs and RIAs (uncorrected for recovery) measured all three natural steroids at <0.6 ng L-1 in all river water samples, indicating that the trout farms were not a significant source of natural steroids. The differences between the two studies were considerable: E2 and T concentrations were ca. 100-fold lower and 11-KT ca. 1000-fold lower than those reported in the 2002-6 study. In the absence of evidence for any marked changes in husbandry practice (e.g. stock, diet) or environmental conditions (e.g. water flow rate) between the study periods, we concluded that calculation errors were probably made in the first (2002-6) study associated with confusion between extract and water sample concentrations. The second (2010) study also had several identified examples of calculation error (use of an incorrect standard curve; extrapolation below the minimum standard; confusion of assay dilutions during result work-up; failure to correct for loss during extraction) and an example of sample contamination. Similar and further errors have been noted in other studies. It must be recognised that assays do not provide absolute measurements and are prone to a variety of errors, so published steroid levels should be viewed with caution until independently confirmed.
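The calculation errors described here (confusing extract and water-sample concentrations, and failing to correct for extraction loss) involve only simple arithmetic; a minimal sketch of the intended back-calculation, with purely illustrative variable names and numbers, is:

```python
def water_concentration(extract_conc_ng_per_ml: float,
                        extract_volume_ml: float,
                        water_volume_l: float,
                        recovery: float = 1.0) -> float:
    """Back-calculate the steroid concentration in the original water sample.

    extract_conc_ng_per_ml : concentration measured in the SPE extract (ng/mL)
    extract_volume_ml      : final volume of the reconstituted extract (mL)
    water_volume_l         : volume of water that was extracted (L)
    recovery               : fractional extraction recovery (e.g. from spiked samples)
    """
    total_ng = extract_conc_ng_per_ml * extract_volume_ml
    return total_ng / (water_volume_l * recovery)   # ng per litre of water

# Illustrative numbers only: a 1 L sample concentrated into a 0.5 mL extract
# reading 0.8 ng/mL, with 80% recovery, corresponds to 0.5 ng/L in the water;
# reporting the extract concentration itself would overstate it 1.6-fold here.
print(water_concentration(0.8, 0.5, 1.0, recovery=0.8))  # 0.5
```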


Subject(s)
Aquaculture; Fresh Water; Immunoassay/methods; Steroids/analysis; Trout/immunology; Animals; Enzyme-Linked Immunosorbent Assay; Radioimmunoassay; Reference Standards; Reproducibility of Results; Rivers; Water/chemistry
13.
J Neurosci Methods ; 401: 109992, 2024 01 01.
Article in English | MEDLINE | ID: mdl-37884081

ABSTRACT

Life sciences are currently facing a reproducibility crisis. Originally, the crisis was born out of single alarming failures to reproduce findings at different times and locations. Nowadays, systematic studies indicate that the prevalence of irreproducible research does in fact exceed 50%. Viewed from a rather cynical perspective, Fett's law of the lab "Never replicate a successful experiment" has thus taken on a completely new meaning. In this respect, animal research has come under particular scrutiny, as the stakes are high in terms of both research ethics and societal impact. To counteract this, it is essential to identify sources of poor reproducibility as well as to iron out these failures. We here review the current debate, briefly discuss potential reasons, and summarize steps that have already been undertaken to improve reproducibility in animal research. By the example of classical behavioural phenotyping studies, we particularly highlight the role strict standardization plays in exacerbating the crisis, and review the concept of systematic heterogenization as an alternative strategy to deal with variation in animal studies. Briefly, we argue that systematic variation rather than strict homogenization of experimental conditions benefits the robustness of research findings, and hence their reproducibility. To this end, we will present concrete examples for systematically heterogenized experiments and provide a practical guide on how to apply systematic heterogenization in experimental practice.
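A minimal sketch of what a systematically heterogenized allocation could look like in practice; the factors, levels, and sample size below are invented for illustration and are not the designs discussed in the article:

```python
import random
from itertools import product

random.seed(42)

# Factors that strict standardization would hold fixed but that can instead be
# varied systematically; the factors and levels here are illustrative assumptions.
factors = {
    "test_time": ["morning", "afternoon"],
    "experimenter": ["A", "B"],
    "cage_enrichment": ["standard", "enriched"],
}

conditions = list(product(*factors.values()))   # 2 x 2 x 2 = 8 mini-environments
animals = [f"mouse_{i:02d}" for i in range(1, 25)]
random.shuffle(animals)

# Balanced allocation: every condition receives the same number of animals,
# so findings are averaged over controlled variation rather than one fixed setting.
allocation = {cond: animals[i::len(conditions)] for i, cond in enumerate(conditions)}

for cond, group in allocation.items():
    print(dict(zip(factors, cond)), "->", group)
```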


Subject(s)
Animal Experimentation; Animals; Reproducibility of Results; Research Design
14.
Sci Total Environ ; 857(Pt 2): 159395, 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36257434

ABSTRACT

It is unusual, and can be difficult, for scientists to reflect in their publications on any limitations their research had. This is a consequence of the extreme pressure that scientists are under to 'publish or perish'. The inevitable consequence is that much published research is not as good as it could, and should, be, leading to the current 'reproducibility crisis'. Approaches to address this crisis are required. Our suggestion is to include a 'Limitations' section in all scientific papers. Evidence is provided showing that such a section must be mandatory. Adding a 'Limitations' section to scientific papers would greatly increase honesty, openness and transparency, to the considerable benefit of both the scientific community and society in general. This suggestion is applicable to all scientific disciplines. Finally, we apologise if our suggestion has already been made by others.


Subject(s)
Publications; Reproducibility of Results
15.
Trends Ecol Evol ; 37(3): 203-210, 2022 03.
Article in English | MEDLINE | ID: mdl-34799145

ABSTRACT

Despite much criticism, black-or-white null-hypothesis significance testing with an arbitrary P-value cutoff is still the standard way to report scientific findings. One obstacle to progress is likely a lack of knowledge about suitable alternatives. Here, we suggest a language of evidence that allows for a more nuanced approach to communicating scientific findings, as a simple and intuitive alternative to statistical significance testing. We provide examples for rewriting results sections in research papers accordingly. A language of evidence has previously been suggested in medical statistics, and it is consistent with reporting approaches of international research networks, like the Intergovernmental Panel on Climate Change, for example. Instead of re-inventing the wheel, ecology and evolution might benefit from adopting some of the 'good practices' that exist in other fields.
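As a hedged sketch of how such a language of evidence can be operationalized, the function below maps a P-value onto graded wording; the thresholds and phrases are illustrative of the general approach, not a quotation of the article's recommended scale:

```python
def evidence_statement(p: float, effect_description: str) -> str:
    """Translate a P-value into graded evidence wording instead of a binary
    'significant / not significant' verdict. Thresholds are illustrative."""
    if p < 0.001:
        strength = "very strong evidence"
    elif p < 0.01:
        strength = "strong evidence"
    elif p < 0.05:
        strength = "moderate evidence"
    elif p < 0.1:
        strength = "weak evidence"
    else:
        strength = "little or no evidence"
    return f"There is {strength} that {effect_description} (p = {p:.3g})."

print(evidence_statement(0.03, "clutch size declines with laying date"))
# There is moderate evidence that clutch size declines with laying date (p = 0.03).
```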


Subject(s)
Ecology; Research Design; Climate Change
16.
Antibodies (Basel) ; 11(2)2022 Apr 14.
Article in English | MEDLINE | ID: mdl-35466280

ABSTRACT

During the SARS-CoV-2 pandemic, many virus-binding monoclonal antibodies have been developed for clinical and diagnostic purposes. This underlines the importance of antibodies as universal bioanalytical reagents. However, little attention is given to the reproducibility crisis that scientific studies are still facing. In a recent study, not even half of all research antibodies mentioned in publications could be identified. This should spark more efforts in the search for practical solutions for the traceability of antibodies. For this purpose, we used 35 monoclonal antibodies against SARS-CoV-2 to demonstrate how sequence-independent antibody identification can be achieved by simple means applied to the protein. First, we examined the intact and light chain masses of the antibodies relative to the reference material NIST-mAb 8671. Already half of the antibodies could be identified based solely on these two parameters. In addition, we developed two complementary peptide mass fingerprinting methods with MALDI-TOF-MS that can be performed in 60 min and had a combined sequence coverage of over 80%. One method is based on the partial acidic hydrolysis of the protein by 5 mM sulfuric acid at 99 °C. Furthermore, we established a fast tryptic digest protocol that omits the alkylation step. We were able to show that the distinction of clones is possible simply by a brief visual comparison of the mass spectra. In this work, two clones originating from the same immunization gave the same fingerprints. Hybridoma sequencing later confirmed the sequence identity of these sister clones. In order to automate the spectral comparison for larger libraries of antibodies, we developed the online software ABID 2.0. This open-source software determines the number of matching peptides in the fingerprint spectra. We propose that publications and other documents critically relying on monoclonal antibodies with unknown amino acid sequences should include at least one antibody fingerprint. By fingerprinting an antibody in question, its identity can be confirmed by comparison with a library spectrum at any time and in any context.
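The spectral comparison that ABID 2.0 automates amounts to counting peptide masses shared between two fingerprint spectra; a minimal sketch of that idea, with an assumed ppm tolerance and made-up masses (not the published algorithm or data), is:

```python
def matching_peptides(masses_a, masses_b, tol_ppm: float = 100.0) -> int:
    """Count peptide masses in spectrum A that have a counterpart in spectrum B
    within a relative tolerance given in parts per million."""
    matched = 0
    for m in masses_a:
        if any(abs(m - n) / m * 1e6 <= tol_ppm for n in masses_b):
            matched += 1
    return matched

# Illustrative monoisotopic peptide masses (Da) from two antibody digests.
antibody_query = [1045.53, 1479.78, 1882.92, 2210.10, 2544.33]
library_entry  = [1045.54, 1479.79, 1950.01, 2210.08, 2544.35]

shared = matching_peptides(antibody_query, library_entry)
print(f"{shared}/{len(antibody_query)} peptides match")  # 4/5 in this toy case
```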

17.
Soc Stud Sci ; 51(4): 583-605, 2021 08.
Article in English | MEDLINE | ID: mdl-33764246

ABSTRACT

A series of failed replications and frauds have raised questions regarding self-correction in science. Metascientific activists have advocated policies that incentivize replications and make them more diagnostically potent. We argue that current debates, as well as research in science and technology studies, have paid little heed to a key dimension of replication practice. Although it sometimes serves a diagnostic function, replication is commonly motivated by a practical desire to extend research interests. The resulting replication, which we label 'integrative', is characterized by a pragmatic flexibility toward protocols. The goal is to appropriate what is useful, not test for truth. Within many experimental cultures, however, integrative replications can produce replications of ambiguous diagnostic power. Based on interviews with 60 members of the Board of Reviewing Editors for the journal Science, we show how the interplay between the diagnostic and integrative motives for replication differs between fields and produces different cultures of replication. We offer six theses that aim to put science and technology studies and science activism into dialog to show why effective reforms will need to confront issues of disciplinary difference.


Subject(s)
Policies; Product Labeling
18.
Public Underst Sci ; 30(8): 1008-1023, 2021 11.
Article in English | MEDLINE | ID: mdl-34000907

ABSTRACT

This study examines the effects of exposure to media narratives about science on perceptions pertaining to the reliability of science, including trust, beliefs, and support for science. In an experiment (n = 4497), participants were randomly assigned to read stories representing ecologically valid media narratives: the honorable quest, counterfeit quest, crisis or broken, and problem explored. Exposure to stories highlighting problems reduced trust in scientists and induced negative beliefs about scientists, with more extensive effects among those exposed to the "crisis/broken" accounts and fewer for those exposed to "counterfeit" and "problem explored" stories. In the "crisis/broken" and "problem explored" conditions, we identified a three-way interaction in which those with higher trust who considered the problem-focused stories to be representative of science were more likely to believe science is self-correcting and those with lower trust who perceived the stories to be representative were less likely to report that belief. Support for funding science was not affected by the stories. This study demonstrates the detrimental consequences of media failure to accurately communicate the scientific process, and provides evidence for ways for scientists and journalists to improve science communication, while acknowledging the need for changes in structural incentives to obtain such a goal.


Subject(s)
Communication; Trust; Humans; Mass Media; Reproducibility of Results
19.
J Cereb Blood Flow Metab ; 41(10): 2778-2796, 2021 10.
Article in English | MEDLINE | ID: mdl-33993794

ABSTRACT

The reproducibility of findings is a compelling methodological problem that the neuroimaging community is facing these days. The lack of standardized pipelines for image processing, quantification and statistics plays a major role in the variability and interpretation of results, even when the same data are analysed. This problem is well-known in MRI studies, where the indisputable value of the method has been complicated by a number of studies that produce discrepant results. However, any research domain with complex data and flexible analytical procedures can experience a similar lack of reproducibility. In this paper we investigate this issue for brain PET imaging. During the 2018 NeuroReceptor Mapping conference, the brain PET community was challenged with a computational contest involving a simulated neurotransmitter release experiment. Fourteen international teams analysed the same imaging dataset, for which the ground-truth was known. Despite a plurality of methods, the solutions were consistent across participants, although not identical. These results should create awareness that the increased sharing of PET data alone will only be one component of enhancing confidence in neuroimaging results and that it will be important to complement this with full details of the analysis pipelines and procedures that have been used to quantify data.


Subject(s)
Neuroimaging/methods; Positron-Emission Tomography/methods; Congresses as Topic; Female; History, 21st Century; Humans; Male; Reproducibility of Results
20.
PeerJ ; 9: e11140, 2021.
Article in English | MEDLINE | ID: mdl-33976964

ABSTRACT

Scientific experiments and research practices vary across disciplines. The research practices followed by scientists in each domain play an essential role in the understandability and reproducibility of results. The "Reproducibility Crisis", where researchers have difficulty reproducing published results, is currently faced by several disciplines. To understand the underlying problem in the context of the reproducibility crisis, it is important to first know the different research practices followed in each domain and the factors that hinder reproducibility. We performed an exploratory study by conducting a survey addressed to researchers representing a range of disciplines to understand scientific experiments and research practices for reproducibility. The survey findings identify a reproducibility crisis and a strong need for sharing data, code, methods, steps, and negative and positive results. Insufficient metadata, lack of publicly available data, and incomplete information in study methods are considered to be the main reasons for poor reproducibility. The survey results also address a wide range of research questions on the reproducibility of scientific results. Based on the results of our exploratory study and supported by the existing published literature, we offer general recommendations that could help the scientific community to understand, reproduce, and reuse experimental data and results in the research data lifecycle.
