ABSTRACT
INTRODUCTION: The development and use of digital tools at various stages of research highlight the importance of novel open science methods for an integrated and accessible research system. The objective of this study was to design and validate a conceptual model of open science for healthcare research processes. METHODS: This research was conducted in three phases using a mixed-methods approach. The first phase employed a qualitative method, using purposive sampling and semi-structured interview guides to collect data from healthcare researchers and managers; influential factors of open science on research processes were extracted to refine the components and develop the proposed model. The second phase utilized a panel of experts and collective agreement through purposive sampling. The final phase involved purposive sampling and the Delphi technique to validate the components of the proposed model according to researchers' perspectives. FINDINGS: From the thematic analysis of 20 interviews on the study topic, 385 codes, 38 sub-themes, and 14 main themes were extracted for the initial proposed model. These components were reviewed by expert panel members, resulting in 31 sub-themes, 13 main themes, and 4 approved themes. Ultimately, the agreed-upon model was assessed in four layers for validation by the expert panel, and all components achieved a score of >75% in two Delphi rounds. The validated model was presented in terms of infrastructure and culture layers, as well as supervision, assessment, publication, and sharing. CONCLUSION: To implement these methods effectively in the research process, it is essential to establish cultural and infrastructural foundations and predefined requirements to prevent potential abuses and privacy concerns in the healthcare system. 
Applying these principles will lead to greater access to outputs, increasing the credibility of research results and the utilization of collective intelligence in solving healthcare system issues.
Subject(s)
Delivery of Health Care , Health Services Research , Humans , Research Design , Delphi Technique

ABSTRACT
The impact and effectiveness of clinical trial data sharing initiatives may differ depending on the data sharing model used. We characterized outcomes associated with models previously used by the U.S. National Institutes of Health (NIH): National Heart, Lung, and Blood Institute's (NHLBI) centralized model and National Cancer Institute's (NCI) decentralized model. We identified trials completed in 2010-2013 that met NIH data sharing criteria and matched studies based on cost and/or size, determining whether trial data were shared, and for those that were, the frequency of secondary internal publications (authored by at least one author from the original research team) and shared data publications (authored by a team external to the original research team). We matched 77 NHLBI-funded trials to 77 NCI-funded trials; among these, 20 NHLBI-sponsored trials (26%) and 4 NCI-sponsored trials (5%) shared data (OR 6.4, 95% CI: 2.1, 19.8). From the 4 NCI-sponsored trials sharing data, we identified 65 secondary internal and 2 shared data publications. From the 20 NHLBI-sponsored trials sharing data, we identified 188 secondary internal and 53 shared data publications. The NHLBI's centralized data sharing model was associated with more trials sharing data and more shared data publications when compared with the NCI's decentralized model.
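The headline comparison in this abstract can be reproduced from the reported counts (20 of 77 NHLBI-sponsored trials vs 4 of 77 NCI-sponsored trials shared data). As an illustrative sketch, not a claim about the authors' software, the following computes the odds ratio with a Woolf (log-scale) confidence interval, one standard method, which matches the reported OR 6.4 (95% CI: 2.1, 19.8):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf (log-scale) 95% CI for a 2x2 table:
    a/b = events/non-events in group 1, c/d = events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# NHLBI: 20 of 77 trials shared data (57 did not); NCI: 4 of 77 (73 did not)
or_, lo, hi = odds_ratio_ci(20, 57, 4, 73)
print(f"OR {or_:.1f}, 95% CI: {lo:.1f}, {hi:.1f}")  # OR 6.4, 95% CI: 2.1, 19.8
```

The agreement with the published interval suggests the study used this (or a closely equivalent) large-sample method.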
Subject(s)
Clinical Trials as Topic , Information Dissemination , National Institutes of Health (U.S.) , Cross-Sectional Studies , National Cancer Institute (U.S.) , United States

ABSTRACT
OBJECTIVES: To synthesise research investigating data and code sharing in medicine and health to establish an accurate representation of the prevalence of sharing, how this frequency has changed over time, and what factors influence availability. DESIGN: Systematic review with meta-analysis of individual participant data. DATA SOURCES: Ovid Medline, Ovid Embase, and the preprint servers medRxiv, bioRxiv, and MetaArXiv were searched from inception to 1 July 2021. Forward citation searches were also performed on 30 August 2022. REVIEW METHODS: Meta-research studies that investigated data or code sharing across a sample of scientific articles presenting original medical and health research were identified. Two authors screened records, assessed the risk of bias, and extracted summary data from study reports when individual participant data could not be retrieved. Key outcomes of interest were the prevalence of statements that declared that data or code were publicly or privately available (declared availability) and the success rates of retrieving these products (actual availability). The associations between data and code availability and several factors (eg, journal policy, type of data, trial design, and human participants) were also examined. A two-stage approach to meta-analysis of individual participant data was performed, with proportions and risk ratios pooled with the Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis. RESULTS: The review included 105 meta-research studies examining 2 121 580 articles across 31 specialties. Eligible studies examined a median of 195 primary articles (interquartile range 113-475), with a median publication year of 2015 (interquartile range 2012-2018). Only eight studies (8%) were classified as having a low risk of bias. Meta-analyses showed a prevalence of declared and actual public data availability of 8% (95% confidence interval 5% to 11%) and 2% (1% to 3%), respectively, between 2016 and 2021. 
For public code sharing, both the prevalence of declared and actual availability were estimated to be <0.5% since 2016. Meta-regressions indicated that only declared public data sharing prevalence estimates have increased over time. Compliance with mandatory data sharing policies ranged from 0% to 100% across journals and varied by type of data. In contrast, success in privately obtaining data and code from authors historically ranged between 0% and 37% and 0% and 23%, respectively. CONCLUSIONS: The review found that public code sharing was persistently low across medical research. Declarations of data sharing were also low, increasing over time, but did not always correspond to actual sharing of data. The effectiveness of mandatory data sharing policies varied substantially by journal and type of data, a finding that might be informative for policy makers when designing policies and allocating resources to audit compliance. SYSTEMATIC REVIEW REGISTRATION: Open Science Framework doi:10.17605/OSF.IO/7SX8U.
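The pooling method named in the review's methods (random-effects meta-analysis with the Hartung-Knapp-Sidik-Jonkman adjustment) can be sketched in a few lines. This is an illustrative implementation with invented inputs, not the review's data; between-study variance is estimated with the common DerSimonian-Laird moment estimator, and the final confidence interval would use a t quantile with k-1 degrees of freedom:

```python
import math

def hksj_pool(y, v):
    """Random-effects pooling with the Hartung-Knapp-Sidik-Jonkman (HKSJ)
    variance adjustment. y: per-study estimates (e.g. logit proportions),
    v: their within-study variances. Returns (pooled estimate, HKSJ SE)."""
    k = len(y)
    # Fixed-effect weights and Cochran's Q for the DerSimonian-Laird tau^2
    w = [1.0 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)
    # Random-effects weights and pooled estimate
    ws = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(ws, y)) / sum(ws)
    # HKSJ: replace the usual SE with a weighted residual variance,
    # so the CI is mu +/- t_{k-1, 0.975} * se
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(ws, y)) / (k - 1)
    se = math.sqrt(q / sum(ws))
    return mu, se

# Invented example: four hypothetical prevalence estimates with variances
mu, se = hksj_pool([0.05, 0.08, 0.10, 0.03], [0.0004, 0.0009, 0.0016, 0.0002])
print(f"pooled prevalence ~ {mu:.3f}, HKSJ SE ~ {se:.3f}")
```

The HKSJ adjustment typically widens intervals relative to the standard Wald approach, which is why it is preferred when the number of studies is small or heterogeneity is high.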
Subject(s)
Biomedical Research , Medicine , Humans , Prevalence , Administrative Personnel , Information Dissemination

ABSTRACT
BACKGROUND: Sharing research outputs through open science methods gives different stakeholders better access to studies for solving problems in diverse fields, which leads to more equal access to research resources as well as greater scientific productivity. Therefore, the aim of this study was to understand the concept of openness in research among Iranian health researchers. METHODS: From the beginning of August to the middle of November 2021, twenty semi-structured interviews were held with Iranian health researchers from different fields using purposeful, snowball, and convenience sampling. The interviews continued until data saturation. Data analysis was performed with thematic analysis using MAXQDA 20. Finally, seven main issues related to open science were identified. RESULTS: Through analysis of the interviews, 235 primary codes and 173 main codes were extracted in 22 subclasses. After careful evaluation and integration of the subclasses and classes, they were finally classified into nine categories and three main themes. The analysis showed that openness in research was related to three main themes: researchers' understanding of open science, the impact of open science on the publication and sharing of research, and concerns about and reluctance toward open research. CONCLUSION: The conditions of access to research output should be specified given the diversity of studies conducted in the field of health; issues such as privacy, an important aspect of access to data and information in the health system, should also be specified. Our analysis indicated that the conditions for publication and sharing of research processes should be stated according to the different scopes of health fields. The concept of open science was related to access to findings and other research items regardless of cost, political, social, or racial barriers, which could create collective wisdom in the development of knowledge. 
The process of publication and sharing of research under open access encompasses all types of outputs and conditions of access, and is associated with increased trust in research, the creation of diverse publication paths, and broader participation of citizens in research. Open science practices should be promoted to increase the circulation and exploitation of knowledge while respecting the limits of privacy, intellectual property, and the national security rights of countries.
Subject(s)
Privacy , Research Personnel , Humans , Iran , Trust , Knowledge

ABSTRACT
BACKGROUND/AIMS: Inadequate description of trial interventions in publications has been repeatedly reported, a problem that extends to the description of placebo controls. Without describing placebo contents, it cannot be assumed that a placebo is inert. Pharmacologically active placebos complicate accurate estimation and interpretation of efficacy and safety data. In this study, we sought to assess whether placebo contents are described in study protocols and publications of trials published in high-impact medical journals. METHODS: We identified all placebo-controlled randomized clinical trials (RCTs) published in 2016 in Annals of Internal Medicine, The BMJ, the Journal of the American Medical Association (JAMA), The Lancet, and the New England Journal of Medicine (NEJM). We included all trials with publicly available study protocols. From journal publications and associated study protocols, we searched and recorded: description of placebo contents; the amount of each placebo ingredient; and investigators' stated rationale for selection of placebo ingredients. RESULTS: We included 113 placebo-controlled RCTs. Of the 113 trials, placebo content was described in 22 (19.5%) journal publications and 51 (45.1%) study protocols. The amount of each placebo ingredient was described in 15 (13.3%) journal publications and 47 (41.6%) study protocols. None of the journal publications explained the rationale for the choice of placebo ingredients, whereas a rationale was provided in 4 (3.5%) study protocols. The stated rationales were to ensure the placebo was visually indistinguishable from the experimental intervention (N = 3) and ensure comparability with a previous study (N = 1). CONCLUSION: There is no accessible record of the composition of placebos for approximately half of high-impact RCTs, even with access to study protocols. 
This impedes reproducibility and raises unanswerable questions about what effects, beneficial or harmful, the placebo may have had on trial participants, potentially confounding an accurate assessment of the experimental intervention's safety and efficacy. Considering that study protocols are unabridged, detailed documents describing the trial design and methodology, the fact that less than half of the study protocols described the placebo contents raises concerns about clinical trial transparency. To improve the reproducibility of placebo-controlled RCTs and their potential to provide reliable evidence on the efficacy and safety profiles of drugs and other experimental interventions, more detail regarding placebo contents must be included in trial documents.
Subject(s)
Journal Impact Factor , Periodicals as Topic , United States , Humans , Cross-Sectional Studies , Randomized Controlled Trials as Topic

ABSTRACT
OBJECTIVE: This study examined the extent to which trials presented at major international medical conferences in 2016 consistently reported their study design, end points and results across conference abstracts, published article abstracts and press releases. DESIGN: Cross-sectional analysis of clinical trials presented at 12 major medical conferences in the USA in 2016. Conferences were identified from a list of the largest clinical research meetings aggregated by the Healthcare Convention and Exhibitors Association and were included if their abstracts were publicly available. From these conferences, all late-breaker clinical trials were included, as well as a random selection of all other clinical trials, such that the total sample included up to 25 trial abstracts per conference. MAIN OUTCOME MEASURES: First, it was determined if trials were registered and reported results in an International Committee of Medical Journal Editors-approved clinical trial registry. Second, it was determined if trial results were published in a peer-reviewed journal. Finally, information on trial media coverage and press releases was collected using LexisNexis. For all published trials, the consistency of reporting of the following characteristics was examined, through comparison of the trials' conference and publication abstracts: primary efficacy endpoint definition, safety endpoint identification, sample size, follow-up period, primary end point effect size and characterisation of trial results. For all published abstracts with press releases, the characterisation of trial results across conference abstracts, press releases and publications was compared. Authors determined consistency of reporting when identical information was presented across abstracts and press releases. Primary analyses were descriptive; secondary analyses included χ2 tests and multiple logistic regression. 
RESULTS: Among 240 clinical trials presented at 12 major medical conferences, 208 (86.7%) were registered, 95 (39.6%) reported summary results in a registry and 177 (73.8%) were published; 82 (34.2%) were covered by the media and 68 (28.3%) had press releases. Among the 177 published trials, 171 (96.6%) reported the definition of primary efficacy endpoints consistently across conference and publication abstracts, whereas 96/128 (75.0%) consistently identified safety endpoints. There were 107/172 (62.2%) trials with consistent sample sizes across conference and publication abstracts, 101/137 (73.7%) that reported their follow-up periods consistently, 92/175 (52.6%) that described their effect sizes consistently and 157/175 (89.7%) that characterised their results consistently. Among the trials that were published and had press releases, 32/32 (100%) characterised their results consistently across conference abstracts, press releases and publication abstracts. No trial characteristics were associated with reporting primary efficacy end points consistently. CONCLUSIONS: For clinical trials presented at major medical conferences, primary efficacy endpoint definitions were consistently reported and results were consistently characterised across conference abstracts, registry entries and publication abstracts; consistency rates were lower for sample sizes, follow-up periods, and effect size estimates. REGISTRATION: This study was registered at the Open Science Framework (https://doi.org/10.17605/OSF.IO/VGXZY).
Subject(s)
Research Design , Research Report , Humans , Cross-Sectional Studies , Logistic Models , Sample Size , Health Services Research , Evidence-Based Practice

ABSTRACT
Numerous studies have demonstrated low but increasing rates of data and code sharing within medical and health research disciplines. However, it remains unclear how commonly data and code are shared across all fields of medical and health research, and whether sharing rates are positively associated with the implementation of progressive policies by publishers and funders, or with growing expectations from the medical and health research community at large. Therefore, this systematic review aims to synthesise the findings of medical and health science studies that have empirically investigated the prevalence of data or code sharing, or both. Objectives include the investigation of: (i) the prevalence of public sharing of research data and code alongside published articles (including preprints), (ii) the prevalence of private sharing of research data and code in response to reasonable requests, and (iii) factors associated with the sharing of either research output (e.g., the year published, the publisher's policy on sharing, the presence of a data or code availability statement). It is hoped that the results will provide some insight into how often research data and code are shared publicly and privately, how this has changed over time, and how effective measures such as the institution of data sharing policies and data availability statements have been in motivating researchers to share their underlying data and code.
Subject(s)
Information Dissemination , Publications , Data Analysis , Humans , Meta-Analysis as Topic , Research Personnel , Systematic Reviews as Topic

ABSTRACT
ABSTRACT: Owing to the rapid development of the COVID-19 vaccine, its short-term and long-term effects are still not well understood. This case report highlights bilateral corneal endothelial graft rejection after administration of the Pfizer COVID-19 vaccine. A 73-year-old woman with bilateral Descemet stripping endothelial keratoplasty presented with bilateral decreased visual acuity, ocular pain, and photophobia after her second dose of the Pfizer-BioNTech COVID-19 vaccine. Two weeks after vaccine administration, the uncorrected visual acuity was 20/70 and 20/40. Central corneal thickness as measured by ultrasound was 809 and 825 µm and by Scheimpflug imaging was 788 and 751 µm at the pupil center. Slit-lamp biomicroscopy revealed quiet conjunctiva and sclera but was significant for thickened corneas with Descemet folds in both eyes. The patient was instructed to use prednisolone acetate 1% every 1 to 2 hours with Muro ointment at bedtime.
Subject(s)
COVID-19 , Corneal Diseases , Descemet Stripping Endothelial Keratoplasty , Aged , COVID-19 Vaccines , Corneal Diseases/surgery , Descemet Membrane , Endothelium, Corneal , Female , Graft Rejection/prevention & control , Humans , Retrospective Studies , SARS-CoV-2

Subject(s)
Drug Compounding/ethics , Papillomavirus Vaccines/adverse effects , Societies, Scientific/ethics , Vaccines, Combined/adverse effects , Aluminum Hydroxide/administration & dosage , Aluminum Hydroxide/adverse effects , Clinical Trials as Topic/ethics , Drug Compounding/statistics & numerical data , Humans , Journalism, Medical/standards , Papillomavirus Vaccines/administration & dosage , Papillomavirus Vaccines/immunology , Phosphates/administration & dosage , Phosphates/adverse effects , Placebos/administration & dosage , Safety , Vaccines, Combined/administration & dosage , Vaccines, Combined/immunology

Subject(s)
Clinical Trials as Topic , Control Groups , Data Accuracy , Human Papillomavirus Recombinant Vaccine Quadrivalent, Types 6, 11, 16, 18/administration & dosage , Papillomavirus Infections/prevention & control , Placebos/analysis , Adjuvants, Immunologic/administration & dosage , Adjuvants, Immunologic/chemistry , Humans , Public Reporting of Healthcare Data

ABSTRACT
Sharing data and code are important components of reproducible research. Data sharing in research is widely discussed in the literature; however, there are no well-established evidence-based incentives that reward data sharing, nor randomized studies that demonstrate the effectiveness of data sharing policies at increasing data sharing. A simple incentive, such as an Open Data Badge, might provide the change needed to increase data sharing in health and medical research. This study was a parallel-group randomized controlled trial (protocol registration: doi:10.17605/OSF.IO/PXWZQ) with two groups, control and intervention, each comprising 80 research articles published in BMJ Open, for a total of 160 research articles. The intervention group received an email offering an Open Data Badge if they shared their data along with their final publication, and the control group received an email with no offer of a badge if they shared their data with their final publication. The primary outcome was the data sharing rate. Badges did not noticeably motivate researchers who published in BMJ Open to share their data; the odds of awarding badges were nearly equal in the intervention and control groups (odds ratio = 0.9, 95% CI [0.1, 9.0]). Data sharing rates were low in both groups, with just two datasets shared in each of the intervention and control groups. The global movement towards open science has made significant gains with the development of numerous data sharing policies and tools. What remains to be established is an effective incentive that motivates researchers to take up such tools to share their data.
ABSTRACT
Background: Reproducible research includes sharing data and code. The reproducibility policy at the journal Biostatistics rewards articles with badges for data and code sharing. This study investigates the effect of badges on increasing reproducible research, specifically data and code sharing, at Biostatistics. Methods: The setting of this observational study is the online research archives of Biostatistics and Statistics in Medicine (the control journal). The data consisted of 240 randomly sampled articles from 2006 to 2013 (30 articles per year) per journal, a total sample of 480 articles. Data analyses included plotting the probability of data and code sharing by article submission date, and Bayesian logistic regression modelling to test for a difference in the probability of making data and code available after the introduction of badges at Biostatistics. Results: The probability of data sharing was higher at Biostatistics than at the control journal, but the probability of code sharing was comparable for both journals. The probability of data sharing increased by 3.5 times (95% credible interval: 1.4 to 7.4 times; posterior probability that sharing increased: 0.996) after badges were introduced at Biostatistics. On an absolute scale, however, this difference was only a 7.3% increase in data sharing (95% credible interval: 2% to 14%; posterior probability: 0.996). Badges did not have an impact on code sharing at the journal (mean increase: 1.1 times, 95% credible interval: 0.45 to 2.14 times; posterior probability that sharing increased: 0.549). Conclusions: The effect of badges at Biostatistics was a 7.3% increase in the data sharing rate, about one fifth of the effect of badges on data sharing at Psychological Science (a 37.9% badge effect). Though badges at Biostatistics did not affect code sharing and were associated with only a moderate increase in data sharing, badges are an interesting step that journals are taking to incentivise and promote reproducible research.
ABSTRACT
[This corrects the article DOI: 10.1186/s41073-017-0028-9.].
ABSTRACT
BACKGROUND: The foundation of health and medical research is data. Data sharing facilitates the progress of research and strengthens science. Data sharing in research is widely discussed in the literature; however, there are seemingly no evidence-based incentives that promote data sharing. METHODS: A systematic review (registration: 10.17605/OSF.IO/6PZ5E) of the health and medical research literature was used to uncover any evidence-based incentives, with pre- and post-empirical data, that examined data sharing rates. We were also interested in quantifying and classifying the number of opinion pieces on the importance of incentives, the number of observational studies that analysed data sharing rates and practices, and the number of articles that tested strategies aimed at increasing data sharing rates. RESULTS: Only one incentive (open data badges) has been tested in health and medical research with an examination of data sharing rates. The number of opinion pieces (n = 85) outweighed the number of articles testing strategies (n = 76), and the number of observational studies exceeded them both (n = 106). CONCLUSIONS: Given that data is the foundation of evidence-based health and medical research, it is paradoxical that there is only one evidence-based incentive to promote data sharing. More well-designed studies are needed in order to increase the currently low rates of data sharing.
ABSTRACT
OBJECTIVE: To quantify data sharing trends and data sharing policy compliance at the British Medical Journal (BMJ) by analysing the rate of data sharing practices, and to investigate attitudes towards and barriers to data sharing. DESIGN: Observational study. SETTING: The BMJ research archive. PARTICIPANTS: 160 randomly sampled BMJ research articles from 2009 to 2015, excluding meta-analyses and systematic reviews. MAIN OUTCOME MEASURES: Percentages of research articles that indicated the availability of their raw data sets in their data sharing statements, and of those that easily made their data sets available on request. RESULTS: 3 articles contained the data in the article itself. Of the remaining 157 articles, 50 (32%) indicated the availability of their data sets; 12 of these used publicly available data, and authors of the remaining 38 were sent email requests for access to their data sets. Only 1 publicly available data set could be accessed, and only 6 of the 38 authors contacted shared their data via email. Thus only 7 of 157 research articles (4.5%, 95% CI 1.8% to 9.0%) effectively shared their data sets. For the 21 clinical trials bound by the BMJ data sharing policy, the per cent shared was 24% (8% to 47%). CONCLUSIONS: Despite the BMJ's strong data sharing policy, sharing rates are low. Possible explanations for low data sharing rates include: the wording of the BMJ data sharing policy, which leaves room for individual interpretation and possible loopholes; our email requests ending up in researchers' spam folders; and researchers not being rewarded for sharing their data. It might be time for a more effective data sharing policy and better incentives for health and medical researchers to share their data.
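The headline proportion above (7 of 157 shared; 95% CI 1.8% to 9%) is consistent with an exact Clopper-Pearson binomial interval. As an illustrative sketch (not necessarily the method the authors' software used), the interval can be computed with only the standard library by inverting the binomial tail probabilities via bisection:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) CI for a binomial proportion x/n,
    found by bisecting the monotone binomial tail probabilities."""
    def bisect(f):  # find p in (0, 1) with f(p) = 0; f must be monotone
        lo, hi = 0.0, 1.0
        increasing = f(1.0) > f(0.0)
        for _ in range(200):
            mid = (lo + hi) / 2
            if (f(mid) < 0) == increasing:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Lower bound: P(X >= x | p) = alpha/2; upper bound: P(X <= x | p) = alpha/2
    lower = 0.0 if x == 0 else bisect(lambda p: (1 - binom_cdf(x - 1, n, p)) - alpha / 2)
    upper = 1.0 if x == n else bisect(lambda p: binom_cdf(x, n, p) - alpha / 2)
    return lower, upper

lo, hi = clopper_pearson(7, 157)
print(f"7/157 = {7/157:.1%}, 95% CI {lo:.1%} to {hi:.1%}")  # reported as 4.5% (1.8% to 9%)
```

Exact intervals like this are preferred over the normal approximation when the observed proportion is small, as it is here.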