Results 1 - 14 of 14
1.
J Clin Epidemiol ; 154: 42-55, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36375641

ABSTRACT

BACKGROUND AND OBJECTIVES: To identify the similarities and differences in data-sharing policies for clinical trial data that are endorsed by biomedical journals, funding agencies, and other professional organizations; additionally, to determine the beliefs and opinions regarding data-sharing policies for clinical trials discussed in articles published in biomedical journals. METHODS: Two searches were conducted: a bibliographic search for published articles presenting beliefs, opinions, similarities, and differences regarding policies governing the sharing of clinical trial data, and a gray literature search (non-peer-reviewed publications) to identify important data-sharing policies in selected biomedical journals, foundations, funding agencies, and other professional organizations. RESULTS: A total of 471 articles were included after database searching and screening: 45 from the bibliographic search and 426 from the gray literature search. A total of 424 data-sharing policies were included. Of the 45 published articles from the bibliographic search, 14 (31.1%) discussed only advantages of data-sharing policies, 27 (60%) discussed both advantages and disadvantages, and 4 (8.9%) discussed only disadvantages. A total of 216 journals (of 270; 80%) specified a data-sharing policy provided by the journal itself. One hundred industry data-sharing policies were included, and 32 (32%) referenced a data-sharing policy on their website. One hundred thirty-six organizations (of 327; 42%) specified a data-sharing policy. CONCLUSION: We found many similarities listed as advantages to data-sharing, and fewer disadvantages were discussed within the literature.
Additionally, we found a wide variety of commonalities and differences in the data-sharing policies endorsed by biomedical journals, funding agencies, and other professional organizations, such as the lack of standardization between policies and inadequately addressed details regarding the accessibility of research data. Our study may not include information on all data-sharing policies, and our data are limited to the entities' descriptions of each policy.


Subject(s)
Periodicals as Topic , Humans , Publications , Information Dissemination , Policy , Societies
3.
BMC Urol ; 22(1): 102, 2022 Jul 11.
Article in English | MEDLINE | ID: mdl-35820886

ABSTRACT

BACKGROUND: Reproducibility is essential for the integrity of scientific research. Reproducibility is measured by the ability of different investigators to replicate the outcomes of an original publication using the same materials and procedures. Unfortunately, reproducibility is not currently a standard met by most scientific research. METHODS: For this review, we sampled 300 publications in the field of urology and assessed them for 14 indicators of reproducibility, including material availability, raw data availability, analysis script availability, pre-registration information, links to protocols, and whether the publication was available free to the public. Publications were also assessed for statements about conflicts of interest and funding sources. RESULTS: Of the 300 sampled publications, 171 contained empirical data available for analysis of reproducibility. Of these 171 articles, 0.58% provided links to protocols, 4.09% provided access to raw data, 3.09% provided access to materials, and 4.68% were pre-registered. None of the studies provided analysis scripts. Our review is cross-sectional in nature, including only PubMed-indexed journals published in English within a finite time period; our results should be interpreted in light of these considerations. CONCLUSION: Current urology research does not consistently provide the components needed to reproduce original studies. Collaborative efforts from investigators and journal editors are needed to improve research quality while minimizing waste and patient risk.


Subject(s)
Urology , Cross-Sectional Studies , Humans , Reproducibility of Results
4.
Clin Breast Cancer ; 22(6): 588-600, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35676189

ABSTRACT

OBJECTIVE: The aim of this study was to assess the methodological quality and accuracy of reporting within systematic reviews (SRs) that provide evidence for clinical practice guidelines (CPGs) on the management and treatment of breast cancer. METHODS: The 5 included CPGs for breast cancer management, from the National Comprehensive Cancer Network and the European Society for Medical Oncology, were searched for all SRs and meta-analyses. The characteristics of each study, along with its methodological reporting, were extracted from each SR using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and AMSTAR-2 (A Measurement Tool to Assess Systematic Reviews 2) tools. Our second objective was to compare SRs produced by Cochrane groups vs non-Cochrane groups. RESULTS: Our study included 5 CPGs for the management of breast cancer, containing 1341 total references, of which 69 were unique SRs that we analyzed. Mean PRISMA completeness was 76.3% (n = 69), while mean AMSTAR-2 completeness was 66.5% (n = 59). Cochrane SRs adhered far better to PRISMA (0.91 vs. 0.74) and AMSTAR-2 (0.95 vs. 0.62) guidelines than non-Cochrane SRs. CONCLUSION: The reporting quality of SRs that underpin CPGs in breast cancer management varies widely. We recommend that authors of SRs adopt a more uniform approach to assessing the quality of reporting within their studies. In addition, CPGs should use a more standardized method to seek out the evidence underlying their recommendations. With improved reporting, clinicians may have increased confidence in CPGs and thus greater utilization of CPGs in clinical decision making.


Subject(s)
Breast Neoplasms , Research Report , Breast Neoplasms/diagnosis , Breast Neoplasms/therapy , Female , Humans , Research Design
5.
Dysphagia ; 37(6): 1576-1585, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35194671

ABSTRACT

Esophageal motility disorders (EMDs) can have significant effects on quality of life. Patient-reported outcomes (PROs) provide valuable insight into the patient's perspective on their treatment and are increasingly used in randomized controlled trials (RCTs). Thus, our investigation aimed to evaluate the completeness of reporting of PROs in RCTs pertaining to EMDs. We searched MEDLINE, Embase, and the Cochrane Central Register of Controlled Trials for published RCTs focused on EMDs. Included RCTs were published between 2006 and 2020, reported a primary outcome related to an EMD, and listed at least one PRO measure as a primary or secondary outcome. Investigators screened and extracted data in a masked, duplicate fashion. Data extraction was carried out using both the CONSORT-PRO adaptation and the Cochrane Collaboration Risk of Bias 2.0 tool. We assessed overall mean percent completion of the CONSORT-PRO adaptation, and a bivariate regression analysis was used to assess relationships between trial characteristics and completeness of reporting. The overall mean percent completion of the CONSORT-PRO checklist adaptation was 43.86% (SD = 17.03). RCTs with a primary PRO had a mean completeness of 47.73% (SD = 17.32), and RCTs with a secondary PRO had a mean completeness of 35.36% (SD = 13.52). RCTs with a conflict of interest statement were 18.15% (SE = 6.5) more complete (t = 2.79, P = .009) than trials lacking such a statement. No additional significant associations between trial characteristics and completeness of reporting were found. PRO reporting completeness in RCTs focused on EMDs was inadequate. We urge EMD researchers to prioritize complete PRO reporting to foster patient-centered research in future RCTs on EMDs.


Subject(s)
Esophageal Motility Disorders , Patient Reported Outcome Measures , Humans , Cross-Sectional Studies , Randomized Controlled Trials as Topic , Checklist
6.
Eur J Obstet Gynecol Reprod Biol ; 269: 24-29, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34954422

ABSTRACT

OBJECTIVE: Reproducibility is a core tenet of scientific research. A reproducible study is one whose results can be recreated using the same methodology and materials as the original researchers. Unfortunately, reproducibility is not a standard to which the majority of research currently adheres. METHODS: Our cross-sectional survey evaluated 300 trials in the field of Obstetrics and Gynecology. Our primary objective was to assess nine indicators of reproducibility and transparency. These indicators include availability of data, analysis scripts, pre-registration information, and study protocols, as well as funding source, conflict of interest statements, and whether the study was available via Open Access. RESULTS: Of the 300 trials in our sample, 208 contained empirical data that could be assessed for reproducibility. None of the trials in our sample provided a link to their protocols or a statement on availability of materials, and none were replication studies. Just 10.58% provided a statement regarding their data availability, while only 5.82% provided a statement on preregistration. A total of 25.85% failed to report the presence or absence of conflicts of interest, and 54.08% did not state the origin of their funding. CONCLUSION: In the studies we examined, research in the field of Obstetrics and Gynecology is not consistently reproducible and frequently lacks conflict of interest disclosure. The consequences could be far-reaching, including increased research waste, widespread acceptance of misleading results, and erroneous conclusions guiding clinical decision-making.


Subject(s)
Gynecology , Obstetrics , Cross-Sectional Studies , Disclosure , Female , Humans , Pregnancy , Reproducibility of Results
7.
Article in English | MEDLINE | ID: mdl-38804666

ABSTRACT

Background: We surveyed addiction journal editorial board members to better understand their opinions toward data-sharing. Methods: Survey items consisted of Likert-type (e.g., one-to-five scale), multiple-choice, and free-response questions. Journal websites were searched for names and email addresses, and emails were distributed using SurveyMonkey. Descriptive statistics were used to characterize the responses. Results: We received 178 responses (of 1039; 17.1%). Of these, 174 individuals (97.8%) agreed to participate in our study. Most respondents did not know whether their journal had a data-sharing policy. Board members "somewhat agree" that addiction journals should recommend but not require data-sharing for submitted manuscripts [M=4.09 (SD=0.06); 95% CI: 3.97-4.22]. Items with the highest perceived benefit ratings were "secondary data use (e.g., meta-analysis)" [M=3.44 (SD=0.06); 95% CI: 3.31-3.56] and "increased transparency" [M=3.29 (SD=0.07); 95% CI: 3.14-3.43]. Items perceived to be the greatest barriers to data-sharing included "lack of metadata standards" [M=3.21 (SD=0.08); 95% CI: 3.06-3.36], "no incentive" [M=3.43 (SD=0.07); 95% CI: 3.30-3.57], "inadequate resources" [M=3.53 (SD=0.05); 95% CI: 3.42-3.63], and "protection of privacy" [M=3.22 (SD=0.07); 95% CI: 3.07-3.36]. Conclusion: Our results suggest that addiction journal editorial board members view data-sharing as important within the research community. However, most board members are unaware of their journals' data-sharing policies, and most believe data-sharing should be recommended but not required. Future efforts aimed at better understanding common reservations about and benefits of data-sharing, as well as avenues to optimize data-sharing while minimizing potential risks, are warranted.

8.
West J Emerg Med ; 22(4): 963-971, 2021 Jul 14.
Article in English | MEDLINE | ID: mdl-35353995

ABSTRACT

INTRODUCTION: We aimed to assess the reproducibility of empirical research by determining the availability of components required for replication of a study, including materials, raw data, analysis scripts, protocols, and preregistration. METHODS: We used the National Library of Medicine catalog to identify MEDLINE-indexed emergency medicine (EM) journals; 30 journals met the inclusion criteria. From January 1, 2014 to December 31, 2018, 300 publications were randomly sampled using a PubMed search. Additionally, we included four high-impact general medicine journals, which added 106 publications. Two investigators were blinded for independent extraction. Extracted data included statements regarding the availability of materials, data, analysis scripts, protocols, and registration. RESULTS: Our search returned 25,473 articles, from which we randomly selected 300; of these, 287 met the inclusion criteria. Of the 106 publications added from high-impact journals, 77 met the inclusion criteria. Together, 364 publications were included, of which 212 contained empirical data to analyze. Of the eligible empirical articles, 2.49% (95% confidence interval [CI], 0.33% to 4.64%) provided a material statement, 9.91% (95% CI, 5.88% to 13.93%) provided a data statement, none provided access to analysis scripts, 25.94% (95% CI, 20.04% to 31.84%) linked the protocol, and 39.15% (95% CI, 32.58% to 45.72%) were preregistered. CONCLUSION: Studies in EM lack the indicators required for reproducibility. The majority fail to report the components needed to reproduce research and ensure its credibility. An intervention is required and can be achieved through the collaboration of researchers, peer reviewers, funding agencies, and journals.
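The confidence intervals in the abstract above are consistent with the normal-approximation (Wald) interval for a proportion with n = 212; for instance, the data-statement estimate of 9.91% corresponds to roughly 21 of the 212 empirical articles (a count inferred here from the rounded percentage, not stated in the abstract). A minimal sketch:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the estimate
    return p_hat - z * se, p_hat + z * se

# 21 of 212 empirical articles (9.91%) provided a data statement
lo, hi = wald_ci(21 / 212, 212)
print(f"{lo:.2%} to {hi:.2%}")  # 5.88% to 13.93%, matching the reported CI
```

The same calculation reproduces the other reported intervals (e.g., 25.94% yields 20.04% to 31.84%); for proportions this close to zero, a Wilson or exact interval is often preferred, which may explain the 2.49% item's lower bound of 0.33% remaining above zero.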


Subject(s)
Emergency Medicine , Humans , Publications , Reproducibility of Results
9.
Heart ; 107(2): 120-126, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32826286

ABSTRACT

OBJECTIVES: It has been suggested that biomedical research is facing a reproducibility issue, yet the extent of reproducible research within the cardiology literature remains unclear. Thus, our main objective was to assess the quality of research published in cardiology journals by checking for the presence of eight indicators of reproducibility and transparency. METHODS: Using a cross-sectional study design, we conducted an advanced search of the National Library of Medicine catalogue for publications in cardiology journals published between 1 January 2014 and 31 December 2019. After the initial list of eligible cardiology publications was generated, we searched for full-text PDF versions using Open Access, Google Scholar, and PubMed. Using a pilot-tested Google Form, a random sample of 532 publications was assessed for the presence of eight indicators of reproducibility and transparency. RESULTS: A total of 232 eligible publications were included in our final analysis. The majority of publications (224/232, 96.6%) did not provide access to complete and unmodified data sets, nearly all (229/232, 98.7%) failed to provide step-by-step analysis scripts, and 228/232 (98.3%) did not provide access to complete study protocols. CONCLUSIONS: The presentation of studies published in cardiology journals would make reproducing study outcomes challenging, at best. Solutions to increase the reproducibility and transparency of publications in cardiology journals are needed. Moving forward, addressing inadequate sharing of materials, raw data, and key methodological details might improve the landscape of reproducible research within the field.


Subject(s)
Biomedical Research , Cardiology , Publishing/standards , Cross-Sectional Studies
10.
Aesthet Surg J ; 41(6): 707-719, 2021 May 18.
Article in English | MEDLINE | ID: mdl-32530461

ABSTRACT

BACKGROUND: With the increasing number of randomized controlled trials being conducted and published in plastic surgery, complete reporting of trial information is critical for readers to properly evaluate a trial's methodology and arrive at appropriate conclusions about its merits and applicability to patients. The Template for Intervention Description and Replication (TIDieR) checklist was introduced to address the limited guidance for reporting trial interventions. OBJECTIVES: The authors applied the TIDieR checklist to evaluate the completeness of intervention reporting in randomized controlled trials in plastic surgery, compare the quality of intervention reporting before and after the guideline was published, and evaluate characteristics associated with TIDieR compliance. METHODS: A PubMed search identified one cohort published prior to the release of TIDieR and another published after its release. From the final sample, the TIDieR checklist was applied to intervention descriptions, and relevant study characteristics were extracted in a duplicate, blinded manner. RESULTS: In total, 130 trials were included for analysis. The mean TIDieR score was 6.4 of 12. Five items were reported 90% of the time, and 4 items were reported less than 10% of the time. We found that TIDieR publication did not affect intervention reporting (P = 0.22). CONCLUSIONS: Our study identified areas in which intervention reporting could be improved. The extent of TIDieR adoption by trialists appears to be limited, and greater efforts are needed to disseminate this reporting guideline if widespread uptake is expected. Alternatively, it may be beneficial to incorporate TIDieR into the more widely recognized Consolidated Standards of Reporting Trials statement.


Subject(s)
Surgery, Plastic , Checklist , Humans , Randomized Controlled Trials as Topic , Research Design , Research Report
11.
Obesity (Silver Spring) ; 29(2): 285-293, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33340283

ABSTRACT

OBJECTIVE: Randomized controlled trials (RCTs) play a crucial role in the research and advancement of medical treatment. A cross-sectional study design was used to analyze the completeness of intervention reporting using the Template for Intervention Description and Replication (TIDieR) checklist and to evaluate factors associated with intervention reporting. We also compared the completeness of intervention reporting before and after the publication of TIDieR. METHODS: PubMed was searched for RCTs in the top 10 obesity journals per the Google h5-index. After excluding non-RCTs, 300 articles were randomly sampled. After assessing each publication for eligibility, two authors (SLR and DT) extracted data related to intervention reporting from records in an independent, masked fashion. Data were then verified and analyzed. RESULTS: The analysis revealed that the quality of intervention reporting is quite variable. Overall, there was no statistically significant difference in the quality of intervention reporting before and after the release of the TIDieR guidelines. In general, obesity research reports interventions well in areas such as the mode of delivery, material lists for the intervention, and procedure lists. However, we identified four main areas in which obesity researchers can improve reporting quality, including describing the expertise and background of intervention providers and providing statements regarding assessment of intervention fidelity. CONCLUSIONS: Urgent action is warranted to improve the quality of reporting in obesity research, a fundamental component of obesity management. This will likely require a unified approach from researchers, journals, and funding sources.


Subject(s)
Biomedical Research/standards , Obesity , Periodicals as Topic/standards , Randomized Controlled Trials as Topic , Checklist , Cross-Sectional Studies , Humans
13.
Res Integr Peer Rev ; 5: 5, 2020.
Article in English | MEDLINE | ID: mdl-32161667

ABSTRACT

BACKGROUND: The objective of this study was to evaluate the nature and extent of reproducible and transparent research practices in neurology publications. METHODS: The NLM catalog was used to identify MEDLINE-indexed neurology journals. A PubMed search of these journals was conducted to retrieve publications over a 5-year period from 2014 to 2018, and a random sample of publications was extracted. Two authors conducted data extraction in a blinded, duplicate fashion using a pilot-tested Google Form, which prompted data extractors to determine whether publications provided access to items such as study materials, raw data, analysis scripts, and protocols. In addition, we determined whether the publication was included in a replication study or systematic review, was preregistered, had a conflict of interest declaration, specified funding sources, and was open access. RESULTS: Our search identified 223,932 publications meeting the inclusion criteria, from which 400 were randomly sampled. Of these, 389 were accessible, yielding 271 publications with empirical data for analysis. Our results indicate that 9.4% provided access to materials, 9.2% provided access to raw data, 0.7% provided access to analysis scripts, 0.7% linked the protocol, and 3.7% were preregistered. A third of sampled publications lacked funding or conflict of interest statements. No publications from our sample were included in replication studies, but a fifth were cited in a systematic review or meta-analysis. CONCLUSIONS: Currently, published neurology research does not consistently provide the information needed for reproducibility. Poor research reporting can both affect patient care and increase research waste. Collaborative intervention by authors, peer reviewers, journals, and funding sources is needed to mitigate this problem.

14.
Int J Surg Protoc ; 19: 8-10, 2020.
Article in English | MEDLINE | ID: mdl-32025594

ABSTRACT

BACKGROUND: Randomized controlled trials (RCTs) are critical in developing new therapeutic approaches. Historically, RCTs have been uncommon in plastic surgery, making up less than 2% of all publications. There has recently been an increase in RCTs in plastic surgery, but the quality of these articles has yet to be assessed. We aim to determine the completeness of intervention reporting in plastic surgery RCTs using the TIDieR checklist. METHODS: A search of PubMed for RCTs published in the top 10 plastic surgery journals, as determined by the Google h5-index, will be performed by two investigators. All identified articles will be isolated, and a random selection of 300 articles will be screened for inclusion by two different investigators. All types of RCTs will be included. Articles will be excluded if they are nonrandomized, observational, follow-up studies, or secondary analyses; full exclusion criteria can be found within this protocol. Extracted data include all 12 points of the TIDieR checklist, journal, intervention type, sample size, and funding source; a complete list of the data to be extracted is given within this protocol. All data extraction will be performed by two independent investigators, all work will be verified by both investigators, and any discrepancies will be resolved via consensus or third-party adjudication. DISSEMINATION: We plan to publish this review in a peer-reviewed journal. We may also present it at local and/or national conferences.
