Results 1 - 20 of 559
1.
Trials ; 23(1): 601, 2022 Jul 27.
Article in English | MEDLINE | ID: mdl-35897110

ABSTRACT

BACKGROUND: To assess the quality of reporting of RCT protocols approved by UK research ethics committees before and after the publication of the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) guideline. METHODS: We had access to RCT study protocols that received ethical approval in the UK in 2012 (n=103) and 2016 (n=108). From those, we assessed adherence to the 33 SPIRIT items (a total of 64 components across the 33 items). We descriptively analysed adherence to the SPIRIT guideline as the proportion of adequately reported items (median and interquartile range [IQR]) and stratified the results by year of approval and sponsor. RESULTS: The proportion of reported SPIRIT items increased from a median of 64.9% (IQR, 57.6-69.2%) in 2012 to a median of 72.5% (IQR, 65.3-78.3%) in 2016. Industry-sponsored RCTs reported more SPIRIT items in 2012 (median 67.4%; IQR, 64.1-69.4%) compared to non-industry-sponsored trials (median 59.8%; IQR, 46.5-67.7%). This gap between industry- and non-industry-sponsored trials widened in 2016 (industry-sponsored: median 75.6%; IQR, 71.2-79.0% vs non-industry-sponsored: median 65.3%; IQR, 51.6-76.3%). CONCLUSIONS: Adherence to the SPIRIT guideline improved in the UK from 2012 to 2016 but remains at a modest level, especially for non-industry-sponsored RCTs.
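The descriptive summary used above (median adherence with IQR, stratified by year) can be sketched as follows. The per-protocol adherence percentages below are invented for illustration and are not the trial data.

```python
# Sketch of the descriptive analysis: per-protocol SPIRIT adherence as a
# percentage of the 64 assessed components, summarised as median and
# interquartile range (IQR) by year of approval. Data are invented.
from statistics import median, quantiles

adherence = {  # year -> list of per-protocol adherence percentages
    2012: [57.6, 64.9, 69.2, 61.0, 66.3],
    2016: [65.3, 72.5, 78.3, 70.1, 74.8],
}

for year, values in adherence.items():
    q1, _, q3 = quantiles(values, n=4)  # quartile cut points
    print(f"{year}: median {median(values):.1f}% (IQR, {q1:.1f}-{q3:.1f}%)")
```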


Subject(s)
Clinical Trial Protocols as Topic , Ethics Committees, Research , Guideline Adherence , Humans , United Kingdom
2.
Eur Heart J Qual Care Clin Outcomes ; 8(3): 324-332, 2022 05 05.
Article in English | MEDLINE | ID: mdl-33502466

ABSTRACT

AIMS: Using bilateral internal thoracic arteries (BITAs) for coronary artery bypass grafting (CABG) has been suggested to improve survival compared to CABG using single internal thoracic arteries (SITAs) for patients with advanced coronary artery disease. We used data from the Arterial Revascularization Trial (ART) to assess the long-term cost-effectiveness of BITA grafting compared to SITA grafting from an English health system perspective. METHODS AND RESULTS: Resource use, healthcare costs, and quality-adjusted life years (QALYs) were assessed across 10 years of follow-up from an intention-to-treat perspective. Missing data were addressed using multiple imputation. Incremental cost-effectiveness ratios were calculated, with uncertainty characterized using non-parametric bootstrapping. Results were extrapolated beyond 10 years using Gompertz functions for survival and linear models for total cost and utility. Total mean costs at 10 years of follow-up were £17 594 in the BITA arm and £16 462 in the SITA arm [mean difference £1133, 95% confidence interval (CI) £239 to £2026, P = 0.015]. Total mean QALYs at 10 years were 6.54 in the BITA arm and 6.57 in the SITA arm (adjusted mean difference -0.01, 95% CI -0.2 to 0.1, P = 0.883). At 10 years, BITA grafting had a 33% probability of being cost-effective compared to SITA, assuming a cost-effectiveness threshold of £20 000. Lifetime extrapolation increased the probability of BITA being cost-effective to 51%. CONCLUSIONS: BITA grafting has significantly higher costs but similar quality-adjusted survival at 10 years compared to SITA grafting. Extrapolation suggests this could change over a lifetime horizon.
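A minimal sketch of the non-parametric bootstrap used above to characterise uncertainty: resample patients within each arm, compute incremental cost and incremental QALYs, and estimate the probability of cost-effectiveness via the incremental net monetary benefit (INMB = lambda * dQALY - dCost) at the £20 000 threshold. The per-patient data are simulated values loosely echoing the reported 10-year means, not trial data.

```python
# Bootstrap sketch for the cost-effectiveness comparison. All per-patient
# (cost, QALY) pairs below are invented for illustration.
import random

random.seed(1)
n = 200
bita = [(random.gauss(17594, 4000), random.gauss(6.54, 1.0)) for _ in range(n)]
sita = [(random.gauss(16462, 4000), random.gauss(6.57, 1.0)) for _ in range(n)]

LAMBDA = 20_000  # cost-effectiveness threshold, GBP per QALY
B = 2000         # bootstrap replicates
ce = 0
for _ in range(B):
    b = [random.choice(bita) for _ in range(n)]  # resample within arm
    s = [random.choice(sita) for _ in range(n)]
    d_cost = sum(c for c, _ in b) / n - sum(c for c, _ in s) / n
    d_qaly = sum(q for _, q in b) / n - sum(q for _, q in s) / n
    if LAMBDA * d_qaly - d_cost > 0:  # positive INMB => cost-effective
        ce += 1

print(f"P(BITA cost-effective at {LAMBDA}/QALY) ~ {ce / B:.2f}")
```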


Subject(s)
Coronary Artery Disease , Mammary Arteries , Coronary Artery Bypass/methods , Coronary Artery Disease/surgery , Cost-Benefit Analysis , Humans , Mammary Arteries/transplantation , Treatment Outcome
3.
Br J Sports Med ; 55(18): 1009-1017, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33514558

ABSTRACT

Misuse of statistics in medical and sports science research is common and may lead to detrimental consequences for healthcare. Many authors, editors and peer reviewers of medical papers will not have expert knowledge of statistics or may be unconvinced about the importance of applying correct statistics in medical research. Although there are guidelines on reporting statistics in medical papers, a checklist covering the more general and commonly seen aspects of statistics to assess when peer reviewing an article is needed. In this article, we propose a CHecklist for statistical Assessment of Medical Papers (CHAMP) comprising 30 items related to the design and conduct, data analysis, reporting and presentation, and interpretation of a research paper. While CHAMP is primarily aimed at editors and peer reviewers during the statistical assessment of a medical paper, we believe it will serve as a useful reference to improve authors' and readers' practice in their use of statistics in medical research. We strongly encourage editors and peer reviewers to consult CHAMP when assessing manuscripts for potential publication. Authors may also apply CHAMP to ensure the validity of their statistical approach and reporting of medical research, and readers may consider using CHAMP to enhance their statistical assessment of a paper.


Subject(s)
Biomedical Research , Checklist , Research Design , Statistics as Topic , Delivery of Health Care , Humans , Peer Review, Research , Sports Medicine/statistics & numerical data , Statistics as Topic/standards
5.
Glob Epidemiol ; 3: 100045, 2021 Nov.
Article in English | MEDLINE | ID: mdl-37635723

ABSTRACT

Introduced in 1983, the Bland-Altman method is now considered the standard approach for assessing agreement between two methods of measurement. The method is so widely used across disciplines that the 1986 Bland-Altman Lancet paper has been ranked the 29th most highly cited paper of all time, across all fields. However, two papers, by Hopkins (2004) and Krouwer (2007), questioned the validity of the Bland-Altman analysis. We review the points raised in these critical papers and respond to them. The criticisms of the Bland-Altman method in these papers are scientifically misleading. Hopkins misapplied the Bland-Altman methodology to a research question of model validation and also incorrectly used least-squares regression when there is measurement error in the predictor. The problem with Krouwer's paper is that it draws a sweeping generalisation from a very narrow and somewhat unrealistic situation. The method proposed by Bland and Altman should be used when the research question is one of method comparison.
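For reference, the core of a Bland-Altman analysis is simple: the bias is the mean of the within-pair differences, and the 95% limits of agreement are bias ± 1.96 times the standard deviation of those differences. A sketch with invented paired measurements:

```python
# Bland-Altman analysis on invented paired measurements from two methods.
from statistics import mean, stdev

method_a = [10.2, 11.5, 9.8, 12.1, 10.9, 11.8, 9.5, 10.4]
method_b = [10.0, 11.9, 9.5, 12.4, 11.2, 11.5, 9.9, 10.1]

diffs = [a - b for a, b in zip(method_a, method_b)]
bias = mean(diffs)                      # mean difference between methods
loa_low = bias - 1.96 * stdev(diffs)    # lower 95% limit of agreement
loa_high = bias + 1.96 * stdev(diffs)   # upper 95% limit of agreement
print(f"bias = {bias:.2f}, 95% limits of agreement: [{loa_low:.2f}, {loa_high:.2f}]")
```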

6.
J Clin Epidemiol ; 127: 96-104, 2020 11.
Article in English | MEDLINE | ID: mdl-32712175

ABSTRACT

OBJECTIVES: Over 400 reporting guidelines are currently published, but the frequency of their use by authors to accurately and transparently report research remains unclear. This study examined citation counts of reporting guidelines and the characteristics contributing to their citation impact. STUDY DESIGN AND SETTING: The Web of Science database was searched for citation counts of all reporting guidelines with a minimum citation age of 5 years. The total citation impact, mean citation impact, and the factors contributing to 2- and 5-year citation rates were established. RESULTS: The search identified 296 articles of reporting guidelines from 1995 to 2013. The mean number of citations per year was 32.4 (95% confidence interval, 22.3-42.4 citations). The factors associated with 2- and 5-year citation performance of reporting guidelines included the following: open access to the reporting guideline, field of the publishing journal (general vs. specialized medical journal), impact factor of the publishing journal, simultaneous publication in multiple journals, and a male first author. CONCLUSION: The citation rate across reporting guidelines varied with journal impact factor, open access publication, field of the publishing journal, simultaneous publications, and a male first author. Gaps in citations highlight opportunities to increase visibility and encourage author use of reporting guidelines.


Subject(s)
Bibliometrics , Guidelines as Topic , Randomized Controlled Trials as Topic/standards , Research Report/standards , Authorship , Confidence Intervals , Data Analysis , Journal Impact Factor , Periodicals as Topic/statistics & numerical data , Sex Factors , Time Factors
7.
Trials ; 21(1): 528, 2020 Jun 17.
Article in English | MEDLINE | ID: mdl-32546273

ABSTRACT

Adaptive designs (ADs) allow pre-planned changes to an ongoing trial without compromising the validity of conclusions, and it is essential to distinguish pre-planned from unplanned changes that may also occur. The reporting of ADs in randomised trials is inconsistent and needs improving. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised. This extension to the Consolidated Standards Of Reporting Trials (CONSORT) 2010 statement was developed to enhance the reporting of randomised AD clinical trials. We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting. Members of the CONSORT Group were involved during the development process. The paper presents the ACE checklists for AD randomised trial reports and abstracts, as well as an explanation with examples to aid the application of the guideline. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text. The intention is to enhance transparency and improve reporting of AD randomised trials to improve the interpretability of their results and reproducibility of their methods, results and inference. We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits. In order to encourage its wide dissemination, this article is freely accessible on the BMJ and Trials journal websites. "To maximise the benefit to society, you need to not just do research but do it well" (Douglas G Altman).


Subject(s)
Checklist/standards , Consensus , Publishing/standards , Randomized Controlled Trials as Topic/standards , Research Design/standards , Delphi Technique , Guidelines as Topic , Humans , Periodicals as Topic , Quality Control , Reproducibility of Results
8.
BMJ ; 369: m115, 2020 06 17.
Article in English | MEDLINE | ID: mdl-32554564

ABSTRACT

Adaptive designs (ADs) allow pre-planned changes to an ongoing trial without compromising the validity of conclusions, and it is essential to distinguish pre-planned from unplanned changes that may also occur. The reporting of ADs in randomised trials is inconsistent and needs improving. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised. This extension to the Consolidated Standards Of Reporting Trials (CONSORT) 2010 statement was developed to enhance the reporting of randomised AD clinical trials. We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting. Members of the CONSORT Group were involved during the development process. The paper presents the ACE checklists for AD randomised trial reports and abstracts, as well as an explanation with examples to aid the application of the guideline. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text. The intention is to enhance transparency and improve reporting of AD randomised trials to improve the interpretability of their results and reproducibility of their methods, results and inference.
We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits.


Subject(s)
Checklist , Consensus , Publishing/standards , Randomized Controlled Trials as Topic/standards , Research Design/standards , Checklist/standards , Delphi Technique , Guidelines as Topic , Humans , Periodicals as Topic , Quality Control , Reproducibility of Results
9.
Ann Intern Med ; 2020 Jun 02.
Article in English | MEDLINE | ID: mdl-32479165

ABSTRACT

Clear and informative reporting in titles and abstracts is essential to help readers and reviewers identify potentially relevant studies and decide whether to read the full text. Although the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) statement provides general recommendations for reporting titles and abstracts, more detailed guidance seems to be desirable. The authors present TRIPOD for Abstracts, a checklist and corresponding guidance for reporting prediction model studies in abstracts. A list of 32 potentially relevant items was the starting point for a modified Delphi procedure involving 110 experts, of whom 71 (65%) participated in the web-based survey. After 2 Delphi rounds, the experts agreed on 21 items as being essential to report in abstracts of prediction model studies. This number was reduced by merging some of the items. In a third round, participants provided feedback on a draft version of TRIPOD for Abstracts. The final checklist contains 12 items and applies to journal and conference abstracts that describe the development or external validation of a diagnostic or prognostic prediction model, or the added value of predictors to an existing model, regardless of the clinical domain or statistical approach used.

10.
J Clin Epidemiol ; 122: 87-94, 2020 06.
Article in English | MEDLINE | ID: mdl-32184126

ABSTRACT

OBJECTIVES: Appropriate use of reporting guidelines of health research ensures that articles present readers with a consistent representation of study relevance, methodology, and results. This study evaluated the use of major reporting guidelines. STUDY DESIGN AND SETTING: A cross-sectional analysis of health research articles citing four major reporting guidelines indexed in the Web of Science Core Collection (up to June 24, 2018). Two independent reviews were performed in a random sample of 200 articles, including clinical trials (N = 50), economic evaluations (N = 50), systematic reviews (N = 50), and animal research studies (N = 50). The use of reporting guidelines to guide the reporting of research studies was considered appropriate. Inappropriate uses included the use of the reporting guidelines as a tool to assess the methodological quality of studies or as a guideline on how to design and conduct the studies. RESULTS: Across all selected reporting guidelines, appropriate use of reporting guidelines was observed in only 39% (95% CI: 32-46%; 78/200) of articles. By contrast, inappropriate use was observed in 41% (95% CI: 34-48%; 82/200), and unclear/other use was observed in 20% (95% CI: 15-26%; 40/200). CONCLUSIONS: Reporting guidelines of health research studies are frequently used inappropriately. Authors may require further education around appropriate use of the reporting guidelines in research reporting.
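The proportions with 95% confidence intervals reported above can be reproduced with a standard interval for a binomial proportion. The Wilson score interval below is an assumption for illustration; the paper does not state which interval method was used.

```python
# Wilson score interval for a binomial proportion, applied to the reported
# count of appropriate use (78 of 200 articles; reported as 39%, CI 32-46%).
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(78, 200)
print(f"{78/200:.0%} (95% CI {lo:.1%}-{hi:.1%})")
```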


Subject(s)
Biomedical Research/statistics & numerical data , Biomedical Research/standards , Guidelines as Topic , Research Report/standards , Cross-Sectional Studies , Humans
11.
Int J Epidemiol ; 49(3): 968-978, 2020 06 01.
Article in English | MEDLINE | ID: mdl-32176282

ABSTRACT

BACKGROUND: It is unclear how multiple treatment comparisons are managed in the analysis of multi-arm trials, particularly in relation to reducing type I (false positive) and type II (false negative) errors. METHODS: We conducted a cohort study of clinical-trial protocols that were approved by research ethics committees in the UK, Switzerland, Germany and Canada in 2012. We examined the use of multiple-testing procedures to control the overall type I error rate. We created a decision tool to determine the need for multiple-testing procedures. We compared the result of the decision tool to the analysis plan in the protocol. We also compared the pre-specified analysis plans in trial protocols to their publications. RESULTS: Sixty-four protocols for multi-arm trials were identified, of which 50 involved multiple testing. Nine of 50 trials (18%) used a single-step multiple-testing procedure such as a Bonferroni correction and 17 (38%) used an ordered sequence of primary comparisons to control the overall type I error. Based on our decision tool, 45 of 50 protocols (90%) required use of a multiple-testing procedure but only 28 of the 45 (62%) accounted for multiplicity in their analysis or provided a rationale if no multiple-testing procedure was used. We identified 32 protocol-publication pairs, of which 8 planned a global-comparison test and 20 planned a multiple-testing procedure in their trial protocol. However, four of these eight trials (50%) did not use the planned global-comparison test. Likewise, 3 of the 20 trials (15%) did not perform the planned multiple-testing procedure in the publication. The sample size of our study was small and we did not have access to statistical-analysis plans for the included trials. CONCLUSIONS: Strategies to reduce type I and type II errors are inconsistently employed in multi-arm trials.
Important analytical differences exist between planned analyses in clinical-trial protocols and subsequent publications, which may suggest selective reporting of analyses.
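The two strategies most often found in the protocols above, a single-step Bonferroni correction and an ordered sequence of primary comparisons (fixed-sequence testing), can be sketched as follows; the p-values are invented.

```python
# Two ways to control the overall type I error across multiple primary
# comparisons in a multi-arm trial. The p-values below are invented.
ALPHA = 0.05
p_values = [0.012, 0.030, 0.041]  # one p-value per treatment-vs-control comparison

# Bonferroni: test each comparison at alpha / k.
k = len(p_values)
bonferroni = [p < ALPHA / k for p in p_values]

# Fixed sequence: test comparisons in a pre-specified order at full alpha,
# stopping (all subsequent comparisons non-significant) at the first failure.
fixed_sequence = []
for p in p_values:
    if p < ALPHA and (not fixed_sequence or fixed_sequence[-1]):
        fixed_sequence.append(True)
    else:
        fixed_sequence.append(False)

print("Bonferroni:", bonferroni)          # [True, False, False]
print("Fixed sequence:", fixed_sequence)  # [True, True, True]
```

Note the trade-off visible even in this toy example: Bonferroni pays for multiplicity on every comparison, while the fixed-sequence procedure spends full alpha on each test but only in the pre-specified order.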


Subject(s)
Clinical Trials as Topic , Clinical Trials as Topic/methods , Cohort Studies , Humans , Multilevel Analysis , Research Design
12.
J Clin Epidemiol ; 117: 52-59, 2020 01.
Article in English | MEDLINE | ID: mdl-31585174

ABSTRACT

OBJECTIVES: Factorial designs can allow efficient evaluation of multiple treatments within a single trial. We evaluated the design, analysis, and reporting in a sample of factorial trials. STUDY DESIGN AND SETTING: Review of 2 × 2 factorial trials evaluating health-related interventions and outcomes in humans. Using Medline, we identified articles published between January 2015 and March 2018. We randomly selected 100 articles for inclusion. RESULTS: Most trials (78%) did not provide a rationale for using a factorial design. Only 63 trials (63%) assessed the interaction for the primary outcome, and 39/63 (62%) made a further assessment for at least one secondary outcome. 12/63 trials (19%) identified a significant interaction for the primary outcome and 16/39 trials (41%) for at least one secondary outcome. Inappropriate methods of analysis to protect against potential negative effects from interactions were common: 18 trials (18%) chose the analysis method based on a preliminary test for interaction, and 10 of the 75 trials (13%) conducting a factorial analysis included an interaction term in the model. CONCLUSION: Reporting of factorial trials was often suboptimal, and assessment of interactions was poor. Investigators often used inappropriate methods of analysis to try to protect against adverse effects of interactions.
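The interaction assessment discussed above can be illustrated from the four cell means of a 2 × 2 factorial trial: the interaction contrast compares the effect of treatment A in the presence versus the absence of treatment B. The cell means below are invented.

```python
# Interaction contrast in a 2x2 factorial trial, from invented cell means.
# Under no interaction, the effect of A is the same whether or not B is given.
cell_mean = {
    ("A", "B"): 14.0,        # both treatments
    ("A", "no B"): 12.0,     # A alone
    ("no A", "B"): 11.5,     # B alone
    ("no A", "no B"): 10.0,  # neither (control)
}

effect_A_without_B = cell_mean[("A", "no B")] - cell_mean[("no A", "no B")]
effect_A_with_B = cell_mean[("A", "B")] - cell_mean[("no A", "B")]
interaction = effect_A_with_B - effect_A_without_B

print(f"Effect of A without B: {effect_A_without_B}")  # 2.0
print(f"Effect of A with B:    {effect_A_with_B}")     # 2.5
print(f"Interaction contrast:  {interaction}")         # 0.5
```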


Subject(s)
Clinical Trials as Topic/standards , Research Design/standards , Clinical Trials as Topic/classification , Clinical Trials as Topic/statistics & numerical data , Data Interpretation, Statistical , Humans
13.
BMJ Open ; 9(12): e031031, 2019 12 09.
Article in English | MEDLINE | ID: mdl-31822541

ABSTRACT

The purpose of this paper is to help readers choose an appropriate observational study design for measuring an association between an exposure and disease incidence. We discuss cohort studies, sub-samples from cohorts (case-cohort and nested case-control designs), and population-based or hospital-based case-control studies. Appropriate study design is the foundation of a scientifically valid observational study. Mistakes in design are often irremediable. Key steps are understanding the scientific aims of the study and what is required to achieve them. Some designs will not yield the information required to realise the aims. The choice of design also depends on the availability of source populations and resources. Choosing an appropriate design requires balancing the pros and cons of various designs in view of study aims and practical constraints. We compare various cohort and case-control designs to estimate the effect of an exposure on disease incidence and mention how certain design features can reduce threats to study validity.


Subject(s)
Observational Studies as Topic/methods , Research Design , Case-Control Studies , Cohort Studies , Disease , Humans , Incidence , Risk
14.
BMC Med ; 17(1): 205, 2019 11 19.
Article in English | MEDLINE | ID: mdl-31744489

ABSTRACT

BACKGROUND: The peer review process has been questioned as it may fail to allow the publication of high-quality articles. This study aimed to evaluate the accuracy in identifying inadequate reporting in RCT reports by early career researchers (ECRs) using an online CONSORT-based peer-review tool (COBPeer) versus the usual peer-review process. METHODS: We performed a cross-sectional diagnostic study of 119 manuscripts, from BMC series medical journals, BMJ, BMJ Open, and Annals of Emergency Medicine reporting the results of two-arm parallel-group RCTs. One hundred and nineteen ECRs who had never reviewed an RCT manuscript were recruited from December 2017 to January 2018. Each ECR assessed one manuscript. To assess accuracy in identifying inadequate reporting, we used two tests: (1) ECRs assessing a manuscript using the COBPeer tool (after completing an online training module) and (2) the usual peer-review process. The reference standard was the assessment of the manuscript by two systematic reviewers. Inadequate reporting was defined as incomplete reporting or a switch in primary outcome and considered nine domains: the eight most important CONSORT domains and a switch in primary outcome(s). The primary outcome was the mean number of domains accurately classified (scale from 0 to 9). RESULTS: The mean (SD) number of domains (0 to 9) accurately classified per manuscript was 6.39 (1.49) for ECRs using COBPeer versus 5.03 (1.84) for the journal's usual peer-review process, with a mean difference [95% CI] of 1.36 [0.88-1.84] (p < 0.001). Concerning secondary outcomes, the sensitivity of ECRs using COBPeer versus the usual peer-review process in detecting incompletely reported CONSORT items was 86% [95% CI 82-89] versus 20% [16-24] and in identifying a switch in primary outcome 61% [44-77] versus 11% [3-26]. 
The specificity of ECRs using COBPeer versus the usual process in detecting incompletely reported CONSORT domains was 61% [57-65] versus 77% [74-81], and in identifying a switch in primary outcome 77% [67-86] versus 98% [92-100]. CONCLUSIONS: Trained ECRs using the COBPeer tool were more likely to detect inadequate reporting in RCTs than the usual peer-review process used by journals. Implementing a two-step peer-review process could help improve the quality of reporting. TRIAL REGISTRATION: ClinicalTrials.gov NCT03119376 (registered April 18, 2017).
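The sensitivity and specificity figures above follow from standard confusion-matrix arithmetic, where a "positive" is a CONSORT domain genuinely reported inadequately according to the reference standard. The counts below are invented to echo the reported percentages and are not the study's raw data.

```python
# Diagnostic-accuracy measures from a confusion matrix. Counts are invented.
tp, fn = 86, 14   # inadequately reported domains: detected / missed
tn, fp = 61, 39   # adequately reported domains: correctly passed / wrongly flagged

sensitivity = tp / (tp + fn)  # proportion of true problems detected
specificity = tn / (tn + fp)  # proportion of non-problems correctly passed
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```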


Subject(s)
Peer Review/standards , Research Report/standards , Cross-Sectional Studies , Humans , Peer Review/methods , Periodicals as Topic/standards , Publishing/standards
15.
Obes Rev ; 20(11): 1523-1541, 2019 11.
Article in English | MEDLINE | ID: mdl-31426126

ABSTRACT

Being able to draw accurate conclusions from childhood obesity trials is important to make advances in reversing the obesity epidemic. However, obesity research sometimes is not conducted or reported to appropriate scientific standards. To constructively draw attention to this issue, we present 10 errors that are commonly committed, illustrate each error with examples from the childhood obesity literature, and follow with suggestions on how to avoid these errors. These errors are as follows: using self-reported outcomes and teaching to the test; foregoing control groups and risking regression to the mean creating differences over time; changing the goal posts; ignoring clustering in studies that randomize groups of children; following the forking paths, subsetting, p-hacking, and data dredging; basing conclusions on tests for significant differences from baseline; equating "no statistically significant difference" with "equally effective"; ignoring intervention study results in favor of observational analyses; using one-sided testing for statistical significance; and stating that effects are clinically significant even though they are not statistically significant. We hope that compiling these errors in one article will serve as the beginning of a checklist to support fidelity in conducting, analyzing, and reporting childhood obesity research.
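One of the errors listed above, regression to the mean, is easy to demonstrate by simulation: children selected for extreme baseline values appear to improve at follow-up even with no intervention at all, because extreme scores partly reflect extreme measurement noise. All numbers below are invented.

```python
# Simulation of regression to the mean in an uncontrolled weight study.
import random

random.seed(42)
true_bmi = [random.gauss(20, 2) for _ in range(10_000)]
baseline = [t + random.gauss(0, 2) for t in true_bmi]   # noisy measurement 1
follow_up = [t + random.gauss(0, 2) for t in true_bmi]  # noisy measurement 2

# Select the "high baseline" group, as an uncontrolled trial might.
high = [(b, f) for b, f in zip(baseline, follow_up) if b > 24]
mean_base = sum(b for b, _ in high) / len(high)
mean_follow = sum(f for _, f in high) / len(high)

print(f"selected group: baseline {mean_base:.1f}, follow-up {mean_follow:.1f}")
# The follow-up mean falls back toward 20 with no treatment applied,
# which is why a control group is essential.
```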


Subject(s)
Pediatric Obesity/prevention & control , Research Report/standards , Weight Reduction Programs/standards , Biomedical Research , Child , Guidelines as Topic , Humans , Parents/education , Treatment Outcome
18.
BMJ Open ; 9(4): e025611, 2019 04 24.
Article in English | MEDLINE | ID: mdl-31023756

ABSTRACT

To promote uniformity in measuring adherence to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement, a reporting guideline for diagnostic and prognostic prediction model studies, we transformed the original 22 TRIPOD items into an adherence assessment form and defined adherence scoring rules; this should also facilitate comparability of future studies assessing its impact. TRIPOD-specific challenges encountered were the existence of different types of prediction model studies and possible combinations of these within publications. More general issues included dealing with multiple reporting elements, reference to information in another publication, and non-applicability of items. We recommend that our adherence assessment form be used by anyone (eg, researchers, reviewers, editors) evaluating adherence to TRIPOD, to make these assessments comparable. In general, when developing a form to assess adherence to a reporting guideline, we recommend formulating specific adherence elements (if needed, multiple per reporting guideline item) using unambiguous wording and considering issues of applicability in advance.


Subject(s)
Decision Support Techniques , Guideline Adherence/standards , Publishing/standards , Research Design/standards , Diagnostic Techniques and Procedures , Humans , Models, Theoretical
19.
JAMA ; 321(16): 1610-1620, 2019 Apr 23.
Article in English | MEDLINE | ID: mdl-31012939

ABSTRACT

IMPORTANCE: The quality of reporting of randomized clinical trials is suboptimal. In an era in which the need for greater research transparency is paramount, inadequate reporting hinders assessment of the reliability and validity of trial findings. The Consolidated Standards of Reporting Trials (CONSORT) 2010 Statement was developed to improve the reporting of randomized clinical trials, but the primary focus was on parallel-group trials with 2 groups. Multi-arm trials that use a parallel-group design (comparing treatments by concurrently randomizing participants to one of the treatment groups, usually with equal probability) but have 3 or more groups are relatively common. The quality of reporting of multi-arm trials varies substantially, making judgments and interpretation difficult. While the majority of the elements of the CONSORT 2010 Statement apply equally to multi-arm trials, some elements need adaptation, and, in some cases, additional issues need to be clarified. OBJECTIVE: To present an extension to the CONSORT 2010 Statement for reporting multi-arm trials to facilitate the reporting of such trials. DESIGN: A guideline writing group, which included all authors, formed following the CONSORT group meeting in 2014. The authors met in person and by teleconference bimonthly between 2014 and 2018 to develop and revise the checklist and the accompanying text, with additional discussions by email. A draft manuscript was circulated to the wider CONSORT group of 36 individuals, plus 5 other selected individuals known for their specialist knowledge in clinical trials, for review. Extensive feedback was received from 14 individuals and, after detailed consideration of their comments, a final revised version of the extension was prepared. FINDINGS: This CONSORT extension for multi-arm trials expands on 10 items of the CONSORT 2010 checklist and provides examples of good reporting and a rationale for the importance of each extension item. 
Key recommendations are that multi-arm trials should be identified as such and require clear objectives and hypotheses referring to all of the treatment groups. Primary treatment comparisons should be identified and authors should report the planned and unplanned comparisons resulting from multiple groups completely and transparently. If statistical adjustments for multiplicity are applied, the rationale and method used should be described. CONCLUSIONS AND RELEVANCE: This extension of the CONSORT 2010 Statement provides specific guidance for the reporting of multi-arm parallel-group randomized clinical trials and should help provide greater transparency and accuracy in the reporting of such trials.


Subject(s)
Publishing/standards , Randomized Controlled Trials as Topic/standards , Checklist , Humans , Periodicals as Topic/standards
20.
PLoS Med ; 16(2): e1002742, 2019 02.
Article in English | MEDLINE | ID: mdl-30789892

ABSTRACT

BACKGROUND: To our knowledge, no publication providing overarching guidance on the conduct of systematic reviews of observational studies of etiology exists. METHODS AND FINDINGS: Conducting Systematic Reviews and Meta-Analyses of Observational Studies of Etiology (COSMOS-E) provides guidance on all steps in systematic reviews of observational studies of etiology, from shaping the research question, defining exposure and outcomes, to assessing the risk of bias and statistical analysis. The writing group included researchers experienced in meta-analyses and observational studies of etiology. Standard peer-review was performed. While the structure of systematic reviews of observational studies on etiology may be similar to that for systematic reviews of randomised controlled trials, there are specific tasks within each component that differ. Examples include assessment for confounding, selection bias, and information bias. In systematic reviews of observational studies of etiology, combining studies in meta-analysis may lead to more precise estimates, but such greater precision does not automatically remedy potential bias. Thorough exploration of sources of heterogeneity is key when assessing the validity of estimates and causality. CONCLUSION: As many reviews of observational studies on etiology are being performed, this document may provide researchers with guidance on how to conduct and analyse such reviews.


Subject(s)
Meta-Analysis as Topic , Observational Studies as Topic/standards , Systematic Reviews as Topic , Humans , Observational Studies as Topic/methods , Selection Bias