Results 1 - 20 of 138
1.
J Clin Epidemiol ; 170: 111331, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38552725

ABSTRACT

OBJECTIVES: To generate a bank of items describing application and interpretation errors that can arise in pairwise meta-analyses in systematic reviews of interventions. STUDY DESIGN AND SETTING: MEDLINE, Embase, and Scopus were searched to identify studies describing types of errors in meta-analyses. Descriptions of errors and supporting quotes were extracted by multiple authors. Errors were reviewed at team meetings to determine if they should be excluded, reworded, or combined with other errors, and were categorized into broad categories of errors and subcategories within. RESULTS: Fifty articles met our inclusion criteria, leading to the identification of 139 errors. We identified 25 errors covering data extraction/manipulation, 74 covering statistical analyses, and 40 covering interpretation. Many of the statistical analysis errors related to the meta-analysis model (eg, using a two-stage strategy to determine whether to select a fixed or random-effects model) and statistical heterogeneity (eg, not undertaking an assessment for statistical heterogeneity). CONCLUSION: We generated a comprehensive bank of possible errors that can arise in the application and interpretation of meta-analyses in systematic reviews of interventions. This item bank of errors provides the foundation for developing a checklist to help peer reviewers detect statistical errors.

2.
Res Synth Methods ; 2024 Mar 17.
Article in English | MEDLINE | ID: mdl-38494429

ABSTRACT

BACKGROUND: Interrupted time series (ITS) studies contribute importantly to systematic reviews of population-level interventions. We aimed to develop and validate search filters to retrieve ITS studies in MEDLINE and PubMed. METHODS: A total of 1017 known ITS studies (published 2013-2017) were analysed using text mining to generate candidate terms. A control set of 1398 time-series studies were used to select differentiating terms. Various combinations of candidate terms were iteratively tested to generate three search filters. An independent set of 700 ITS studies was used to validate the filters' sensitivities. The filters were test-run in Ovid MEDLINE and the records randomly screened for ITS studies to determine their precision. Finally, all MEDLINE filters were translated to PubMed format and their sensitivities in PubMed were estimated. RESULTS: Three search filters were created in MEDLINE: a precision-maximising filter with high precision (78%; 95% CI 74%-82%) but moderate sensitivity (63%; 59%-66%), most appropriate when there are limited resources to screen studies; a sensitivity-and-precision-maximising filter with higher sensitivity (81%; 77%-83%) but lower precision (32%; 28%-36%), providing a balance between expediency and comprehensiveness; and a sensitivity-maximising filter with high sensitivity (88%; 85%-90%) but likely very low precision, useful when combined with specific content terms. Similar sensitivity estimates were found for PubMed versions. CONCLUSION: Our filters strike different balances between comprehensiveness and screening workload and suit different research needs. Retrieval of ITS studies would be improved if authors identified the ITS design in the titles.
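
As a companion to the sensitivity and precision figures quoted above, the following minimal Python sketch shows how such proportions and their 95% confidence intervals can be computed. The validation counts are hypothetical (chosen only to roughly echo the precision-maximising filter's figures), and the Wilson score interval is an assumption, as the abstract does not state which interval method was used.

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Point estimate and Wilson score 95% CI for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

# Hypothetical validation counts (not the study's data):
# of 700 known ITS studies, the filter retrieves 440 (sensitivity);
# of 500 screened retrieved records, 390 are true ITS studies (precision).
sens, sens_lo, sens_hi = wilson_ci(440, 700)
prec, prec_lo, prec_hi = wilson_ci(390, 500)
print(f"sensitivity {sens:.0%} (95% CI {sens_lo:.0%} to {sens_hi:.0%})")
print(f"precision   {prec:.0%} (95% CI {prec_lo:.0%} to {prec_hi:.0%})")
```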

3.
BMC Med Res Methodol ; 24(1): 31, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38341540

ABSTRACT

BACKGROUND: The Interrupted Time Series (ITS) is a robust design for evaluating public health and policy interventions or exposures when randomisation may be infeasible. Several statistical methods are available for the analysis and meta-analysis of ITS studies. We sought to empirically compare available methods when applied to real-world ITS data. METHODS: We sourced ITS data from published meta-analyses to create an online data repository. Each dataset was re-analysed using two ITS estimation methods. The level- and slope-change effect estimates (and standard errors) were calculated and combined using fixed-effect and four random-effects meta-analysis methods. We examined differences in meta-analytic level- and slope-change estimates, their 95% confidence intervals, p-values, and estimates of heterogeneity across the statistical methods. RESULTS: Of 40 eligible meta-analyses, data from 17 meta-analyses including 282 ITS studies were obtained (predominantly investigating the effects of public health interruptions (88%)) and analysed. We found that on average, the meta-analytic effect estimates, their standard errors and between-study variances were not sensitive to meta-analysis method choice, irrespective of the ITS analysis method. However, across ITS analysis methods, for any given meta-analysis, there could be small to moderate differences in meta-analytic effect estimates, and important differences in the meta-analytic standard errors. Furthermore, the confidence interval widths and p-values for the meta-analytic effect estimates varied depending on the choice of confidence interval method and ITS analysis method. CONCLUSIONS: Our empirical study showed that meta-analysis effect estimates, their standard errors, confidence interval widths and p-values can be affected by statistical method choice. These differences may importantly impact interpretations and conclusions of a meta-analysis and suggest that the statistical methods are not interchangeable in practice.


Subject(s)
Public Health , Humans , Interrupted Time Series Analysis
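
To illustrate the pooling step described in this abstract, here is a minimal sketch that combines hypothetical level-change estimates from ITS studies using inverse-variance fixed-effect and DerSimonian-Laird random-effects models. DerSimonian-Laird is only one of the random-effects variants compared in the study, and the data are invented for illustration.

```python
import numpy as np

def pool(estimates, ses, random_effects=True):
    """Inverse-variance pooling; DerSimonian-Laird tau^2 if random_effects."""
    y, se = np.asarray(estimates, float), np.asarray(ses, float)
    w = 1 / se**2
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w))) if random_effects else 0.0
    w_star = 1 / (se**2 + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    se_mu = np.sqrt(1 / np.sum(w_star))
    return mu, mu - 1.96 * se_mu, mu + 1.96 * se_mu, tau2

# Hypothetical immediate level-change estimates (and SEs) from three ITS studies
level_changes, ses = [-4.2, -2.8, -6.0], [1.5, 1.1, 2.3]
print(pool(level_changes, ses, random_effects=False))   # fixed-effect
print(pool(level_changes, ses, random_effects=True))    # DerSimonian-Laird random-effects
```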
4.
J Clin Epidemiol ; 169: 111281, 2024 May.
Article in English | MEDLINE | ID: mdl-38364875

ABSTRACT

Meta-analysis is a statistical method used to combine results from multiple studies, providing a quantitative summary of their findings. One of the fundamental decisions in conducting a meta-analysis is choosing an appropriate model to estimate the overall effect size and its CI. In this article, we focus on the common-effect (also referred to as the fixed-effect) model, and in a companion article, the random-effects model. These models are the two prevailing meta-analysis models employed in the literature. In this article, we outline the key assumption underlying the common-effect model, describe different common-effect methods (ie, inverse variance, Peto, and Mantel-Haenszel), and highlight characteristics of the meta-analysis that should be considered when selecting a method. Furthermore, we demonstrate the application of these methods to a dataset. Understanding the common-effect model is important for knowing when to use the model and how to interpret the overall effect size and its CI.


Subject(s)
Meta-Analysis as Topic , Models, Statistical , Humans , Data Interpretation, Statistical
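
As an illustration of one of the common-effect methods named in this abstract, the sketch below implements the Peto one-step pooled odds ratio; the 2x2 trial counts are hypothetical.

```python
import numpy as np

def peto_or(a, n1, c, n2):
    """Peto one-step pooled OR from per-study 2x2 counts.
    a, c: events in treatment/control; n1, n2: group sizes."""
    a, n1, c, n2 = map(lambda x: np.asarray(x, float), (a, n1, c, n2))
    N = n1 + n2
    m1 = a + c                      # total events per study
    m2 = N - m1                     # total non-events per study
    expected = n1 * m1 / N          # expected treatment events under the null
    v = n1 * n2 * m1 * m2 / (N**2 * (N - 1))
    log_or = np.sum(a - expected) / np.sum(v)
    se = 1 / np.sqrt(np.sum(v))
    return np.exp(log_or), np.exp(log_or - 1.96 * se), np.exp(log_or + 1.96 * se)

# Hypothetical trials: events and group sizes in treatment and control
print(peto_or(a=[12, 5, 20], n1=[100, 60, 150], c=[20, 9, 33], n2=[100, 62, 148]))
```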
5.
Res Synth Methods ; 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38316613

ABSTRACT

We aimed to explore, in a sample of systematic reviews (SRs) with meta-analyses of the association between food/diet and health-related outcomes, whether systematic reviewers selectively included study effect estimates in meta-analyses when multiple effect estimates were available. We randomly selected SRs of food/diet and health-related outcomes published between January 2018 and June 2019. We selected the first presented meta-analysis in each review (index meta-analysis), and extracted from study reports all study effect estimates that were eligible for inclusion in the meta-analysis. We calculated the Potential Bias Index (PBI) to quantify and test for evidence of selective inclusion. The PBI ranges from 0 to 1; values above or below 0.5 suggest selective inclusion of effect estimates more or less favourable to the intervention, respectively. We also compared the index meta-analytic estimate to the median of a randomly constructed distribution of meta-analytic estimates (i.e., the estimate expected when there is no selective inclusion). Thirty-nine SRs with 312 studies were included. The estimated PBI was 0.49 (95% CI 0.42-0.55), suggesting that the selection of study effect estimates from those reported was consistent with a process of random selection. In addition, the index meta-analytic effect estimates were similar, on average, to what we would expect to see in meta-analyses generated when there was no selective inclusion. Despite this, we recommend that systematic reviewers report the methods used to select effect estimates to include in meta-analyses, which can help readers understand the risk of selective inclusion bias in the SRs.
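
The Potential Bias Index (PBI) summarises where the meta-analysed estimate sits among all eligible estimates within each study. The sketch below illustrates only that general idea, as an unweighted average fractional rank across studies, and is not the published PBI formula, which uses a specific weighting scheme; all values are invented.

```python
import numpy as np

def selection_index(selected, eligible_sets):
    """Illustrative only (not the published PBI): average fractional rank of the
    estimate included in the meta-analysis among all eligible estimates for that
    study, ranked from least to most favourable. 0.5 ~ consistent with random selection."""
    fracs = []
    for sel, pool in zip(selected, eligible_sets):
        ranks = sorted(pool)
        r = ranks.index(sel) + 1                 # rank of the selected estimate (1 = least favourable)
        fracs.append((r - 0.5) / len(pool))      # mid-rank scaled to (0, 1)
    return float(np.mean(fracs))

# Hypothetical studies: the estimate chosen for meta-analysis and all eligible estimates
selected = [0.90, 1.10, 0.75]
eligible = [[0.80, 0.90, 1.05], [0.95, 1.10, 1.20, 1.30], [0.70, 0.75]]
print(selection_index(selected, eligible))
```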

6.
J Clin Epidemiol ; 166: 111244, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38142761

ABSTRACT

OBJECTIVES: To evaluate the risk of bias due to missing evidence in a sample of published meta-analyses of nutrition research using the Risk Of Bias due to Missing Evidence (ROB-ME) tool and determine inter-rater agreement in assessments. STUDY DESIGN AND SETTING: We assembled a random sample of 42 meta-analyses of nutrition research. Eight assessors were randomly assigned to one of four pairs. Each pair assessed 21 randomly assigned meta-analyses, and each meta-analysis was assessed by two pairs. We calculated raw percentage agreement and chance corrected agreement using Gwet's Agreement Coefficient (AC) in consensus judgments between pairs. RESULTS: Across the eight signaling questions in the ROB-ME tool, raw percentage agreement ranged from 52% to 100%, and Gwet's AC ranged from 0.39 to 0.76. For the risk-of-bias judgment, the raw percentage agreement was 76% (95% confidence interval 60% to 92%) and Gwet's AC was 0.47 (95% confidence interval 0.14 to 0.80). In seven (17%) meta-analyses, either one or both pairs judged the risk of bias due to missing evidence as "low risk". CONCLUSION: Our findings indicated substantial variation in assessments in consensus judgments between pairs for the signaling questions and overall risk-of-bias judgments. More tutorials and training are needed to help researchers apply the ROB-ME tool more consistently.


Subject(s)
Judgment , Research Design , Humans , Bias , Consensus , Publications , Reproducibility of Results , Meta-Analysis as Topic , Publication Bias
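
For readers unfamiliar with Gwet's Agreement Coefficient, the sketch below computes raw percentage agreement and Gwet's AC1 for the simplest case of two raters making a binary judgment; the counts are hypothetical, and the study's assessments involved signalling questions with more response options than this simplified setting.

```python
def gwet_ac1_binary(n_both_yes, n_yes_no, n_no_yes, n_both_no):
    """Raw agreement and Gwet's AC1 for two raters, binary ratings."""
    n = n_both_yes + n_yes_no + n_no_yes + n_both_no
    p_a = (n_both_yes + n_both_no) / n                                    # observed agreement
    pi_yes = ((n_both_yes + n_yes_no) / n + (n_both_yes + n_no_yes) / n) / 2
    p_e = 2 * pi_yes * (1 - pi_yes)                                       # AC1 chance agreement
    return p_a, (p_a - p_e) / (1 - p_e)

# Hypothetical consensus judgments from two pairs across 42 meta-analyses
print(gwet_ac1_binary(n_both_yes=20, n_yes_no=5, n_no_yes=5, n_both_no=12))
```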
8.
Syst Rev ; 12(1): 196, 2023 10 13.
Article in English | MEDLINE | ID: mdl-37833767

ABSTRACT

BACKGROUND: Incomplete reporting about what systematic reviewers did and what they found prevents users of the report from being able to fully interpret the findings and understand the limitations of the underlying evidence. Reporting guidelines such as the PRISMA statement and its extensions are designed to improve reporting. However, there are important inconsistencies across the various PRISMA reporting guidelines, which causes confusion and misinterpretation. Coupled with this, users might need to consult multiple guidelines to gain a full understanding of the guidance. Furthermore, the current passive strategy of implementing PRISMA has not fully brought about needed improvements in the completeness of systematic review reporting. METHODS: The PRISMATIC ('PRISMA, Technology, and Implementation to enhance reporting Completeness') project aims to use novel methods to enable more efficient and effective translation of PRISMA reporting guidelines into practice. We will establish a working group who will develop a unified PRISMA statement that harmonises content across the main PRISMA guideline and several of its extensions. We will then develop a web application that generates a reporting template and checklist customised to the characteristics and methods of a systematic review ('PRISMA-Web app') and conduct a randomised trial to evaluate its impact on authors' reporting. We will also develop a web application that helps peer reviewers appraise systematic review manuscripts ('PRISMA-Peer app') and conduct a diagnostic accuracy study to evaluate its impact on peer reviewers' detection of incomplete reporting. DISCUSSION: We anticipate the novel guidance and web-based apps developed throughout the project will substantively enhance the completeness of reporting of systematic reviews of health evidence, ultimately benefiting users who rely on systematic reviews to inform health care decision-making.


Subject(s)
Checklist , Research Design , Humans , Systematic Reviews as Topic , Quality Control , Peer Review
9.
J Clin Epidemiol ; 163: 79-91, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37778736

ABSTRACT

OBJECTIVES: To examine the characteristics of population, intervention and outcome groups and the extent to which they were completely reported for each synthesis in a sample of systematic reviews (SRs) of interventions. STUDY DESIGN AND SETTING: We coded groups that were intended (or used) for comparisons in 100 randomly sampled SRs of public health and health systems interventions published in 2018 from the Health Evidence and Health Systems Evidence databases. RESULTS: Authors commonly used population, intervention and outcome groups to structure comparisons, but these groups were often incompletely reported. For example, of 41 SRs that identified and/or used intervention groups for comparisons, 29 (71%) identified the groups in their methods description before reporting of the results (e.g., in the Background or Methods), 12 (29%) defined the groups in enough detail to replicate decisions about which included studies were eligible for each synthesis, 6 (15%) provided a rationale, and 24 (59%) stated that the groups would be used for comparisons. Sixteen (39%) SRs used intervention groups in their synthesis without any mention in the methods. Reporting for population, outcome and methodological groups was similarly incomplete. CONCLUSION: Complete reporting of the groups used for synthesis would improve transparency and replicability of reviews, and help ensure that the synthesis is not driven by what is reported in the included studies. Although concerted effort is needed to improve reporting, this should lead to more focused and useful reviews for decision-makers.


Subject(s)
Public Health , Humans , Systematic Reviews as Topic
10.
BMJ ; 383: e075081, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37793693

ABSTRACT

OBJECTIVE: To evaluate lag-response associations and effect modifications of exposure to floods with risks of all cause, cardiovascular, and respiratory mortality on a global scale. DESIGN: Time series study. SETTING: 761 communities in 35 countries or territories with at least one flood event during the study period. PARTICIPANTS: Multi-Country Multi-City Collaborative Research Network database, Australian Cause of Death Unit Record File, New Zealand Integrated Data Infrastructure, and the International Network for the Demographic Evaluation of Populations and their Health Network database. MAIN OUTCOME MEASURES: The main outcome was daily counts of deaths. The lag-response association between flood and daily mortality risk was modelled, and the relative risks over the lag period were cumulated to calculate overall effects. Attributable fractions of mortality due to floods were further calculated. A quasi-Poisson model with a distributed lag non-linear function was used to examine how daily death risk was associated with flooded days in each community, and then the community-specific associations were pooled using random-effects multivariate meta-analyses. Flooded days were defined as days from the start date to the end date of flood events. RESULTS: A total of 47.6 million all cause deaths, 11.1 million cardiovascular deaths, and 4.9 million respiratory deaths were analysed. Over the 761 communities, mortality risks increased and persisted for up to 60 days (50 days for cardiovascular mortality) after a flooded day. The cumulative relative risks for all cause, cardiovascular, and respiratory mortality were 1.021 (95% confidence interval 1.006 to 1.036), 1.026 (1.005 to 1.047), and 1.049 (1.008 to 1.092), respectively. The associations varied across countries or territories and regions. The flood-mortality associations appeared to be modified by climate type and were stronger in low income countries and in populations with a low human development index or high proportion of older people. In communities impacted by flood, up to 0.10% of all cause deaths, 0.18% of cardiovascular deaths, and 0.41% of respiratory deaths were attributed to floods. CONCLUSIONS: This study found that the risks of all cause, cardiovascular, and respiratory mortality increased for up to 60 days after exposure to flood and the associations could vary by local climate type, socioeconomic status, and older age.


Subject(s)
Floods , Respiratory Tract Diseases , Humans , Aged , Time Factors , Australia/epidemiology , Climate , Mortality
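
The study's analysis used a quasi-Poisson regression with a distributed lag non-linear (crossbasis) function, pooled by multivariate meta-analysis. The sketch below shows a much simpler, unconstrained distributed lag quasi-Poisson model for a single simulated community, with the cumulative relative risk taken as the exponentiated sum of the lag coefficients; the data, lag window, and model form are illustrative assumptions rather than the authors' specification, and pandas/statsmodels are assumed to be available.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_days, max_lag = 1000, 10                           # short lag window for illustration (study used up to 60 days)
flood = (rng.random(n_days) < 0.02).astype(int)      # simulated flooded-day indicator
deaths = rng.poisson(20, n_days)                     # simulated daily death counts

df = pd.DataFrame({"deaths": deaths, "flood": flood})
for lag in range(max_lag + 1):                       # unconstrained distributed lags of the exposure
    df[f"flood_lag{lag}"] = df["flood"].shift(lag, fill_value=0)

X = sm.add_constant(df[[f"flood_lag{lag}" for lag in range(max_lag + 1)]])
# Quasi-Poisson: Poisson mean model with Pearson-based dispersion for the standard errors
fit = sm.GLM(df["deaths"], X, family=sm.families.Poisson()).fit(scale="X2")

cum_log_rr = fit.params[[f"flood_lag{lag}" for lag in range(max_lag + 1)]].sum()
print("cumulative RR over the lag window:", np.exp(cum_log_rr))
```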
11.
Res Synth Methods ; 14(6): 882-902, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37731166

ABSTRACT

Interrupted time series (ITS) studies are often meta-analysed to inform public health and policy decisions, but examination of the statistical methods for ITS analysis and meta-analysis in this context is limited. We simulated meta-analyses of ITS studies with continuous outcome data, analysed the studies using segmented linear regression with two estimation methods [ordinary least squares (OLS) and restricted maximum likelihood (REML)], and meta-analysed the immediate level- and slope-change effect estimates using fixed-effect and (multiple) random-effects meta-analysis methods. Simulation design parameters included varying series length; magnitude of lag-1 autocorrelation; magnitude of level- and slope-changes; number of included studies; and effect-size heterogeneity. All meta-analysis methods yielded unbiased estimates of the interruption effects. All random-effects meta-analysis methods yielded coverage close to the nominal level, irrespective of the ITS analysis method used and other design parameters. However, heterogeneity was frequently overestimated in scenarios where the ITS study standard errors were underestimated, which occurred for short series or when the ITS analysis method did not appropriately account for autocorrelation. The performance of meta-analysis methods depends on the design and analysis of the included ITS studies. Although all random-effects methods performed well in terms of coverage, irrespective of the ITS analysis method, we recommend using effect estimates calculated from ITS methods that adjust for autocorrelation where possible. Doing so is likely to lead to more accurate estimates of the heterogeneity variance.


Subject(s)
Public Health , Interrupted Time Series Analysis , Computer Simulation
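
A minimal sketch of the segmented linear regression underlying the simulated ITS analyses: an OLS fit with terms for the baseline level and trend, the immediate level change, and the slope change, applied to simulated data. The REML estimation and autocorrelation adjustment examined in the study are not shown.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, t0 = 48, 24                                    # 48 time points, interruption at t0
t = np.arange(n)
post = (t >= t0).astype(float)                    # post-interruption indicator
# Simulated series: baseline level 10, trend 0.2, level change 3.0, slope change 0.5
y = 10 + 0.2 * t + 3.0 * post + 0.5 * (t - t0) * post + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([t, post, (t - t0) * post]))
fit = sm.OLS(y, X).fit()                          # OLS; the study also used REML with autocorrelated errors
level_change, slope_change = fit.params[2], fit.params[3]
print(f"level change {level_change:.2f} (SE {fit.bse[2]:.2f}), "
      f"slope change {slope_change:.2f} (SE {fit.bse[3]:.2f})")
```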
12.
Res Synth Methods ; 14(4): 622-638, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37293884

ABSTRACT

Interrupted time series (ITS) studies are frequently used to examine the impact of population-level interventions or exposures. Systematic reviews with meta-analyses including ITS designs may inform public health and policy decision-making. Re-analysis of ITS studies may be required for inclusion in meta-analysis. While publications of ITS studies rarely provide raw data for re-analysis, graphs are often included, from which time series data can be digitally extracted. However, the accuracy of effect estimates calculated from data digitally extracted from ITS graphs is currently unknown. Forty-three ITS studies with available datasets and time series graphs were included. Time series data from each graph were extracted by four researchers using digital data extraction software, and data extraction errors were analysed. Segmented linear regression models were fitted to the extracted and provided datasets, from which estimates of immediate level and slope change (and associated statistics) were calculated and compared across the datasets. Although there were some errors in the extraction of time points, primarily due to complications in the original graphs, these did not translate into important differences in the estimates of interruption effects (and associated statistics). Using digital data extraction to obtain data from ITS graphs should be considered in reviews including ITS studies. The benefit of including these studies in meta-analyses, even with slight inaccuracies, is likely to outweigh the loss of information from excluding them.


Subject(s)
Public Health , Software , Interrupted Time Series Analysis , Time Factors
13.
Cochrane Database Syst Rev ; 5: CD014874, 2023 05 04.
Article in English | MEDLINE | ID: mdl-37146219

ABSTRACT

BACKGROUND: Acceptable, effective and feasible support strategies (interventions) for parents experiencing complex post-traumatic stress disorder (CPTSD) symptoms or with a history of childhood maltreatment may offer an opportunity to support parental recovery, reduce the risk of intergenerational transmission of trauma and improve life-course trajectories for children and future generations. However, evidence relating to the effect of interventions has not been synthesised to provide a comprehensive review of available support strategies. This evidence synthesis is critical to inform further research, practice and policy approaches in this emerging area. OBJECTIVES: To assess the effects of interventions provided to support parents who were experiencing CPTSD symptoms or who had experienced childhood maltreatment (or both), on parenting capacity and parental psychological or socio-emotional wellbeing. SEARCH METHODS: In October 2021 we searched CENTRAL, MEDLINE, Embase, six other databases and two trials registers, together with checking references and contacting experts to identify additional studies. SELECTION CRITERIA: All variants of randomised controlled trials (RCTs) comparing any intervention delivered in the perinatal period and designed to support parents experiencing CPTSD symptoms or with a history of childhood maltreatment (or both) to any active or inactive control. Primary outcomes were parental psychological or socio-emotional wellbeing and parenting capacity between pregnancy and up to two years postpartum. DATA COLLECTION AND ANALYSIS: Two review authors independently assessed the eligibility of trials for inclusion, extracted data using a pre-designed data extraction form, and assessed risk of bias and certainty of evidence. We contacted study authors for additional information as required. We analysed continuous data using the mean difference (MD) for outcomes assessed with a single measure, the standardised mean difference (SMD) for outcomes assessed with multiple measures, and risk ratios (RR) for dichotomous data. All data are presented with 95% confidence intervals (CIs). We undertook meta-analyses using random-effects models. MAIN RESULTS: We included evidence from 1925 participants in 15 RCTs that investigated the effect of 17 interventions. All included studies were published after 2005. Interventions included seven parenting interventions, eight psychological interventions and two service system approaches. The studies were funded by major research councils, government departments and philanthropic/charitable organisations. All evidence was of low or very low certainty. Parenting interventions: Evidence was very uncertain from a study (33 participants) assessing the effects of a parenting intervention compared to attention control on trauma-related symptoms, and psychological wellbeing symptoms (postpartum depression), in mothers who had experienced childhood maltreatment and were experiencing current parenting risk factors. Evidence suggested that parenting interventions may improve parent-child relationships slightly compared to usual service provision (SMD 0.45, 95% CI -0.06 to 0.96; I2 = 60%; 2 studies, 153 participants; low-certainty evidence). There may be little or no difference between parenting interventions and usual perinatal service in parenting skills including nurturance, supportive presence and reciprocity (SMD 0.25, 95% CI -0.07 to 0.58; I2 = 0%; 4 studies, 149 participants; low-certainty evidence).
No studies assessed the effects of parenting interventions on parents' substance use, relationship quality or self-harm. Psychological interventions: Psychological interventions may result in little or no difference in trauma-related symptoms compared to usual care (SMD -0.05, 95% CI -0.40 to 0.31; I2 = 39%; 4 studies, 247 participants; low-certainty evidence). Psychological interventions may make little or no difference to depression symptom severity compared to usual care (SMD -0.34, 95% CI -0.66 to -0.03; I2 = 63%; 8 studies, 507 participants; low-certainty evidence). An interpersonally focused cognitive behavioural analysis system of psychotherapy may slightly increase the number of pregnant women who quit smoking compared to usual smoking cessation therapy and prenatal care (189 participants, low-certainty evidence). A psychological intervention may slightly improve parents' relationship quality compared to usual care (1 study, 67 participants, low-certainty evidence). Benefits for parent-child relationships were very uncertain (26 participants, very low-certainty evidence), while there may be a slight improvement in parenting skills compared to usual care (66 participants, low-certainty evidence). No studies assessed the effects of psychological interventions on parents' self-harm. Service system approaches: One service system approach assessed the effect of a financial empowerment education programme, with and without trauma-informed peer support, compared to usual care for parents with low incomes. The interventions increased depression slightly (52 participants, low-certainty evidence). No studies assessed the effects of service system interventions on parents' trauma-related symptoms, substance use, relationship quality, self-harm, parent-child relationships or parenting skills. AUTHORS' CONCLUSIONS: There is currently a lack of high-quality evidence regarding the effectiveness of interventions to improve parenting capacity or parental psychological or socio-emotional wellbeing in parents experiencing CPTSD symptoms or who have experienced childhood maltreatment (or both). The lack of methodological rigour and high risk of bias in the included studies made it difficult to interpret the findings of this review. Overall, results suggest that parenting interventions may slightly improve parent-child relationships but have a small, unimportant effect on parenting skills. Psychological interventions may help some women stop smoking in pregnancy, and may have small benefits on parents' relationships and parenting skills. A financial empowerment programme may slightly worsen depression symptoms. While potential beneficial effects were small, the importance of a positive effect in a small number of parents must be considered when making treatment and care decisions. There is a need for further high-quality research into effective strategies for this population.


Subject(s)
Stress Disorders, Post-Traumatic , Female , Pregnancy , Humans , Stress Disorders, Post-Traumatic/therapy , Parents/education , Psychotherapy/methods , Mothers/education , Pregnant Women
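
The review pooled standardised mean differences for outcomes assessed with multiple measures. The sketch below computes Hedges' g with a commonly used large-sample variance approximation; the group summaries are invented and are not data from any included trial.

```python
from math import sqrt

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g (bias-corrected SMD) with a common large-sample SE approximation."""
    df = n1 + n2 - 2
    s_pooled = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * df - 1)                    # small-sample correction factor
    g = j * d
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return g, j * sqrt(var_d)                   # g and its approximate SE

# Hypothetical parenting-skills scores: intervention vs usual care
g, se = hedges_g(m1=52.0, sd1=8.0, n1=35, m2=48.5, sd2=9.0, n2=31)
print(f"g = {g:.2f}, 95% CI {g - 1.96*se:.2f} to {g + 1.96*se:.2f}")
```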
14.
J Clin Epidemiol ; 156: 42-52, 2023 04.
Article in English | MEDLINE | ID: mdl-36758885

ABSTRACT

OBJECTIVES: To examine the specification and use of summary and statistical synthesis methods, focusing on synthesis methods other than meta-analysis. STUDY DESIGN AND SETTING: We coded the specification and use of summary and synthesis methods in 100 randomly sampled systematic reviews (SRs) of public health and health systems interventions published in 2018 from the Health Evidence and Health Systems Evidence databases. RESULTS: Sixty of the 100 SRs used synthesis methods other than meta-analysis for some (27/100) or all syntheses (33/100). Of these, 54/60 used vote counting: three based on direction of effect, 36 based on statistical significance, and 15 with an unclear basis. Eight SRs summarized effect estimates (for example, using medians). Seventeen SRs used the term 'narrative synthesis' (or equivalent) without describing their methods; in practice, 15 of these used vote counting. Meta-analysis was used in 58/100 SRs. Among SRs providing a rationale for not proceeding with meta-analysis, the most common reason was diversity in study characteristics (33/39). CONCLUSION: Statistical synthesis methods other than meta-analysis are commonly used, but few SRs describe the methods. Improved description of methods is required to allow users to appropriately interpret findings, critique the methods used, and verify the results. Greater awareness of the serious limitations of vote counting based on statistical significance is required.


Subject(s)
Public Health , Research Design , Humans , Systematic Reviews as Topic
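
To make the distinction above concrete, the sketch below applies vote counting based on direction of effect, summarised with an exact sign test, to hypothetical study effect estimates; scipy is assumed to be available. Counting how many studies reached p < 0.05 instead would reproduce the significance-based practice whose limitations the review highlights.

```python
from scipy.stats import binomtest

# Hypothetical study effect estimates (e.g., mean differences); sign indicates direction
effects = [0.4, 1.2, -0.3, 0.8, 0.1, 0.9, -0.2, 0.6]

positive = sum(e > 0 for e in effects)
total = sum(e != 0 for e in effects)            # drop exact zeros
result = binomtest(positive, total, p=0.5)      # sign test: H0 = either direction equally likely

print(f"{positive}/{total} studies favour the intervention "
      f"(sign test p = {result.pvalue:.3f})")
```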
15.
J Clin Epidemiol ; 156: 113-118, 2023 04.
Article in English | MEDLINE | ID: mdl-36736707

ABSTRACT

OBJECTIVES: As part of an effort to develop an extension of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) 2020 statement for living systematic reviews (LSRs), we discuss conceptual issues relevant to the reporting of LSRs and highlight a few challenges. METHODS: Discussion of conceptual issues based on a scoping review of the literature and discussions among authors. RESULTS: We first briefly describe aspects of the LSR production process relevant to reporting. The production cycles differ by whether the literature surveillance identifies new evidence and whether newly identified evidence is judged to be consequential. This impacts the timing, content, and format of LSR versions. Second, we discuss four types of information that are specific to the reporting of LSRs: justification for adopting the living mode, LSR specific methods, changes between LSR versions, and LSR updating status. We also discuss the challenge of conveying changes between versions to the reader. Third, we describe two commonly used reporting formats of LSRs: full and partial reports. Although partial reports are easier to produce and publish, they lead to the scattering of information across different versions. Full reports ensure the completeness of reporting. We discuss the implications for the extension of the PRISMA 2020 statement for LSRs. CONCLUSION: We argue that a dynamic publication platform would facilitate complete and timely reporting of LSRs.


Subject(s)
Publishing , Systematic Reviews as Topic , Humans
16.
Article in Portuguese | PAHO-IRIS | ID: phr-56882

ABSTRACT


The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.


Subject(s)
Guideline , Systematic Review , Meta-Analysis , Medical Writing
17.
BMJ ; 379: e072428, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36414269

ABSTRACT

OBJECTIVES: To examine changes in completeness of reporting and frequency of sharing data, analytical code, and other review materials in systematic reviews over time; and factors associated with these changes. DESIGN: Cross sectional meta-research study. POPULATION: Random sample of 300 systematic reviews with meta-analysis of aggregate data on the effects of a health, social, behavioural, or educational intervention. Reviews were indexed in PubMed, Science Citation Index, Social Sciences Citation Index, Scopus, and Education Collection in November 2020. MAIN OUTCOME MEASURES: The extent of complete reporting and the frequency of sharing review materials in the systematic reviews indexed in 2020 were compared with 110 systematic reviews indexed in February 2014. Associations between completeness of reporting and various factors (eg, self-reported use of reporting guidelines, journal policies on data sharing) were examined by calculating risk ratios and 95% confidence intervals. RESULTS: Several items were reported suboptimally among 300 systematic reviews from 2020, such as a registration record for the review (n=113; 38%), a full search strategy for at least one database (n=214; 71%), methods used to assess risk of bias (n=185; 62%), methods used to prepare data for meta-analysis (n=101; 34%), and source of funding for the review (n=215; 72%). Only a few items not already reported at a high frequency in 2014 were reported more frequently in 2020. No evidence indicated that reviews using a reporting guideline were more completely reported than reviews not using a guideline. Reviews published in 2020 in journals that mandated either data sharing or inclusion of data availability statements were more likely to share their review materials (eg, data, code files) than reviews in journals without such mandates (16/87 (18%) v 4/213 (2%)). CONCLUSION: Incomplete reporting of several recommended items for systematic reviews persists, even in reviews that claim to have followed a reporting guideline. Journal policies on data sharing might encourage sharing of review materials.


Subject(s)
Information Dissemination , Research Design , Humans , Cross-Sectional Studies , PubMed , Systematic Reviews as Topic
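
The comparison of sharing rates quoted above (16/87 v 4/213) can be summarised as a risk ratio. The sketch below computes an unadjusted risk ratio with a Wald 95% confidence interval on the log scale; the interval method is an assumption, and the paper's own estimate may have been calculated differently.

```python
from math import exp, log, sqrt

def risk_ratio(a, n1, c, n2):
    """Unadjusted risk ratio with a Wald 95% CI on the log scale."""
    rr = (a / n1) / (c / n2)
    se_log = sqrt(1/a - 1/n1 + 1/c - 1/n2)
    return rr, exp(log(rr) - 1.96 * se_log), exp(log(rr) + 1.96 * se_log)

# Reviews in journals mandating sharing or availability statements (16/87)
# v reviews in journals without such mandates (4/213), as reported in the abstract
rr, lo, hi = risk_ratio(a=16, n1=87, c=4, n2=213)
print(f"RR {rr:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```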
18.
BMJ ; 378: e070849, 2022 08 09.
Article in English | MEDLINE | ID: mdl-35944924

ABSTRACT

OBJECTIVE: To develop a reporting guideline for overviews of reviews of healthcare interventions. DESIGN: Development of the preferred reporting items for overviews of reviews (PRIOR) statement. PARTICIPANTS: Core team (seven individuals) led day-to-day operations, and an expert advisory group (three individuals) provided methodological advice. A panel of 100 experts (authors, editors, readers including members of the public or patients) was invited to participate in a modified Delphi exercise. 11 expert panellists (chosen on the basis of expertise, and representing relevant stakeholder groups) were invited to take part in a virtual face-to-face meeting to reach agreement (≥70%) on final checklist items. 21 authors of recently published overviews were invited to pilot test the checklist. SETTING: International consensus. INTERVENTION: Four stage process established by the EQUATOR Network for developing reporting guidelines in health research: project launch (establish a core team and expert advisory group, register intent), evidence reviews (systematic review of published overviews to describe reporting quality, scoping review of methodological guidance and author reported challenges related to undertaking overviews of reviews), modified Delphi exercise (two online Delphi surveys to reach agreement (≥70%) on relevant reporting items followed by a virtual face-to-face meeting), and development of the reporting guideline. RESULTS: From the evidence reviews, we drafted an initial list of 47 potentially relevant reporting items. An international group of 52 experts participated in the first Delphi survey (52% participation rate); agreement was reached for inclusion of 43 (91%) items. 44 experts (85% retention rate) completed the second Delphi survey, which included the four items lacking agreement from the first survey and five new items based on respondent comments. During the second round, agreement was not reached for the inclusion or exclusion of the nine remaining items. 19 individuals (6 core team and 3 expert advisory group members, and 10 expert panellists) attended the virtual face-to-face meeting. Among the nine items discussed, high agreement was reached for the inclusion of three and exclusion of six. Six authors participated in pilot testing, resulting in minor wording changes. The final checklist includes 27 main items (with 19 sub-items) across all stages of an overview of reviews. CONCLUSIONS: PRIOR fills an important gap in reporting guidance for overviews of reviews of healthcare interventions. The checklist, along with rationale and example for each item, provides guidance for authors that will facilitate complete and transparent reporting. This will allow readers to assess the methods used in overviews of reviews of healthcare interventions and understand the trustworthiness and applicability of their findings.


Subject(s)
Checklist , Health Facilities , Consensus , Delivery of Health Care , Delphi Technique , Humans , Research Design , Surveys and Questionnaires
19.
Syst Rev ; 11(1): 148, 2022 07 26.
Article in English | MEDLINE | ID: mdl-35883155

ABSTRACT

BACKGROUND: Aromatherapy - the therapeutic use of essential oils from plants (flowers, herbs or trees) to treat ill health and promote physical, emotional and spiritual well-being - is one of the most widely used natural therapies reported by consumers in Western countries. The Australian Government Department of Health (via the National Health and Medical Research Council) has commissioned a suite of independent evidence evaluations to inform the 2019-20 Review of the Australian Government Rebate on Private Health Insurance for Natural Therapies. This protocol is for one of the evaluations: a systematic review that aims to examine the effectiveness of aromatherapy in preventing and/or treating injury, disease, medical conditions or preclinical conditions. METHODS: Eligibility criteria: randomised trials comparing (1) aromatherapy (delivered by any mode) to no aromatherapy (inactive controls), (2) aromatherapy (delivered by massage) to massage alone or (3) aromatherapy to 'gold standard' treatments. POPULATIONS: any condition, pre-condition, injury or risk factor (excluding healthy participants without clearly identified risk factors). OUTCOMES: any for which aromatherapy is indicated. Searches: Cochrane Central Register of Controlled Trials (CENTRAL), with a supplementary search of PubMed (covering a 6-month lag period for processing records in CENTRAL and records not indexed in MEDLINE), AMED and Emcare. No date, language or geographic limitations will be applied. DATA AND ANALYSIS: screening by two authors, independently (records indexed by Aromatherapy or Oils volatile or aromatherapy in title; all full text) or one author (remaining records) with second author until 80% agreement. Data extraction and risk of bias assessment (ROB 2.0) will be piloted by three authors, then completed by a single author and checked by a second. Comparisons will be based on broad outcome categories (e.g. pain, emotional functioning, sleep disruption) stratified by population subgroups (e.g. chronic pain conditions, cancer, dementia) as defined in the analytic framework for the review. Meta-analysis or other synthesis methods will be used to combine results across studies. GRADE methods will be used to assess certainty of evidence and summarise findings. DISCUSSION: Results of the systematic review will provide a comprehensive and up-to-date synthesis of evidence about the effectiveness of aromatherapy. SYSTEMATIC REVIEW REGISTRATION: PROSPERO CRD42021268244.


Subject(s)
Aromatherapy , Australia , Humans , Massage , Meta-Analysis as Topic , Systematic Reviews as Topic
20.
Ann Intern Med ; 175(7): 1001-1009, 2022 07.
Article in English | MEDLINE | ID: mdl-35635850

ABSTRACT

BACKGROUND: Automation is a proposed solution for the increasing difficulty of maintaining up-to-date, high-quality health evidence. Evidence assessing the effectiveness of semiautomated data synthesis, such as risk-of-bias (RoB) assessments, is lacking. OBJECTIVE: To determine whether RobotReviewer-assisted RoB assessments are noninferior in accuracy and efficiency to assessments conducted with human effort only. DESIGN: Two-group, parallel, noninferiority, randomized trial. (Monash Research Office Project 11256). SETTING: Health-focused systematic reviews using Covidence. PARTICIPANTS: Systematic reviewers, who had not previously used RobotReviewer, completing Cochrane RoB assessments between February 2018 and May 2020. INTERVENTION: In the intervention group, reviewers received an RoB form prepopulated by RobotReviewer; in the comparison group, reviewers received a blank form. Studies were assigned in a 1:1 ratio via simple randomization to receive RobotReviewer assistance for either Reviewer 1 or Reviewer 2. Participants were blinded to study allocation before starting work on each RoB form. MEASUREMENTS: Co-primary outcomes were the accuracy of individual reviewer RoB assessments and the person-time required to complete individual assessments. Domain-level RoB accuracy was a secondary outcome. RESULTS: Of the 15 recruited review teams, 7 completed the trial (145 included studies). Integration of RobotReviewer resulted in noninferior overall RoB assessment accuracy (risk difference, -0.014 [95% CI, -0.093 to 0.065]; intervention group: 88.8% accurate assessments; control group: 90.2% accurate assessments). Data were inconclusive for the person-time outcome (RobotReviewer saved 1.40 minutes [CI, -5.20 to 2.41 minutes]). LIMITATION: Variability in user behavior and a limited number of assessable reviews led to an imprecise estimate of the time outcome. CONCLUSION: In health-related systematic reviews, RoB assessments conducted with RobotReviewer assistance are noninferior in accuracy to those conducted without RobotReviewer assistance. PRIMARY FUNDING SOURCE: University College London and Monash University.


Subject(s)
Machine Learning , Research Design , Bias , Humans , Risk Assessment
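
The accuracy co-primary outcome was analysed as a risk difference judged against a noninferiority margin. The sketch below shows that type of calculation with a simple Wald interval; the assessment counts are hypothetical (chosen only to roughly echo the reported percentages), the margin is invented because the abstract does not state it, and the trial's reported interval will not be exactly reproduced by this simplified calculation.

```python
from math import sqrt

def risk_difference(x1, n1, x2, n2):
    """Risk difference (group1 - group2) with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - 1.96 * se, rd + 1.96 * se

# Hypothetical counts of accurate RoB assessments (intervention v control);
# the noninferiority margin below is also hypothetical
rd, lo, hi = risk_difference(x1=129, n1=145, x2=131, n2=145)
margin = 0.10
print(f"risk difference {rd:.3f} (95% CI {lo:.3f} to {hi:.3f}); "
      f"noninferior at margin {margin}: {lo > -margin}")
```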