Results 1 - 14 of 14
1.
BMJ Open ; 14(6): e071136, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38889936

ABSTRACT

INTRODUCTION: Observational studies are fraught with several biases, including reverse causation and residual confounding. Overviews of reviews of observational studies (ie, umbrella reviews) synthesise systematic reviews with or without meta-analyses of cross-sectional, case-control and cohort studies, and may also aid in the grading of the credibility of reported associations. The number of published umbrella reviews has been increasing. Recently, a reporting guideline for overviews of reviews of healthcare interventions (Preferred Reporting Items for Overviews of Reviews (PRIOR)) was published, but the field lacks reporting guidelines for umbrella reviews of observational studies. Our aim is to develop a reporting guideline for umbrella reviews of cross-sectional, case-control and cohort studies assessing epidemiological associations. METHODS AND ANALYSIS: We will adhere to established guidance and prepare a PRIOR extension for systematic reviews of cross-sectional, case-control and cohort studies testing epidemiological associations between an exposure and an outcome, namely Preferred Reporting Items for Umbrella Reviews of Cross-sectional, Case-control and Cohort studies (PRIUR-CCC). Step 1 will be the project launch to identify stakeholders. Step 2 will be a literature review of available guidance on conducting umbrella reviews. Step 3 will be an online Delphi study sampling 100 participants among authors and editors of umbrella reviews. Step 4 will encompass the finalisation of the PRIUR-CCC statement, including a checklist, a flow diagram, and an explanation and elaboration document. 
Deliverables will be (i) identifying stakeholders to involve according to relevant expertise and end-user groups, with an equity, diversity and inclusion lens; (ii) completing a narrative review of methodological guidance on how to conduct umbrella reviews and a narrative review of methodology and reporting in published umbrella reviews, and preparing an initial PRIUR-CCC checklist for Delphi study round 1; (iii) preparing a PRIUR-CCC checklist with guidance after the Delphi study; (iv) publishing and disseminating the PRIUR-CCC statement. ETHICS AND DISSEMINATION: PRIUR-CCC has been approved by the Ottawa Health Science Network Research Ethics Board (20220639-01H). Participants in step 3 will give informed consent. The PRIUR-CCC steps will be published in a peer-reviewed journal and will guide the reporting of umbrella reviews on epidemiological associations.


Subject(s)
Guidelines as Topic , Humans , Cross-Sectional Studies , Cohort Studies , Case-Control Studies , Research Design/standards , Systematic Reviews as Topic , Checklist , Observational Studies as Topic
2.
Syst Rev ; 11(1): 206, 2022 09 27.
Article in English | MEDLINE | ID: mdl-36167611

ABSTRACT

BACKGROUND: A systematic review (SR) helps us make sense of a body of research while minimizing bias and is routinely conducted to evaluate intervention effects in a health technology assessment (HTA). In addition to the traditional de novo SR, which combines the results of multiple primary studies, there are alternative review types that use systematic methods and leverage existing SRs, namely updates of SRs and overviews of SRs. This paper shares guidance that can be used to select the most appropriate review type to conduct when evaluating intervention effects in an HTA, with a goal to leverage existing SRs and reduce research waste where possible. PROCESS: We identified key factors and considerations that can inform the process of deciding to conduct one review type over the others to answer a research question and organized them into guidance comprising a summary and a corresponding flowchart. This work consisted of three steps. First, a guidance document was drafted by methodologists from two Canadian HTA agencies based on their experience. Next, the draft guidance was supplemented with a literature review. Lastly, broader feedback from HTA researchers across Canada was sought and incorporated into the final guidance. INSIGHTS: Nine key factors and six considerations were identified to help reviewers select the most appropriate review type to conduct. These fell into one of two categories: the evidentiary needs of the planned review (i.e., to understand the scope, objective, and analytic approach required for the review) and the state of the existing literature (i.e., to know the available literature in terms of its relevance, quality, comprehensiveness, currency, and findings). The accompanying flowchart, which can be used as a decision tool, demonstrates the interdependency between many of the key factors and considerations and aims to balance the potential benefits and challenges of leveraging existing SRs instead of primary study reports. 
CONCLUSIONS: Selecting the most appropriate review type to conduct when evaluating intervention effects in an HTA requires a myriad of factors to be considered. We hope this guidance adds clarity to the many competing considerations when deciding which review type to conduct and facilitates that decision-making process.


Subject(s)
Evidence-Based Medicine , Technology Assessment, Biomedical , Humans , Biomedical Technology , Canada , Systematic Reviews as Topic , Guidelines as Topic
3.
BMJ ; 378: e070849, 2022 08 09.
Article in English | MEDLINE | ID: mdl-35944924

ABSTRACT

OBJECTIVE: To develop a reporting guideline for overviews of reviews of healthcare interventions. DESIGN: Development of the preferred reporting items for overviews of reviews (PRIOR) statement. PARTICIPANTS: Core team (seven individuals) led day-to-day operations, and an expert advisory group (three individuals) provided methodological advice. A panel of 100 experts (authors, editors, readers including members of the public or patients) was invited to participate in a modified Delphi exercise. 11 expert panellists (chosen on the basis of expertise, and representing relevant stakeholder groups) were invited to take part in a virtual face-to-face meeting to reach agreement (≥70%) on final checklist items. 21 authors of recently published overviews were invited to pilot test the checklist. SETTING: International consensus. INTERVENTION: Four stage process established by the EQUATOR Network for developing reporting guidelines in health research: project launch (establish a core team and expert advisory group, register intent), evidence reviews (systematic review of published overviews to describe reporting quality, scoping review of methodological guidance and author reported challenges related to undertaking overviews of reviews), modified Delphi exercise (two online Delphi surveys to reach agreement (≥70%) on relevant reporting items followed by a virtual face-to-face meeting), and development of the reporting guideline. RESULTS: From the evidence reviews, we drafted an initial list of 47 potentially relevant reporting items. An international group of 52 experts participated in the first Delphi survey (52% participation rate); agreement was reached for inclusion of 43 (91%) items. 44 experts (85% retention rate) completed the second Delphi survey, which included the four items lacking agreement from the first survey and five new items based on respondent comments. 
During the second round, agreement was not reached for the inclusion or exclusion of the nine remaining items. 19 individuals (6 core team and 3 expert advisory group members, and 10 expert panellists) attended the virtual face-to-face meeting. Among the nine items discussed, high agreement was reached for the inclusion of three and exclusion of six. Six authors participated in pilot testing, resulting in minor wording changes. The final checklist includes 27 main items (with 19 sub-items) across all stages of an overview of reviews. CONCLUSIONS: PRIOR fills an important gap in reporting guidance for overviews of reviews of healthcare interventions. The checklist, along with rationale and example for each item, provides guidance for authors that will facilitate complete and transparent reporting. This will allow readers to assess the methods used in overviews of reviews of healthcare interventions and understand the trustworthiness and applicability of their findings.
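The ≥70% threshold used in the Delphi rounds above is straightforward to operationalise. As an illustrative sketch only (the item names and vote counts below are invented, not drawn from the PRIOR surveys), per-item agreement might be computed as:

```python
def delphi_agreement(votes, threshold=0.70):
    """Return (proportion endorsing, reached_threshold) for one candidate
    checklist item, where each vote is True (include) or False (exclude)."""
    prop = sum(votes) / len(votes)
    return prop, prop >= threshold

# Hypothetical round-1 votes from 52 panellists on two candidate items.
items = {
    "describe search strategy": [True] * 48 + [False] * 4,
    "report overlap handling": [True] * 30 + [False] * 22,
}
for name, votes in items.items():
    prop, reached = delphi_agreement(votes)
    print(f"{name}: {prop:.0%} -> {'retain' if reached else 'carry to next round'}")
```

Items failing the threshold would, as in the process described above, be carried forward to the next survey round or to the face-to-face meeting.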


Subject(s)
Checklist , Health Facilities , Consensus , Delivery of Health Care , Delphi Technique , Humans , Research Design , Surveys and Questionnaires
4.
J Clin Epidemiol ; 136: 157-167, 2021 08.
Article in English | MEDLINE | ID: mdl-33979663

ABSTRACT

OBJECTIVES: To evaluate the impact of guidance and training on the inter-rater reliability (IRR), inter-consensus reliability (ICR) and evaluator burden of the Risk of Bias (RoB) in Non-randomized Studies (NRS) of Interventions (ROBINS-I) tool, and the RoB instrument for NRS of Exposures (ROB-NRSE). STUDY DESIGN AND SETTING: In a before-and-after study, seven reviewers appraised the RoB using ROBINS-I (n = 44) and ROB-NRSE (n = 44), before and after guidance and training. We used Gwet's AC1 statistic to calculate IRR and ICR. RESULTS: After guidance and training, the IRR and ICR of the overall bias domain of ROBINS-I and ROB-NRSE improved significantly, with many individual domains showing either a significant (IRR and ICR of ROB-NRSE; ICR of ROBINS-I) or a nonsignificant improvement (IRR of ROBINS-I). Evaluator burden decreased significantly after guidance and training for ROBINS-I, whereas for ROB-NRSE there was a slight, nonsignificant increase. CONCLUSION: Overall, guidance and training were beneficial for both tools. We strongly recommend providing guidance and training to reviewers before RoB assessments, and we recommend that future research investigate which aspects of guidance and training are most effective.


Subject(s)
Biomedical Research/standards , Epidemiologic Research Design , Observer Variation , Peer Review/standards , Research Design/standards , Research Personnel/education , Adult , Biomedical Research/statistics & numerical data , Canada , Cross-Sectional Studies , Female , Humans , Male , Middle Aged , Psychometrics/methods , Reproducibility of Results , Research Design/statistics & numerical data , United Kingdom
5.
Syst Rev ; 9(1): 254, 2020 11 04.
Article in English | MEDLINE | ID: mdl-33148319

ABSTRACT

BACKGROUND: Overviews of reviews (overviews) provide an invaluable resource for healthcare decision-making by combining large volumes of systematic review (SR) data into a single synthesis. The production of high-quality overviews hinges on the availability of practical evidence-based guidance for conduct and reporting. OBJECTIVES: Within the broad purpose of informing the development of a reporting guideline for overviews, we aimed to provide an up-to-date map of existing guidance related to the conduct of overviews, and to identify common challenges that authors face when undertaking overviews. METHODS: We updated a scoping review published in 2016 using the search methods that had produced the highest yield: ongoing reference tracking (2014 to March 2020 in PubMed, Scopus, and Google Scholar), hand-searching conference proceedings and websites, and contacting authors of published overviews. Using a qualitative meta-summary approach, one reviewer extracted, organized, and summarized the guidance and challenges presented within the included documents. A second reviewer verified the data and synthesis. RESULTS: We located 28 new guidance documents, for a total of 77 documents produced by 34 research groups. The new guidance helps to resolve some earlier identified challenges in the production of overviews. Important developments include strengthened guidance on handling primary study overlap at the study selection and analysis stages. Despite marked progress, several areas continue to be hampered by inconsistent or lacking guidance. There is ongoing debate about whether, when, and how supplemental primary studies should be included in overviews. Guidance remains scant on how to extract and use appraisals of quality of the primary studies within the included SRs and how to adapt GRADE methodology to overviews. 
The challenges that overview authors face are often related to the above-described steps in the process where evidence-based guidance is lacking or conflicting. CONCLUSION: The rising popularity of overviews has been accompanied by a steady accumulation of new, and sometimes conflicting, guidance. While recent guidance has helped to address some of the challenges that overview authors face, areas of uncertainty remain. Practical tools supported by empirical evidence are needed to assist authors with the many methodological decision points that are encountered in the production of overviews.


Subject(s)
Evidence-Based Medicine , Research Design , Hand , Publications
6.
J Clin Epidemiol ; 128: 140-147, 2020 12.
Article in English | MEDLINE | ID: mdl-32987166

ABSTRACT

OBJECTIVE: To assess the real-world interrater reliability (IRR), interconsensus reliability (ICR), and evaluator burden of the Risk of Bias (RoB) in Nonrandomized Studies (NRS) of Interventions (ROBINS-I), and the ROB Instrument for NRS of Exposures (ROB-NRSE) tools. STUDY DESIGN AND SETTING: A six-center cross-sectional study with seven reviewers (2 reviewer pairs) assessing the RoB using ROBINS-I (n = 44 NRS) or ROB-NRSE (n = 44 NRS). We used Gwet's AC1 statistic to calculate the IRR and ICR. To measure the evaluator burden, we assessed the total time taken to apply the tool and reach a consensus. RESULTS: For ROBINS-I, both IRR and ICR for individual domains ranged from poor to substantial agreement. IRR and ICR on overall RoB were poor. The evaluator burden was 48.45 min (95% CI 45.61 to 51.29). For ROB-NRSE, the IRR and ICR for the majority of domains were poor, while the rest ranged from fair to perfect agreement. IRR and ICR on overall RoB were slight and poor, respectively. The evaluator burden was 36.98 min (95% CI 34.80 to 39.16). CONCLUSIONS: We found both tools to have low reliability, although ROBINS-I was slightly higher. Measures to increase agreement between raters (e.g., detailed training, supportive guidance material) may improve reliability and decrease evaluator burden.
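Gwet's AC1, the statistic used in this and the related reliability studies above, is a chance-corrected agreement coefficient that behaves more stably than Cohen's kappa when category prevalences are skewed. A minimal two-rater implementation (illustrative only; the rating data below are invented, not taken from the cited study):

```python
from collections import Counter

def gwet_ac1(rater1, rater2):
    """Gwet's AC1 chance-corrected agreement for two raters assigning
    nominal categories (e.g., overall risk-of-bias judgements)."""
    if len(rater1) != len(rater2) or not rater1:
        raise ValueError("need two non-empty rating lists of equal length")
    n = len(rater1)
    # Observed agreement: share of items both raters judged identically.
    pa = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Category prevalences averaged over both raters.
    counts = Counter(rater1) + Counter(rater2)
    k = len(counts)
    if k == 1:
        return 1.0  # only one category ever used: perfect agreement
    # Chance agreement under Gwet's model: mean of pi*(1 - pi) over categories.
    pe = sum((c / (2 * n)) * (1 - c / (2 * n)) for c in counts.values()) / (k - 1)
    return (pa - pe) / (1 - pe)

r1 = ["low", "low", "high", "some", "high", "low"]
r2 = ["low", "some", "high", "some", "high", "low"]
print(round(gwet_ac1(r1, r2), 3))  # -> 0.753
```

For these six invented items the coefficient works out to 73/97 ≈ 0.753, which would count as "substantial" agreement on the Landis and Koch scale mentioned in entry 12.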


Subject(s)
Consensus , Epidemiologic Research Design , Research Personnel/statistics & numerical data , Bias , Cross-Sectional Studies , Humans , Observer Variation , Reproducibility of Results , Risk Assessment
7.
Syst Rev ; 9(1): 12, 2020 01 13.
Article in English | MEDLINE | ID: mdl-31931871

ABSTRACT

BACKGROUND: The Cochrane Bias Methods Group recently developed the "Risk of Bias (ROB) in Non-randomized Studies of Interventions" (ROBINS-I) tool to assess ROB for non-randomized studies of interventions (NRSI). It is important to establish consistency in its application and interpretation across review teams. In addition, it is important to understand whether specialized training and guidance will improve the reliability of the results of the assessments. Therefore, the objective of this cross-sectional study is to establish the inter-rater reliability (IRR), inter-consensus reliability (ICR), and concurrent validity of ROBINS-I. Furthermore, as this is a relatively new tool, it is important to understand the barriers to using this tool (e.g., the time needed to conduct assessments and reach consensus, i.e., the evaluator burden). METHODS: Reviewers from four participating centers will appraise the ROB of a sample of NRSI publications using the ROBINS-I tool in two stages. For IRR and ICR, two pairs of reviewers will assess the ROB for each NRSI publication. In the first stage, reviewers will assess the ROB without any formal guidance. In the second stage, reviewers will be provided customized training and guidance. At each stage, each pair of reviewers will resolve conflicts and arrive at a consensus. To calculate the IRR and ICR, we will use Gwet's AC1 statistic. For concurrent validity, reviewers will appraise a sample of NRSI publications using both the Newcastle-Ottawa Scale (NOS) and ROBINS-I. We will analyze the concordance between the two tools for similar domains and for the overall judgments using Kendall's tau coefficient. To measure the evaluator burden, we will assess the time taken to apply ROBINS-I (without and with guidance) and the NOS. To assess the impact of customized training and guidance on the evaluator burden, we will use generalized linear models. We will use Microsoft Excel and SAS 9.4 to manage and analyze study data, respectively. 
DISCUSSION: The quality of evidence from systematic reviews that include NRS depends partly on the study-level ROB assessments. The findings of this study will contribute to an improved understanding of the ROBINS-I tool and how best to use it.


Subject(s)
Bias , Reproducibility of Results , Research Design , Cross-Sectional Studies , Humans
8.
Syst Rev ; 8(1): 335, 2019 12 23.
Article in English | MEDLINE | ID: mdl-31870434

ABSTRACT

BACKGROUND: Overviews of reviews (i.e., overviews) compile information from multiple systematic reviews to provide a single synthesis of relevant evidence for healthcare decision-making. Despite their increasing popularity, there are currently no systematically developed reporting guidelines for overviews. This is problematic because the reporting of published overviews varies considerably and is often substandard. Our objective is to use explicit, systematic, and transparent methods to develop an evidence-based and agreement-based reporting guideline for overviews of reviews of healthcare interventions (PRIOR, Preferred Reporting Items for Overviews of Reviews). METHODS: We will develop the PRIOR reporting guideline in four stages, using established methods for developing reporting guidelines in health research. First, we will establish an international and multidisciplinary expert advisory board that will oversee the conduct of the project and provide methodological support. Second, we will use the results of comprehensive literature reviews to develop a list of prospective checklist items for the reporting guideline. Third, we will use a modified Delphi exercise to achieve a high level of expert agreement on the list of items to be included in the PRIOR reporting guideline. We will identify and recruit a group of up to 100 international experts who will provide input into the guideline in three Delphi rounds: the first two rounds will occur via online survey, and the third round will occur during a smaller (8 to 10 participants) in-person meeting that will use a nominal group technique. Fourth, we will produce and publish the PRIOR reporting guideline. DISCUSSION: A systematically developed reporting guideline for overviews could help to improve the accuracy, completeness, and transparency of overviews. This, in turn, could help maximize the value and impact of overviews by allowing more efficient interpretation and use of their research findings.


Subject(s)
Biomedical Research , Checklist/standards , Guidelines as Topic/standards , Review Literature as Topic , Humans
9.
Syst Rev ; 8(1): 29, 2019 01 22.
Article in English | MEDLINE | ID: mdl-30670086

ABSTRACT

BACKGROUND: Overviews of reviews of healthcare interventions (overviews) integrate information from multiple systematic reviews (SRs) to provide a single synthesis of relevant evidence for decision-making. Overviews may identify multiple SRs that examine the same intervention for the same condition and include some, but not all, of the same primary studies. Different researchers use different approaches to manage these "overlapping SRs," but each approach has advantages and disadvantages. This study aimed to develop an evidence-based decision tool to help researchers make informed inclusion decisions when conducting overviews of healthcare interventions. METHODS: We used a two-stage process to develop the decision tool. First, we conducted a multiple case study to obtain empirical evidence upon which the tool is based. We systematically conducted seven overviews five times each, making five different decisions about which SRs to include in the overviews, for a total of 35 overviews; we then examined the impact of the five inclusion decisions on the overviews' comprehensiveness and challenges, within and across the seven overview cases. Second, we used a structured, iterative process to transform the evidence obtained from the multiple case study into an empirically based decision tool with accompanying descriptive text. RESULTS: The resulting decision tool contains four questions: (1) Do Cochrane SRs likely examine all relevant intervention comparisons and available data? (2) Do the Cochrane SRs overlap? (3) Do the non-Cochrane SRs overlap? (4) Are researchers prepared and able to avoid double-counting outcome data from overlapping SRs, by ensuring that each primary study's outcome data are extracted from overlapping SRs only once? Guidance is provided to help researchers answer each question, and empirical evidence is provided regarding the advantages, disadvantages, and potential trade-offs of the different inclusion decisions. 
CONCLUSIONS: This evidence-based decision tool is designed to provide researchers with the knowledge and means to make informed inclusion decisions in overviews. The tool can provide practical guidance and support for overview authors by helping them consider questions that could affect the comprehensiveness and complexity of their overviews. We hope this tool will be a useful resource for researchers conducting overviews, and we welcome discussion, testing, and refinement of the proposed tool.


Subject(s)
Decision Making , Decision Support Techniques , Delivery of Health Care , Research Personnel/psychology , Systematic Reviews as Topic , Humans , Research Design
10.
Syst Rev ; 8(1): 18, 2019 01 11.
Article in English | MEDLINE | ID: mdl-30635048

ABSTRACT

BACKGROUND: Overviews of reviews (overviews) compile information from multiple systematic reviews (SRs) to provide a single synthesis of relevant evidence for decision-making. Overviews may identify multiple SRs that examine the same intervention for the same condition and include some, but not all, of the same primary studies. There is currently limited guidance on whether and how to include these overlapping SRs in overviews. Our objectives were to assess how different inclusion decisions in overviews of healthcare interventions affect their comprehensiveness and results, and document challenges encountered when making different inclusion decisions in overviews. METHODS: We used five inclusion decisions to conduct overviews across seven topic areas, resulting in 35 overviews. The inclusion decisions were (1) include all Cochrane and non-Cochrane SRs, (2) include only Cochrane SRs, or consider all Cochrane and non-Cochrane SRs but include only non-overlapping SRs, and in the case of overlapping SRs, select (3) the Cochrane SR, (4) the most recent SR (by publication or search date), or (5) the highest quality SR (assessed using AMSTAR). For each topic area and inclusion scenario, we documented the amount of outcome data lost and changed and the challenges involved. RESULTS: When conducting overviews, including only Cochrane SRs, instead of all SRs, often led to loss/change of outcome data (median 31% of outcomes lost/changed; range 0-100%). Considering all Cochrane and non-Cochrane SRs but including only non-overlapping SRs and selecting the Cochrane SR for groups of overlapping SRs (instead of the most recent or highest quality SRs) allowed the most outcome data to be recaptured (median 42% of lost/changed outcome recaptured; range 28-86%). Across all inclusion scenarios, challenges were encountered when extracting data from overlapping SRs. CONCLUSIONS: Overlapping SRs present a methodological challenge for overview authors. 
This study demonstrates that different inclusion decisions affect the comprehensiveness and results of overviews in different ways, depending in part on whether Cochrane SRs examine all intervention comparisons relevant to the overview. Study results were used to develop an evidence-based decision tool that provides practical guidance for overview authors.
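One widely used way to quantify the primary-study overlap at issue here — not a method reported in this abstract, but standard in overview methodology — is the corrected covered area (CCA) of Pieper and colleagues, which scales the number of duplicate study inclusions by the maximum possible. A sketch with invented review contents:

```python
def corrected_covered_area(reviews):
    """CCA for a set of systematic reviews, each given as the set of
    primary studies it includes. CCA = (N - r) / (r * c - r), where
    N = total inclusion count, r = unique studies, c = reviews."""
    c = len(reviews)
    n_total = sum(len(included) for included in reviews)
    r = len(set().union(*reviews))
    if c < 2 or r == 0:
        raise ValueError("need at least two reviews and one primary study")
    return (n_total - r) / (r * c - r)

# Hypothetical overlapping SRs on the same intervention/condition pair.
sr1 = {"trial_a", "trial_b", "trial_c"}
sr2 = {"trial_b", "trial_c", "trial_d"}
sr3 = {"trial_c", "trial_e"}
print(f"CCA = {corrected_covered_area([sr1, sr2, sr3]):.0%}")  # -> CCA = 30%
```

By the commonly cited cut-offs, a CCA above 15% would be considered very high overlap, a situation in which the double-counting risk described above becomes acute.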


Subject(s)
Decision Making , Research Design , Systematic Reviews as Topic , Evidence-Based Medicine , Humans
11.
Syst Rev ; 6(1): 73, 2017 04 07.
Article in English | MEDLINE | ID: mdl-28388960

ABSTRACT

BACKGROUND: Overviews of systematic reviews (overviews) attempt to systematically retrieve and summarize the results of multiple systematic reviews (SRs) for a given condition or public health problem. Two prior descriptive analyses of overviews found substantial variation in the methodological approaches used in overviews, and deficiencies in reporting of key methodological steps. Since then, new methods have been developed so it is timely to update the prior descriptive analyses. The objectives are to: (1) investigate the epidemiological, descriptive, and reporting characteristics of a random sample of 100 overviews published from 2012 to 2016 and (2) compare these recently published overviews (2012-2016) to those published prior to 2012 (based on the prior descriptive analyses). METHODS: Medline, EMBASE, and CDSR will be searched for overviews published 2012-2016, using a validated search filter for overviews. Only overviews written in English will be included. All titles and abstracts will be screened by one review author; those deemed not relevant will be verified by a second person for exclusion. Full-texts will be assessed for inclusion by two reviewers independently. Of those deemed relevant, a random sample of 100 overviews will be selected for inclusion. Data extraction will be either performed by one reviewer with verification by a second reviewer or by one reviewer only depending on the complexity of the item. Discrepancies at any stage will be resolved by consensus or consulting a third person. Data will be extracted on the epidemiological, descriptive, and reporting characteristics of each overview. Data will be analyzed descriptively. When data are available for both time points (up to 2011 vs. 2012-2016), we will compare characteristics by calculating risk ratios or applying the Mann-Whitney test. DISCUSSION: Overviews are becoming increasingly valuable evidence syntheses, and the number of published overviews is increasing. 
However, former analyses found limitations in the conduct and reporting of overviews. This update of a recent sample of overviews will inform whether this has changed, while also identifying areas for further improvement. SYSTEMATIC REVIEW REGISTRATION: The review will not be registered in PROSPERO as it does not meet the eligibility criterion of dealing with health-related outcomes.
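The planned comparison of overview characteristics across time periods by "calculating risk ratios" can be illustrated with a small sketch. The counts below are hypothetical, not results from the review; the confidence interval uses the standard log-transformation method:

```python
from math import exp, log, sqrt

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs group B, with a CI via the log method."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo, hi = exp(log(rr) - z * se), exp(log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: overviews reporting some characteristic,
# 2012-2016 sample vs pre-2012 sample (100 overviews each).
rr, lo, hi = risk_ratio(65, 100, 40, 100)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A CI excluding 1 would suggest a real change in reporting between the two periods; the Mann-Whitney test mentioned above would serve the same comparative purpose for non-binary characteristics such as the number of included SRs.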


Subject(s)
Systematic Reviews as Topic , Data Interpretation, Statistical , Humans , Reproducibility of Results
12.
BMC Med Res Methodol ; 17(1): 48, 2017 03 23.
Article in English | MEDLINE | ID: mdl-28335734

ABSTRACT

BACKGROUND: Overviews of reviews (overviews) compile information from multiple systematic reviews (SRs) to provide a single synthesis of relevant evidence for decision-making. It is recommended that authors assess and report the methodological quality of SRs in overviews-for example, using A MeaSurement Tool to Assess systematic Reviews (AMSTAR). Currently, there is variation in whether and how overview authors assess and report SR quality, and limited guidance is available. Our objectives were to: examine methodological considerations involved in using AMSTAR to assess the quality of Cochrane and non-Cochrane SRs in overviews of healthcare interventions; identify challenges (and develop potential decision rules) when using AMSTAR in overviews; and examine the potential impact of considering methodological quality when making inclusion decisions in overviews. METHODS: We selected seven overviews of healthcare interventions and included all SRs meeting each overview's inclusion criteria. For each SR, two reviewers independently conducted AMSTAR assessments with consensus and discussed challenges encountered. We also examined the correlation between AMSTAR assessments and SR results/conclusions. RESULTS: Ninety-five SRs were included (30 Cochrane, 65 non-Cochrane). Mean AMSTAR assessments (9.6/11 vs. 5.5/11; p < 0.001) and inter-rater reliability (AC1 statistic: 0.84 vs. 0.69; "almost perfect" vs. "substantial" using the Landis & Koch criteria) were higher for Cochrane compared to non-Cochrane SRs. Four challenges were identified when applying AMSTAR in overviews: the scope of the SRs and overviews often differed; SRs examining similar topics sometimes made different methodological decisions; reporting of non-Cochrane SRs was sometimes poor; and some non-Cochrane SRs included other SRs as well as primary studies. Decision rules were developed to address each challenge. We found no evidence that AMSTAR assessments were correlated with SR results/conclusions. 
CONCLUSIONS: Results indicate that the AMSTAR tool can be used successfully in overviews that include Cochrane and non-Cochrane SRs, though decision rules may be useful to circumvent common challenges. Findings support existing recommendations that quality assessments of SRs in overviews be conducted independently, in duplicate, with a process for consensus. Results also suggest that using methodological quality to guide inclusion decisions (e.g., to exclude poorly conducted and reported SRs) may not introduce bias into the overview process.


Subject(s)
Decision Making , Evidence-Based Medicine , Treatment Outcome , Humans , Quality of Health Care , Reproducibility of Results
13.
Syst Rev ; 5(1): 190, 2016 11 14.
Article in English | MEDLINE | ID: mdl-27842604

ABSTRACT

BACKGROUND: Overviews of reviews (overviews) compile data from multiple systematic reviews to provide a single synthesis of relevant evidence for decision-making. Despite their increasing popularity, there is limited methodological guidance available for researchers wishing to conduct overviews. The objective of this scoping review is to identify and collate all published and unpublished documents containing guidance for conducting overviews examining the efficacy, effectiveness, and/or safety of healthcare interventions. Our aims were to provide a map of existing guidance documents; identify similarities, differences, and gaps in the guidance contained within these documents; and identify common challenges involved in conducting overviews. METHODS: We conducted an iterative and extensive search to ensure breadth and comprehensiveness of coverage. The search involved reference tracking, database and web searches (MEDLINE, EMBASE, DARE, Scopus, Cochrane Methods Studies Database, Google Scholar), handsearching of websites and conference proceedings, and contacting overview producers. Relevant guidance statements and challenges encountered were extracted, edited, grouped, abstracted, and presented using a qualitative metasummary approach. RESULTS: We identified 52 guidance documents produced by 19 research groups. Relatively consistent guidance was available for the first stages of the overview process (deciding when and why to conduct an overview, specifying the scope, and searching for and including systematic reviews). In contrast, there was limited or conflicting guidance for the latter stages of the overview process (quality assessment of systematic reviews and their primary studies, collecting and analyzing data, and assessing quality of evidence), and many of the challenges identified were also related to these stages. 
An additional, overarching challenge identified was that overviews are limited by the methods, reporting, and coverage of their included systematic reviews. CONCLUSIONS: This compilation of methodological guidance for conducting overviews of healthcare interventions will facilitate the production of future overviews and can help authors address key challenges they are likely to encounter. The results of this project have been used to identify areas where future methodological research is required to generate empirical evidence for overview methods. Additionally, these results have been used to update the chapter on overviews in the next edition of the Cochrane Handbook for Systematic Reviews of Interventions.


Subject(s)
Evidence-Based Medicine , Research Design , Research Personnel , Review Literature as Topic , Humans , Publications
14.
Proc Natl Acad Sci U S A ; 108(41): 16932-7, 2011 Oct 11.
Article in English | MEDLINE | ID: mdl-21930943

ABSTRACT

To explain the large, opposite effects of urea and glycine betaine (GB) on stability of folded proteins and protein complexes, we quantify and interpret preferential interactions of urea with 45 model compounds displaying protein functional groups and compare with a previous analysis of GB. This information is needed to use urea as a probe of coupled folding in protein processes and to tune molecular dynamics force fields. Preferential interactions between urea and model compounds relative to their interactions with water are determined by osmometry or solubility and dissected using a unique coarse-grained analysis to obtain interaction potentials quantifying the interaction of urea with each significant type of protein surface (aliphatic, aromatic hydrocarbon (C); polar and charged N and O). Microscopic local-bulk partition coefficients K(p) for the accumulation or exclusion of urea in the water of hydration of these surfaces relative to bulk water are obtained. K(p) values reveal that urea accumulates moderately at amide O and weakly at aliphatic C, whereas GB is excluded from both. These results provide both thermodynamic and molecular explanations for the opposite effects of urea and glycine betaine on protein stability, as well as deductions about strengths of amide NH--amide O and amide NH--amide N hydrogen bonds relative to hydrogen bonds to water. Interestingly, urea, like GB, is moderately accumulated at aromatic C surface. Urea m-values for protein folding and other protein processes are quantitatively interpreted and predicted using these urea interaction potentials or K(p) values.
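The closing claim — that m-values are "quantitatively interpreted and predicted using these urea interaction potentials" — corresponds to summing, over surface types, an interaction potential times the water-accessible surface area (ΔASA) of that type exposed on unfolding. The sketch below is schematic only: the coefficient values, units, and ΔASA figures are invented placeholders, not the paper's tabulated potentials:

```python
# Hypothetical per-surface-type interaction potentials for urea
# (arbitrary units per square angstrom); negative = accumulation
# that favors the unfolded state. Real values are in the paper.
alpha = {"aliphatic_C": -0.01, "aromatic_C": -0.03, "amide_O": -0.06, "amide_N": 0.0}

def predicted_m_value(delta_asa):
    """m-value ~ sum over surface types of alpha_i * dASA_i,
    where dASA_i is the area of type i newly exposed on unfolding."""
    return sum(alpha[surface] * area for surface, area in delta_asa.items())

# Invented dASA (square angstroms) for a small protein's unfolding.
d_asa = {"aliphatic_C": 2000, "aromatic_C": 400, "amide_O": 500, "amide_N": 300}
print(predicted_m_value(d_asa))
```

The structure of the sum is the point: because each surface type contributes additively, potentials measured on small model compounds can be carried over to predict the urea dependence of folding for any protein whose ΔASA composition is known.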


Subject(s)
Betaine/pharmacology , Protein Denaturation/drug effects , Protein Stability/drug effects , Urea/pharmacology , Binding Sites , Hydrogen Bonding , Models, Chemical , Molecular Dynamics Simulation , Protein Folding/drug effects , Proteins/chemistry , Proteins/drug effects