Results 1 - 20 of 105
1.
Acta Med Port ; 37(7-8): 547-555, 2024 Jul 01.
Article in Portuguese | MEDLINE | ID: mdl-38950608

ABSTRACT

In recent years, as a result of the dramatic increase in the number of systematic reviews, a new type of systematic review has emerged: the 'systematic review of systematic reviews', also known as the umbrella review, review of reviews, meta-review, or synthesis of reviews. The aim of this article is to provide recommendations on how this type of systematic review should be conducted and reported to ensure its quality and usefulness. These reviews are designed to compile evidence from multiple systematic reviews of interventions into a single accessible and usable document and represent one of the highest levels of evidence synthesis.




Subject(s)
Review Literature as Topic , Humans , Systematic Reviews as Topic/methods , Systematic Reviews as Topic/standards
2.
Zhonghua Yi Xue Za Zhi ; 104(21): 1911-1917, 2024 Jun 04.
Article in Chinese | MEDLINE | ID: mdl-38825937

ABSTRACT

The number of mixed methods systematic reviews (MMSRs) published internationally is increasing steadily, driven by the continuing development and refinement of MMSR methodological guidelines and reporting specifications, which have extended the depth and breadth of evidence synthesis and integration. However, the method is not yet widely applied in China. As mixed methods research and evidence-based medicine continue to develop in China, the number of MMSRs is expected to grow. This paper analyzes the reporting specifications for MMSRs, illustrated with cases, to help domestic researchers improve the quality of evidence integration and the standardization of MMSR reporting.


Subject(s)
Systematic Reviews as Topic , Systematic Reviews as Topic/standards , Research Design , Evidence-Based Medicine/standards , Review Literature as Topic , Humans
3.
Z Evid Fortbild Qual Gesundhwes ; 187: 95-99, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38744601

ABSTRACT

With each update of a meta-analysis in a living systematic review, treatment effects and their confidence intervals are recalculated. This often raises the question of whether multiplicity is an issue and whether a method to adjust for it is needed. Answering these questions is not straightforward. We approach the matter by considering the context of systematic reviews and pointing out existing methods for handling multiplicity in meta-analysis. We conclude that multiplicity is not a relevant issue in living systematic reviews that are planned with the aim of providing up-to-date evidence, without directly controlling decisions about future research. Multiplicity may be an issue, however, in living systematic reviews conducted under a protocol involving a "stopping decision", as can be the case in living guideline development or in reimbursement decisions. Several appropriate methods exist for handling multiplicity in meta-analysis, but they carry technical and conceptual limitations and could be improved in future methodological projects. To decide whether an adjustment for multiplicity is necessary at all, authors and users of living systematic reviews should be aware of the context of the work and ask whether the effect estimates of the living review depend on its stopping/updating decisions or influence future research.
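As an editorial illustration of the multiplicity issue described above (not taken from the article), the sketch below recomputes a simple inverse-variance fixed-effect meta-analysis at each update of a hypothetical living review and contrasts the usual unadjusted 95% confidence interval with a Bonferroni-type interval that spreads alpha over the planned number of looks. All effect estimates, variances, and the number of updates are invented.

```python
# Editorial illustration only: a living review recomputes a pooled estimate and
# CI at every update. All numbers below are invented.
import numpy as np
from scipy import stats

effects = np.array([-0.35, -0.10, -0.42, -0.05, -0.30])   # hypothetical log odds ratios
variances = np.array([0.10, 0.08, 0.12, 0.06, 0.09])      # their variances

n_looks = len(effects)                             # planned number of updates ("looks")
z = stats.norm.ppf(0.975)                          # unadjusted 95% critical value
z_adj = stats.norm.ppf(1 - 0.05 / (2 * n_looks))   # Bonferroni-type value across looks

for k in range(1, n_looks + 1):
    w = 1.0 / variances[:k]                        # inverse-variance weights
    pooled = np.sum(w * effects[:k]) / np.sum(w)   # fixed-effect pooled estimate
    se = np.sqrt(1.0 / np.sum(w))
    print(f"update {k}: pooled={pooled:.3f}, "
          f"unadjusted CI=({pooled - z * se:.3f}, {pooled + z * se:.3f}), "
          f"adjusted CI=({pooled - z_adj * se:.3f}, {pooled + z_adj * se:.3f})")
```

The adjusted intervals are wider at every look, a cost that is only warranted when the review's updates feed a formal stopping decision, as the abstract notes.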


Subject(s)
Meta-Analysis as Topic , Humans , Systematic Reviews as Topic/standards , Evidence-Based Medicine/standards , Research Design/standards
4.
Rheumatol Int ; 44(7): 1275-1281, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38683352

ABSTRACT

The increasing adoption of real-world studies in healthcare decision making and planning has heightened the need for a specific quality assessment tool for evidence synthesis. This study aimed to develop a quality assessment tool for systematic reviews (SRs) and meta-analyses (MAs) involving real-world studies (QATSM-RWS) using a formal consensus method. Based on a scoping review, the authors identified a list of items for possible inclusion in the quality assessment tool. A Delphi survey was formulated based on the identified items. A total of 89 experts with research experience in real-world data, purposively recruited, were invited to participate in the first round of the Delphi survey. The participants who responded in the first Delphi round were invited to participate (n = 15) in the phrasing of the items. A strong level of agreement on the proposed list of items was found after the first Delphi round. A rate of agreement ≥ 0.70 was used to define which items to keep in the tool. A list of 14 items emerged as suitable for QATSM-RWS. The items were structured under five domains: introduction, methods, results, discussion, and others. All participants agreed with the proposed phrasing of the items. This is the first study to develop a specific tool for appraising the quality of SRs and MAs involving real-world studies. QATSM-RWS may be used by policymakers, clinicians, and practitioners when evaluating and generating real-world evidence. The tool is now undergoing a validation process.
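As a hedged illustration of the item-selection rule reported above (agreement rate ≥ 0.70), the following sketch computes per-item agreement from hypothetical panel votes. The item names and vote patterns are invented and are not the QATSM-RWS items.

```python
# Editorial illustration only: keep Delphi items whose agreement rate is >= 0.70.
ratings = {
    # item: 1 = agree, 0 = disagree, one vote per panellist (all hypothetical)
    "states data sources": [1, 1, 1, 1, 0, 1, 1, 1, 1, 1],
    "reports search strategy": [1, 1, 1, 1, 1, 1, 0, 1, 1, 1],
    "describes confounding control": [1, 0, 0, 1, 0, 1, 0, 1, 0, 0],
}

threshold = 0.70
for item, votes in ratings.items():
    agreement = sum(votes) / len(votes)
    decision = "keep" if agreement >= threshold else "drop"
    print(f"{item}: agreement={agreement:.2f} -> {decision}")
```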


Subject(s)
Consensus , Delphi Technique , Meta-Analysis as Topic , Systematic Reviews as Topic , Humans , Systematic Reviews as Topic/methods , Systematic Reviews as Topic/standards
5.
Ugeskr Laeger ; 186(13)2024 03 25.
Article in Danish | MEDLINE | ID: mdl-38533856

ABSTRACT

A systematic review provides an overview of primary studies investigating a given research question, e.g., the effect of a certain treatment. Individual study results are sometimes synthesised in a meta-analysis. A critical reader should consider whether the systematic review is relevant and reliable, e.g., whether it follows a protocol, addresses the risk of bias, and considers potential heterogeneity. The PRISMA 2020 guideline recommends a minimum set of items that should be reported in a systematic review article, and AMSTAR 2 and ROBIS are tools for critical appraisal of systematic reviews.


Subject(s)
Systematic Reviews as Topic , Bias , Systematic Reviews as Topic/standards
6.
J Clin Epidemiol ; 170: 111331, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38552725

ABSTRACT

OBJECTIVES: To generate a bank of items describing application and interpretation errors that can arise in pairwise meta-analyses in systematic reviews of interventions. STUDY DESIGN AND SETTING: MEDLINE, Embase, and Scopus were searched to identify studies describing types of errors in meta-analyses. Descriptions of errors and supporting quotes were extracted by multiple authors. Errors were reviewed at team meetings to determine if they should be excluded, reworded, or combined with other errors, and were categorized into broad categories of errors and subcategories within. RESULTS: Fifty articles met our inclusion criteria, leading to the identification of 139 errors. We identified 25 errors covering data extraction/manipulation, 74 covering statistical analyses, and 40 covering interpretation. Many of the statistical analysis errors related to the meta-analysis model (eg, using a two-stage strategy to determine whether to select a fixed or random-effects model) and statistical heterogeneity (eg, not undertaking an assessment for statistical heterogeneity). CONCLUSION: We generated a comprehensive bank of possible errors that can arise in the application and interpretation of meta-analyses in systematic reviews of interventions. This item bank of errors provides the foundation for developing a checklist to help peer reviewers detect statistical errors.
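One of the error subcategories mentioned above concerns omitted assessments of statistical heterogeneity. As an illustrative aside (not code from the study), the snippet below computes Cochran's Q and I², the quantities such an assessment typically reports, from hypothetical study effects.

```python
# Editorial illustration only: Cochran's Q and I^2 from hypothetical study data.
import numpy as np
from scipy import stats

effects = np.array([0.20, 0.45, 0.10, 0.60, 0.30])    # hypothetical effect estimates
variances = np.array([0.05, 0.04, 0.06, 0.05, 0.03])  # their variances

w = 1.0 / variances
pooled_fixed = np.sum(w * effects) / np.sum(w)         # fixed-effect pooled estimate

q = np.sum(w * (effects - pooled_fixed) ** 2)          # Cochran's Q
df = len(effects) - 1
p_het = stats.chi2.sf(q, df)                           # heterogeneity p-value
i2 = max(0.0, (q - df) / q) * 100                      # I^2 as a percentage

print(f"Q={q:.2f} (df={df}, p={p_het:.3f}), I^2={i2:.1f}%")
```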


Subject(s)
Meta-Analysis as Topic , Humans , Systematic Reviews as Topic/methods , Systematic Reviews as Topic/standards , Data Interpretation, Statistical , Research Design/standards
7.
BMC Med Res Methodol ; 24(1): 45, 2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38389063

ABSTRACT

BACKGROUND: Scoping reviews have emerged as a valuable method for synthesizing emerging evidence, providing a comprehensive contextual overview, and influencing policy and practice developments. The objective of this study is to provide an overview of scoping reviews conducted in Chinese academic institutions over the last decade. METHOD: We conducted a comprehensive search of nine databases and six grey literature databases for scoping reviews conducted in Chinese academic institutions. The reporting quality of the included reviews was assessed using the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. We performed both quantitative and qualitative analyses, examining the conduct of the scoping reviews and exploring the breadth of research topics covered. We used Chi-squared and Wilcoxon rank-sum tests to compare methodological issues and reporting quality between English- and Chinese-language reviews. RESULTS: A total of 392 reviews published between 2013 and 2022 were included: 238 reported in English and 154 reported in Chinese. The primary purposes of these reviews were to map and summarize the evidence, with a particular focus on health and nursing topics. Overall, 98.7% of reviews explicitly used the term "scoping review", and the Arksey and O'Malley framework was the most frequently cited framework. Thirty-five English-reported scoping reviews provided a protocol. PubMed was the most common source in English-reported reviews and CNKI in Chinese-reported reviews. Reviews published in English were more likely than those in Chinese to search the grey literature (P = 0.005), consult information specialists (P < 0.001), and conduct an updated search (P = 0.012). Reviews published in English also had a significantly higher overall score than those published in Chinese (16 vs. 14; P < 0.001). The reporting rates in English-reported reviews were higher than those in Chinese-reported reviews for seven items, but lower for structured summary (P < 0.001), eligibility criteria (P < 0.001), data charting process (P = 0.009), and data items (P = 0.015). CONCLUSION: The number of scoping reviews conducted in Chinese academic institutions has increased markedly each year since 2020. While the research topics covered are diverse, the overall reporting quality of these reviews needs to be improved, and there is a need for greater standardization in the conduct of scoping reviews in Chinese academic institutions.
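As an editorial sketch of the comparisons reported above, the snippet below runs a chi-squared test on a hypothetical 2x2 table (grey-literature searching by reporting language) and a Wilcoxon rank-sum test on simulated PRISMA-ScR scores. The counts and scores are invented; only the group sizes (238 and 154) follow the abstract.

```python
# Editorial illustration only: the kinds of comparisons the abstract reports.
import numpy as np
from scipy import stats

# Chi-squared test on a hypothetical 2x2 table: grey-literature searching by language.
#                          searched  not searched
contingency = np.array([[120, 118],      # English-reported reviews
                        [ 55,  99]])     # Chinese-reported reviews
chi2, p_chi, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-squared: chi2={chi2:.2f}, df={dof}, p={p_chi:.4f}")

# Wilcoxon rank-sum test on simulated PRISMA-ScR reporting scores.
rng = np.random.default_rng(0)
scores_english = rng.integers(12, 21, size=238)   # hypothetical scores, 12-20
scores_chinese = rng.integers(10, 19, size=154)   # hypothetical scores, 10-18
stat, p_rank = stats.ranksums(scores_english, scores_chinese)
print(f"rank-sum: statistic={stat:.2f}, p={p_rank:.4f}")
```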


Subject(s)
Systematic Reviews as Topic , China , Databases, Factual , Language , Systematic Reviews as Topic/standards
8.
Fertil Steril ; 121(6): 918-920, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38309515

ABSTRACT

When evidence from randomized controlled trials about the effectiveness and safety of an intervention is unclear, researchers may choose to review the nonrandomized evidence. All systematic reviews pose considerable challenges, and the level of methodological expertise required to undertake a useful review of nonrandomized intervention studies is both high and often severely underestimated. Using the example of the endometrial receptivity array, we review some common, critical flaws in systematic reviews of this nature, including errors in critical appraisal and meta-analysis.


Subject(s)
Observational Studies as Topic , Humans , Observational Studies as Topic/methods , Observational Studies as Topic/standards , Female , Meta-Analysis as Topic , Systematic Reviews as Topic/methods , Systematic Reviews as Topic/standards , Research Design/standards , Endometrium/pathology , Evidence-Based Medicine/standards , Pregnancy
9.
Syst Rev ; 13(1): 24, 2024 01 12.
Article in English | MEDLINE | ID: mdl-38217029

ABSTRACT

BACKGROUND: This systematic review aimed to investigate the relationship between retraction status and methodological quality in retracted non-Cochrane systematic reviews. METHOD: PubMed, Web of Science, and Scopus were searched with keywords including systematic review, meta-analysis, and retraction or retracted as a publication type, up to September 2023. There were no time or language restrictions. Retracted non-Cochrane medical systematic reviews were included. Data on the retraction status of the articles were extracted from the retraction notices and Retraction Watch, and methodological quality was evaluated with the AMSTAR-2 checklist by two independent researchers. Data were analyzed in Excel 2019 and SPSS 21. RESULTS: Of the 282 systematic reviews, the corresponding authors of 208 (73.75%) articles were from China. The average interval between publication and retraction was about 23 months, and about half of the non-Cochrane systematic reviews were retracted in the last 4 years. The most common reasons for retraction were fake peer review and unreliable data, respectively. Editors and publishers were the most frequent retractors or requestors of retraction. More than 86% of the retracted non-Cochrane SRs were published in journals with an impact factor above two and had critically low quality. Items 7, 9, and 13 among the critical items of the AMSTAR-2 checklist received the lowest scores. DISCUSSION AND CONCLUSION: There was a significant relationship between the reasons for retraction and methodological quality (P-value < 0.05). Plagiarism-detection software and use of the COPE guidelines may shorten the time to retraction. In some countries, strict rules for promoting researchers increase the risk of misconduct. To avoid scientific errors and improve the quality of systematic reviews/meta-analyses (SRs/MAs), journals should establish protocol registration and retraction guidelines for SRs/MAs.


Subject(s)
Biomedical Research , Retraction of Publication as Topic , Humans , Checklist , China , Plagiarism , Systematic Reviews as Topic/methods , Systematic Reviews as Topic/standards , Meta-Analysis as Topic
10.
Musculoskelet Sci Pract ; 69: 102902, 2024 02.
Article in English | MEDLINE | ID: mdl-38211435

ABSTRACT

BACKGROUND: No previous studies have investigated the methodological and reporting quality of systematic reviews of non-pharmacological interventions for musculoskeletal pain management among children and adolescents. OBJECTIVE: To evaluate the methodological and reporting quality of systematic reviews on conservative non-pharmacological pain management in children and adolescents with musculoskeletal pain. METHODS: Searches were conducted in the Cochrane Database of Systematic Reviews, Medline, Embase, and three other databases. Two pairs of reviewers independently assessed each article against the predetermined selection criteria. We assessed methodological quality using the AMSTAR 2 checklist and reporting quality using the PRISMA checklist. Descriptive analysis was used to summarise the characteristics of all included systematic reviews. The percentage of systematic reviews achieving each item of the AMSTAR 2 and PRISMA checklists, and the overall confidence in the results, were described. RESULTS: We included 17 systematic reviews of conservative non-pharmacological pain management for musculoskeletal pain in children and adolescents. Of these, nine (53%) were rated as "critically low", seven (41%) as "low", and one (6%) as "high" methodological quality by AMSTAR 2. Reporting quality by PRISMA item ranged from 17.6% (95% CI 6.2 to 41) to 100% (95% CI 81.6 to 100). CONCLUSION: The included systematic reviews of non-pharmacological interventions in children and adolescents showed mostly "critically low" to "high" methodological quality and generally poor reporting quality.
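The Wilson score interval reproduces the per-item figure quoted above if the lowest-reported PRISMA item was met by 3 of the 17 included reviews (3/17 = 17.6%); that 3/17 split is inferred for illustration, not stated in the abstract. A minimal check under that assumption:

```python
# Editorial illustration only: Wilson 95% CI for a reporting proportion (3 of 17).
from math import sqrt
from scipy import stats

def wilson_ci(successes: int, n: int, conf: float = 0.95) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    p = successes / n
    centre = p + z * z / (2 * n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (centre - half) / denom, (centre + half) / denom

low, high = wilson_ci(3, 17)
print(f"3/17 = {3/17:.1%}, 95% CI {low:.1%} to {high:.1%}")  # about 6.2% to 41.0%
```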


Subject(s)
Musculoskeletal Pain , Systematic Reviews as Topic , Adolescent , Child , Humans , Checklist , Musculoskeletal Pain/therapy , Pain Management/methods , Research Report/standards , Systematic Reviews as Topic/methods , Systematic Reviews as Topic/standards
11.
Dev Med Child Neurol ; 66(4): 415-421, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37528533

ABSTRACT

Many sources document problems that jeopardize the trustworthiness of systematic reviews. This is a major concern given their potential to influence patient care and impact people's lives. Responsibility for producing trustworthy conclusions on the evidence in systematic reviews is borne primarily by authors who need the necessary training and resources to correctly report on the current knowledge base. Peer reviewers and editors are also accountable; they must ensure that systematic reviews are accurate by demonstrating proper methods. To support all these stakeholders, we attempt to distill the sprawling guidance that is currently available in our recent co-publication about best tools and practices for systematic reviews. We specifically address how to meet methodological conduct standards applicable to key components of systematic reviews. In this complementary invited review, we place these standards in the context of good scholarship principles for systematic review development. Our intention is to reach a broad audience and potentially improve the trustworthiness of evidence syntheses published in the developmental medicine literature and beyond.


Subject(s)
Systematic Reviews as Topic , Systematic Reviews as Topic/standards
13.
Br J Pharmacol ; 181(1): 180-210, 2024 01.
Article in English | MEDLINE | ID: mdl-37282770

ABSTRACT

Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.


Subject(s)
Systematic Reviews as Topic , Systematic Reviews as Topic/standards , Research Design
14.
PLoS One ; 18(12): e0295864, 2023.
Article in English | MEDLINE | ID: mdl-38096136

ABSTRACT

INTRODUCTION: The PRISMA guidelines were published in 2009 to address inadequate reporting of key methodological details in systematic reviews and meta-analyses (SRs/MAs). This study sought to assess the impact of PRISMA on the quality of reporting in the full text of dental medicine journals. METHODS: This study assessed the impact of PRISMA (2009) on thirteen methodological details in SRs/MAs published in the highest-impact dental medicine journals between 1993-2009 (n = 211) and 2012-2018 (n = 618). The study further examined the rate of described use of PRISMA in the abstract or full text of included studies published post- PRISMA and the impact of described use of PRISMA on level of reporting. This study also examined potential effects of inclusion of PRISMA in Instructions for Authors, along with study team characteristics. RESULTS: The number of items reported in SRs/MAs increased following the publication of PRISMA (pre-PRISMA: M = 7.83, SD = 3.267; post-PRISMA: M = 10.55, SD = 1.4). Post-PRISMA, authors rarely mention PRISMA in abstracts (8.9%) and describe the use of PRISMA in the full text in 59.87% of SRs/MAs. The described use of PRISMA within the full text indicates that its intent (guidance for reporting) is not well understood, with over a third of SRs/MAs (35.6%) describing PRISMA as guiding the conduct of the review. However, any described use of PRISMA was associated with improved reporting. Among author team characteristics examined, only author team size had a positive relationship with improved reporting. CONCLUSION: Following the 2009 publication of PRISMA, the level of reporting of key methodological details improved for systematic reviews/meta-analyses published in the highest-impact dental medicine journals. The positive relationship between reference to PRISMA in the full text and level of reporting provides further evidence of the impact of PRISMA on improving transparent reporting in dental medicine SRs/MAs.
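The abstract does not state which statistical test, if any, compared the pre- and post-PRISMA means. As an illustrative aside only, a Welch t-test can be computed directly from the reported summary statistics:

```python
# Editorial illustration only: Welch t-test from the summary statistics quoted
# in the abstract (M, SD, n for 1993-2009 vs 2012-2018); the article may have
# used a different analysis.
from scipy import stats

res = stats.ttest_ind_from_stats(mean1=7.83, std1=3.267, nobs1=211,   # 1993-2009
                                 mean2=10.55, std2=1.4, nobs2=618,    # 2012-2018
                                 equal_var=False)                     # Welch's t
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.2e}")
```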


Subject(s)
Dentistry , Periodicals as Topic , Systematic Reviews as Topic , Systematic Reviews as Topic/standards , Meta-Analysis as Topic
15.
JBJS Rev ; 11(6)2023 06 01.
Article in English | MEDLINE | ID: mdl-37285444

ABSTRACT

• Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy.
• A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work.
• Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.


Subject(s)
Systematic Reviews as Topic , Systematic Reviews as Topic/standards
17.
Medicine (Baltimore) ; 101(40): e30994, 2022 Oct 07.
Article in English | MEDLINE | ID: mdl-36221406

ABSTRACT

BACKGROUND: Foot drop is a common complication after stroke. Patients with foot drop are at high risk of falls and fall-related injuries, which can reduce their independence and quality of life. Clinical studies have confirmed that acupuncture is effective in treating foot drop in post-stroke patients. However, there is a lack of systematic reviews exploring the efficacy and safety of acupuncture treatment. This study aims to assess the efficacy and safety of acupuncture in the treatment of post-stroke foot drop based on the results of randomized controlled trials (RCTs). METHODS: We will search for articles in eight electronic databases, including the Cochrane Central Register of Controlled Trials, the Web of Science, PubMed, Embase, the China National Knowledge Infrastructure, the Chinese Biomedical Literature Database, the Wanfang Data Database, and the Chinese Scientific Journal Database, for RCTs of acupuncture for post-stroke foot drop, from their inception to 10 August 2022. We will analyze data meeting the inclusion criteria with RevMan V.5.4 software. Two authors will assess study quality with the Cochrane risk of bias tool. We will evaluate the certainty of the estimated evidence with the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) method. Data analysis will be performed using STATA 16.0. RESULTS: This study will review and evaluate the available evidence for the treatment of post-stroke foot drop with acupuncture. CONCLUSION: This study will determine the efficacy and safety of acupuncture applied to post-stroke individuals with foot drop.


Subject(s)
Acupuncture Therapy , Meta-Analysis as Topic , Peroneal Neuropathies , Research Design , Stroke , Systematic Reviews as Topic , Humans , Acupuncture Therapy/adverse effects , Acupuncture Therapy/methods , Peroneal Neuropathies/etiology , Peroneal Neuropathies/therapy , Quality of Life , Randomized Controlled Trials as Topic , Stroke/complications , Stroke/therapy , Systematic Reviews as Topic/methods , Systematic Reviews as Topic/standards , Data Analysis
18.
Syst Rev ; 11(1): 145, 2022 07 18.
Article in English | MEDLINE | ID: mdl-35851418

ABSTRACT

BACKGROUND: Many published reviews do not meet the widely accepted PRISMA standards for systematic reviews and meta-analysis. Campbell Collaboration and Cochrane reviews are expected to meet even more rigorous standards, but their adherence to these standards is uneven. For example, a newly updated Campbell systematic review of school-based anti-bullying interventions does not appear to meet many of the Campbell Collaboration's mandatory methodological standards. ISSUES: In this commentary, we document methodological problems in the Campbell Collaboration's new school-based anti-bullying interventions review, including (1) unexplained deviations from the protocol; (2) inadequate documentation of search strategies; (3) inconsistent reports on the number of included studies; (4) undocumented risk of bias ratings; (5) assessments of selective outcome reporting bias that are not transparent, not replicable, and appear to systematically underestimate risk of bias; (6) unreliable assessments of risk of publication bias; (7) use of a composite scale that conflates distinct risks of bias; and (8) failure to consider issues related to the strength of the evidence and risks of bias in interpreting results and drawing conclusions. Readers who are unaware of these problems may place more confidence in this review than is warranted. Campbell Collaboration editors declined to publish our comments and declined to issue a public statement of concern about this review. CONCLUSIONS: Systematic reviews are expected to use transparent methods and follow relevant methodological standards. Readers should be concerned when these expectations are not met, because transparency and rigor enhance the trustworthiness of results and conclusions. In the tradition of Donald T. Campbell, there is need for more public debate about the methods and conclusions of systematic reviews, and greater clarity regarding applications of (and adherence to) published standards for systematic reviews.


Subject(s)
Bullying , Schools , Systematic Reviews as Topic , Bias , Bullying/prevention & control , Humans , Systematic Reviews as Topic/standards
19.
J Clin Pharm Ther ; 47(2): 129-134, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34714560

ABSTRACT

WHAT IS KNOWN AND OBJECTIVE: Scoping reviews are a valuable evidence synthesis methodology. They can be used to map the evidence on any topic, allowing examination of practice, methods, and policy, and of where (and how) future research could be undertaken. As such, they are a useful form of evidence synthesis for pharmacy clinicians, researchers, and policymakers reviewing a broad range of evidence sources. COMMENT: This commentary presents the most comprehensive and up-to-date methodology for scoping reviews, published by the Joanna Briggs Institute (JBI). This approach builds upon two older approaches by Arksey and O'Malley and by Levac. To assist reviewers working in the field of pharmacy with planning and conducting scoping reviews, this paper describes how to undertake a scoping review from inception to publication, with specific examples related to pharmacy topics. WHAT IS NEW AND CONCLUSION: The JBI scoping review methodology is a valuable evidence synthesis approach for the field of pharmacy and therapeutics. It can assist pharmacy clinicians, researchers, and policymakers in gaining an understanding of the extant literature, identifying gaps, exploring concepts and characteristics, and examining current practice.


Subject(s)
Systematic Reviews as Topic/methods , Administrative Personnel , Algorithms , Humans , Pharmacists , Research Design , Research Personnel , Systematic Reviews as Topic/standards