Results 1 - 13 of 13
1.
BMC Med Res Methodol ; 19(1): 132, 2019 06 28.
Article in English | MEDLINE | ID: mdl-31253092

ABSTRACT

BACKGROUND: Stringent requirements exist regarding the transparency of the study selection process and the reliability of results. A 2-step selection process is generally recommended; this is conducted by 2 reviewers independently of each other (conventional double-screening). However, this approach is resource intensive, which can be a problem, as systematic reviews generally need to be completed within a defined period and with a limited budget. The aim of the following methodological systematic review was to analyse the available evidence on whether single screening is equivalent to double screening in the study selection process of systematic reviews. METHODS: We searched Medline, PubMed and the Cochrane Methodology Register (last search 10/2018). We also used supplementary search techniques and sources ("similar articles" function in PubMed, conference abstracts and reference lists). We included all evaluations comparing single with double screening. Data were summarized in a structured, narrative way. RESULTS: The 4 included evaluations investigated a total of 23 single screenings (12 screening sets involving 9 reviewers). The median proportion of missed studies was 5% (range: 0 to 58%): 3% for the 6 experienced reviewers (range: 0 to 21%) and 13% for the 3 less experienced reviewers (range: 0 to 58%). The impact of missed studies on the findings of meta-analyses was reported in 2 evaluations covering 7 single screenings with a total of 18,148 references. In 3 of these 7 single screenings - all conducted by the same, less experienced reviewer - the findings would have changed substantially. The remaining 4 screenings were conducted by experienced reviewers, and the missed studies had no impact or a negligible impact on the findings of the meta-analyses.
CONCLUSIONS: Single screening of the titles and abstracts of studies retrieved in bibliographic searches is not equivalent to double screening, as substantially more studies are missed. However, in our opinion such an approach could still represent an appropriate methodological shortcut in rapid reviews, as long as it is conducted by an experienced reviewer. Further research on single screening is required, for instance, regarding factors influencing the number of studies missed.
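The proportions reported above follow from a simple calculation against the double-screening reference standard. A minimal sketch with invented study IDs (not data from the review):

```python
def missed_proportion(reference_relevant, found_by_single_screener):
    """Share of truly relevant studies (per the double-screening reference
    standard) that a single screener failed to include."""
    missed = set(reference_relevant) - set(found_by_single_screener)
    return len(missed) / len(set(reference_relevant))

# Invented example: double screening identified 20 relevant studies;
# a single screener caught 19 of them, i.e. missed 1.
reference = {f"study_{i}" for i in range(20)}
single = [f"study_{i}" for i in range(19)]  # "study_19" was missed
print(f"{missed_proportion(reference, single):.0%}")  # 5%, the median reported above
```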


Subjects
Abstracting and Indexing/standards; Information Storage and Retrieval/standards; Information Systems/standards; Systematic Reviews as Topic; Abstracting and Indexing/methods; Abstracting and Indexing/statistics & numerical data; Humans; Information Storage and Retrieval/methods; Information Systems/statistics & numerical data; PubMed/standards; PubMed/statistics & numerical data; Publications/standards; Publications/statistics & numerical data
2.
Int J Technol Assess Health Care ; 31(1-2): 54-8, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25963645

ABSTRACT

OBJECTIVES: A rapid scoping review was performed to support the development of a new clinical technology platform. An iterative sifting approach was adopted to address the challenges posed by the nature of the review question and the extremely large volume of search results to be sifted within the timescales of the review. METHODS: This study describes the iterative sifting approach applied in the scoping review and a preliminary validation of the methods applied. RESULTS: The searches performed for the rapid scoping review retrieved 27,198 records. This was the full set of records subjected to the staged, iterative sifting approach and the subsequent validation process. The iterative sifting approach involved the screening for relevance of 17,354 (i.e., 63.8 percent) of the 27,198 records. A list of fifty-three potential biomarker names was generated as a result of this iterative sifting method, of which nineteen were selected by clinical specialists for further scrutiny. The preliminary validation involved the exhaustive sifting of the remaining 9,844 previously unsifted records. The validation process identified sixteen additional potential biomarker names not identified by the iterative sifting process. The clinical specialists subsequently concluded that none were of further clinical interest. CONCLUSIONS: This study describes an approach to the screening of search records that can be successfully applied in appropriate review and decision problems to allow the prioritization of the most relevant search records and achieve time savings. Following further refinement and standardization, this iterative sifting method may have potential for further applications in reviews and other decision problems.


Subjects
Clinical Trials as Topic; Review Literature as Topic; Search Engine/methods; Humans
3.
J Clin Epidemiol ; 173: 111466, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39019350

ABSTRACT

OBJECTIVES: The aim of this paper is to provide clinicians and authors of clinical guidelines or patient information with practical guidance on searching for and choosing systematic review(s) (SR[s]) and, where adequate, on making use of SR(s). STUDY DESIGN AND SETTING: At the German conference of the Evidence-Based Medicine Network (EbM Network), a workshop on the topic was held to identify the most important areas where guidance for practice appears necessary. After the workshop, we established working groups. These included SR users with different backgrounds (eg, information specialists, epidemiologists) and working areas. Each working group developed and agreed on a draft guidance based on their expert knowledge and experience. The results were presented to the entire group and finalized in an iterative process. RESULTS: We developed a practical guidance that answers questions that usually arise when choosing and using SR(s): (1) How to efficiently find high-quality SRs? (2) How to choose the most appropriate SR? (3) What to do if no SR of sufficient quality can be identified? In addition, we developed an algorithm that links these steps and accounts for their interaction. The resulting guidance is primarily directed at clinicians and developers of clinical practice guidelines or patient information resources. CONCLUSION: We suggest practical guidance for making the best use of SRs when answering a specific research question. The guidance may contribute to the efficient use of existing SRs. Potential benefits of using existing SRs should always be weighed against potential limitations.

4.
R Soc Open Sci ; 10(2): 210586, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36756069

ABSTRACT

Increased execution of replication studies contributes to the effort to restore the credibility of empirical research. However, a second-generation problem arises: the number of potential replication targets is seriously mismatched with the available resources. Given limited resources, replication target selection should be well-justified, systematic and transparently communicated. At present, guidance on what to consider when selecting a replication target is limited to theoretical discussion, self-reported justifications and a few formalized suggestions. In this Registered Report, we proposed a study involving the scientific community to create a list of considerations for consultation when selecting a replication target in psychology. We employed a modified Delphi approach. First, we constructed a preliminary list of considerations. Second, we surveyed psychologists who had previously selected a replication target about their considerations. Third, we incorporated the results into the preliminary list and sent the updated list to a group of individuals knowledgeable about concerns regarding replication target selection. Over the course of several rounds, we established consensus on what to consider when selecting a replication target. The resulting checklist can be used to transparently communicate the rationale for selecting studies for replication.

5.
Syst Rev ; 12(1): 161, 2023 09 14.
Article in English | MEDLINE | ID: mdl-37705060

ABSTRACT

BACKGROUND: Systematic literature screening is a key component of systematic reviews. However, this approach is resource intensive, as generally two persons independently of each other (double screening) screen a vast number of search results. To develop approaches for increasing efficiency, we tested the use of text mining to prioritize search results, as well as the involvement of only one person (single screening) in the study selection process. METHODS: Our study is based on health technology assessments (HTAs) of drug and non-drug interventions. Using a sample size calculation, we consecutively included 11 searches, resulting in 33 study selection processes. Of the three screeners for each search, two used screening tools with prioritization (Rayyan, EPPI Reviewer) and one used a tool without prioritization. For each prioritization tool, we investigated the proportion of citations classified as relevant at three cut-offs, or STOP criteria (after screening 25%, 50% and 75% of the citation set). For each STOP criterion, we measured sensitivity (the number of correctly identified relevant studies divided by the total number of relevant studies in the study pool). In addition, we determined the number of relevant studies identified per single screening round and investigated whether missed studies were relevant to the HTA conclusion. RESULTS: Overall, EPPI Reviewer performed better than Rayyan, identifying the vast majority (88%; Rayyan: 66%) of relevant citations after screening half of the citation set. As long as additional information sources were screened, a single-screening approach was sufficient to identify all studies relevant to the HTA conclusion. Although many relevant publications (n = 63) and studies (n = 29) were incorrectly excluded, ultimately only 5 studies could not be identified at all, in 2 of the 11 searches (1 study in one search, 4 studies in the other). However, their omission did not change the overall conclusion of any HTA.
CONCLUSIONS: EPPI Reviewer helped to identify relevant citations earlier in the screening process than Rayyan. Single screening would have been sufficient to identify all studies relevant to the HTA conclusion. However, this requires screening of further information sources. It also needs to be considered that the credibility of an HTA may be questioned if studies are missing, even if they are not relevant to the HTA conclusion.
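The STOP-criterion evaluation described above amounts to measuring sensitivity after screening a fixed fraction of a prioritized citation list. A hedged sketch with invented citation IDs and rankings (not the HTA data):

```python
def sensitivity_at_cutoffs(ranked_citations, relevant, cutoffs=(0.25, 0.5, 0.75)):
    """For each STOP criterion (fraction of the ranked list screened),
    compute the fraction of truly relevant studies found so far."""
    results = {}
    for cutoff in cutoffs:
        screened = ranked_citations[: int(len(ranked_citations) * cutoff)]
        found = sum(1 for citation in screened if citation in relevant)
        results[cutoff] = found / len(relevant)
    return results

# 100 citations as ranked by a hypothetical prioritization tool; 10 are
# truly relevant, and the tool has pushed 8 of them into the first half.
ranked = [f"c{i}" for i in range(100)]
relevant = {"c0", "c2", "c5", "c10", "c20", "c30", "c40", "c45", "c80", "c90"}
print(sensitivity_at_cutoffs(ranked, relevant))
```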


Subjects
Data Mining; Information Sources; Humans; Systematic Reviews as Topic; Technology Assessment, Biomedical
6.
Syst Rev ; 9(1): 184, 2020 08 16.
Article in English | MEDLINE | ID: mdl-32799923

ABSTRACT

BACKGROUND: Systematic reviews of medical devices are particularly challenging as the quality of evidence tends to be more limited than evidence on pharmaceutical products. This article describes the methods used to identify, select and critically appraise the best available evidence on selective internal radiation therapy devices for treating hepatocellular carcinoma, to inform a technology appraisal for the National Institute for Health and Care Excellence. METHODS: A comprehensive search of ten medical databases and six grey literature sources was undertaken to identify studies of three devices (TheraSphere®, SIR-Spheres® and QuiremSpheres®) for treating hepatocellular carcinoma. The large evidence base was scoped before deciding what level of evidence to include for data extraction and critical appraisal. The methodological quality of the included studies was assessed using criteria relevant to each study design. RESULTS: Electronic searches identified 4755 records; over 1000 met eligibility criteria after screening titles and abstracts. A hierarchical process was used to scope these records, prioritising comparative studies over non-comparative studies, where available. One hundred ninety-four full papers were ordered; 64 met the eligibility criteria. For each intervention, studies were prioritised by study design and applicability to current UK practice, resulting in 20 studies subjected to critical appraisal and data extraction. Only two trials had a low overall risk of bias. In view of the poor quality of the research evidence, our technology appraisal focused on the two higher quality trials, including a thorough critique of their reliability and generalisability to current UK practice. The 18 poorer quality studies were briefly summarised; many were very small and results were often contradictory. No definitive conclusions could be drawn from the poorer quality research evidence available. 
CONCLUSIONS: A systematic, pragmatic process was used to select and critically appraise the vast quantity of research evidence available in order to present the most reliable evidence on which to develop recommendations. SYSTEMATIC REVIEW REGISTRATION: PROSPERO CRD42019128383.
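The hierarchical scoping step described above (comparative designs prioritized over non-comparative ones) can be illustrated with a simple design ranking; the labels and ranks below are invented placeholders, not the review's actual categories:

```python
# Rank retrieved studies so that stronger designs are screened first.
DESIGN_RANK = {"randomized_trial": 0, "comparative_observational": 1, "non_comparative": 2}

studies = [
    ("study_a", "non_comparative"),
    ("study_b", "randomized_trial"),
    ("study_c", "comparative_observational"),
]
prioritized = sorted(studies, key=lambda s: DESIGN_RANK[s[1]])
print([name for name, _ in prioritized])  # ['study_b', 'study_c', 'study_a']
```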


Subjects
Carcinoma, Hepatocellular; Liver Neoplasms; Carcinoma, Hepatocellular/radiotherapy; Humans; Liver Neoplasms/radiotherapy; Reproducibility of Results; Systematic Reviews as Topic; Technology
7.
Syst Rev ; 9(1): 293, 2020 Dec 13.
Article in English | MEDLINE | ID: mdl-33308292

ABSTRACT

BACKGROUND: Despite existing research on text mining and machine learning (ML) for title and abstract screening, the role of ML within systematic literature reviews (SLRs) for health technology assessment (HTA) remains unclear, given the lack of extensive testing and of guidance from HTA agencies. We sought to address two knowledge gaps: to extend ML algorithms to provide a reason for exclusion (to align with current practices) and to determine optimal parameter settings for feature-set generation and ML algorithms. METHODS: We used abstract and full-text selection data from five large SLRs (n = 3089 to 12,769 abstracts) across a variety of disease areas. Each SLR was split into training and test sets. We developed a multi-step algorithm to categorize each citation into one of the following categories: included; excluded for a specific PICOS criterion; or unclassified. We used a bag-of-words approach for feature-set generation and compared classification with support vector machines (SVMs), naïve Bayes (NB), and bagged classification and regression trees (CART). We also compared alternative training-set strategies: using the full data versus downsampling (i.e., reducing the number of excludes to balance includes and excludes, because ML algorithms perform better with balanced data), and using inclusion/exclusion decisions from abstract versus full-text screening. Performance was compared in terms of specificity, sensitivity, accuracy, and matching the reason for exclusion. RESULTS: The best-fitting model (optimized for sensitivity and specificity) was based on the SVM algorithm, using training data based on full-text decisions, downsampling, and excluding words occurring fewer than five times. The sensitivity and specificity of this model ranged from 94 to 100% and from 54 to 89%, respectively, across the five SLRs.
On average, 75% of excluded citations were excluded with a reason and 83% of these citations matched the reviewers' original reason for exclusion. Sensitivity significantly improved when both downsampling and abstract decisions were used. CONCLUSIONS: ML algorithms can improve the efficiency of the SLR process and the proposed algorithms could reduce the workload of a second reviewer by identifying exclusions with a relevant PICOS reason, thus aligning with HTA guidance. Downsampling can be used to improve study selection, and improvements using full-text exclusions have implications for a learn-as-you-go approach.
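The downsampling step described above (dropping excess excludes so the training set is balanced) is straightforward to sketch; the data and helper below are invented for illustration, not part of the study's pipeline:

```python
import random

def downsample(citations, labels, seed=0):
    """Balance includes/excludes by randomly dropping excess excludes,
    since excluded citations vastly outnumber included ones in screening data."""
    includes = [c for c, label in zip(citations, labels) if label == "include"]
    excludes = [c for c, label in zip(citations, labels) if label == "exclude"]
    rng = random.Random(seed)
    kept_excludes = rng.sample(excludes, k=len(includes))
    balanced = [(c, "include") for c in includes] + [(c, "exclude") for c in kept_excludes]
    rng.shuffle(balanced)
    return balanced

# Invented toy data: 3 includes and 9 excludes -> 3 of each after downsampling.
citations = [f"abstract_{i}" for i in range(12)]
labels = ["include"] * 3 + ["exclude"] * 9
balanced = downsample(citations, labels)
print(len(balanced))  # 6
```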


Subjects
Data Mining; Machine Learning; Algorithms; Bayes Theorem; Humans; Support Vector Machine; Systematic Reviews as Topic
8.
Res Synth Methods ; 10(4): 539-545, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31272125

ABSTRACT

BACKGROUND: Although dual independent review of search results by two reviewers is generally recommended for systematic reviews, there are no consistent recommendations regarding the timing of the second reviewer's involvement. This study compared a complete dual review approach, with two reviewers in both the title/abstract screening stage and the full-text screening stage, with a limited dual review approach, with two reviewers only in the full-text stage. METHODS: This study was performed within the context of a large systematic review. Two reviewers performed a complete dual review of 15 000 search results and a limited dual review of 15 000 search results. The number of relevant studies mistakenly excluded by highly experienced reviewers in the complete dual review was compared with the number mistakenly excluded during the full-text stage of the limited dual review. RESULTS: In the complete dual review approach, using two reviewers identified an additional 6.6% to 9.1% of eligible studies during the title/abstract stage and an additional 6.6% to 11.9% during the full-text stage. In the limited dual review approach, using two reviewers identified an additional 4.4% to 5.3% of eligible studies. CONCLUSIONS: Using a second reviewer throughout the entire study screening process can increase the number of relevant studies identified for a systematic review. Those performing systematic reviews should consider a complete dual review process to ensure that all relevant studies are included.
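The incremental-yield percentages in this abstract reduce to simple arithmetic; the counts below are illustrative, not the study's actual numbers:

```python
def second_reviewer_gain(found_by_first, found_by_pair):
    """Share of all eligible studies credited to adding a second reviewer."""
    return (found_by_pair - found_by_first) / found_by_pair

# If one reviewer alone identified 91 eligible studies and the pair together
# identified 100, the second reviewer contributed 9% of the total (within
# the 6.6-11.9% ranges reported above for the complete dual review).
print(f"{second_reviewer_gain(91, 100):.1%}")  # 9.0%
```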


Subjects
Observer Variation; Research Design; Systematic Reviews as Topic; Algorithms; Databases, Bibliographic; Humans; Information Storage and Retrieval/methods; Periodicals as Topic; Randomized Controlled Trials as Topic; Reproducibility of Results
9.
J Clin Epidemiol ; 106: 121-135, 2019 02.
Article in English | MEDLINE | ID: mdl-30312656

ABSTRACT

OBJECTIVES: The aim of the article was to identify and summarize studies assessing methodologies for study selection, data abstraction, or quality appraisal in systematic reviews. STUDY DESIGN AND SETTING: A systematic review was conducted, searching MEDLINE, EMBASE, and the Cochrane Library from inception to September 1, 2016. Quality appraisal of included studies was undertaken using a modified Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool, and key results on the accuracy, reliability, and efficiency of a methodology, or its impact on results and conclusions, were extracted. RESULTS: After screening 5,600 titles and abstracts and 245 full-text articles, 37 studies were included. For screening, studies supported the involvement of two independent experienced reviewers and the use of Google Translate when screening non-English articles. For data abstraction, studies supported the involvement of experienced reviewers (especially for continuous outcomes) and two independent reviewers, the use of dual monitors, graphical data extraction software, and contacting authors. For quality appraisal, studies supported intensive training, piloting quality assessment tools, providing decision rules for poorly reported studies, contacting authors, and using structured tools if different study designs are included. CONCLUSION: Few studies exist documenting common systematic review practices. The included studies support several systematic review practices. These results provide an updated evidence base for current knowledge synthesis guidelines and identify methods requiring further research.


Subjects
Abstracting and Indexing; Systematic Reviews as Topic; Humans; Abstracting and Indexing/standards; Cross-Sectional Studies; Randomized Controlled Trials as Topic
10.
Syst Rev ; 7(1): 64, 2018 04 25.
Article in English | MEDLINE | ID: mdl-29695296

ABSTRACT

BACKGROUND: Screening candidate studies for inclusion in a systematic review is time-consuming when conducted manually. Automation tools could reduce the human effort devoted to screening. Existing methods use supervised machine learning: classifiers are trained to identify relevant words in the abstracts of candidate articles that have previously been labelled by a human reviewer for inclusion or exclusion. Such classifiers typically reduce the number of abstracts requiring manual screening by about 50%. METHODS: We extracted four key characteristics of observational studies (population, exposure, confounders and outcomes) from the text of titles and abstracts for all articles retrieved using search strategies from systematic reviews. Our screening method excluded studies if they did not meet a predefined set of characteristics. The method was evaluated using three systematic reviews; screening results were compared with the actual inclusion lists of the reviews. RESULTS: The best screening threshold rule identified studies that mentioned both the exposure (E) and the outcome (O) in the study abstract. This rule excluded 93.7% of retrieved studies with a recall of 98%. CONCLUSIONS: Filtering studies for inclusion in a systematic review based on the detection of key study characteristics in abstracts significantly outperformed standard approaches to automated screening and appears worthy of further development and evaluation.
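A toy version of the best-performing threshold rule (require both an exposure and an outcome mention in the abstract). The term lists here are invented placeholders; the actual study extracted these characteristics with more elaborate text processing:

```python
# Hypothetical keyword lists standing in for detected exposure/outcome mentions.
EXPOSURE_TERMS = {"smoking", "alcohol", "exposure"}
OUTCOME_TERMS = {"mortality", "cancer", "incidence"}

def passes_eo_rule(abstract: str) -> bool:
    """Keep a study only if its abstract mentions both an exposure (E)
    and an outcome (O) term."""
    words = set(abstract.lower().split())
    return bool(words & EXPOSURE_TERMS) and bool(words & OUTCOME_TERMS)

print(passes_eo_rule("Smoking and lung cancer incidence in a cohort"))  # True
print(passes_eo_rule("A survey of screening practices"))                # False
```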


Subjects
Automation; Biomedical Research; Machine Learning; Systematic Reviews as Topic; Humans; Automation/methods
11.
J Clin Epidemiol ; 98: 53-61, 2018 06.
Article in English | MEDLINE | ID: mdl-29476922

ABSTRACT

OBJECTIVES: To evaluate whether the reporting of search strategies and the primary study selection process in dental systematic reviews is reproducible. STUDY DESIGN AND SETTING: A survey of systematic reviews published in MEDLINE-indexed dental journals from June 2015 to June 2016 was conducted. Study selection was performed independently by two authors, and the reproducibility of the selection process was assessed using a tool consisting of 12 criteria. Regression analyses were implemented to evaluate any associations between degrees of reporting (measured by the number of items positively answered) and journal impact factor (IF), presence of meta-analysis, and number of citations of the systematic review in Google Scholar. RESULTS: Five hundred and thirty systematic reviews were identified. Following our 12 criteria, none of the systematic reviews had complete reporting of the search strategies and selection process. Eight (1.5%) systematic reviews reported the list of excluded articles (with reasons for exclusion) after title and abstract assessment. Systematic reviews with more positive answers to the criteria were significantly associated with higher journal IF, number of citations, and inclusion of meta-analysis. CONCLUSION: Search strategies and primary study selection process in systematic reviews published in MEDLINE-indexed dental journals may not be fully reproducible.


Subjects
Dentistry/statistics & numerical data; MEDLINE/statistics & numerical data; Periodicals as Topic/statistics & numerical data; Systematic Reviews as Topic; Abstracting and Indexing; Information Storage and Retrieval/methods; Information Storage and Retrieval/standards; Information Storage and Retrieval/statistics & numerical data; Journal Impact Factor; MEDLINE/standards; Meta-Analysis as Topic; Regression Analysis; Reproducibility of Results
12.
Res Synth Methods ; 8(3): 366-386, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28677322

ABSTRACT

Systematic reviews are increasingly used to inform health care decisions, but are expensive to produce. We explore the use of crowdsourcing (distributing tasks to untrained workers via the web) to reduce the cost of screening citations. We used Amazon Mechanical Turk as our platform and 4 previously conducted systematic reviews as examples. For each citation, workers answered 4 or 5 questions that were equivalent to the eligibility criteria. We aggregated responses from multiple workers into an overall decision to include or exclude the citation using 1 of 9 algorithms and compared the performance of these algorithms to the corresponding decisions of trained experts. The most inclusive algorithm (designating a citation as relevant if any worker did) identified 95% to 99% of the citations that were ultimately included in the reviews while excluding 68% to 82% of irrelevant citations. Other algorithms increased the fraction of irrelevant articles excluded at some cost to the inclusion of relevant studies. Crowdworkers completed screening in 4 to 17 days, costing $460 to $2220, a cost reduction of up to 88% compared to trained experts. Crowdsourcing may represent a useful approach to reducing the cost of identifying literature for systematic reviews.
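Two of the nine aggregation algorithms compared in this study are easy to sketch: the most inclusive rule (include a citation if any worker voted include) and a simple majority rule. The worker votes below are invented:

```python
def aggregate(votes, rule="any"):
    """Combine per-citation worker votes (True = include) into one decision."""
    if rule == "any":
        return any(votes)          # most inclusive: one include vote suffices
    if rule == "majority":
        return sum(votes) > len(votes) / 2
    raise ValueError(f"unknown rule: {rule}")

votes = [True, False, False]  # one of three workers would include the citation
print(aggregate(votes, "any"))       # True  -> most inclusive rule keeps it
print(aggregate(votes, "majority"))  # False -> majority rule excludes it
```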


Subjects
Crowdsourcing; Review Literature as Topic; Algorithms; Humans
13.
Res Synth Methods ; 5(1): 31-49, 2014 Mar.
Article in English | MEDLINE | ID: mdl-26054024

ABSTRACT

In scoping reviews, boundaries of relevant evidence may be initially fuzzy, with refined conceptual understanding of interventions and their proposed mechanisms of action an intended output of the scoping process rather than its starting point. Electronic searches are therefore sensitive, often retrieving very large record sets that are impractical to screen in their entirety. This paper describes methods for applying and evaluating the use of text mining (TM) technologies to reduce impractical screening workload in reviews, using examples of two extremely large-scale scoping reviews of public health evidence (choice architecture (CA) and economic environment (EE)). Electronic searches retrieved >800,000 (CA) and >1 million (EE) records. TM technologies were used to prioritise records for manual screening. TM performance was measured prospectively. TM reduced manual screening workload by 90% (CA) and 88% (EE) compared with conventional screening (absolute reductions of ≈430 000 (CA) and ≈378 000 (EE) records). This study expands an emerging corpus of empirical evidence for the use of TM to expedite study selection in reviews. By reducing screening workload to manageable levels, TM made it possible to assemble and configure large, complex evidence bases that crossed research discipline boundaries. These methods are transferable to other scoping and systematic reviews incorporating conceptual development or explanatory dimensions.
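The reported workload reductions are a simple ratio of records screened manually with and without TM prioritization; the figures below are illustrative round numbers, not the reviews' exact counts:

```python
def workload_reduction(conventional_workload, tm_workload):
    """Proportional reduction in records requiring manual screening."""
    return 1 - tm_workload / conventional_workload

# E.g. if conventional screening would require 480,000 records and TM
# prioritization allows stopping after 48,000, the reduction is 90%
# (comparable to the 90% and 88% reductions reported above).
print(f"{workload_reduction(480_000, 48_000):.0%}")  # 90%
```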


Subjects
Data Mining/methods; Natural Language Processing; Periodicals as Topic; Review Literature as Topic; Vocabulary, Controlled; Workload; Machine Learning; Pattern Recognition, Automated/methods