Results 1 - 3 of 3
1.
BMJ Open ; 14(7): e084124, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969371

ABSTRACT

BACKGROUND: Systematic reviews (SRs) are being published at an accelerated rate. Decision-makers may struggle to compare and choose between multiple SRs on the same topic. We aimed to understand how healthcare decision-makers (eg, practitioners, policymakers, researchers) use SRs to inform decision-making and to explore the potential role of a proposed artificial intelligence (AI) tool to assist in critical appraisal and choosing among SRs.

METHODS: We developed a survey with 21 open and closed questions. We followed a knowledge translation plan to disseminate the survey through social media and professional networks.

RESULTS: Our survey response rate was lower than expected (7.9% of distributed emails). Of the 684 respondents, 58.2% identified as researchers, 37.1% as practitioners, 19.2% as students and 13.5% as policymakers. Respondents frequently sought out SRs (97.1%) as a source of evidence to inform decision-making. They frequently (97.9%) found more than one SR on a given topic of interest to them. Just over half (50.8%) struggled to choose the most trustworthy SR among multiple. These difficulties related to lack of time (55.2%), or to difficulties comparing SRs due to their varying methodological quality (54.2%), differences in results and conclusions (49.7%) or variation in the included studies (44.6%). Respondents compared SRs based on relevance to their question of interest, methodological quality, and recency of the SR search. Most respondents (87.0%) were interested in an AI tool to help appraise and compare SRs.

CONCLUSIONS: Given the identified barriers to using SR evidence, an AI tool that facilitates comparison of SRs' relevance, search recency and methodological quality could help users efficiently choose among SRs and make healthcare decisions.


Subject(s)
Artificial Intelligence , Decision Making , Systematic Reviews as Topic , Humans , Systematic Reviews as Topic/methods , Surveys and Questionnaires , Decision Support Techniques , Delivery of Health Care
2.
BMC Med Res Methodol ; 22(1): 276, 2022 10 26.
Article in English | MEDLINE | ID: mdl-36289496

ABSTRACT

INTRODUCTION: The exponential growth of published systematic reviews (SRs) presents challenges for decision makers seeking to answer clinical, public health or policy questions. In 1997, Jadad et al. created an algorithm for choosing the best SR among multiple. Our study aims to replicate author assessments using the Jadad algorithm to determine: (i) whether we chose the same SR as the authors; and (ii) whether we reached the same results.

METHODS: We searched MEDLINE, Epistemonikos, and the Cochrane Database of Systematic Reviews. We included any study using the Jadad algorithm. We used consensus-building strategies to operationalise the algorithm and to ensure a consistent approach to interpretation.

RESULTS: We identified 21 studies that used the Jadad algorithm to choose one or more SRs. In 62% (13/21) of cases, we were unable to replicate the Jadad assessment and ultimately chose a different SR than the authors. Overall, 18 of the 21 (86%) independent Jadad assessments agreed in the direction of the findings, even though 13 had chosen a different SR.

CONCLUSIONS: Our results suggest that the Jadad algorithm is not reproducible between users, as there are no prescriptive instructions for operationalising it. In the absence of a validated algorithm, we recommend that healthcare providers, policy makers, patients and researchers address conflicts between review findings by choosing the SR(s) with meta-analysis of RCTs that most closely resemble their clinical, public health or policy question, are the most recent, are the most comprehensive (i.e. largest number of included RCTs), and are at the lowest risk of bias.
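The selection heuristic recommended in the conclusions above (after filtering for relevance to the question: most recent search, most included RCTs, lowest risk of bias) amounts to a simple multi-key sort. A minimal sketch, assuming hypothetical record fields not defined in the paper:

```python
from dataclasses import dataclass

@dataclass
class SystematicReview:
    """Hypothetical record for one candidate SR; all field names are illustrative."""
    title: str
    search_year: int      # year of the most recent literature search
    n_included_rcts: int  # comprehensiveness proxy: number of included RCTs
    risk_of_bias: int     # 0 = low, 1 = some concerns, 2 = high

def rank_reviews(candidates: list[SystematicReview]) -> list[SystematicReview]:
    """Order candidate SRs by the stated criteria: most recent search first,
    then most included RCTs, then lowest risk of bias."""
    return sorted(
        candidates,
        key=lambda sr: (-sr.search_year, -sr.n_included_rcts, sr.risk_of_bias),
    )
```

Note that the first criterion in the paper, resemblance to the clinical or policy question, is a judgement call and is deliberately left outside this sketch.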


Subject(s)
Algorithms , Research Personnel , Humans , Bias
3.
BMJ Open ; 12(4): e054223, 2022 04 20.
Article in English | MEDLINE | ID: mdl-35443948

ABSTRACT

INTRODUCTION: The increasing growth of systematic reviews (SRs) presents notable challenges for decision-makers seeking to answer clinical questions. In 1997, Jadad created an algorithm to assess discordance in results across SRs on the same question. Our study aims to (1) replicate assessments done in a sample of studies using the Jadad algorithm to determine whether the same SR would have been chosen, (2) evaluate the Jadad algorithm in terms of utility, efficiency and comprehensiveness, and (3) describe how authors address discordance in results across multiple SRs.

METHODS AND ANALYSIS: We will use a database of 1218 overviews (2000-2020), created from a bibliometric study, as the basis of our search for studies assessing discordance (called discordant reviews). This bibliometric study searched MEDLINE (Ovid), Epistemonikos and the Cochrane Database of Systematic Reviews for overviews. We will include any study using the Jadad (1997) algorithm or another method to assess discordance. The first 30 studies screened at the full-text stage by two independent reviewers will be included. We will replicate the authors' Jadad assessments. We will compare our outcomes qualitatively and evaluate the differences between our Jadad assessment of discordance and the authors' assessment.

ETHICS AND DISSEMINATION: No ethics approval was required, as no human subjects were involved. In addition to publishing in an open-access journal, we will disseminate evidence summaries through formal and informal conferences, academic websites, and social media platforms. This is the first study to comprehensively evaluate and replicate Jadad algorithm assessments of discordance across multiple SRs.


Subject(s)
Publishing , Research Design , Algorithms , Bibliometrics , Humans , Systematic Reviews as Topic