Results 1 - 7 of 7
1.
Health Res Policy Syst; 21(1): 45, 2023 Jun 06.
Article in English | MEDLINE | ID: mdl-37280697

ABSTRACT

BACKGROUND: Demand for rapid evidence-based syntheses to inform health policy and systems decision-making has increased worldwide, including in low- and middle-income countries (LMICs). To promote use of rapid syntheses in LMICs, the WHO's Alliance for Health Policy and Systems Research (AHPSR) created the Embedding Rapid Reviews in Health Systems Decision-Making (ERA) Initiative. Following a call for proposals, four LMICs were selected (Georgia, India, Malaysia and Zimbabwe) and supported for 1 year to embed rapid response platforms within a public institution with a health policy or systems decision-making mandate. METHODS: While the selected platforms had experience in health policy and systems research and evidence syntheses, platforms were less confident conducting rapid evidence syntheses. A technical assistance centre (TAC) was created from the outset to develop and lead a capacity-strengthening program for rapid syntheses, tailored to the platforms based on their original proposals and needs as assessed in a baseline questionnaire. The program included training in rapid synthesis methods, as well as generating synthesis demand, engaging knowledge users and ensuring knowledge uptake. Modalities included live training webinars, in-country workshops and support through phone, email and an online platform. LMICs provided regular updates on policy-makers' requests and the rapid products provided, as well as barriers, facilitators and impacts. Post-initiative, platforms were surveyed. RESULTS: Platforms provided rapid syntheses across a range of AHPSR themes, and successfully engaged national- and state-level policy-makers. Examples of substantial policy impact were observed, including for COVID-19. Although the post-initiative survey response rate was low, three-quarters of those responding felt confident in their ability to conduct a rapid evidence synthesis. Lessons learned coalesced around three themes: the importance of context-specific expertise in conducting reviews, facilitating cross-platform learning, and planning for platform sustainability. CONCLUSIONS: The ERA initiative successfully established rapid response platforms in four LMICs. The short timeframe limited the number of rapid products produced, but there were examples of substantial impact and growing demand. We emphasize that LMICs can and should be involved not only in identifying and articulating needs but as co-designers of their own capacity-strengthening programs. More time is required to assess whether these platforms will be sustained over the long term.


Subject(s)
COVID-19, Developing Countries, Humans, Health Policy, Policy Making, Surveys and Questionnaires
2.
J Clin Epidemiol; 136: 157-167, 2021 08.
Article in English | MEDLINE | ID: mdl-33979663

ABSTRACT

OBJECTIVES: To evaluate the impact of guidance and training on the inter-rater reliability (IRR), inter-consensus reliability (ICR) and evaluator burden of the Risk of Bias (RoB) in Non-randomized Studies (NRS) of Interventions (ROBINS-I) tool, and the RoB instrument for NRS of Exposures (ROB-NRSE). STUDY DESIGN AND SETTING: In a before-and-after study, seven reviewers appraised the RoB using ROBINS-I (n = 44) and ROB-NRSE (n = 44), before and after guidance and training. We used Gwet's AC1 statistic to calculate IRR and ICR. RESULTS: After guidance and training, the IRR and ICR of the overall bias domain of ROBINS-I and ROB-NRSE improved significantly, with many individual domains showing either a significant (IRR and ICR of ROB-NRSE; ICR of ROBINS-I) or a nonsignificant improvement (IRR of ROBINS-I). Evaluator burden significantly decreased after guidance and training for ROBINS-I, whereas for ROB-NRSE there was a slight nonsignificant increase. CONCLUSION: Overall, guidance and training benefited both tools. We highly recommend providing guidance and training to reviewers prior to RoB assessments, and that future research investigate which aspects of guidance and training are most effective.
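The reliability statistic named in this abstract, Gwet's AC1, corrects observed agreement for chance agreement estimated from the raters' marginal category proportions. Below is a minimal sketch of the two-rater calculation, assuming hypothetical overall risk-of-bias judgments; the function name and data are illustrative and are not drawn from the study.

```python
# Minimal sketch of Gwet's AC1 agreement coefficient for two raters rating
# the same items on a categorical scale (e.g., overall risk-of-bias
# judgments). Illustrative only; the study's own analyses were not run
# with this snippet.
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Compute Gwet's AC1 for two raters over the same set of items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    q = len(categories)

    # Observed agreement: proportion of items classified identically.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Chance agreement: based on the average marginal proportion per category.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    pe = 0.0
    for cat in categories:
        pi = (counts_a[cat] / n + counts_b[cat] / n) / 2
        pe += pi * (1 - pi)
    pe /= (q - 1)

    return (pa - pe) / (1 - pe)

# Example: overall RoB judgments from two reviewers on ten studies (invented).
rater1 = ["low", "serious", "moderate", "serious", "low",
          "critical", "moderate", "low", "serious", "moderate"]
rater2 = ["low", "serious", "serious", "serious", "low",
          "critical", "moderate", "low", "moderate", "moderate"]
print(f"Gwet's AC1 = {gwet_ac1(rater1, rater2):.2f}")
```

For the before-and-after comparison reported here, the same calculation would simply be repeated on the pre- and post-training ratings.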


Subject(s)
Biomedical Research/standards, Epidemiologic Research Design, Observer Variation, Peer Review/standards, Research Design/standards, Research Personnel/education, Adult, Biomedical Research/statistics & numerical data, Canada, Cross-Sectional Studies, Female, Humans, Male, Middle Aged, Psychometrics/methods, Reproducibility of Results, Research Design/statistics & numerical data, United Kingdom
3.
J Clin Epidemiol; 128: 140-147, 2020 12.
Article in English | MEDLINE | ID: mdl-32987166

ABSTRACT

OBJECTIVE: To assess the real-world interrater reliability (IRR), interconsensus reliability (ICR), and evaluator burden of the Risk of Bias (RoB) in Nonrandomized Studies (NRS) of Interventions (ROBINS-I), and the ROB Instrument for NRS of Exposures (ROB-NRSE) tools. STUDY DESIGN AND SETTING: A six-center cross-sectional study with seven reviewers (two reviewer pairs) assessing the RoB using ROBINS-I (n = 44 NRS) or ROB-NRSE (n = 44 NRS). We used Gwet's AC1 statistic to calculate the IRR and ICR. To measure the evaluator burden, we assessed the total time taken to apply the tool and reach a consensus. RESULTS: For ROBINS-I, both IRR and ICR for individual domains ranged from poor to substantial agreement. IRR and ICR on overall RoB were poor. The evaluator burden was 48.45 min (95% CI 45.61 to 51.29). For ROB-NRSE, the IRR and ICR for the majority of domains were poor, while the rest ranged from fair to perfect agreement. IRR and ICR on overall RoB were slight and poor, respectively. The evaluator burden was 36.98 min (95% CI 34.80 to 39.16). CONCLUSIONS: We found both tools to have low reliability, although agreement was slightly higher for ROBINS-I. Measures to increase agreement between raters (e.g., detailed training, supportive guidance material) may improve reliability and decrease evaluator burden.
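The evaluator burden reported above is a mean assessment time with a 95% confidence interval. Assuming per-study timing data of the kind the study collected (the minutes below are invented for illustration), a t-based interval can be computed as in this short sketch.

```python
# Sketch: 95% confidence interval for mean assessment time ("evaluator
# burden"), in the style of the figures reported above (e.g., 48.45 min,
# 95% CI 45.61 to 51.29). The per-study minutes are invented.
import numpy as np
from scipy import stats

minutes = np.array([42.0, 55.5, 47.2, 50.1, 39.8, 61.3, 44.6, 48.9, 52.4, 46.0])

mean = minutes.mean()
sem = stats.sem(minutes)  # standard error of the mean
low, high = stats.t.interval(0.95, df=len(minutes) - 1, loc=mean, scale=sem)

print(f"Mean evaluator burden: {mean:.2f} min (95% CI {low:.2f} to {high:.2f})")
```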


Subject(s)
Consensus, Epidemiologic Research Design, Research Personnel/statistics & numerical data, Bias, Cross-Sectional Studies, Humans, Observer Variation, Reproducibility of Results, Risk Assessment
4.
Syst Rev; 9(1): 32, 2020 02 12.
Article in English | MEDLINE | ID: mdl-32051035

ABSTRACT

BACKGROUND: A new tool, the "risk of bias (ROB) instrument for non-randomized studies of exposures (ROB-NRSE)," was recently developed. It is important to establish consistency in its application and interpretation across review teams. In addition, it is important to understand whether specialized training and guidance will improve the reliability of the results of the assessments. Therefore, the objective of this cross-sectional study is to establish the inter-rater reliability (IRR), inter-consensus reliability (ICR), and concurrent validity of the new ROB-NRSE tool. Furthermore, as this is a relatively new tool, it is important to understand the barriers to using it (e.g., the time to conduct assessments and reach consensus, i.e., evaluator burden). METHODS: Reviewers from four participating centers will appraise the ROB of a sample of NRSE publications using the ROB-NRSE tool in two stages. For IRR and ICR, two pairs of reviewers will assess the ROB for each NRSE publication. In the first stage, reviewers will assess the ROB without any formal guidance. In the second stage, reviewers will be provided customized training and guidance. At each stage, each pair of reviewers will resolve conflicts and arrive at a consensus. To calculate the IRR and ICR, we will use Gwet's AC1 statistic. For concurrent validity, reviewers will appraise a sample of NRSE publications using both the Newcastle-Ottawa Scale (NOS) and the ROB-NRSE tool. We will analyze the concordance between the two tools for similar domains and for the overall judgments using Kendall's tau coefficient. To measure evaluator burden, we will assess the time taken to apply the ROB-NRSE tool (without and with guidance) and the NOS. To assess the impact of customized training and guidance on the evaluator burden, we will use generalized linear models. We will use Microsoft Excel and SAS 9.4 to manage and analyze study data, respectively. DISCUSSION: The quality of evidence from systematic reviews that include NRSE depends partly on the study-level ROB assessments. The findings of this study will contribute to an improved understanding of ROB-NRSE and how best to use it.
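For the concurrent-validity analysis, the protocol names Kendall's tau as the concordance measure between NOS and ROB-NRSE judgments. The sketch below shows one way this could be computed once each tool's ordinal categories are coded numerically; the coding scheme and scores are assumptions for illustration and are not part of the protocol.

```python
# Sketch of a Kendall's tau concordance analysis between overall judgments
# from the Newcastle-Ottawa Scale (NOS) and ROB-NRSE, after coding each
# tool's ordinal categories as integers. Data and coding are illustrative
# assumptions, not taken from the protocol.
from scipy.stats import kendalltau

# Hypothetical overall judgments for ten NRSE publications,
# coded so that higher numbers mean greater risk of bias / lower quality.
nos_scores      = [1, 2, 2, 3, 1, 3, 2, 1, 3, 2]   # e.g., good=1, fair=2, poor=3
rob_nrse_scores = [1, 2, 3, 3, 1, 3, 2, 2, 3, 1]   # e.g., low=1, moderate=2, serious=3

tau, p_value = kendalltau(nos_scores, rob_nrse_scores)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```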


Subject(s)
Bias, Consensus, Reproducibility of Results, Research Design, Cross-Sectional Studies, Humans
5.
Syst Rev; 9(1): 12, 2020 01 13.
Article in English | MEDLINE | ID: mdl-31931871

ABSTRACT

BACKGROUND: The Cochrane Bias Methods Group recently developed the "Risk of Bias (ROB) in Non-randomized Studies of Interventions" (ROBINS-I) tool to assess ROB for non-randomized studies of interventions (NRSI). It is important to establish consistency in its application and interpretation across review teams. In addition, it is important to understand whether specialized training and guidance will improve the reliability of the results of the assessments. Therefore, the objective of this cross-sectional study is to establish the inter-rater reliability (IRR), inter-consensus reliability (ICR), and concurrent validity of ROBINS-I. Furthermore, as this is a relatively new tool, it is important to understand the barriers to using it (e.g., the time to conduct assessments and reach consensus, i.e., evaluator burden). METHODS: Reviewers from four participating centers will appraise the ROB of a sample of NRSI publications using the ROBINS-I tool in two stages. For IRR and ICR, two pairs of reviewers will assess the ROB for each NRSI publication. In the first stage, reviewers will assess the ROB without any formal guidance. In the second stage, reviewers will be provided customized training and guidance. At each stage, each pair of reviewers will resolve conflicts and arrive at a consensus. To calculate the IRR and ICR, we will use Gwet's AC1 statistic. For concurrent validity, reviewers will appraise a sample of NRSI publications using both the Newcastle-Ottawa Scale (NOS) and ROBINS-I. We will analyze the concordance between the two tools for similar domains and for the overall judgments using Kendall's tau coefficient. To measure the evaluator burden, we will assess the time taken to apply ROBINS-I (without and with guidance) and the NOS. To assess the impact of customized training and guidance on the evaluator burden, we will use generalized linear models. We will use Microsoft Excel and SAS 9.4 to manage and analyze study data, respectively. DISCUSSION: The quality of evidence from systematic reviews that include NRSI depends partly on the study-level ROB assessments. The findings of this study will contribute to an improved understanding of the ROBINS-I tool and how best to use it.
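The protocol plans to model evaluator burden with generalized linear models but does not state the exact specification. The sketch below therefore assumes one plausible choice, a Gamma GLM with a log link for assessment time as a function of guidance stage, fitted on invented data with statsmodels.

```python
# Sketch of a generalized linear model for evaluator burden: does customized
# guidance change the time taken to apply ROBINS-I? Assessment times are
# positive and often skewed, so a Gamma GLM with a log link is one reasonable
# choice; the data and model specification are illustrative assumptions only.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "minutes":  [52, 60, 48, 55, 58, 47, 40, 44, 38, 42, 45, 36],
    "guidance": [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],  # 0 = before, 1 = after
})

model = smf.glm("minutes ~ guidance", data=df,
                family=sm.families.Gamma(link=sm.families.links.Log()))
result = model.fit()
print(result.summary())
```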


Subject(s)
Bias, Reproducibility of Results, Research Design, Cross-Sectional Studies, Humans
6.
J Clin Epidemiol; 106: 121-135, 2019 02.
Article in English | MEDLINE | ID: mdl-30312656

ABSTRACT

OBJECTIVES: The aim of the article was to identify and summarize studies assessing methodologies for study selection, data abstraction, or quality appraisal in systematic reviews. STUDY DESIGN AND SETTING: A systematic review was conducted, searching MEDLINE, EMBASE, and the Cochrane Library from inception to September 1, 2016. Quality appraisal of included studies was undertaken using a modified Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool, and key results on accuracy, reliability, efficiency of a methodology, or impact on results and conclusions were extracted. RESULTS: After screening 5,600 titles and abstracts and 245 full-text articles, 37 studies were included. For screening, studies supported the involvement of two independent, experienced reviewers and the use of Google Translate when screening non-English articles. For data abstraction, studies supported the involvement of experienced reviewers (especially for continuous outcomes) and two independent reviewers, the use of dual monitors, graphical data extraction software, and contacting authors. For quality appraisal, studies supported intensive training, piloting quality assessment tools, providing decision rules for poorly reported studies, contacting authors, and using structured tools if different study designs are included. CONCLUSION: Few studies exist documenting common systematic review practices. Included studies support several systematic review practices. These results provide an updated evidence base for current knowledge synthesis guidelines and identify methods requiring further research.


Subject(s)
Abstracting and Indexing, Systematic Reviews as Topic, Humans, Abstracting and Indexing/standards, Cross-Sectional Studies, Randomized Controlled Trials as Topic
7.
J Clin Epidemiol; 103: 101-111, 2018 11.
Article in English | MEDLINE | ID: mdl-30297037

ABSTRACT

OBJECTIVES: To illustrate the use of process mining concepts, techniques, and tools to improve the systematic review process. STUDY DESIGN AND SETTING: We simulated review activities and step-specific methods in the process for systematic reviews conducted by one research team over 1 year to generate an event log of activities, with start/end dates, reviewer assignment by expertise, and person-hours worked. Process mining techniques were applied to the event log to "discover" process models, which allowed visual display, animation, or replay of the simulated review activities. Summary statistics were calculated for person-time and timelines. We also analyzed the social networks of team interactions. RESULTS: The 12 simulated reviews included an average of 3,831 titles/abstracts (range: 1,565-6,368) and 20 studies (range: 6-42). The average review completion time was 463 days (range: 289-629), or 881 person-hours (range: 243-1,752). The average person-hours per activity were study selection 26%, data collection 24%, report preparation 23%, and meta-analysis 17%. Social network analyses showed the organizational interaction of team members, including how they worked together to complete review tasks and to hand over tasks upon completion. CONCLUSION: Event logs and process mining can be valuable tools for research teams interested in improving and modernizing the systematic review process.
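The study's central objects are an event log of review activities and the process models and summary statistics mined from it. The sketch below builds a toy event log and derives two of the simplest process-mining-style summaries: the share of person-hours per activity and the directly-follows transitions between activities. All column names and numbers are invented, and the study itself used dedicated process mining software rather than this snippet.

```python
# Minimal sketch of a review-process event log and two summaries in the
# spirit of process mining: person-hours per activity and directly-follows
# transitions between activities. All values are invented for illustration.
import pandas as pd

events = pd.DataFrame([
    # review_id, activity,        reviewer, person_hours, start date
    ("R1", "screening",       "A", 120, "2017-01-10"),
    ("R1", "data collection", "B", 100, "2017-03-02"),
    ("R1", "meta-analysis",   "C",  70, "2017-05-15"),
    ("R1", "report writing",  "A",  90, "2017-06-20"),
    ("R2", "screening",       "B", 150, "2017-02-01"),
    ("R2", "data collection", "A", 110, "2017-04-12"),
    ("R2", "report writing",  "C",  95, "2017-07-03"),
], columns=["review_id", "activity", "reviewer", "person_hours", "start"])
events["start"] = pd.to_datetime(events["start"])

# Share of total person-hours by activity (cf. the percentages reported above).
share = events.groupby("activity")["person_hours"].sum()
print((share / share.sum() * 100).round(1).sort_values(ascending=False))

# Directly-follows relation: which activity follows which within each review.
ordered = events.sort_values(["review_id", "start"])
ordered["next_activity"] = ordered.groupby("review_id")["activity"].shift(-1)
print(ordered.dropna(subset=["next_activity"])
             .groupby(["activity", "next_activity"]).size())
```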


Subject(s)
Data Mining, Research Design/standards, Systematic Reviews as Topic, Humans, Interprofessional Relations, Models, Theoretical, Quality Improvement, Research Personnel, Social Networking, Time Factors