Results 1 - 6 of 6
1.
Otolaryngol Head Neck Surg ; 166(1): 13-22, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34000906

ABSTRACT

BACKGROUND AND SIGNIFICANCE: Quality measurement can drive improvement in clinical care and allow for easy reporting of quality care by clinicians, but creating quality measures is a time-consuming and costly process. ECRI (formerly Emergency Care Research Institute) has pioneered a process to support systematic translation of clinical practice guidelines into electronic quality measures using a transparent and reproducible pathway. This process could be used to augment or support the development of electronic quality measures of the American Academy of Otolaryngology-Head and Neck Surgery Foundation (AAO-HNSF) and others as the Centers for Medicare and Medicaid Services transitions from the Merit-Based Incentive Payment System (MIPS) to the MIPS Value Pathways for quality reporting.

METHODS: We used a transparent and reproducible process to create electronic quality measures based on recommendations from 2 AAO-HNSF clinical practice guidelines (cerumen impaction and allergic rhinitis). Steps of this process include source material review, electronic content extraction, logic development, implementation barrier analysis, content encoding and structuring, and measure formalization. Proposed measures then go through the standard publication process for AAO-HNSF measures.

RESULTS: The 2 guidelines contained 29 recommendation statements, of which 7 were translated into electronic quality measures and published. Intermediate products of the guideline conversion process facilitated development and were retained to support review, updating, and transparency. Of the 7 initially published quality measures, 6 were approved as 2018 MIPS measures, and 2 continued to demonstrate a gap in care after a year of data collection.

CONCLUSION: Developing high-quality, registry-enabled measures from guidelines via a rigorous reproducible process is feasible. The streamlined process was effective in producing quality measures for publication in a timely fashion. Efforts to better identify gaps in care and more quickly recognize recommendations that would not translate well into quality measures could further streamline this process.
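Electronic quality measures of the kind described are conventionally expressed as a denominator (eligible patients) and a numerator (those receiving guideline-concordant care). A minimal sketch of that structure, with hypothetical patient fields and an illustrative criterion not taken from the published AAO-HNSF measures:

```python
# Sketch of an electronic quality measure as numerator/denominator logic.
# Field names ("dx", "documented_treatment") and the cerumen-impaction
# criterion are illustrative only, not the published measure logic.

def measure_rate(patients):
    """Proportion of eligible patients who received guideline-concordant care."""
    denominator = [p for p in patients if p["dx"] == "cerumen_impaction"]
    numerator = [p for p in denominator if p["documented_treatment"]]
    if not denominator:
        return None  # measure is not reportable with no eligible patients
    return len(numerator) / len(denominator)

patients = [
    {"dx": "cerumen_impaction", "documented_treatment": True},
    {"dx": "cerumen_impaction", "documented_treatment": False},
    {"dx": "allergic_rhinitis", "documented_treatment": True},
]
print(measure_rate(patients))  # 0.5: 1 of 2 eligible patients treated
```

Registry-enabled reporting then amounts to evaluating such a rate per clinician over a reporting period; a low rate is the "gap in care" the abstract refers to.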


Subject(s)
Cerumen , Ear Diseases/therapy , Otolaryngology , Quality Improvement , Quality Indicators, Health Care , Rhinitis, Allergic/therapy , Humans , Practice Guidelines as Topic , Registries
2.
Int J Technol Assess Health Care ; 37: e13, 2020 Dec 15.
Article in English | MEDLINE | ID: mdl-33317651

ABSTRACT

OBJECTIVE: The Patient-Centered Outcomes Research Institute (PCORI) horizon scanning system is an early warning system for healthcare interventions in development that could disrupt standard care. We report preliminary findings from the patient engagement process.

METHODS: The system involves broadly scanning many resources to identify and monitor interventions up to 3 years before anticipated entry into U.S. health care. Topic profiles are written on included interventions with late-phase trial data and circulated with a structured review form for stakeholder comment to determine disruption potential. Stakeholders include patients and caregivers recruited from credible community sources. They view an orientation video, comment on topic profiles, and take a survey about their experience.

RESULTS: As of March 2020, 312 monitored topics (some of which were archived) were derived from 3,500 information leads; 121 met the criteria for topic profile development and stakeholder comment. We invited fifty-four patients and caregivers to participate; thirty-nine reviewed at least one report. Their perspectives informed analyst nominations for fourteen topics in two 2019 High Potential Disruption Reports. Thirty-four patient stakeholders completed the user-experience survey. Most agreed (68 percent) or somewhat agreed (26 percent) that they were confident they could provide useful comments. Ninety-four percent would recommend others to participate.

CONCLUSIONS: The system has successfully engaged patients and caregivers, who contributed unique and important perspectives that informed the selection of topics deemed to have high potential to disrupt clinical care. Most participants would recommend others to participate in this process. More research is needed to inform optimal patient and caregiver stakeholder recruitment and engagement methods and reduce barriers to participation.


Subject(s)
Caregivers , Patient Outcome Assessment , Patient Participation/methods , United States Agency for Healthcare Research and Quality/organization & administration , Community Participation/methods , Humans , Personnel Selection , Stakeholder Participation , United States
3.
Syst Rev ; 9(1): 73, 2020 Apr 2.
Article in English | MEDLINE | ID: mdl-32241297

ABSTRACT

BACKGROUND: Improving the speed of systematic review (SR) development is key to supporting evidence-based medicine. Machine learning tools that semi-automate citation screening might improve efficiency. Few studies have assessed use of screening prioritization functionality or compared two tools head to head. In this project, we compared performance of two machine-learning tools for potential use in citation screening.

METHODS: Using 9 evidence reports previously completed by the ECRI Institute Evidence-based Practice Center team, we compared performance of Abstrackr and EPPI-Reviewer, two off-the-shelf citation screening tools, for identifying relevant citations. Screening prioritization functionality was tested for 3 large reports and 6 small reports on a range of clinical topics. Large report topics were imaging for pancreatic cancer, indoor allergen reduction, and inguinal hernia repair. We trained Abstrackr and EPPI-Reviewer and screened all citations in 10% increments. In Task 1, we inputted whether an abstract was ordered for full-text screening; in Task 2, we inputted whether an abstract was included in the final report. For both tasks, screening continued until all studies ordered and included for the actual reports were identified. We assessed potential reductions in hypothetical screening burden (proportion of citations screened to identify all included studies) offered by each tool for all 9 reports.

RESULTS: For the 3 large reports, both EPPI-Reviewer and Abstrackr performed well with potential reductions in screening burden of 4 to 49% (Abstrackr) and 9 to 60% (EPPI-Reviewer). Both tools had markedly poorer performance for 1 large report (inguinal hernia), possibly due to its heterogeneous key questions. Based on McNemar's test for paired proportions in the 3 large reports, EPPI-Reviewer outperformed Abstrackr for identifying articles ordered for full-text review, but Abstrackr performed better in 2 of 3 reports for identifying articles included in the final report. For small reports, both tools provided benefits but EPPI-Reviewer generally outperformed Abstrackr in both tasks, although these results were often not statistically significant.

CONCLUSIONS: Abstrackr and EPPI-Reviewer performed well, but prioritization accuracy varied greatly across reports. Our work suggests screening prioritization functionality is a promising modality offering efficiency gains without giving up human involvement in the screening process.
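The screening-burden metric defined in the abstract (proportion of citations screened to identify all included studies) can be computed directly from a tool's prioritized ranking. A sketch under the assumption that the tool outputs citations in descending priority order:

```python
# Sketch: compute screening burden from a machine-ranked citation list.
# `ranked` holds citation IDs in the tool's priority order; `included`
# is the set of IDs ultimately included in the report. IDs are invented.

def screening_burden(ranked, included):
    """Proportion of ranked citations screened before all includes are found."""
    remaining = set(included)
    for i, cid in enumerate(ranked, start=1):
        remaining.discard(cid)
        if not remaining:
            return i / len(ranked)
    raise ValueError("some included studies never appear in the ranking")

ranked = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10"]
print(screening_burden(ranked, {"c1", "c4"}))  # 0.4: stop after 4 of 10
```

A burden of 0.4 corresponds to the abstract's "potential reduction in screening burden" of 60%, since the remaining 6 of 10 citations would never need manual review.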


Subject(s)
Machine Learning , Mass Screening , Evidence-Based Medicine , Humans , Research , Systematic Reviews as Topic
4.
BMC Public Health ; 20(1): 127, 2020 Jan 29.
Article in English | MEDLINE | ID: mdl-31996264

ABSTRACT

BACKGROUND: Pediatric lead exposure in the United States (U.S.) remains a preventable public health crisis. Shareable electronic clinical decision support (CDS) could improve lead screening and management. However, discrepancies between federal, state, and local recommendations could present significant challenges for implementation.

METHODS: We identified publicly available guidance on lead screening and management. We extracted definitions for elevated lead and recommendations for screening, follow-up, reporting, and management. We compared thresholds and level of obligation for management actions. Finally, we assessed the feasibility of development of shareable CDS.

RESULTS: We identified 54 guidance sources. States offered different definitions of elevated lead, and recommendations for screening, reporting, follow-up, and management. Only 37 of 48 states providing guidance used the Centers for Disease Control and Prevention (CDC) definition for elevated lead. There were 17 distinct management actions. Guidance sources indicated an average of 5.5 management actions, but offered different criteria and levels of obligation for these actions. Despite differences, the recommendations were well-structured, actionable, and encodable, indicating shareable CDS is feasible.

CONCLUSION: Current variability across guidance poses challenges for clinicians. Developing shareable CDS is feasible and could improve pediatric lead screening and management. Shareable CDS would need to account for local variability in guidance.
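Shareable CDS of the kind the abstract describes can accommodate local variability by treating each jurisdiction's elevated-lead threshold as data rather than hard-coding one rule. A minimal sketch; the CDC blood lead reference value in effect around 2020 was 5 µg/dL, and the state values below are hypothetical:

```python
# Sketch of shareable CDS logic for pediatric lead screening: the
# "elevated lead" threshold varies by jurisdiction, so it is looked up
# rather than hard-coded. State entries are illustrative, not real guidance.

THRESHOLDS_UG_DL = {
    "CDC": 5.0,      # CDC blood lead reference value in effect circa 2020
    "STATE_A": 5.0,  # hypothetical state adopting the CDC definition
    "STATE_B": 10.0, # hypothetical state retaining an older threshold
}

def is_elevated(blood_lead_ug_dl, jurisdiction="CDC"):
    """Return True if the result meets the local elevated-lead threshold."""
    return blood_lead_ug_dl >= THRESHOLDS_UG_DL[jurisdiction]

print(is_elevated(7.2, "STATE_B"))  # False under the 10 ug/dL rule
print(is_elevated(7.2, "STATE_A"))  # True under the 5 ug/dL rule
```

The same lookup pattern extends to the 17 management actions the study catalogued: each action becomes a table row keyed by jurisdiction, threshold, and level of obligation, which is what makes the guidance "encodable" in the authors' terms.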


Subject(s)
Decision Support Systems, Clinical/organization & administration , Lead Poisoning/diagnosis , Lead Poisoning/therapy , Mass Screening/standards , Practice Guidelines as Topic/standards , Centers for Disease Control and Prevention, U.S. , Child, Preschool , Feasibility Studies , Healthcare Disparities , Humans , Infant , United States
6.
EGEMS (Wash DC) ; 1(2): 1028, 2013.
Article in English | MEDLINE | ID: mdl-25848573

ABSTRACT

Health technology assessments represent comprehensive summaries of available evidence and information on a technology. They are used by medical decision makers in a variety of ways, including diagnostic testing, treatment selection, care management, patient perspectives, patient safety, insurance coverage, pharmaceutical innovation, equipment planning, device purchasing, and total cost of care. Electronic clinical data, which are captured routinely by clinicians and hospitals, are only rarely incorporated into formal health technology assessments. This disconnect reveals a key opportunity. In this paper, we discuss current uses of electronic clinical data, several benefits of including such data in health technology assessments, potential pitfalls of that inclusion, and the implications for better medical decisions.
