Results 1 - 7 of 7
1.
JAMIA Open ; 7(2): ooae032, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38660616

ABSTRACT

Objective: To identify information loss that could affect clinical care when laboratory data are transmitted between 2 health care institutions via a Health Information Exchange (HIE) platform. Materials and Methods: Transmitted results for 9 laboratory tests, including their LOINC codes, were compared between the sending and receiving electronic health record (EHR) systems, and across the individual Health Level Seven International (HL7) Version 2 messages produced by the instrument, the laboratory information system, and the sending EHR. Results: Loss of information for similar tests indicated the following potential patient safety issues: (1) consistently missing specimen source; (2) lack of reporting of analytical technique or instrument platform; (3) inconsistent units and reference ranges; (4) discordant LOINC code use; and (5) increased complexity from multiple HL7 versions. Discussion and Conclusions: Using an HIE with standard messaging, the SHIELD (Systemic Harmonization and Interoperability Enhancement for Laboratory Data) recommendations, and enhanced EHR functionality to support the necessary data elements would yield consistent test identification and result value transmission.
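
As a rough illustration of the field-level comparison described here (a sketch, not the study's actual pipeline), the following Python example parses hypothetical HL7 v2 OBX segments as seen by the sending and receiving systems and flags the loss patterns listed above. The segments, codes, and the SPM-presence flag are invented for the example.

```python
# Minimal sketch: compare the same result as it appears in two HL7 v2
# messages and flag the information-loss issues the study describes.
# Field positions follow the standard pipe-delimited OBX layout.

def parse_obx(segment: str) -> dict:
    """Extract LOINC code, value, units, and reference range from an OBX segment."""
    f = segment.split("|")
    return {
        "loinc": f[3].split("^")[0] if len(f) > 3 else "",  # OBX-3.1: observation id
        "value": f[5] if len(f) > 5 else "",                # OBX-5: observation value
        "units": f[6] if len(f) > 6 else "",                # OBX-6: units
        "ref_range": f[7] if len(f) > 7 else "",            # OBX-7: reference range
    }

def compare(sender_obx: str, receiver_obx: str, receiver_has_spm: bool) -> list:
    s, r = parse_obx(sender_obx), parse_obx(receiver_obx)
    issues = []
    if s["loinc"] != r["loinc"]:
        issues.append(f"discordant LOINC: {s['loinc']} -> {r['loinc']}")
    if s["units"] != r["units"]:
        issues.append(f"inconsistent units: {s['units']} -> {r['units']}")
    if s["ref_range"] != r["ref_range"]:
        issues.append(f"inconsistent reference range: {s['ref_range']} -> {r['ref_range']}")
    if not receiver_has_spm:
        issues.append("missing specimen source (no SPM segment received)")
    return issues

# Hypothetical glucose result before and after HIE transmission.
sent = "OBX|1|NM|2345-7^Glucose^LN||100|mg/dL|70-110|N|||F"
recv = "OBX|1|NM|2339-0^Glucose^LN||100|mg/dL|74-106|N|||F"
for issue in compare(sent, recv, receiver_has_spm=False):
    print(issue)
```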

2.
J Am Med Inform Assoc ; 29(8): 1372-1380, 2022 Jul 12.
Article in English | MEDLINE | ID: mdl-35639494

ABSTRACT

OBJECTIVE: To assess the effectiveness of providing the Logical Observation Identifiers Names and Codes (LOINC®)-to-In Vitro Diagnostic (LIVD) coding specification, required by the United States Department of Health and Human Services for SARS-CoV-2 reporting, in medical center laboratories, and to use the findings to inform future United States Food and Drug Administration policy on the use of real-world evidence in regulatory decisions. MATERIALS AND METHODS: We compared gaps and similarities between diagnostic test manufacturers' recommended LOINC® codes and the LOINC® codes used in medical center laboratories for the same tests. RESULTS: Five medical centers and three test manufacturers extracted data from laboratory information systems (LIS) for prioritized tests of interest. Data submissions ranged from 74 to 532 LOINC® codes per site. The three test manufacturers submitted 15 LIVD catalogs representing 26 distinct devices, 6956 tests, and 686 LOINC® codes. We identified mismatches between how medical centers use LOINC® to encode laboratory tests and how test manufacturers encode the same tests. Of 331 tests available in the LIVD files, 136 (41%) were represented by a mismatched LOINC® code at the medical centers (chi-square 45.0, 4 df, P < .0001). DISCUSSION: The five medical centers and three test manufacturers vary in how they organize, categorize, and store LIS catalog information, and this variation impacts data quality and interoperability. CONCLUSION: The results indicate that providing the LIVD mappings alone was not sufficient to support laboratory data interoperability. National implementation of LIVD and further efforts to promote laboratory interoperability will require a more comprehensive effort and continuing evaluation and quality control.
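
The core comparison step can be pictured with a short Python sketch; the device name, test names, and LOINC codes below are hypothetical, not taken from the submitted LIVD catalogs.

```python
# Minimal sketch: line up each LIS test's LOINC code against the
# manufacturer's LIVD-recommended code and count mismatches at one site.

# Hypothetical LIVD mapping: (device, test) -> manufacturer-recommended LOINC.
livd = {("AnalyzerX", "SARS-CoV-2 RNA"): "94500-6",
        ("AnalyzerX", "SARS-CoV-2 Ag"): "94558-4"}

# Hypothetical LIS extract: (device, test) -> LOINC code actually used.
lis = {("AnalyzerX", "SARS-CoV-2 RNA"): "94500-6",
       ("AnalyzerX", "SARS-CoV-2 Ag"): "94745-7"}

matched = mismatched = 0
for key, recommended in livd.items():
    used = lis.get(key)
    if used is None:
        continue                      # test not performed at this site
    if used == recommended:
        matched += 1
    else:
        mismatched += 1
        print(f"mismatch for {key}: LIS uses {used}, LIVD recommends {recommended}")

print(f"{mismatched} of {matched + mismatched} shared tests mismatched")
```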


Subjects
COVID-19, Clinical Laboratory Information Systems, Humans, Laboratories, Logical Observation Identifiers Names and Codes, SARS-CoV-2, United States
3.
AMIA Annu Symp Proc ; 2022: 329-338, 2022.
Article in English | MEDLINE | ID: mdl-37128382

ABSTRACT

Our aim is to demonstrate a general-purpose data and knowledge validation approach that enables reproducible metrics for data and knowledge quality and safety. We researched widely accepted statistical process control methods from high-quality, high-safety industries and applied them to pharmacy prescription data being migrated between EHRs. Natural language medication instructions from prescriptions were independently categorized by two terminologists as a first step toward encoding those instructions in standardized terminology. Overall, a weighted average of 43% of medication instructions were matched by the two reviewers, with strong agreement for short instructions (K=0.82) and long instructions (K=0.85), and moderate agreement for medium-length instructions (K=0.61). Category definitions will be refined in future work to mitigate these discrepancies. We recommend incorporating appropriate statistical tests, such as evaluating inter-rater and intra-rater reliability and bivariate comparison of reviewer agreement over an adequate statistical sample, when developing benchmarks for health data and knowledge quality and safety.
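
The agreement statistic reported above (K) is Cohen's kappa. Here is a minimal Python sketch of its calculation, using invented category labels rather than the study's instruction data.

```python
# Minimal sketch of Cohen's kappa for two raters over the same items.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # p_o
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal label frequencies.
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))  # p_e
    return (observed - expected) / (1 - expected)

# Hypothetical categorizations of six medication instructions by two reviewers.
a = ["dose", "dose", "route", "freq", "dose", "route"]
b = ["dose", "freq", "route", "freq", "dose", "route"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.75 for this toy sample
```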


Subjects
Pharmacy, Trust, Humans, Reproducibility of Results, Benchmarking, Pharmaceutical Preparations
4.
EGEMS (Wash DC) ; 6(1): 17, 2018 Jul 19.
Article in English | MEDLINE | ID: mdl-30094289

ABSTRACT

OBJECTIVE: To understand the impact of varying the measurement period on the calculation of electronic Clinical Quality Measures (eCQMs). BACKGROUND: eCQMs have grown in importance in value-based programs, but accurate and timely measurement has lagged. This has required flexibility in key measure characteristics, including the measurement period, the timeframe the measurement covers. The effects of variable measurement periods on accuracy and variability are not clear. METHODS: 209 practices were asked to extract four eCQMs from their electronic health records and submit them quarterly using a 12-month measurement period. Quarterly submissions were collected via REDCap. The measurement periods of the survey data were categorized into non-standard (3, 6, 9 months, and other) and standard (12 months) periods. For comparison, patient-level data from three clinics were collected and calculated in an eCQM registry to measure the impact of varying measurement periods. We assessed the central tendency, the shape of the distributions, and the variability across the four measures. Analysis of variance (ANOVA) was conducted to test the differences between standard and non-standard measurement-period means and the variation among these groups. RESULTS: Of 209 practices, 191 (91 percent) submitted data over three quarters. Of the 546 total submissions, 173 had non-standard measurement periods. Differences between measures with standard versus non-standard periods ranged from -3.3 percent to 14.2 percent between clinics (p < .05 for 3 of 4); using the patient-level data yielded deltas of only -1.6 percent to 0.6 percent when comparing non-standard and standard periods within clinics. CONCLUSION: Variations in measurement periods were associated with variation in performance between clinics for 3 of the 4 eCQMs but did not produce significant differences when calculated within clinics. Deviations from standard measurement periods may reflect poor data quality and accuracy.
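
A minimal Python sketch of the ANOVA step, using scipy and invented performance rates in place of the practices' actual submissions:

```python
# Minimal sketch: one-way ANOVA on eCQM performance rates grouped by
# measurement period. The rates below are hypothetical.
from scipy.stats import f_oneway

# Performance rates (%) for one eCQM, grouped by measurement period.
rates_12mo = [62.1, 58.4, 64.0, 61.7, 59.9]   # standard 12-month period
rates_6mo = [55.2, 70.3, 48.9, 66.1]          # non-standard 6-month period
rates_3mo = [72.5, 44.0, 68.8]                # non-standard 3-month period

f_stat, p_value = f_oneway(rates_12mo, rates_6mo, rates_3mo)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```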

5.
J Am Board Fam Med ; 31(3): 398-409, 2018.
Article in English | MEDLINE | ID: mdl-29743223

ABSTRACT

PURPOSE: Practice facilitators ("facilitators") can play an important role in supporting primary care practices in performing quality improvement (QI), but they need complete and accurate clinical performance data from practices' electronic health records (EHRs) to help them set improvement priorities, guide clinical change, and monitor progress. Here, we describe the strategies facilitators use to help practices perform QI when complete or accurate performance data are not available. METHODS: Seven regional cooperatives enrolled approximately 1500 small-to-medium-sized primary care practices and 136 facilitators in EvidenceNOW, the Agency for Healthcare Research and Quality's initiative to improve cardiovascular preventive services. The national evaluation team analyzed qualitative data from online diaries, site visit field notes, and interviews to discover how facilitators worked with practices on EHR data challenges to obtain and use data for QI. RESULTS: Facilitators faced practice-level EHR data challenges, including missing, partial or incomplete, and inaccurate clinical performance data. They responded to these challenges, respectively, by using other data sources or tools to fill in for missing data, by approximating performance reports and generating patient lists, and by teaching practices how to document care and confirm performance measures. In addition, facilitators helped practices communicate with EHR vendors or health systems when requesting the data they needed. Overall, facilitators tailored strategies to fit the individual practice and helped build data skills and trust. CONCLUSION: Facilitators can use a range of strategies to help practices perform data-driven QI when performance data are inaccurate, incomplete, or missing. Support is necessary to help practices, particularly those with EHR data challenges, build their capacity for the data-driven QI required for participation in practice transformation and performance-based payment programs. It is questionable how practices with data challenges will perform in such programs without this kind of support.


Subjects
Electronic Health Records/organization & administration, Primary Health Care/organization & administration, Quality Improvement, Qualitative Research, United States
6.
AMIA Annu Symp Proc ; 2017: 575-584, 2017.
Article in English | MEDLINE | ID: mdl-29854122

ABSTRACT

Clinical quality measures (CQMs) aim to identify gaps in care and to promote evidence-based guidelines. Official CQM definitions consist of a measure's logic and grouped, standardized codes that define its key concepts. In this study, we used the official CQM update process to understand how CQMs' meanings change over time. First, we identified differences between the narrative description, logic, and vocabulary specifications of four standardized CQMs' definitions in subsequent versions (2015, 2016, and 2017). Next, we implemented the various versions in a quality measure calculation registry to understand how the differences affected the calculated prevalence of risk and measure performance. Global performance rates changed by up to 5.32%, and an increase of up to 28% in new patients was observed for key conditions between versions. Updates that change a measure's logic, and choices to include or exclude codes in value set vocabularies, change the measurement of quality and likely introduce variation across implementations.
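
A Python sketch of the version-diff step, using a hypothetical value set rather than the value sets of the CQMs studied:

```python
# Minimal sketch: diff the code group of one CQM concept across annual
# releases and report added/removed codes. Codes below are hypothetical.
vs_2015 = {"E11.9", "E11.65", "E10.9"}           # 2015 code group
vs_2016 = {"E11.9", "E11.65", "E10.9", "E13.9"}  # 2016 adds a code
vs_2017 = {"E11.9", "E13.9"}                     # 2017 drops two codes

versions = {"2015": vs_2015, "2016": vs_2016, "2017": vs_2017}
years = sorted(versions)
for prev, curr in zip(years, years[1:]):
    added = sorted(versions[curr] - versions[prev])
    removed = sorted(versions[prev] - versions[curr])
    print(f"{prev} -> {curr}: added {added}, removed {removed}")
```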


Subjects
Quality Control, Quality Indicators, Health Care, Controlled Vocabulary, Adolescent, Adult, Centers for Medicare and Medicaid Services, U.S., Data Accuracy, Humans, Narration, United States
7.
EGEMS (Wash DC) ; 5(1): 19, 2017 Sep 04.
Article in English | MEDLINE | ID: mdl-29881739

ABSTRACT

OBJECTIVE: To understand the impact of distinct concept-to-value set mapping on the measurement of quality of care. BACKGROUND: Clinical quality measures (CQMs) are intended to measure the quality of healthcare services provided and to help promote evidence-based therapies. Most CQMs consist of grouped codes from vocabularies, or 'value sets', that comprise the unique identifiers (i.e., object identifiers), concepts (i.e., value set names), and concept definitions (i.e., code groups) that define a measure's specifications. In the development of a statin therapy CQM, two unique value sets were created by independent measure developers for the same global concepts. METHODS: We first identified differences between the two value set specifications of the same CQM. We then implemented both versions in a quality measure calculation registry to understand how the differences affected the calculated prevalence of risk and measure performance. RESULTS: Global performance rates differed by only 0.8%, but the two value sets captured up to 2.3 times as many patients with key conditions, and performance rates differed by 7.5% for patients with 'myocardial infarction' and by 3.5% for those with 'ischemic vascular disease'. CONCLUSION: The decisions CQM developers make about which concepts and code groups to include or exclude in value set vocabularies can lead to inaccuracies in the measurement of quality of care. One solution is for developers to provide a rationale for these decisions. Endorsements are needed to encourage system vendors, payers, informaticians, and clinicians to collaborate in creating more integrated terminology sets.
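
A Python sketch of the registry calculation, showing how two independently developed value sets for the same concept can yield different denominators and performance rates; the diagnosis codes and the patient registry below are invented for illustration.

```python
# Minimal sketch: compute a statin-therapy measure's denominator and
# performance rate under two value sets for 'myocardial infarction'.
value_set_a = {"I21.9", "I21.4", "I22.0"}  # hypothetical developer A codes
value_set_b = {"I21.9", "I25.2"}           # hypothetical developer B codes

# Hypothetical registry: patient id -> (diagnosis codes, on statin therapy?)
patients = {
    1: ({"I21.9"}, True),
    2: ({"I25.2"}, False),
    3: ({"I21.4"}, True),
    4: ({"E78.5"}, False),
}

def performance(value_set):
    # Denominator: patients with any code in the value set; numerator: those on statins.
    denom = [pid for pid, (codes, _) in patients.items() if codes & value_set]
    numer = [pid for pid in denom if patients[pid][1]]
    return len(denom), len(numer) / len(denom)

for name, vs in (("A", value_set_a), ("B", value_set_b)):
    d, rate = performance(vs)
    print(f"value set {name}: denominator={d}, performance={rate:.0%}")
```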
