Results 1 - 8 of 8
1.
Circulation; 138(22): 2456-2468, 2018 Nov 27.
Article in English | MEDLINE | ID: mdl-30571347

ABSTRACT

BACKGROUND: The HEART Pathway (history, ECG, age, risk factors, and initial troponin) is an accelerated diagnostic protocol designed to identify low-risk emergency department patients with chest pain for early discharge without stress testing or angiography. The objective of this study was to determine whether implementation of the HEART Pathway is safe (30-day death and myocardial infarction rate <1% in low-risk patients) and effective (reduces 30-day hospitalizations) in emergency department patients with possible acute coronary syndrome. METHODS: A prospective pre-post study was conducted at 3 US sites among 8474 adult emergency department patients with possible acute coronary syndrome. Patients included were ≥21 years old, investigated for possible acute coronary syndrome, and had no evidence of ST-segment-elevation myocardial infarction on ECG. Accrual occurred for 12 months before and after HEART Pathway implementation from November 2013 to January 2016. The HEART Pathway accelerated diagnostic protocol was integrated into the electronic health record at each site as an interactive clinical decision support tool. After accelerated diagnostic protocol integration, emergency department providers prospectively used the HEART Pathway to identify patients with possible acute coronary syndrome as low risk (appropriate for early discharge without stress testing or angiography) or non-low risk (appropriate for further in-hospital evaluation). The primary safety outcome, death or myocardial infarction (MI), and the primary effectiveness outcome, hospitalization, were determined at 30 days from health records, insurance claims, and death index data. RESULTS: Preimplementation and postimplementation cohorts included 3713 and 4761 patients, respectively. The HEART Pathway identified 30.7% as low risk; 0.4% of these patients experienced death or MI within 30 days.
Hospitalization at 30 days was reduced by 6 percentage points in the postimplementation versus preimplementation cohort (55.6% versus 61.6%; adjusted odds ratio, 0.79; 95% CI, 0.71-0.87). During the index visit, more MIs were detected in the postimplementation cohort (6.6% versus 5.7%; adjusted odds ratio, 1.36; 95% CI, 1.12-1.65). Rates of death or MI during follow-up were similar (1.1% versus 1.3%; adjusted odds ratio, 0.88; 95% CI, 0.58-1.33). CONCLUSIONS: HEART Pathway implementation was associated with decreased hospitalizations, increased identification of index-visit MIs, and a very low death and MI rate among low-risk patients. These findings support use of the HEART Pathway to identify low-risk patients who can be safely discharged without stress testing or angiography. CLINICAL TRIAL REGISTRATION: URL: http://www.clinicaltrials.gov. Unique identifier: NCT02056964.
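For context, the risk stratification the HEART Pathway performs can be sketched in code. This is a minimal illustration under our own assumptions, not the study's decision support tool: the cut points follow the commonly published HEART score (each of five components scored 0-2, low risk defined as a total of 0-3 with negative serial troponins), and all function and parameter names here are ours.

```python
def heart_score(history, ecg, age_years, risk_factor_count, troponin_ratio):
    """HEART score sketch: five components, each scored 0-2 (total 0-10).

    history: clinician judgment, 0 (slightly), 1 (moderately), 2 (highly suspicious)
    ecg: 0 (normal), 1 (nonspecific repolarization change), 2 (significant ST deviation)
    troponin_ratio: initial troponin divided by the assay's upper reference limit
    """
    age = 0 if age_years < 45 else (1 if age_years < 65 else 2)
    risk = 0 if risk_factor_count == 0 else (1 if risk_factor_count <= 2 else 2)
    trop = 0 if troponin_ratio <= 1 else (1 if troponin_ratio <= 3 else 2)
    return history + ecg + age + risk + trop

def is_low_risk(score, serial_troponins_negative):
    # HEART Pathway disposition sketch: score 0-3 AND negative serial
    # troponins -> candidate for early discharge without stress testing
    return score <= 3 and serial_troponins_negative
```

For example, `heart_score(1, 0, 52, 1, 0.5)` yields 3, so the patient is classed low risk only if serial troponins are also negative.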


Subjects
Acute Coronary Syndrome/diagnosis, Chest Pain/etiology, Acute Coronary Syndrome/complications, Acute Coronary Syndrome/pathology, Age Factors, Aged, Algorithms, Electrocardiography, Emergency Service, Hospital, Female, Hospitalization, Humans, Male, Middle Aged, Myocardial Infarction/complications, Myocardial Infarction/diagnosis, Myocardial Infarction/mortality, Myocardial Infarction/pathology, Odds Ratio, Patient Discharge, Prospective Studies, Risk Factors, Troponin/analysis
3.
EGEMS (Wash DC); 7(1): 32, 2019 Jul 25.
Article in English | MEDLINE | ID: mdl-31367649

ABSTRACT

The well-known hazards of repurposing data make Data Quality (DQ) assessment a vital step towards ensuring valid results regardless of analytical methods. However, there is no systematic process to implement DQ assessments for secondary uses of clinical data. This paper presents DataGauge, a systematic process for designing and implementing DQ assessments to evaluate repurposed data for a specific secondary use. DataGauge is composed of five steps: (1) Define information needs, (2) Develop a formal Data Needs Model (DNM), (3) Use the DNM and DQ theory to develop goal-specific DQ assessment requirements, (4) Extract DNM-specified data, and (5) Evaluate according to DQ requirements. DataGauge's main contribution is integrating general DQ theory and DQ assessment methods into a systematic process. This process supports the integration and practical implementation of existing Electronic Health Record-specific DQ assessment guidelines. DataGauge also provides an initial theory-based guidance framework that ties the DNM to DQ testing methods for each DQ dimension to aid the design of DQ assessments. This framework can be augmented with existing DQ guidelines to enable systematic assessment. DataGauge sets the stage for future systematic DQ assessment research by defining an assessment process capable of adapting to a broad range of clinical datasets and secondary uses. Defining DataGauge also opens new research directions such as DQ theory integration, DQ requirements portability research, DQ assessment tool development, and DQ assessment tool usability.
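The last two DataGauge steps (extract DNM-specified data, then evaluate against the DQ requirements) could look roughly like the sketch below. This is our own illustration, not the paper's tooling; the field names, plausibility ranges, and DQ dimensions (completeness, plausibility) are hypothetical stand-ins for what a real DNM would specify.

```python
# Hypothetical DNM-derived requirements: which fields a secondary use needs,
# plus goal-specific plausibility rules. All names and ranges are illustrative.
DNM = {
    "patient_id": {},
    "hba1c": {"min": 3.0, "max": 20.0},  # plausible range, percent
    "draw_date": {},
}

def assess_dq(records):
    """Evaluate extracted records against DNM-derived DQ requirements:
    per-field completeness, and plausibility where a range is specified."""
    n = len(records)
    report = {}
    for field, rules in DNM.items():
        values = [r.get(field) for r in records]
        present = [v for v in values if v is not None]
        entry = {"completeness": len(present) / n}
        if "min" in rules:
            ok = [v for v in present if rules["min"] <= v <= rules["max"]]
            entry["plausibility"] = len(ok) / len(present) if present else 0.0
        report[field] = entry
    return report
```

A report like this makes the fitness-for-use decision explicit: a field can be complete yet implausible, and either failure mode can disqualify the dataset for a given secondary use.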

4.
JAMIA Open; 2(3): 369-377, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31984369

ABSTRACT

BACKGROUND: Structured diagnoses (DX) are crucial for secondary use of electronic health record (EHR) data. However, they are often suboptimally recorded. Our previous work showed initial evidence of variable DX recording patterns in oncology charts even after biopsy (BX) records are available. OBJECTIVE: We verified this finding's internal and external validity. We hypothesized that this recording pattern would be preserved in a larger cohort of patients with the same disease. We also hypothesized that this effect would vary across subspecialties. METHODS: We extracted DX data from EHRs of patients treated for brain, lung, and pancreatic neoplasms, identified through clinician-led chart reviews. We used statistical methods (i.e., binomial and mixed model regressions) to test our hypotheses. RESULTS: We found variable recording patterns in brain neoplasm DX (i.e., a larger number of distinct DX, OR = 2.2, P < 0.0001; higher descriptive specificity scores, OR = 1.4, P < 0.0001; and much higher entropy after the BX, OR = 3.8, P = 0.004 and OR = 8.0, P < 0.0001), confirming our initial findings. We also found strikingly different patterns for lung and pancreas DX: both showed much lower DX sequence entropy after the BX (OR = 0.198, P = 0.015 and OR = 0.099, P = 0.015, respectively, compared with OR = 3.8, P = 0.004). We also found statistically significant differences between the brain dataset and both the lung (P < 0.0001) and pancreas (P = 0.009) datasets.

5.
AMIA Jt Summits Transl Sci Proc; 2019: 325-334, 2019.
Article in English | MEDLINE | ID: mdl-31258985

ABSTRACT

The use of diagnosis (DX) data is crucial to secondary use of electronic health record (EHR) data, yet accessible structured DX data often lack accuracy. DX descriptions associated with structured DX codes vary even after biopsy results are recorded; this may indicate poor data quality. We hypothesized that biopsy reports in cancer care charts do not improve intrinsic DX data quality. We analyzed DX data for a manually well-annotated cohort of patients with brain neoplasms. We built statistical models to predict the number of fully-accurate (i.e., correct neoplasm type and anatomical location) and inaccurate DX (i.e., type or location contradicts cohort data) descriptions. We found some evidence of statistically larger numbers of fully-accurate (RR = 3.07, p = 0.030) but stronger evidence of much larger numbers of inaccurate DX (RR = 12.3, p = 0.001 and RR = 19.6, p < 0.0001) after biopsy result recording. Still, 65.9% of all DX records were neither fully accurate nor fully inaccurate. These results suggest EHRs must be modified to support more reliable DX data recording and secondary use of EHR data.

6.
Article in English | MEDLINE | ID: mdl-29888044

ABSTRACT

Diagnostic codes are crucial for analyses of electronic health record (EHR) data but their accuracy and precision are often lacking. Although providers enter precise diagnoses into progress notes, billing standards may limit the particularity of a diagnostic code. Variability also arises from the creation of multiple descriptions for a particular diagnostic code. We hypothesized that the variability of diagnostic codes would be greater before surgical pathology results were recorded in the medical record. A well-annotated cohort of patients with brain neoplasms was studied. After diagnostic pathology reporting, the odds of more distinct diagnostic descriptions were 2.30 times higher (p=0.00358), entropy in diagnostic sequences was 2.26 times higher (p=0.0259), and entropy in diagnostic precision scores was 15.5 times higher (p=0.0324). Although diagnostic codes became more distinct on average after diagnostic pathology reporting, there was a paradoxical increase in the variability of the codes selected. Researchers must be aware of the inconsistencies and variability in particularity in structured diagnostic coding despite the presence of a definitive diagnosis.
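The entropy measure in this abstract quantifies how unpredictable the recorded codes are. A plain Shannon entropy over observed code frequencies illustrates the idea; this is a generic sketch, not necessarily the paper's exact estimator, and the example ICD-style code strings are made up.

```python
import math
from collections import Counter

def shannon_entropy(codes):
    """Shannon entropy (bits) of a sequence of diagnostic codes. Higher
    values mean more variability in which codes were recorded."""
    counts = Counter(codes)
    n = len(codes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# One code recorded consistently has zero entropy; a mix of four distinct
# codes used equally often has entropy log2(4) = 2 bits.
consistent = ["191.9", "191.9", "191.9", "191.9"]
variable = ["191.1", "C71.1", "191.9", "C71.9"]
```

Under this measure, the paradox the authors describe is that even after a definitive pathology result, the sequence of codes clinicians select can become more entropic, not less.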

7.
JMIR Med Inform; 6(4): e10780, 2018 Oct 22.
Article in English | MEDLINE | ID: mdl-30348631

ABSTRACT

BACKGROUND: Electronic, personalized clinical decision support tools to optimize glycated hemoglobin (HbA1c) screening are lacking. Current screening guidelines are based on simple, categorical rules developed for populations of patients. Although personalized diabetes risk calculators have been created, none are designed to predict current glycemic status using structured data commonly available in electronic health records (EHRs). OBJECTIVE: The goal of this project was to create a mathematical equation for predicting the probability of current elevations in HbA1c (≥5.7%) among patients with no history of hyperglycemia using readily available variables that will allow integration with EHR systems. METHODS: The reduced model was compared head-to-head with calculators created by Baan and Griffin. Ten-fold cross-validation was used to calculate the bias-adjusted prediction accuracy of the new model. Statistical analyses were performed in R version 3.2.5 (The R Foundation for Statistical Computing) using the rms (Regression Modeling Strategies) package. RESULTS: The final model to predict an elevated HbA1c based on 22,635 patient records contained the following variables in order from most to least important according to their impact on the discriminating accuracy of the model: age, body mass index, random glucose, race, serum non-high-density lipoprotein, serum total cholesterol, estimated glomerular filtration rate, and smoking status. The new model achieved a concordance statistic of 0.77, which was statistically significantly better than prior models. The model appeared to be well calibrated according to a plot of the predicted probabilities versus the prevalence of the outcome at different probabilities. CONCLUSIONS: The calculator created for predicting the probability of having an elevated HbA1c significantly outperformed the existing calculators.
The personalized prediction model presented in this paper could improve the efficiency of HbA1c screening initiatives.
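A model of the shape described (a logistic regression over the eight listed predictors) can be sketched as follows. The abstract does not report the fitted coefficients, so the values below are placeholders for illustration only, and the indicator codings for race and smoking are our assumptions.

```python
import math

# PLACEHOLDER coefficients for a logistic model over the eight predictors
# named in the abstract; these are NOT the published fitted values.
COEFS = {
    "age": 0.04,             # years
    "bmi": 0.06,             # kg/m^2
    "random_glucose": 0.02,  # mg/dL
    "nonwhite_race": 0.50,   # 0/1 indicator (coding assumed)
    "non_hdl": 0.005,        # mg/dL
    "total_chol": 0.002,     # mg/dL
    "egfr": -0.005,          # mL/min/1.73 m^2
    "current_smoker": 0.30,  # 0/1 indicator (coding assumed)
}
INTERCEPT = -9.0  # placeholder

def predict_elevated_hba1c(patient):
    """Probability that current HbA1c >= 5.7%, via the logistic function."""
    z = INTERCEPT + sum(COEFS[k] * v for k, v in patient.items())
    return 1.0 / (1.0 + math.exp(-z))
```

Because the model outputs a probability from routinely captured structured fields, it can run inside an EHR to flag patients whose screening is most likely to return an elevated result.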

8.
Article in English | MEDLINE | ID: mdl-26306235

ABSTRACT

Large clinical datasets can be used to discover and monitor drug side effects. Many previous studies analyzed symptom data as discrete events. However, some drug side effects are inferred from continuous variables such as weight or blood pressure. These require additional assumptions for analysis. For example, we can define positive/negative thresholds and time windows within which we expect to see the side effect. In this paper, we discuss the impact of such assumptions on the ability to detect known continuous drug side effects using statistical and visualization techniques. Taking the case of prednisone exposure and weight gain reflected in real EHR data, we found that temporal windowing greatly affected the ability to detect the expected effect. Categorization of the exposure variable improved side effect detection but negatively impacted model fit. To avoid false positive and false negative conclusions from clinical data reuse, studies reusing clinical data should determine the sensitivity of their findings to alternative analytic assumptions.
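The windowing assumption the paper probes can be made concrete with a small sketch: measure the change in a continuous variable inside a post-exposure window and see how the answer moves as the window widens. This is our own illustration, not the paper's analysis; the dates, weights, and window widths are hypothetical.

```python
from datetime import date, timedelta

def weight_change_in_window(weights, exposure_date, window_days):
    """Mean weight (kg) inside the post-exposure window minus the mean
    weight before exposure. `weights` is a list of (date, kg) pairs.
    Returns None when the window captures no measurements, i.e. the
    putative side effect is undetectable under that windowing choice."""
    end = exposure_date + timedelta(days=window_days)
    before = [kg for d, kg in weights if d < exposure_date]
    after = [kg for d, kg in weights if exposure_date <= d <= end]
    if not before or not after:
        return None
    return sum(after) / len(after) - sum(before) / len(before)

# Hypothetical patient: steady weight, exposure on 2020-03-01, a gain
# recorded soon after, partial return toward baseline months later.
measurements = [(date(2020, 1, 1), 70.0), (date(2020, 2, 1), 70.0),
                (date(2020, 3, 10), 74.0), (date(2020, 6, 1), 71.0)]
exposure = date(2020, 3, 1)
```

With a 30-day window the apparent gain is 4 kg; with a 120-day window the later measurement dilutes it to 2.5 kg; with a 5-day window no measurement falls inside and the effect vanishes entirely, which is exactly the sensitivity-to-assumptions problem the abstract describes.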
