Results 1 - 6 of 6
1.
Ann Hepatol; 27(2): 100582, 2022.
Article in English | MEDLINE | ID: mdl-34808392

ABSTRACT

INTRODUCTION: Recommendations on non-invasive imaging to assess pre-operative cardiac risk among liver transplant candidates vary across societal guidelines and individual institutional practices. In 2018, a standardized pre-transplant coronary evaluation protocol was established at Beth Israel Deaconess Medical Center, Boston, MA, to ensure appropriate and consistent pre-operative testing. METHODS: All patients who underwent liver transplant evaluation between January 1, 2016 and December 31, 2019 were retrospectively analyzed and divided into three cohorts: before the introduction of the protocol (prior to 2018), the initial protocol favoring invasive coronary angiography (ICA) (2018), and the amended protocol favoring coronary computed tomography angiography (CCTA) (post-2018). We described clinical characteristics, candidacy for transplant, and cardiovascular complications during follow-up. As an unadjusted exploratory analysis, the Cochran-Armitage exact trend test was used to examine univariate differences across time. RESULTS: A total of 462 patients underwent liver transplant evaluation during the study period. Among these, 218 (47.2%) underwent stress testing, 50 (10.8%) underwent CCTA, and 68 (14.8%) underwent ICA. Across the three time periods, the proportion of CCTAs performed increased (3%, 6.3%, and 26.3%, respectively; p<0.001), as did the proportion of patients diagnosed with obstructive CAD by CCTA (0%, 30%, and 51.4%, respectively; p=0.04). There was no significant difference in post-transplant cardiac complications among patients evaluated before 2018, during 2018, and after 2018 (5.9% vs. 5.6% vs. 6.0%; p=1.0). CONCLUSION: Our findings suggest it is reasonable to shift practice toward a less invasive approach using CCTA or nuclear stress testing when assessing liver transplant candidates at increased cardiovascular risk.
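
To make the trend analysis concrete, here is a minimal Python sketch of the asymptotic Cochran-Armitage test for trend in proportions; the study used the exact version, and the per-period counts below are hypothetical stand-ins chosen only to roughly match the reported CCTA rates.

```python
# Asymptotic Cochran-Armitage trend test for proportions across ordered groups.
# Per-period denominators are hypothetical illustrations, not the study's counts.
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(successes, totals, scores=None):
    """Two-sided asymptotic Cochran-Armitage test for trend in proportions."""
    x = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    s = np.arange(len(x)) if scores is None else np.asarray(scores, dtype=float)
    p_bar = x.sum() / n.sum()                  # pooled proportion
    t = np.sum(s * (x - n * p_bar))            # trend statistic
    var_t = p_bar * (1 - p_bar) * (np.sum(n * s**2) - np.sum(n * s)**2 / n.sum())
    z = t / np.sqrt(var_t)
    return z, 2 * norm.sf(abs(z))              # z-score, two-sided p-value

# Hypothetical counts approximating the reported 3%, 6.3%, 26.3% CCTA rates.
z, p = cochran_armitage_trend(successes=[6, 10, 34], totals=[200, 160, 130])
print(f"z = {z:.2f}, p = {p:.4f}")
```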


Subjects
Coronary Artery Disease; Liver Transplantation; Cohort Studies; Computed Tomography Angiography/methods; Coronary Angiography/methods; Coronary Artery Disease/diagnostic imaging; Coronary Artery Disease/surgery; Humans; Liver Transplantation/adverse effects; Predictive Value of Tests; Retrospective Studies; Risk Assessment
2.
JMIR Med Inform; 12: e53625, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38842167

ABSTRACT

Background: Despite restrictive opioid management guidelines, opioid use disorder (OUD) remains a major public health concern. Machine learning (ML) offers a promising avenue for identifying and alerting clinicians about OUD, thus supporting better clinical decision-making regarding treatment. Objective: This study aimed to assess the clinical validity of an ML application designed to identify and alert clinicians to different levels of OUD risk, by comparing its output to a structured review of medical records by clinicians. Methods: The ML application generated OUD risk alerts on outpatient data for 649,504 patients from 2 medical centers between 2010 and 2013. A random sample of 60 patients was selected from each of 3 OUD risk level categories (n=180). An OUD risk classification scheme and a standardized data extraction tool were developed to evaluate the validity of the alerts. Clinicians independently conducted a systematic and structured review of medical records and reached consensus on each patient's OUD risk level, which was then compared to the ML application's risk assignments. Results: A total of 78,587 patients without cancer with at least 1 opioid prescription were identified as follows: not high risk (n=50,405, 64.1%), high risk (n=16,636, 21.2%), and suspected OUD or OUD (n=11,546, 14.7%). The sample of 180 patients was representative of the total population in terms of age, sex, and race. Interrater reliability between the ML application and clinicians had a weighted kappa coefficient of 0.62 (95% CI 0.53-0.71), indicating good agreement. Combining the high risk and suspected OUD or OUD categories and using the review of medical records as the gold standard, the ML application had a corrected sensitivity of 56.6% (95% CI 48.7%-64.5%) and a corrected specificity of 94.2% (95% CI 90.3%-98.1%). The positive and negative predictive values were 93.3% (95% CI 88.2%-96.3%) and 60.0% (95% CI 50.4%-68.9%), respectively. Key themes for disagreement between the ML application and clinician reviews were identified. Conclusions: A systematic comparison was conducted between an ML application and clinicians for identifying OUD risk. The ML application generated clinically valid and useful alerts about patients' different OUD risk levels. ML applications hold promise for identifying patients at differing levels of OUD risk and will likely complement traditional rule-based approaches to generating alerts about opioid safety issues.
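
As an illustration of the agreement and accuracy metrics above, the sketch below computes a linearly weighted kappa with scikit-learn and sensitivity, specificity, PPV, and NPV from a collapsed 2x2 table. The rating vectors and table counts are invented (chosen only to land near the reported values), and the sketch does not reproduce the study's sampling correction.

```python
# Weighted kappa and 2x2 diagnostic-accuracy metrics; all inputs are toy data.
from sklearn.metrics import cohen_kappa_score

# 0 = not high risk, 1 = high risk, 2 = suspected OUD or OUD
ml_alerts = [0, 0, 1, 2, 1, 0, 2, 2, 1, 0]
clinician = [0, 1, 1, 2, 0, 0, 2, 1, 1, 0]

# Linearly weighted kappa penalizes near-misses less than distant disagreements.
kappa = cohen_kappa_score(ml_alerts, clinician, weights="linear")
print(f"weighted kappa = {kappa:.2f}")

def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV from a collapsed 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# 'High risk' and 'suspected OUD or OUD' collapsed into one positive class,
# with chart review as gold standard (illustrative counts, not study data).
print(diagnostic_accuracy(tp=56, fp=4, fn=43, tn=65))
```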

3.
JAMIA Open; 6(2): ooad031, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37181729

ABSTRACT

Objective: To describe a user-centered approach to develop, pilot test, and refine requirements for 3 electronic health record (EHR)-integrated interventions that target key diagnostic process failures in hospitalized patients. Materials and Methods: Three interventions were prioritized for development: a Diagnostic Safety Column (DSC) within an EHR-integrated dashboard to identify at-risk patients; a Diagnostic Time-Out (DTO) for clinicians to reassess the working diagnosis; and a Patient Diagnosis Questionnaire (PDQ) to gather patient concerns about the diagnostic process. Initial requirements were refined from analysis of test cases with elevated risk predicted by DSC logic compared to risk perceived by a clinician working group; DTO testing sessions with clinicians; PDQ responses from patients; and focus groups with clinicians and patient advisors using storyboarding to model the integrated interventions. Mixed methods analysis of participant responses was used to identify final requirements and potential implementation barriers. Results: Final requirements from analysis of 10 test cases predicted by the DSC, 18 clinician DTO participants, and 39 PDQ responses included the following: DSC configurable parameters (variables, weights) to adjust baseline risk estimates in real time based on new clinical data collected during hospitalization; more concise DTO wording and flexibility for clinicians to conduct the DTO with or without the patient present; and integration of PDQ responses into the DSC to ensure closed-loop communication with clinicians. Analysis of focus groups confirmed that tight integration of the interventions with the EHR would be necessary to prompt clinicians to reconsider the working diagnosis in cases with elevated diagnostic error (DE) risk or uncertainty. Potential implementation barriers included alert fatigue and distrust of the risk algorithm (DSC); time constraints, redundancies, and concerns about disclosing uncertainty to patients (DTO); and patient disagreement with the care team's diagnosis (PDQ). Discussion: A user-centered approach led to evolution of requirements for 3 interventions targeting key diagnostic process failures in hospitalized patients at risk for DE. Conclusions: We identify challenges and offer lessons from our user-centered design process.
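
A minimal sketch of what "configurable parameters (variables, weights)" for a tool like the DSC could look like: each risk variable carries an adjustable weight so baseline estimates can be tuned as new clinical data arrive. The variable names and weights are invented for illustration and are not the study's actual risk model.

```python
# Hypothetical configurable weighted risk score; names and weights are invented.
from dataclasses import dataclass, field

@dataclass
class DiagnosticRiskConfig:
    # Weights are configurable parameters an institution could tune over time.
    weights: dict = field(default_factory=lambda: {
        "abnormal_vitals": 0.30,
        "icu_transfer": 0.25,
        "new_antibiotic_after_72h": 0.20,
        "unresolved_chief_complaint": 0.25,
    })

def risk_score(patient_flags: dict, config: DiagnosticRiskConfig) -> float:
    """Weighted sum of binary risk flags, capped at 1.0."""
    score = sum(w for name, w in config.weights.items() if patient_flags.get(name))
    return min(score, 1.0)

flags = {"abnormal_vitals": True, "icu_transfer": False,
         "new_antibiotic_after_72h": True, "unresolved_chief_complaint": False}
print(risk_score(flags, DiagnosticRiskConfig()))  # 0.5 under these toy weights
```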

4.
Semin Thorac Cardiovasc Surg; 34(3): 947-957, 2022.
Article in English | MEDLINE | ID: mdl-34111554

ABSTRACT

The evidence for use of direct oral anticoagulants (DOACs) in the management of post-operative cardiac surgery atrial fibrillation is limited and mostly founded on clinical trials that excluded this patient population. We performed a systematic review and meta-analysis of clinical trials and observational studies to evaluate the hypothesis that DOACs are safe compared to warfarin for the anticoagulation of patients with post-operative cardiac surgery atrial fibrillation. We searched PubMed, EMBASE, Web of Science, clinicaltrials.gov, and the Cochrane Library for clinical trials and observational studies comparing DOACs with warfarin in patients ≥18 years old who had post-cardiac surgery atrial fibrillation. Primary outcomes included stroke, systemic embolization, bleeding, and mortality. We performed a random-effects meta-analysis of all outcomes. The meta-analysis of the primary outcomes showed a significantly lower risk of stroke with DOAC use (6 studies, 7,143 patients, relative risk [RR] 0.64; 95% CI 0.50-0.81, I²: 0.0%) compared to warfarin, a trend toward lower risk of systemic embolization (4 studies, 7,289 patients, RR 0.64, 95% CI 0.41-1.01, I²: 31.99%), and similar risks of bleeding (14 studies, 10,182 patients, RR 0.91; 95% CI 0.74-1.10, I²: 26.6%) and mortality (12 studies, 9,843 patients, RR 1.01; 95% CI 0.74-1.37, I²: 26.5%). Current evidence suggests that DOAC use, compared to warfarin, in the management of atrial fibrillation after cardiac surgery is associated with a lower risk of stroke, a strong trend toward a lower risk of systemic embolization, and no evidence of increased risk of hospital readmission, bleeding, or mortality.
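
For readers unfamiliar with the pooling behind figures like "RR 0.64; 95% CI 0.50-0.81, I²: 0.0%", here is a minimal DerSimonian-Laird random-effects sketch on the log-RR scale; the four input studies are invented placeholders, not the trials pooled above.

```python
# DerSimonian-Laird random-effects pooling of log relative risks.
# Input studies are invented placeholders for illustration only.
import numpy as np

def random_effects_rr(rr, ci_low, ci_high):
    """Pool per-study RRs (with 95% CIs) on the log scale; returns RR, CI, I²."""
    theta = np.log(rr)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE from CI width
    w = 1 / se**2                                         # fixed-effect weights
    theta_fixed = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - theta_fixed)**2)              # Cochran's Q
    df = len(rr) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = 1 / (se**2 + tau2)                           # random-effects weights
    pooled = np.sum(w_star * theta) / np.sum(w_star)
    se_pooled = 1 / np.sqrt(np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return np.exp(pooled), np.exp(lo), np.exp(hi), i2

rr, lo, hi, i2 = random_effects_rr(
    rr=np.array([0.55, 0.70, 0.62, 0.68]),
    ci_low=np.array([0.35, 0.45, 0.40, 0.50]),
    ci_high=np.array([0.86, 1.09, 0.96, 0.92]),
)
print(f"pooled RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I² = {i2:.1f}%")
```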


Subjects
Atrial Fibrillation; Cardiac Surgical Procedures; Stroke; Administration, Oral; Adolescent; Anticoagulants/adverse effects; Atrial Fibrillation/complications; Atrial Fibrillation/diagnosis; Atrial Fibrillation/drug therapy; Cardiac Surgical Procedures/adverse effects; Hemorrhage/chemically induced; Humans; Stroke/diagnosis; Stroke/etiology; Stroke/prevention & control; Treatment Outcome; Warfarin/adverse effects
5.
Diagnosis (Berl); 9(4): 446-457, 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-35993878

ABSTRACT

OBJECTIVES: To test a structured electronic health record (EHR) case review process to identify diagnostic errors (DEs) and diagnostic process failures (DPFs) in acute care. METHODS: We adapted validated tools (Safer Dx, Diagnostic Error Evaluation and Research [DEER] taxonomy) to assess the diagnostic process during the hospital encounter and categorized 13 postulated e-triggers. From 365 adult general medicine patients who expired and underwent our institution's mortality case review process, we created two test cohorts: all preventable cases (n=28) and an equal number of randomly sampled non-preventable cases (n=28). After excluding patients with a length of stay of more than one month, each case was reviewed by two blinded clinicians trained in our process and by an expert panel. Inter-rater reliability was assessed. We compared the frequency of DE contributing to death in both cohorts, as well as mean DPFs and e-triggers for DE-positive and DE-negative cases within each cohort. RESULTS: Twenty-seven (96.4%) preventable and 24 (85.7%) non-preventable cases underwent our review process. Inter-rater reliability was moderate between individual reviewers (Cohen's kappa 0.41) and substantial with the expert panel (Cohen's kappa 0.74). The frequency of DE contributing to death was significantly higher in the preventable cohort than in the non-preventable cohort (56% vs. 17%, OR 6.25 [1.68, 23.27], p<0.01). Within each cohort, mean DPFs were significantly higher, and mean e-triggers non-significantly higher, for DE-positive compared with DE-negative cases. CONCLUSIONS: We observed substantial agreement between final consensus and expert panel reviews using our structured EHR case review process. DEs contributing to death and associated with DPFs were identified in institutionally designated preventable and non-preventable cases. While e-triggers may be useful for discriminating DE-positive from DE-negative cases, larger studies are required for validation. Our approach has potential to augment institutional mortality case review processes with respect to DE surveillance.
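
The reported OR of 6.25 [1.68, 23.27] is consistent with roughly 15 of 27 reviewed preventable and 4 of 24 reviewed non-preventable cases being DE-positive; the sketch below computes the odds ratio and a Woolf (log-scale) 95% CI from that inferred 2x2 table, which should be read as a reconstruction, not the study's actual counts.

```python
# Odds ratio with a Woolf (log-scale) 95% CI. Counts are inferred from the
# reported percentages and reproduce the published OR 6.25 [1.68, 23.27].
import math

def odds_ratio_woolf(a, b, c, d):
    """OR and 95% CI for the 2x2 table [[a, b], [c, d]] via the Woolf method."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# a: DE+ preventable, b: DE- preventable,
# c: DE+ non-preventable, d: DE- non-preventable
print(odds_ratio_woolf(a=15, b=12, c=4, d=20))  # ~ (6.25, 1.68, 23.27)
```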


Subjects
Reproducibility of Results; Adult; Humans; Electron Spin Resonance Spectroscopy; Diagnostic Errors/prevention & control
6.
Diagnosis (Berl); 9(1): 77-88, 2021 Aug 23.
Article in English | MEDLINE | ID: mdl-34420276

ABSTRACT

OBJECTIVES: We describe an approach for analyzing failures in diagnostic processes in a small, enriched cohort of general medicine patients who expired during hospitalization and experienced medical error. Our objective was to delineate a systematic strategy for identifying frequent and significant failures in the diagnostic process, to inform strategies for preventing adverse events due to diagnostic error. METHODS: Two clinicians independently reviewed detailed records of purposively sampled cases identified from established institutional case review forums and assessed the likelihood of diagnostic error using the Safer Dx instrument. Each reviewer used the modified Diagnostic Error Evaluation and Research (DEER) taxonomy, revised for acute care (41 possible failure points across six process dimensions), to characterize the frequency of failure points (FPs) and significant FPs in the diagnostic process. RESULTS: Of 166 cases with medical error, 16 were sampled: 13 (81.3%) had at least one diagnostic error, and a total of 113 FPs and 30 significant FPs were identified. A majority of significant FPs (63.3%) occurred in the "Diagnostic Information and Patient Follow-up" and "Patient and Provider Encounter and Initial Assessment" process dimensions. Fourteen (87.5%) cases had a significant FP in at least one of these dimensions. CONCLUSIONS: Failures in the diagnostic process occurred across multiple dimensions in our purposively sampled cohort. A systematic analytic approach incorporating the modified DEER taxonomy, revised for acute care, offered critical insights into key failures in the diagnostic process that could serve as potential targets for preventive interventions.
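
A toy sketch of the frequency analysis described above: tallying significant FPs by DEER process dimension. The two leading dimension labels come from the abstract; the third label and the per-case assignments are invented for illustration.

```python
# Tally significant failure points (FPs) by DEER process dimension.
# Assignments below are invented; only the first two labels come from the abstract.
from collections import Counter

significant_fps = [
    "Diagnostic Information and Patient Follow-up",
    "Patient and Provider Encounter and Initial Assessment",
    "Diagnostic Information and Patient Follow-up",
    "Diagnostic Testing",  # hypothetical third dimension label
    "Patient and Provider Encounter and Initial Assessment",
]

counts = Counter(significant_fps)
total = sum(counts.values())
for dimension, n in counts.most_common():
    print(f"{dimension}: {n} ({n / total:.1%})")
```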


Subjects
Medical Errors; Diagnostic Errors/prevention & control; Electron Spin Resonance Spectroscopy; Humans; Medical Errors/prevention & control