ABSTRACT
BACKGROUND AND PURPOSE: In spine neurosurgery practice, patient-reported outcome measures (PROMs) are tools used to convey information about a patient's health experience and are an integral component of a clinician's decision-making process, as they help guide treatment strategies to improve outcomes and minimize pain. Currently, there is limited research showing effective strategies for integrating PROMs into electronic medical records. This study aims to provide a framework for other healthcare systems by outlining the process from start to finish in seven Hartford Healthcare Neurosurgery outpatient spine clinics throughout the state of Connecticut. METHODS: On March 1, 2021, a pilot implementation program began in one clinic, and by July 1, 2021, all outpatient clinics had implemented the revised clinical workflow, which included the electronic collection of PROMs within the electronic health record (EHR). A retrospective chart analysis examined all adult (18+) new patient visits in seven outpatient clinics by comparing the rates of PROMs collection in Half 1 (March 1, 2021-August 31, 2022) and in Half 2 (September 1, 2022-February 28, 2023) across all sites. Additionally, patient characteristics were studied to identify any variables that may lead to higher rates of collection. RESULTS: During the study period, 3528 new patient visits were analyzed. There was a significant change in rates of PROMs collection across all departments between H1 and H2 (p < 0.05). Additional significant predictors of PROMs collection were the sex and ethnicity of the patient as well as the provider type for the visit (p < 0.05). CONCLUSIONS: This study demonstrated that implementing the electronic collection of PROMs into an already existing clinical workflow reduces previously identified collection barriers and enables PROMs collection rates that meet or exceed current benchmarks. Our results provide a successful step-by-step framework for other spine neurosurgery clinics to implement a similar approach.
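As a rough illustration of the H1-versus-H2 comparison, the sketch below runs a chi-square test on collection counts. The counts are hypothetical (chosen only so the totals sum to the reported 3528 visits), and the authors' actual statistical approach is not specified beyond p < 0.05.

```python
# Hypothetical PROMs collection counts by study half (illustrative only;
# chosen so the grand total matches the reported 3528 new patient visits).
import numpy as np
from scipy.stats import chi2_contingency

#                 collected  not collected
table = np.array([[ 820,  980],    # Half 1
                  [1240,  488]])   # Half 2

chi2, p, dof, expected = chi2_contingency(table)
rate_h1 = table[0, 0] / table[0].sum()
rate_h2 = table[1, 0] / table[1].sum()
print(f"H1: {rate_h1:.1%}, H2: {rate_h2:.1%}, p = {p:.3g}")
```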
ABSTRACT
Objective: To determine information transfer during simulated shift-to-shift intraoperative anesthesia handoffs and the benefits of using a handoff tool. Patients and Methods: Anesthesiology residents and faculty participated in simulation-based education in a simulation center on April 6 and 20, 2017, and April 11 and 25, 2019. We used a fixed clinical scenario to compare information transfer in multiple sequential simulated handoff chains conducted from memory or guided by an electronic medical record-generated tool. For each handoff, 25 informational elements were assessed on a discrete 0-2 scale, generating a possible information retention score of 50. Time to handoff completion and the number of clarifications requested by the receiver were also determined. Results: We assessed 32 handoff chains with up to 4 handoffs per chain. When both groups were combined, the mean information retention score was 31 of 50 (P<.001) for the first clinician and declined by an average of 4 points per handoff (P<.001). The handoff tool improved information retention by almost 7 points (P=.002) but did not affect the rate of information degradation (P=.38). Handoff time remained constant for the intervention group (P=.67) but declined by 2 minutes/handoff (P<.001) in the control group, which required 7 more clarifications/handoff (P=.003). In the control group, 7 of 16 (44%) handoff chains contained one or more information retention scores below the lowest score of the entire intervention group (P=.007). Conclusion: Clinical handoffs are accompanied by degradation of information that is only partially reduced by use of a handoff tool, which appears to prevent extremes of information degradation.
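A minimal sketch of how the reported effects could be estimated: retention score (0-50) regressed on handoff position and tool use. The eight data points are fabricated to mirror the reported slope (about -4 points per handoff) and tool benefit (about +7 points); the authors' actual model is not described here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated scores for one memory-only chain and one tool-guided chain.
data = pd.DataFrame({
    "score":    [31, 27, 23, 19,  38, 34, 30, 26],
    "position": [ 1,  2,  3,  4,   1,  2,  3,  4],  # order within the chain
    "tool":     [ 0,  0,  0,  0,   1,  1,  1,  1],  # 1 = EMR-generated tool
})

# The slope on `position` estimates points lost per handoff; the coefficient
# on `tool` estimates the retention benefit of the handoff tool.
model = smf.ols("score ~ position + tool", data=data).fit()
print(model.params)  # position ~ -4, tool ~ +7 with these fabricated values
```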
ABSTRACT
Since the 1990s, the Veterans Health Administration (VHA) has maintained a registry of Veterans with Spinal Cord Injuries and Disorders (SCI/Ds) to guide clinical care, policy, and research. Historically, methods for collecting and recording data for the VHA SCI/D Registry (VSR) have required significant time, cost, and staffing to maintain, were susceptible to missing data, and caused delays in aggregation and reporting. Over the last several decades, each subsequent data collection method has aimed to address these issues. This paper describes the development and validation of a case-finding and data-capture algorithm that uses primary clinical data, including diagnoses and utilization across 9 million VHA electronic medical records, to create a comprehensive registry of living and deceased Veterans seen for SCI/D services since 2012. A multi-step process was used to develop and validate a computer algorithm to create a comprehensive registry of Veterans with SCI/D whose records are maintained in the enterprise-wide VHA Corporate Data Warehouse. Chart reviews and validity checks were used to validate the accuracy of cases identified using the new algorithm. An initial cohort of 28,202 living and deceased Veterans with SCI/D who were enrolled in VHA care from 10/1/2012 through 9/30/2017 was validated. Tables, reports, and charts using VSR data were developed to provide operational tools to study, predict, and improve targeted management and care for Veterans with SCI/Ds. The modernized VSR includes data on diagnoses, qualifying fiscal year, recent utilization, demographics, injury, and impairment for 38,022 Veterans as of 11/2/2022. This establishes the VSR as one of the largest ongoing longitudinal SCI/D datasets in North America and provides operational reports for VHA population health management and evidence-based rehabilitation. The VSR also comprises one of the only registries for individuals with non-traumatic SCI/Ds and holds potential to advance research and treatment for multiple sclerosis (MS), amyotrophic lateral sclerosis (ALS), and other motor neuron disorders with spinal cord involvement. Selected trends in VSR data indicate possible differences in the future lifelong care needs of Veterans with SCI/Ds. Future collaborative research using the VSR offers opportunities to contribute to knowledge and improve health care for people living with SCI/Ds.
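To make the case-finding idea concrete, here is a toy pass over encounter-level data that flags patients with repeated SCI/D-coded visits. The ICD-10 prefixes and the two-encounter rule are placeholders for illustration, not the VSR algorithm's actual criteria.

```python
# Toy case-finding pass: flag patients with >= 2 encounters carrying an
# SCI/D-related diagnosis code. Codes and threshold are hypothetical.
import pandas as pd

encounters = pd.DataFrame({
    "patient_id":  [1, 1, 2, 3, 3],
    "icd10":       ["S14.109A", "G95.9", "I10", "G95.89", "G95.89"],
    "fiscal_year": [2013, 2014, 2015, 2016, 2017],
})

SCID_PREFIXES = {"S14", "S24", "S34", "G95"}   # placeholder qualifying codes

hits = encounters[encounters["icd10"].str[:3].isin(SCID_PREFIXES)]
counts = hits.groupby("patient_id").size()
registry_ids = counts[counts >= 2].index.tolist()
print(registry_ids)  # -> [1, 3]
```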
ABSTRACT
Objective: To improve the care of pediatric oncology patients with neutropenic fever who present to the emergency department (ED) by administering appropriate empiric antibiotics within 60 minutes of arrival. Patients and Methods: We focused on improving the care of pediatric oncology patients at risk of neutropenia who presented to the ED with concern for fever. Our baseline adherence to the administration of empiric antibiotics within 60 minutes for this population was 53% (76/144) from January 1, 2010, to December 21, 2014. During 2015, we reviewed data monthly, finding 73% adherence. We used Lean methodology to identify process waste, completed a value-stream map with input from multidisciplinary stakeholders, and convened a root cause analysis to identify causes for delay. The 4 causes were as follows: (1) lack of staff awareness; (2) missing patient information in the electronic medical record; (3) practice variation; and (4) lack of clear prioritization of laboratory draws. We initiated Plan-Do-Study-Act cycles to achieve our goal of 80% of patients receiving appropriate empiric antibiotics within 60 minutes of arrival in the ED. Results: Five Plan-Do-Study-Act cycles were completed, focusing on the following: (1) timely identification of patients by utilizing the electronic medical record to initiate a page to the care team; (2) creation of a streamlined intravascular access process; (3) practice standardization; (4) convenient access to appropriate antibiotics; and (5) care team education. Timely antibiotic administration increased from 73% to 95% of patients by 2018. More importantly, adherence was sustained above 90% through 2021. Conclusion: A structured and multifaceted approach using quality improvement methodologies can achieve and sustain improved patient care outcomes in the ED.
ABSTRACT
Background: Updated American and Chinese guidelines recommend calculating atherosclerotic cardiovascular disease (ASCVD) risk using the Pooled Cohort Equations (PCE) or the Prediction for Atherosclerotic Cardiovascular Disease Risk in China (China-PAR) model, respectively; however, evidence on the performance of both models in Asian populations is limited. Objectives: The authors aimed to evaluate the accuracy of the PCE and China-PAR models in a contemporary Chinese cohort. Methods: Data were extracted from the CHERRY (CHinese Electronic health Records Research in Yinzhou) study. Participants aged 40 to 79 years without prior ASCVD at baseline from 2010 to 2016 were included. ASCVD was defined as nonfatal or fatal stroke, nonfatal myocardial infarction, and cardiovascular death. Models were assessed for discrimination and calibration. Results: Among 226,406 participants, 5362 (2.37%) adults developed a first ASCVD event during a median of 4.60 years of follow-up. Both models had good discrimination: C-statistics in men were 0.763 (95% confidence interval [CI]: 0.754-0.773) for PCE and 0.758 (95% CI: 0.749-0.767) for China-PAR; C-statistics in women were 0.820 (95% CI: 0.812-0.829) for PCE and 0.811 (95% CI: 0.802-0.819) for China-PAR. The China-PAR model underpredicted risk by 20% in men and by 40% in women, especially in the highest-risk groups. However, PCE overestimated risk by 63% in men and conversely underestimated it by 34% in women, with poor calibration (both P < 0.001). After recalibration, observed risks and risks predicted by the recalibrated PCE were better aligned. Conclusions: In this large-scale population-based study, both PCE and China-PAR had good discrimination in 5-year ASCVD risk prediction. China-PAR outperformed PCE in calibration, whereas recalibration equalized the performance of PCE and China-PAR. Further specific models are needed to improve accuracy in the highest-risk groups.
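A minimal sketch of the two checks reported above, discrimination (C-statistic) and calibration (observed-to-predicted ratio), using synthetic predictions in place of PCE or China-PAR output:

```python
# Discrimination and calibration checks on synthetic 5-year risk predictions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 10_000
pred_5yr = rng.uniform(0.001, 0.30, n)       # stand-in for PCE/China-PAR output
events = rng.binomial(1, pred_5yr * 0.6)     # simulate systematic overprediction

c_stat = roc_auc_score(events, pred_5yr)     # discrimination (C-statistic)
oe_ratio = events.mean() / pred_5yr.mean()   # calibration: observed / expected
print(f"C-statistic = {c_stat:.3f}, O/E = {oe_ratio:.2f}")  # O/E < 1: overprediction

# A crude recalibration rescales predictions toward the cohort's observed risk.
recalibrated = pred_5yr * oe_ratio
```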
ABSTRACT
Objective: To evaluate the effectiveness and safety of an evidence-based urine culture stewardship program in reducing catheter-associated urinary tract infections (CAUTIs) across a 3-hospital system. Patients and Methods: This is a prospective, 2-year quality improvement program conducted from October 1, 2018, to September 30, 2020. An evidence-based urine culture stewardship program was designed, consisting of the following: criteria for allowing or restricting urine cultures from catheterized patients, a best practice advisory integrated into the ordering system of the electronic medical record, and a systematic provider education and feedback program to ensure compliance. The system-wide rates of CAUTIs (total CAUTIs/catheter-days × 1000), changes in intercepts, trends, mortality, length of stay, rates of device utilization, and rates of hospital-onset sepsis were compared for the 3 years before and 2 years after the launch of the program. Results: Catheter-associated urinary tract infections progressively decreased after the initiation of the program (B=-0.21, P=.001). When the trends before and after the initiation of the program were compared, there were no statistically significant increases in the ratio of actual to predicted hospital length of stay, intensive care unit length of stay, system-wide mortality, or intensive care unit mortality. Although the rates of hospital-acquired sepsis remained consistent after the implementation of the stewardship program through the first quarter of 2020, the rates showed an increase in the second and third quarters of 2020. However, hospital-onset sepsis events associated with the diagnosis of a urinary tract infection did not increase after the intervention. Conclusion: Urine culture stewardship is a safe and effective way to reduce CAUTIs among patients in a large multihospital health care system. Patient safety indicators appeared unchanged after the implementation of the program, and ongoing follow-up will improve confidence in the long-term sustainability of this strategy.
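The rate definition and trend analysis lend themselves to a short sketch: compute CAUTI rates per 1,000 catheter-days and fit a segmented (interrupted time series) regression. All values are invented, and the authors' exact model specification may differ.

```python
# Quarterly CAUTI rate (total CAUTIs / catheter-days x 1000) and a segmented
# regression with level and trend change at program launch. Data are invented.
import numpy as np
import statsmodels.api as sm

cautis        = np.array([28, 31, 27, 25, 24, 20, 17, 15])
catheter_days = np.array([21000, 22000, 20500, 21500, 21000, 20000, 19500, 20000])
rate = cautis / catheter_days * 1000          # CAUTIs per 1,000 catheter-days

quarter = np.arange(len(rate))
post = (quarter >= 4).astype(int)             # program launches at quarter 4
X = sm.add_constant(np.column_stack([quarter, post, post * (quarter - 4)]))
fit = sm.OLS(rate, X).fit()
print(fit.params)  # intercept, baseline trend, level change, trend change
```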
ABSTRACT
Introduction: The relationship between sex and cardiopulmonary resuscitation (CPR) outcomes remains unclear. In particular, questions remain regarding the potential contribution of unmeasured confounders. We aimed to examine differences in the quality of chest compressions delivered to men and women. Methods: We prospectively studied observational data recorded during consecutive resuscitations in a single tertiary center (February 1, 2015 to December 31, 2018), with real-time follow-up to hospital discharge. The studied variables included time in CPR, no-flow time and fraction, compression rate and depth, and release velocity. The primary study endpoint was the unadjusted association between patient sex and chest compression quality (depth and rate). The secondary endpoint was the association between the various components of chest compression quality, sex, and survival to hospital discharge/neurologically intact survival. Results: Overall, 260 in-hospital resuscitations (57.7% male patients) were included. Among these, 100 (38.5%) achieved return of spontaneous circulation (ROSC) and 35 (13.5%) survived to hospital discharge. Female patients were significantly older. Ischemic heart disease and ventricular arrhythmias were more prevalent among males. Compression depth was greater in female vs male patients (54.9 ± 11.3 vs 51.7 ± 10.9 mm; p = 0.024). Other CPR quality metrics were similar. The rates of ROSC, survival to hospital discharge, and neurologically intact survival did not differ between males and females. Univariate analysis revealed no association between sex, quality metrics, and outcomes. Discussion: Women received deeper chest compressions during in-hospital CPR. Our findings require corroboration in larger cohorts but nonetheless underscore the need to maintain high-quality CPR in all patients using real-time feedback devices. Future studies should also include data on ventilation rates and volumes, which may contribute to survival outcomes.
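For the unadjusted depth comparison, a two-sample t test suffices. The sketch below simulates individual-level data from the reported means and SDs; the group sizes are approximations based on the 57.7% male share of 260 resuscitations.

```python
# Simulated compression-depth data from the reported group means and SDs (mm).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
depth_female = rng.normal(54.9, 11.3, 110)   # ~42.3% of 260 resuscitations
depth_male   = rng.normal(51.7, 10.9, 150)   # ~57.7% of 260 resuscitations

t, p = ttest_ind(depth_female, depth_male)
print(f"female {depth_female.mean():.1f} mm vs male {depth_male.mean():.1f} mm, p = {p:.3f}")
```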
ABSTRACT
Background: We reviewed internal data and the current literature to update our enhanced recovery protocol (ERP) for patients undergoing a total breast mastectomy. Following implementation, the protocol was audited by chart review and compliance reminders were sent through email. Objective: Our primary research aim was to examine protocol compliance following the update. Our secondary aims were to examine the association between the change in protocol and the rates of postoperative nausea and vomiting (PONV) and hematoma formation requiring reoperation. Methods: We retrospectively obtained data extracted from the electronic medical record. To test for a difference in outcomes before versus after implementation of the protocol, we used multivariable logistic regression, with the primary comparisons excluding a ± one-month window and secondary comparisons excluding a ± three-month window from the date of implementation. Results: Our cohort included 5853 unique patients. Total intravenous anesthesia (TIVA) compliance increased from 17% to 52% (P < 0.001), and the use of intraoperative ketorolac dropped from 44% to nearly no utilization (0.7%; P < 0.001). The rate of reoperation due to bleeding decreased from 3.6% to 2.6% after implementation, with the adjusted decrease being 1.0% (bootstrap 95% CI, 0.11%, 1.9%; P = 0.053) excluding a ± 1-month window and 1.2% (bootstrap 95% CI, 0.24%, 2.0%; P = 0.028) excluding a ± 3-month window. The rate of rescue antiemetics dropped by 6.4% (95% CI, 3.9%, 9.0%). Conclusions: We were able to improve compliance for nearly all components of the protocol, which translated to a meaningful change in an important patient outcome.
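A minimal sketch of the before/after analysis under stated assumptions: logistic regression for reoperation with one example covariate, and a percentile bootstrap for the adjusted decrease. All data are simulated around the reported crude rates; the authors' covariate set and bootstrap details are not reproduced here.

```python
# Simulated before/after cohort: logistic regression for reoperation due to
# bleeding, with a percentile bootstrap for the adjusted risk decrease.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 4000
df = pd.DataFrame({
    "post": rng.integers(0, 2, n),               # 1 = after the protocol update
    "age":  rng.normal(55, 12, n),               # example covariate (invented)
})
df["reop"] = rng.binomial(1, np.where(df["post"] == 1, 0.026, 0.036))

def adjusted_decrease(d: pd.DataFrame) -> float:
    m = smf.logit("reop ~ post + age", data=d).fit(disp=0)
    # Standardized (marginal) risks under post = 0 vs post = 1.
    return m.predict(d.assign(post=0)).mean() - m.predict(d.assign(post=1)).mean()

boot = [adjusted_decrease(df.sample(frac=1, replace=True, random_state=i))
        for i in range(200)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"adjusted decrease = {adjusted_decrease(df):.2%} (95% CI {lo:.2%}, {hi:.2%})")
```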
ABSTRACT
Objectives: To determine the positive predictive value (PPV) of algorithms to identify patients with major (at the ankle or more proximal) lower extremity amputation (LEA) using Department of Veterans Affairs electronic medical records (EMR), and to evaluate whether PPV varies by sex, age, and race. Design: We conducted a validation study comparing EMR-determined LEA status with self-reported LEA (the criterion standard). Setting: Veterans who receive care at the Department of Veterans Affairs. Participants: We invited a national sample of patients (N=699) with at least 1 procedure or diagnosis code for major LEA to participate. We oversampled women, Black men, and men ≤40 years of age. Interventions: Not applicable. Main Outcome Measure: We calculated PPV estimates and false-negative percentages for 7 algorithms using EMR LEA procedure and diagnosis codes relative to self-reported major LEA. Results: A total of 466 veterans (68%) self-reported their LEA status. PPVs for the 7 algorithms ranged from 89% to 100%. The algorithm that required a single diagnosis or procedure code had the lowest PPV (89%). The algorithm that required at least 1 procedure code had the highest PPV (100%) but also the highest proportion of false negatives (66%). Algorithms that required at least 1 procedure code or 2 or more diagnosis codes 1 month to 1 year apart had high PPVs (98%-99%) but varied in their false-negative percentages. PPV estimates were higher among men than women but did not differ meaningfully by age or race after accounting for sex. Conclusion: PPVs were higher when 1 procedure code or at least 2 diagnosis codes were required, and PPV estimates differed by sex. Investigators should consider trade-offs between PPV and false negatives when identifying patients with LEA using EMRs.
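PPV and the false-negative percentage against a self-report criterion standard reduce to simple counts, as in this sketch with invented flags for one hypothetical algorithm:

```python
# PPV and false-negative sketch for one code-based algorithm, with
# self-reported LEA as the criterion standard. Flags are illustrative.
import numpy as np

algo_positive = np.array([1, 1, 1, 0, 1, 0, 1, 0])  # algorithm says major LEA
self_report   = np.array([1, 1, 0, 1, 1, 1, 1, 0])  # patient-reported status

tp = np.sum((algo_positive == 1) & (self_report == 1))
fp = np.sum((algo_positive == 1) & (self_report == 0))
fn = np.sum((algo_positive == 0) & (self_report == 1))

ppv = tp / (tp + fp)
fn_pct = fn / (tp + fn)          # share of true cases the algorithm misses
print(f"PPV = {ppv:.0%}, false negatives = {fn_pct:.0%}")
```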
ABSTRACT
Aim: In-hospital cardiac arrest (IHCA) is a significant burden on healthcare worldwide. Outcomes of IHCA are worse in developing countries compared with developed ones. We aimed to study the epidemiology of, and the factors determining outcomes in, adult IHCA in a high-income developing country. Methods: We abstracted prospectively collected data on adult patients admitted to our institution over a three-year period who suffered a cardiac arrest. We analysed patient demographics and arrest characteristics, including response time, initial rhythm, and code duration. Pre-arrest vital signs, primary diagnoses, and discharge and functional status were obtained from the patients' electronic medical records. Results: A total of 447 patients were studied. The IHCA rate was 8.6/1000 hospital admissions. Forty percent (40%) achieved return of spontaneous circulation (ROSC), with an overall survival-to-discharge rate of 10.8%, of whom 59% had a good functional outcome (cerebral performance category score of 1 or 2). Fifty-four percent (54%) of patients had IHCA attributed to causes other than cardiac or respiratory. Admission Glasgow Coma Scale (GCS) score, shockable rhythm, and short code duration were significantly associated with survival (p < 0.001). Conclusion: A combination of patient- and system-related factors, such as the underlying cause of cardiac arrest and the lack of a DNAR (do-not-attempt-resuscitation) policy, may explain the reduced survival rate in our setting compared with developed countries.
ABSTRACT
BACKGROUND: Inpatient glucose management can be challenging due to evolving factors that influence a patient's blood glucose (BG) throughout hospital admission. The purpose of our study was to predict the category of a patient's next BG measurement based on electronic medical record (EMR) data. METHODS: EMR data from 184,361 admissions containing 4,538,418 BG measurements from five hospitals in the Johns Hopkins Health System were collected from patients who were discharged between January 1, 2015 and May 31, 2019. Index BGs used for prediction included the 5th to penultimate BG measurements (N = 2,740,539). The outcome was the category of the next BG measurement: hypoglycemic (BG ≤ 70 mg/dl), controlled (BG 71-180 mg/dl), or hyperglycemic (BG > 180 mg/dl). A random forest algorithm that included a broad range of clinical covariates predicted the outcome and was validated internally and externally. FINDINGS: In our internal validation test set, 72.8%, 25.7%, and 1.5% of BG measurements occurring after the index BG were controlled, hyperglycemic, and hypoglycemic, respectively. The sensitivity/specificity for prediction of controlled, hyperglycemic, and hypoglycemic were 0.77/0.81, 0.77/0.89, and 0.73/0.91, respectively. On external validation in four hospitals, the ranges of sensitivity/specificity for prediction of controlled, hyperglycemic, and hypoglycemic were 0.64-0.70/0.80-0.87, 0.75-0.80/0.82-0.84, and 0.76-0.78/0.87-0.90, respectively. INTERPRETATION: A machine learning algorithm using EMR data can accurately predict the category of a hospitalized patient's next BG measurement. Further studies should determine the effectiveness of integrating this model into the EMR in reducing rates of hypoglycemia and hyperglycemia.
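A minimal sketch of the modeling setup, assuming a handful of synthetic stand-in features rather than the broad EMR covariate set the authors used:

```python
# Synthetic stand-in for the next-BG-category prediction task.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5000
X = np.column_stack([
    rng.normal(140, 50, n),      # index BG (mg/dl)
    rng.normal(60, 15, n),       # age (years)
    rng.integers(0, 2, n),       # insulin ordered (0/1)
])
next_bg = X[:, 0] + rng.normal(0, 30, n)   # stand-in for the next measurement
y = np.select([next_bg <= 70, next_bg > 180], ["hypo", "hyper"], "controlled")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Per-class sensitivity (recall); specificity needs one-vs-rest bookkeeping.
print(recall_score(y_te, clf.predict(X_te), average=None,
                   labels=["controlled", "hyper", "hypo"]))
```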
ABSTRACT
BACKGROUND: Obesity, cancer, and diabetes frequently coexist. The association of glycaemic variability (GV) and obesity with cancer events had not been explored in diabetes. METHODS: In the prospective Hong Kong Diabetes Register cohort (1995-2019), we used Cox proportional hazards models to examine the risk associations of GV with all-site cancer (primary outcome) and cause-specific death (secondary outcome). We also explored the joint association of obesity and GV with these outcomes and with site-specific cancer. We expressed GV using the HbA1c variability score (HVS), defined as the percentage of HbA1c values varying by at least 0.5% from the value at the preceding visit. FINDINGS: We included 15,286 patients (type 2 diabetes: n=15,054, type 1 diabetes: n=232) with ≥10 years of diabetes and ≥3 years of observation (51.7% men, age (mean±SD): 61.04±10.73 years, HbA1c: 7.54±1.63%, body mass index [BMI]: 25.65±3.92 kg/m2, all-site cancer events: n=928, cancer death events: n=404). There were non-linear relationships between HVS and outcomes, but there was linearity within the high and low HVS groups stratified by the median (IQR) value of HVS (42.31 [27.27, 56.28]). In the high HVS group, the adjusted hazard ratio (aHR) per SD of HVS was 1.15 (95% CI: 1.04, 1.26) for all-site cancer (n=874). The respective aHRs for breast (n=77), liver (n=117) and colorectal (n=184) cancer were 1.44 (1.07, 1.94), 1.37 (1.08, 1.74), and 1.09 (0.90, 1.32). In the high HVS group, the respective aHRs were 1.21 (1.06, 1.39), 1.27 (1.15, 1.40), and 1.15 (1.09, 1.22) for cancer, vascular, and noncancer nonvascular death. When stratified by obesity (BMI ≥25 kg/m2), the high-HVS and obese group had the highest aHRs of 1.42 (1.16, 1.73), 2.44 (1.24, 4.82), and 2.63 (1.45, 4.74) for all-site, breast, and liver cancer, respectively, versus the low-HVS and non-obese group. The respective aHRs were 1.45 (1.07, 1.96), 1.47 (1.12, 1.93), and 1.35 (1.16, 1.57) for cancer, vascular, and noncancer nonvascular death. INTERPRETATION: Obesity and high GV were associated with increased risk of all-site, breast, and liver cancer, and of cancer-specific death, in type 2 diabetes. FUNDING: The Chinese University of Hong Kong Diabetes Research Fund.
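The HVS definition translates directly into code. This sketch treats "varying by 0.5%" as an absolute change of at least 0.5 percentage points from the preceding visit, which is an interpretation on our part:

```python
# HbA1c variability score (HVS): the percentage of visit-to-visit transitions
# where HbA1c changed by >= 0.5 percentage points from the preceding value.
import numpy as np

def hvs(hba1c: list[float], threshold: float = 0.5) -> float:
    values = np.asarray(hba1c, dtype=float)
    changes = np.abs(np.diff(values)) >= threshold
    return 100 * changes.mean()

print(hvs([7.2, 7.9, 7.8, 8.5, 7.6]))  # 3 of 4 transitions >= 0.5 -> 75.0
```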
ABSTRACT
OBJECTIVE: To provide a comprehensive description of stroke characteristics, risk factors, laboratory parameters, and treatment in a series of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)-infected patients admitted to Mayo Clinic hospitals in Rochester, Minnesota; Jacksonville, Florida; and Phoenix, Arizona, as well as the Mayo Clinic Health System. PATIENTS AND METHODS: We retrospectively identified hospitalized patients in whom stroke and SARS-CoV-2 infection were diagnosed within the same 3-month interval between September 8, 2019, and December 31, 2020, and extracted data on all available variables of interest. We further incorporated our findings into the existing body of basic science research to present a schematic model illustrating the proposed pathogenesis of ischemic stroke in SARS-CoV-2-infected patients. RESULTS: We identified 30 cases during the study period, yielding a 0.5% stroke rate across 6381 SARS-CoV-2-infected hospitalized patients. Strokes were ischemic in 26 of 30 individuals and hemorrhagic in 4 of 30. Traditional risk factors were common, including hypertension (24 of 30), hyperlipidemia (18 of 30), smoking history (13 of 30), diabetes (11 of 30), and atrial fibrillation (8 of 30). The most common ischemic stroke mechanisms were cardioembolism (9 of 26) and cryptogenic (9 of 26). Intravenous alteplase and mechanical thrombectomy were administered to 2 of 26 and 1 of 26, respectively. The median (interquartile range) serum C-reactive protein, interleukin-6, D-dimer, fibrinogen, and ferritin levels were 66 (21-210) mg/L, 116 (8-400) pg/mL, 1267 (556-4510) ng/mL, 711 (263-772) mg/dL, and 407 (170-757) mcg/L, respectively, and were elevated in individuals with available results. CONCLUSION: The high prevalence of vascular risk factors and the concurrent elevation of proinflammatory and procoagulant biomarkers suggest an interplay between the two in the pathogenesis of stroke in SARS-CoV-2-infected patients.
ABSTRACT
Background: There is growing recognition of the risk of cardiovascular (CV) events, particularly myocarditis, in the context of immune checkpoint inhibitor (ICI) therapy; however, true event rates in real-world populations and against a background of CV disease remain uncertain. Objectives: The authors sought to determine CV event occurrence in ICI-treated patients and assess the accuracy of diagnosis by International Classification of Diseases (ICD) code compared with adjudication using established definitions and full source-documentation review. Methods: Electronic medical record extraction identified potential CV events in ICI-treated patients in the University of Colorado Health system. Two cardiologists independently adjudicated events using standardized definitions. Agreement between ICD codes and adjudicated diagnoses was assessed using the kappa statistic. Results: The cohort comprised 1,813 ICI-treated patients with a mean follow-up of 4.6 ± 3.4 years (3.2 ± 3.2 years pre-ICI and 1.4 ± 1.4 years post-ICI). Venous thromboembolic events (VTEs) were the most common events, occurring in 11.4% of patients pre-ICI and 11.3% post-ICI therapy. Post-ICI therapy, the crude rates of myocardial infarction (MI), heart failure, and stroke were 3.0%, 2.8%, and 1.6%, respectively. Six patients (0.3%) developed myocarditis post-ICI. Agreement between the ICD code and adjudication was greater for VTE (κ = 0.82; 95% CI: 0.79-0.85) and MI (κ = 0.74; 95% CI: 0.66-0.82) and worse for myocarditis (κ = 0.50; 95% CI: 0.20-0.80) and heart failure (κ = 0.47; 95% CI: 0.40-0.54). Conclusions: ICD codes correlated well with adjudicated events for VTE and MI, but correlation was worse for heart failure and myocarditis. Adjudication with standardized definitions can enhance understanding of the incidence of CV events related to ICI therapy.
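Agreement between code-based and adjudicated diagnoses can be summarized with Cohen's kappa, as in this sketch with invented labels for one event type:

```python
# ICD-code diagnosis vs. adjudicated diagnosis for one event type,
# summarized with Cohen's kappa. Labels below are invented.
from sklearn.metrics import cohen_kappa_score

icd_flag = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]   # event per ICD code
adjudged = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]   # event per two-cardiologist review

print(f"kappa = {cohen_kappa_score(icd_flag, adjudged):.2f}")
```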
ABSTRACT
Background: Clinical decision support systems (CDSS) embedded in electronic medical records are a best-practice approach. However, information is needed on how to incorporate a CDSS to facilitate parental tobacco cessation counseling and reduce child tobacco smoke exposure (TSE) in Pediatric Emergency Department (PED) and Urgent Care (UC) settings. The objective was to explore the barriers to and enablers of CDSS use to facilitate child TSE screening and parental tobacco cessation counseling by PED/UC nurses and physicians. Methods: We conducted 29 semi-structured, focused interviews with nurses (n = 17) and physicians (n = 12) at a children's hospital PED/UC. The interview guide included a brief presentation about the design and components of a prior CDSS tobacco intervention. Participants were asked their opinions about the CDSS components and their recommendations for adapting and implementing the CDSS tobacco intervention in the PED/UC setting. A thematic framework analysis method was used to code and analyze the qualitative data. Results: Participant mean (± SD) age was 42 (± 10.1) years; the majority were female (82.8%), non-Hispanic white (93.1%), and never tobacco users (86.2%); all were never electronic cigarette users. Four themes emerged: (1) the optimal timing for completing CDSS screening and counseling during visits; (2) additional information and feedback needs for the CDSS; (3) perceived enablers of CDSS use, such as its systematic approach; and (4) perceived barriers to CDSS use, such as lack of time and staff. Conclusions: The CDSS intervention for child TSE screening and parental tobacco cessation during PED/UC visits received endorsements and suggestions for optimal implementation from nurses and physicians.
ABSTRACT
BACKGROUND/OBJECTIVE: Our objective was to assess the impact of mass mailing and the inclusion of a Best Practice Advisory (BPA) "Pop-Up" tool in the electronic medical record (EMR) on hepatitis C virus (HCV) screening rates. METHODS: Between June 2015 and March 2020, two interventions were developed for primary care physicians (PCPs). An educational letter along with a blood requisition form, signed on behalf of the PCPs, was sent to patients. We also developed a BPA "Pop-Up" screening tool to alert PCPs to order HCV screening tests for patients with no previous screening. Data were collected and analyzed prospectively. RESULTS: When we started the screening program in June 2015, 33,736 baby boomers were eligible for screening, and the hospital system added an additional 26,027 baby boomers between June 2015 and March 2020. Of the 89 primary care providers employed by the hospital, 75 agreed to participate at different time periods. We screened 23,291 (43.5%) of 53,526 eligible patients during the study period. Of these, 399 (1.7%) had HCV antibodies, but HCV RNA was positive in only 195 (1%). HCV antibody positivity rates were higher in men, blacks, and the 1951-1960 birth cohorts. Spontaneous clearance rates appeared to be lower in men (OR 0.59, 95% CI 0.39-0.90, P = 0.015) and in blacks (OR 0.31, 95% CI 0.20-0.50, P < 0.001). CONCLUSION: Although a formal screening program increased screening rates for HCV among baby boomers, about 50% of baby boomers remained unscreened. In this community screening program, we found that men and blacks are less likely to have spontaneous HCV clearance.
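A short sketch of the kind of odds-ratio calculation behind the spontaneous-clearance comparison; the 2x2 counts are illustrative, not the study's data:

```python
# Illustrative 2x2 table: spontaneous clearance (HCV RNA negative among
# antibody-positive patients) by sex. Counts are invented.
import numpy as np
import statsmodels.api as sm

#                  cleared  not cleared
table = np.array([[ 60, 110],    # men
                  [ 90,  80]])   # women

t22 = sm.stats.Table2x2(table)
lo, hi = t22.oddsratio_confint()
print(f"OR (men vs women) = {t22.oddsratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```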
ABSTRACT
This review examines how a highly structured data collection system could be used to create data-driven diagnostic classification algorithms. Some preliminary data from this process are provided. The data collection system described is applicable to any clinical domain where the diagnoses being explored are based predominantly on clinical history (subjective) and physical examination (objective) information. The system has been piloted and refined using patient encounters collected in a clinic specializing in orofacial pain treatment. In summary, whether you believe a branching, hybrid, check-box-based data collection system with built-in algorithms is needed depends on your individual agenda. If you have no plans for data analysis or publishing about the various phenotypes discovered, and you do not need pop-up suggestions for best diagnosis and treatment options, it is easier to use a semi-structured narrative note for your patient encounters. If, however, you want data-driven diagnostic and disease-risk algorithms and pop-up best-treatment options, then you need a highly structured data collection system that is compatible with machine learning analysis. Automating the journey from data collection to diagnosis has the potential to improve standards of care by providing faster and more reliable predictions.
ABSTRACT
BACKGROUND: "Interpersonal and Communication Skills" (ICS) is a core competency set forth by the ACGME. No structured curriculum exists to train orthopedics residents in ICS. METHODS: Twenty-four out of thirty-five orthopedics residents completed the survey (69%). The survey had the following domains: [1] Demographics, [2] Communication Needs/Goals, and [3] Communication Barriers. RESULTS: Eighty-three percent of respondents wanted to improve their communication skills and their patient's experience. Interns-PGY4s wanted to improve on similar specific communication skills. All residents desired training in conflict management. CONCLUSION: There is a need among orthopedics residents for a communication skills curriculum early in residency training, specifically in conflict management.
ABSTRACT
Immune checkpoint inhibitors (ICIs) are increasingly used in the treatment of cancer. Immune checkpoint inhibitors may cause a wide range of autoimmune toxicities referred to as immune-related adverse events (irAEs). There is a paucity of data regarding the presentations and outcomes of patients receiving ICIs who seek care in an emergency department (ED). We performed a retrospective review of patients receiving an ICI who presented to a tertiary care ED between May 1, 2017, and April 30, 2018. Data including ED chief complaint, diagnosis, treatment, and disposition were collected along with baseline characteristics and diagnosis at the time of outpatient oncology follow-up. We report descriptive statistics summarizing the characteristics of the cohort. There were 98 ED visits identified among 67 unique patients. Immune-related adverse events were diagnosed in 16 (16.3%) cases. The most common chief complaints within the irAE group were gastrointestinal symptoms (10 of 16; 62.5%). Among the 16 confirmed irAE cases, the most common irAE diagnosed was colitis (9 of 16; 56.3%). Two (12.5%) patients with irAEs received corticosteroids during their stay in the ED, and 10 (62.5%) patients with irAEs required hospital admission. Emergency medicine providers documented consideration of an irAE in the differential diagnosis in 14.3% of all ED visits and in 43.8% of visits in which an irAE was ultimately diagnosed. Emergency providers should be familiar with ICIs, given their expanding use and potential adverse effects, to improve early recognition and patient outcomes in ED settings.
ABSTRACT
BACKGROUND: Cardiac surgery for radiation-induced valvular disease is associated with adverse outcomes. Transcatheter aortic valve replacement (TAVR) is increasingly used in patients with a history of chest-directed radiation therapy and aortic stenosis (CRT-AS). OBJECTIVES: We examined outcomes of TAVR compared with surgical aortic valve replacement (SAVR) in patients with CRT-AS. METHODS: We identified 69 patients with CRT-AS who underwent TAVR from January 2012 to September 2018. Operative mortality, postoperative morbidities, and length of hospitalization were compared with those of 117 contemporaneous patients with CRT-AS who underwent isolated SAVR. Age-adjusted survival was evaluated by means of Cox proportional hazards modeling. RESULTS: Compared with SAVR patients, TAVR patients were older (mean age 75 ± 11.5 vs 65 ± 11.5 years), with more comorbidities, such as chronic obstructive pulmonary disease, atrial fibrillation, and peripheral vascular disease (all P < 0.050). Operative mortality was 4.3% for SAVR vs 1.4% for TAVR (P = 0.41). Most SAVR deaths (4 of 5) occurred in the intermediate-/high-risk group (Society of Thoracic Surgeons predicted risk of operative mortality >3%; P = 0.026). The ratio of observed to expected mortality was better for low-risk SAVR patients and all TAVR patients (0.72 [95% confidence interval (CI): 0.59-0.86] and 0.24 [95% CI: 0.05-0.51], respectively) than for intermediate-/high-risk SAVR patients (2.52 [95% CI: 0.26-4.13]). SAVR patients had significantly longer median intensive care unit and overall lengths of stay and higher blood transfusion requirements but similar rates of stroke and pacemaker implantation. CONCLUSIONS: TAVR was associated with excellent in-hospital outcomes and better survival compared with intermediate-/high-risk SAVR in patients with CRT-AS. While SAVR still has a role in low-risk patients or those for whom TAVR is unsuitable for technical or anatomical reasons, TAVR is emerging as the standard of care for intermediate-/high-risk CRT-AS patients.
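The observed-to-expected mortality ratio is a simple computation once per-patient STS predicted risks are in hand; this sketch uses invented risks:

```python
# Observed-to-expected (O/E) operative mortality from per-patient STS
# predicted risk of mortality (PROM) values. All numbers are invented.
import numpy as np

prom = np.array([0.021, 0.018, 0.035, 0.042, 0.015, 0.029])  # predicted risks
observed_deaths = 1

expected_deaths = prom.sum()
print(f"O/E = {observed_deaths / expected_deaths:.2f}")  # < 1: fewer deaths than predicted
```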