Results 1 - 20 of 8,601
1.
J Stud Alcohol Drugs ; 81(2): 144-151, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32359043

ABSTRACT

OBJECTIVE: Alcohol use disorders (AUDs) are associated with high social and health care costs. We compare the direct social and health care costs of patients with AUDs, according to four service use profiles: (a) AUD treatment, (b) mental health (MH) treatment, (c) AUD + MH treatment, (d) no treatment. A separate analysis of the costliest 10% is included. Furthermore, the association between the service user profile and the risk of death is examined. METHOD: Direct unit service costs were retrieved from the electronic health record system and supplemented with patient grouping-based costs for primary and secondary care services, to examine the yearly mean cost per patient in the AUD cohort (N = 5,136; 71.1% male). We used data collected in the North Karelia region of Finland between 2014 and 2018. RESULTS: Total costs of care for the cohort during the 5-year follow-up were 126 million Euros, and the percentage of the costliest 10% (n = 521) was 51.7% (65 million Euros). Total costs were 12,778 Euros lower if the person received AUD treatment only, compared with those not in treatment. For those receiving MH treatment only, the total costs were 1,819 Euros higher, and costs were 1,523 Euros higher for those receiving AUD + MH treatment. Receiving any treatment was associated with a diminished risk of death (AUD: odds ratio [OR] = 0.56; MH: OR = 0.63; AUD + MH: OR = 0.41). CONCLUSIONS: Receiving only AUD treatment was associated with the lowest cost of care. Our results support the early identification of AUDs and provision of treatment in specialized addiction services to lower the costs of care and improve care outcomes.

2.
J Hand Ther ; 2020 Apr 29.
Article in English | MEDLINE | ID: mdl-32360062

ABSTRACT

STUDY DESIGN: Retrospective cost-of-illness study. INTRODUCTION: Injuries to the hand and wrist are common. Most uncomplicated and stable upper extremity injuries recover with conservative management; however, some require surgical intervention. The economic burden on the health care system from such injuries can be considerable. PURPOSE OF THE STUDY: To estimate the economic implications of surgically managed acute hand and wrist injuries at one urban health care network. METHODS: Using 33 primary diagnosis ICD-10 codes involving the hand and wrist, 453 consecutive patients who attended the study setting's emergency department and received subsequent surgical intervention and outpatient follow-up were identified from 2014 to 2015 electronic billing records. Electronic medical records were reviewed to extract demographic data. Costs were calculated from resource use in the emergency department, inpatient, and outpatient settings. Results are presented by demographics, injury type, mechanism of injury, and patient pathway. RESULTS: Two hundred and twenty-six individuals (n = 264 surgeries) were included. The total cost of all injuries was $1,204,606. The median cost per injury for non-compensable cases (n = 191) was $4508 [IQR $3993-$6172] and $5057 [IQR $3957-$6730] for compensable cases (n = 35). The median number of postoperative appointments with a surgeon was 2.00 (IQR 1.00-3.00) for both compensable and non-compensable cases. The number of hand therapy appointments for non-compensable cases and compensable cases was 4 [IQR 2-6] and 2 [IQR 1-3], respectively. DISCUSSION: Findings of this investigation highlight opportunities for health promotion strategies for reducing avoidable injuries and present considerations for reducing cost burden by addressing high fail to attend (FTA) appointment rates. CONCLUSION: Surgically managed hand and wrist injuries contribute to a significant financial burden on the health care system.
Further research using stringent data collection methods is required to establish epidemiological data and national estimates of cost burden.

3.
BMC Health Serv Res ; 20(1): 370, 2020 May 01.
Article in English | MEDLINE | ID: mdl-32357891

ABSTRACT

BACKGROUND: The 2013 Diabetes Canada guidelines recommended routinely using vascular protective medications for most patients with diabetes. These medications included statins and angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin receptor blockers (ARBs). Antiplatelet agents were only recommended for secondary prevention of cardiovascular disease. Using Electronic Medical Record (EMR) data, we previously found that guideline dissemination efforts were not associated with an increase in the rate of primary care prescriptions of these medications. However, this needs confirmation: patients can receive prescriptions from different sources, including specialists, and they may not always fill these prescriptions. Using both EMR and administrative health data, we examined whether guideline dissemination impacted the dispensing of vascular protective medications to patients. METHODS: The study population included patients with diabetes aged 66 or over in Ontario, Canada. We created two cohorts using two different approaches: an EMR algorithm for diabetes using linked EMR-administrative data and an administrative algorithm using population-level administrative data. We examined data from January 2010 to December 2016. Patients with diabetes were deemed likely to be taking a medication (or covered) during a quarter if the daily amount for a dispensed medication would last for at least 75% of days in that quarter. An interrupted time series analysis was used to assess the proportion of patients covered by each medication class. Proton pump inhibitors (PPIs) were used as a reference. RESULTS: There was no increase in the rate of change for medication coverage following guideline release in either the EMR or the administrative diabetes cohort. For statins, the change in trend was -0.03, p = 0.7 (EMR) and -0.12, p = 0.04 (administrative). For ACEI/ARBs, it was 0.03, p = 0.6 (EMR) and 0, p = 1 (administrative).
For antiplatelets, it was 0.001, p = 0.97 (EMR) and -0.03, p = 0.03 (administrative). For the comparator PPIs, it was -0.07, p = 0.4 (EMR) and -0.11, p = 0.002 (administrative). CONCLUSIONS: Using both EMR and administrative health data, we confirmed that the Diabetes Canada 2013 guideline dissemination strategy did not lead to an increased rate of coverage for vascular protective medications. Alternative strategies are needed to effect change in practice.
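The "change in trend" these authors report is the trend-change coefficient of a segmented (interrupted time series) regression. A minimal sketch of that model is below; the break point and coefficients are illustrative only, not the study's data.

```python
import numpy as np

def segmented_trend(y, break_idx):
    """Fit y = b0 + b1*t + b2*post + b3*(t since break) by least squares.
    b3 is the 'change in trend' an interrupted time series analysis reports."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    post = (t >= break_idx).astype(float)          # indicator: after the event
    t_since = post * (t - break_idx)               # time elapsed since the event
    X = np.column_stack([np.ones_like(t), t, post, t_since])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [baseline level, pre-trend, level change, trend change]

# Simulated quarterly coverage (%) with a +0.3/quarter trend change at quarter 10
t = np.arange(20)
y = 60 + 0.5 * t + 1.0 * (t >= 10) + 0.3 * np.where(t >= 10, t - 10, 0)
b = segmented_trend(y, 10)
```

On this noise-free series the fit recovers the simulated coefficients exactly; real analyses add a seasonal reference series (here, the PPIs) and autocorrelation-aware errors.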

4.
Arthroscopy ; 36(5): 1215-1217, 2020 May.
Article in English | MEDLINE | ID: mdl-32370882

ABSTRACT

Legacy patient-reported outcome measures lack standardization, resulting in difficulty comparing the results of diverse clinical outcome studies: "You can't compare apples to oranges." To address this concern, the National Institutes of Health initiated the Patient-Reported Outcomes Measurement Information System (PROMIS) to assess common dimensions of a wide range of diseases. PROMIS uses computer adaptive testing: a fluid questionnaire chooses subsequent questions based on the responses to previous questions to efficiently characterize outcomes using only 4 to 6 questions. This greatly reduces survey fatigue. Research correlating PROMIS to legacy measures is of value. For now, some questions may require more information than PROMIS can provide, in which case legacy measures could be preferred. In the future, developing and adding a utility score to PROMIS could assess "value" and allow decision analyses and cost-effectiveness analyses for diverse health interventions. In the end, PROMIS may allow us to compare apples to oranges.

5.
Article in English | MEDLINE | ID: mdl-32371073

ABSTRACT

PURPOSE: The NCI Common Terminology Criteria for Adverse Events (CTCAE) v5.0 is the standard for oncology toxicity encoding and grading despite limited validation. We assessed inter-rater reliability (IRR) in multi-reviewer toxicity identification. METHODS AND MATERIALS: Two reviewers independently reviewed 100 randomly selected notes for weekly on-treatment visits during radiotherapy from the electronic health record (EHR). Discrepancies were adjudicated by a third reviewer for consensus. Term harmonization was performed to account for overlapping symptoms in CTCAE. IRR was assessed using unweighted and weighted Cohen's kappa coefficients. RESULTS: Between reviewers, the unweighted kappa was 0.68 (95% CI 0.65-0.71) and the weighted kappa 0.59 (0.22-1.00). IRR was consistent between symptoms noted as present or absent, with kappas of 0.6 (0.66-0.71) and 0.6 (0.65-0.69), respectively. CONCLUSIONS: Significant discordance suggests that toxicity identification, particularly retrospective identification, is a complex and error-prone task. Strategies to improve IRR, including training and simplification of the CTCAE criteria, should be considered in trial design and future terminologies.
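Unweighted Cohen's kappa, the agreement statistic these reviewers report, is chance-corrected agreement between two raters. A generic NumPy sketch (not tied to the study's data):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa between two raters' categorical labels."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)                      # sorted set of categories
    n = len(r1)
    C = np.zeros((len(cats), len(cats)))           # confusion matrix
    for a, b in zip(r1, r2):
        C[np.searchsorted(cats, a), np.searchsorted(cats, b)] += 1
    po = np.trace(C) / n                           # observed agreement
    pe = (C.sum(axis=1) @ C.sum(axis=0)) / n**2    # agreement expected by chance
    return (po - pe) / (1 - pe)
```

The weighted variant used for graded toxicities penalizes off-diagonal disagreements by a distance weight (e.g., grade 1 vs 3 counts more than grade 1 vs 2) instead of counting all disagreements equally.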

6.
Am J Crit Care ; 29(3): 204-213, 2020 May 01.
Article in English | MEDLINE | ID: mdl-32355967

ABSTRACT

BACKGROUND: Critically ill patients have a variety of unique risk factors for pressure injury. Identification of these risk factors is essential to prevent pressure injury in this population. OBJECTIVE: To identify factors predicting the development of pressure injury in critical care patients using a large data set from the PhysioNet MIMIC-III (Medical Information Mart for Intensive Care) clinical database. METHODS: Data for 1460 patients were extracted from the database. Variables that were significant in bivariate analyses were used in a final logistic regression model. A final set of significant variables from the logistic regression was used to develop a decision tree model. RESULTS: In regression analysis, cardiovascular disease, peripheral vascular disease, pneumonia or influenza, cardiovascular surgery, hemodialysis, norepinephrine administration, hypotension, septic shock, moderate to severe malnutrition, sex, age, and Braden Scale score on admission to the intensive care unit were all predictive of pressure injury. Decision tree analysis revealed that patients who received norepinephrine, were older than 65 years, had a length of stay of 10 days or less, and had a Braden Scale score of 15 or less had a 63.6% risk of pressure injury. CONCLUSION: Determining pressure injury risk in critically ill patients is complex and challenging. One common pathophysiological factor is impaired tissue oxygenation and perfusion, which may be nonmodifiable. Improved risk quantification is needed and may be realized in the near future by leveraging the clinical information available in the electronic medical record through the power of predictive analytics.

7.
J Med Screen ; : 969141320919152, 2020 May 01.
Article in English | MEDLINE | ID: mdl-32356670

ABSTRACT

OBJECTIVES: To assess time trends in colorectal cancer screening uptake, time-to-colonoscopy completion following a positive fecal occult blood test and associated patient factors, and the extent and predictors of longitudinal screening adherence in Israel. SETTING: Nation-wide population-based study using data collected from four health maintenance organizations for the Quality Indicators in Community Healthcare Program. METHODS: Screening uptake for the eligible population (aged 50-74) was recorded from 2003 to 2018 using aggregate data. For a subcohort (2008-2012, N = 1,342,617), time-to-colonoscopy following a positive fecal occult blood test and longitudinal adherence to screening guidelines were measured using individual-level data, and associated factors were assessed in multivariate models. RESULTS: The annual proportion screened rose for both sexes from 11% to 65%, increasing five-fold in the 60-74 age group and more than six-fold among 50-59 year olds. From 2008 to 2012, 67,314 adults had a positive fecal occult blood test, of whom 71% eventually underwent colonoscopy after a median interval of 122 (95% confidence interval 110.2-113.7) days. Factors associated with time-to-colonoscopy included age, socioeconomic status, and comorbidities. Only 25.5% of the population demonstrated full longitudinal screening adherence, mainly attributable to colonoscopy in the past 10 years rather than annual fecal occult blood test performance (83% versus 17%, respectively). Smoking, diabetes, lower socioeconomic status, cardiovascular disease, and hypertension were associated with decreased adherence. Performance of other cancer screening tests and frequent primary care visits were strongly associated with adherence.
CONCLUSIONS: Despite substantial improvement in colorectal cancer screening uptake on a population level, individual-level data uncovered gaps in colonoscopy completion after a positive fecal occult blood test and in longitudinal adherence to screening, which should be addressed using focused interventions.

8.
Clin Infect Dis ; 2020 May 05.
Article in English | MEDLINE | ID: mdl-32367125

ABSTRACT

BACKGROUND: Whole genome sequencing (WGS) surveillance and electronic health record data mining have the potential to greatly enhance the identification and control of hospital outbreaks. The objective was to develop methods for examining the economic value of a WGS surveillance-based infection prevention (IP) program compared to the standard of care (SoC). METHODS: The economic value of a WGS surveillance-based IP program was assessed from a hospital's perspective using historical outbreaks from 2011-2016. We used the transmission networks of these outbreaks to estimate the incremental cost per transmission averted. The number of transmissions averted depended on the effectiveness of intervening against transmission routes, the time from transmission to positive culture results, and the time taken to obtain WGS results and intervene on the transmission route identified. The total cost of an IP program included the costs of staffing, WGS, and treating infections. RESULTS: An estimated 41 of 89 (46%) transmissions could have been averted under the WGS surveillance-based IP program, and it was found to be a less costly and more effective strategy than SoC. The results were most sensitive to the cost of performing WGS and the number of isolates sequenced per year under WGS surveillance. The probability of the WGS surveillance-based IP program being cost-effective was 80% if willingness to pay exceeded $2,400 per transmission averted. CONCLUSIONS: The proposed economic analysis is a useful tool for examining the economic value of a WGS surveillance-based IP program. These methods will be applied to a prospective evaluation of WGS surveillance compared to SoC.
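A "probability of being cost-effective at a given willingness to pay" like the 80% figure above typically comes from a probabilistic sensitivity analysis over net monetary benefit. The sketch below shows the mechanics only; the distributions are invented for illustration, and only the 41 transmissions averted and the $2,400 threshold echo the abstract.

```python
import numpy as np

def prob_cost_effective(wtp, n_sim=20_000, seed=0):
    """Toy probabilistic sensitivity analysis: sample uncertain incremental
    costs and transmissions averted, then count how often the program has
    positive incremental net monetary benefit (NMB) at willingness to pay
    `wtp` per transmission averted. Distributions are hypothetical."""
    rng = np.random.default_rng(seed)
    averted = rng.normal(41, 8, n_sim).clip(min=0)    # transmissions averted
    extra_cost = rng.normal(50_000, 60_000, n_sim)    # incremental program cost ($)
    nmb = wtp * averted - extra_cost                  # incremental NMB per draw
    return float((nmb > 0).mean())
```

Sweeping `wtp` and plotting this probability yields the cost-effectiveness acceptability curve; in the study, the draws would instead come from the outbreak-derived cost and transmission estimates.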

9.
Surgery ; 2020 Apr 26.
Article in English | MEDLINE | ID: mdl-32349869

ABSTRACT

BACKGROUND: Automated data extraction from the electronic medical record is fast, scalable, and inexpensive compared with manual abstraction. However, concerns regarding data quality and control for underlying patient variation when performing retrospective analyses exist. This study assesses the ability of summary electronic medical record metrics to control for patient-level variation in cost outcomes in pancreaticoduodenectomy. METHODS: Patients that underwent pancreaticoduodenectomy from 2014 to 2018 at a single institution were identified within the electronic medical record and linked with the National Surgical Quality Improvement Program. Variables in both data sets were compared using interrater reliability. Logistic and linear regression modelling of complications and costs were performed using combinations of comorbidities/summary metrics. Models were compared using the adjusted R2 and Akaike information criterion. RESULTS: A total of 117 patients populated the final data set. A total of 31 (26.5%) patients experienced a complication identified by the National Surgical Quality Improvement Program. The median direct variable cost for the encounter was US$14,314. Agreement between variables present in the electronic medical record and the National Surgical Quality Improvement Program was excellent. Stepwise linear regression models of costs, using only electronic medical record-extractable variables, were non-inferior to those created with manually abstracted individual comorbidities (R2 = 0.67 vs 0.30, Akaike information criterion 2,095 vs 2,216). Model performance statistics were minimally impacted by the addition of comorbidities to models containing electronic medical record summary metrics (R2 = 0.67 vs 0.70, Akaike information criterion 2,095 vs 2,088). CONCLUSION: Summary electronic medical record perioperative risk metrics predict patient-level cost variation as effectively as individual comorbidities in the pancreaticoduodenectomy population. 
Automated electronic medical record data extraction can expand the patient population available for retrospective analysis without the associated increase in human and fiscal resources that manual data abstraction requires.

10.
Genet Med ; 2020 Apr 30.
Article in English | MEDLINE | ID: mdl-32350418

ABSTRACT

PURPOSE: Cancer genetics clinics have seen increasing demand, challenging genetic counselors (GCs) to increase efficiency and prompting some clinics to implement genetic counseling assistants (GCAs). To evaluate the impact of GCAs on Geisinger's cancer genetics clinic, we tracked GC time utilization, new patient volume, and clinic cost per patient before and after implementing a GCA program. METHODS: GCs used time-tracking software while completing preappointment activities. Electronic health records were reviewed for appointment length and number of patients per week. Internal salary data for GCs and GCAs were used to calculate clinic costs per patient. RESULTS: Time spent by GCs completing each preappointment activity (21.8 vs. 15.1 minutes) and appointment length (51.6 vs. 44.5 minutes) significantly decreased after GCA program implementation (p values < 0.001). New patients per week per GC significantly increased (7.9 vs. 11.4, p < 0.001). Weekly clinic cost per patient significantly decreased ($233 vs. $176, p = 0.03). CONCLUSION: Implementing a GCA program increased GC efficiency in preappointment activities and clinic appointments, increased patient volume, and decreased clinic cost per patient. Such a program can improve access to GC services and assist GCs in focusing on the direct patient care for which they are specially trained.

11.
PLoS One ; 15(5): e0232335, 2020.
Article in English | MEDLINE | ID: mdl-32379778

ABSTRACT

OBJECTIVES: Although the American Academy of Pediatrics recommends screening for autism spectrum disorder (ASD) for all young children, disparities in ASD diagnosis and intervention in minority children persist. One potential contributor to disparities could be whether physicians take different actions after an initial positive screen based on patient demographics. This study estimated factors associated with physicians completing the follow-up interview for the Modified Checklist for Autism in Toddlers with Follow-up (M-CHAT-F) and referring children to diagnostic services, audiology, and Early Intervention (EI) immediately after a positive screen. METHODS: Children seen in a large primary care network that has implemented universal ASD screening were included if they screened positive on the M-CHAT parent questionnaire during a 16- to 30-month well-child visit (N = 2882). Demographics, screening results, and referrals were extracted from the electronic health record. RESULTS: Children from lower-income families or on public insurance were more likely to have been administered the follow-up interview. Among children who screened positive, 26% were already in EI, 31% were newly referred to EI, and 11% each were referred to audiology and for comprehensive ASD evaluation. Overall, 40.2% received at least one recommended referral, and 3.7% received all recommended referrals. In adjusted multivariable models, male sex, white versus black race, living in an English-speaking household, and having public insurance were associated with new EI referral. Male sex, black versus white race, and lower household income were associated with referral to audiology. Being from an English-speaking family, white versus Asian race, and lower household income were associated with referral for ASD evaluation. A concurrent positive screen for general developmental concerns was associated with each referral.
CONCLUSIONS: We found low rates of follow-up interview completion and referral after positive ASD screen, with variations in referral by sex, language, socio-economic status, and race. Understanding pediatrician decision-making about ASD screening is critical to improving care and reducing disparities.

12.
J Natl Cancer Inst ; 2020 Apr 25.
Article in English | MEDLINE | ID: mdl-32333765

ABSTRACT

Development of personalized, stratified follow-up care pathways where care intensity and setting vary with needs could improve cancer survivor outcomes and efficiency of healthcare delivery. Advancing such an approach in the United States requires identification and prioritization of the most pressing research and data needed to create and implement personalized care pathway models. Cancer survivorship research and care experts (N = 39) participated in an in-person workshop on this topic in 2018. Using a modified Delphi technique, a structured, validated system for identifying consensus, an expert panel identified critical research questions related to operationalizing personalized, stratified follow-up care pathways for individuals diagnosed with cancer. Consensus for the top priority research questions was achieved iteratively through three rounds: item generation, item consolidation, and selection of the final list of priority research questions. From the 28 research questions that were generated, 11 research priority questions were identified. The questions were categorized into 4 priority themes: Determining outcome measures for new care pathways; Developing and evaluating new care pathways; Incentivizing new care pathway delivery; and Technology and infrastructure to support self-management. Existing data sources to begin answering questions were also identified. While existing data sources, including cancer registry, electronic medical record and health insurance claims data, can be enhanced to begin addressing some questions, additional research resources are needed to address these priority questions.

13.
Schizophr Bull ; 2020 Apr 18.
Article in English | MEDLINE | ID: mdl-32303767

ABSTRACT

The objective of this study is to describe the 2-year real-world clinical outcomes after transition to psychosis in patients at clinical high risk. The study used a clinical electronic health record cohort including all patients receiving a first index primary diagnosis of nonorganic International Classification of Diseases (ICD)-10 psychotic disorder within the early psychosis pathway in the South London and Maudsley (SLaM) National Health Service (NHS) Trust from 2001 to 2017. Outcomes encompassed the cumulative probability (at 3, 6, 12, and 24 months) of receiving a first (1) treatment with antipsychotic, (2) informal admission, (3) compulsory admission, and (4) treatment with clozapine, and (5) the number of days spent in hospital (at 12 and 24 months) in patients transitioning to psychosis from clinical high-risk services (Outreach and Support in south London; OASIS) compared to other first-episode groups. Analyses included logistic and zero-inflated negative binomial regressions. In the study, 1561 patients were included; those who had initially been managed by OASIS and had subsequently transitioned to a first episode of psychosis (n = 130) were more likely to receive antipsychotic medication (at 3, 6, and 24 months; all P < .023), to be admitted informally (at all timepoints, all P < .004) and on a compulsory basis (at all timepoints, all P < .013), and to have spent more time in hospital (all timepoints, all P < .007) than first-episode patients who were already psychotic when seen by the OASIS service (n = 310) or who presented to early intervention services (n = 1121). The likelihood of receiving clozapine was similar across all groups (at 12/24 months, all P < .101). Transition to psychosis from a clinical high-risk state is associated with severe real-world clinical outcomes. Prevention of transition to psychosis should remain a core target of future research. The study protocol was registered on www.researchregistry.com (researchregistry5039).

14.
J Med Internet Res ; 22(4): e15554, 2020 Apr 02.
Article in English | MEDLINE | ID: mdl-32238331

ABSTRACT

BACKGROUND: Variations in patient demand increase the challenge of balancing high-quality nursing skill mixes against budgetary constraints. Developing staffing guidelines that allow high-quality care at minimal cost requires first exploring the dynamic changes in nursing workload over the course of a day. OBJECTIVE: Accordingly, this longitudinal study analyzed nursing care supply and demand in 30-minute increments over a period of 3 years. We assessed 5 care factors: patient count (care demand), nurse count (care supply), the patient-to-nurse ratio for each nurse group, extreme supply-demand mismatches, and patient turnover (ie, number of admissions, discharges, and transfers). METHODS: Our retrospective analysis of data from the Inselspital University Hospital Bern, Switzerland included all inpatients and nurses working in their units from January 1, 2015 to December 31, 2017. Two data sources were used. The nurse staffing system (tacs) provided information about nurses and all the care they provided to patients, their working time, and admission, discharge, and transfer dates and times. The medical discharge data included patient demographics, further admission and discharge details, and diagnoses. Based on several identifiers, these two data sources were linked. RESULTS: Our final dataset included more than 58 million data points for 128,484 patients and 4633 nurses across 70 units. Compared with patient turnover, fluctuations in the number of nurses were less pronounced. The differences mainly coincided with shifts (night, morning, evening). While the percentage of shifts with extreme staffing fluctuations ranged from fewer than 3% (mornings) to 30% (evenings and nights), the percentage within "normal" ranges ranged from fewer than 50% to more than 80%. Patient turnover occurred throughout the measurement period but was lowest at night. 
CONCLUSIONS: Based on measurements of patient-to-nurse ratio and patient turnover at 30-minute intervals, our findings indicate that the patient count, which varies considerably throughout the day, is the key driver of changes in the patient-to-nurse ratio. This demand-side variability challenges the supply-side mandate to provide safe and reliable care. Detecting and describing patterns in variability such as these are key to appropriate staffing planning. This descriptive analysis was a first step towards identifying time-related variables to be considered for a predictive nurse staffing model.

15.
Urology ; 2020 Apr 05.
Article in English | MEDLINE | ID: mdl-32268175

ABSTRACT

OBJECTIVE: To understand whether an electronic medical record-embedded best practice alert decreased our hospital's catheter-associated urinary tract infection (CAUTI) and catheter utilization (CU) rates. METHODS: Data from our inpatient prospective CAUTI database, spanning 2011 to 2016, were utilized for our analysis, with the best practice alert (BPA) starting in 2013. Using generalized linear models, we compared the CU and CAUTI rates between pre- and post-BPA periods in different patient subpopulations. RESULTS: We identified no decrease in the CU rate and no effect on the CAUTI rates as a result of the BPA. However, there was an increase in CAUTI rates in our adult intensive care unit (ICU) population from 0.2 to 1.8 CAUTIs per 1,000 catheter days (P < .01) despite a significant decrease in the CU rate within this population after the BPA (pre-BPA odds ratio [OR] 0.93 vs post-BPA OR 0.89; P < .01). In contrast, our non-ICU adult population had a decrease in CAUTI rate from 2.8 to 1.7 CAUTIs per 1,000 catheter days (P < .01) despite no significant change in CU rate after the BPA (pre-BPA OR 0.90 vs post-BPA OR 0.95; P < .1). CONCLUSION: CAUTI rates are exceedingly low, with or without the use of a BPA. Such an alert appears to have limited success in lowering CU rates in populations where catheter use is already low and may not always lead to an improvement in CAUTI rates, as some populations may be more prone to CAUTI development secondary to intrinsic or comorbid conditions.

16.
Appl Clin Inform ; 11(2): 253-264, 2020 03.
Article in English | MEDLINE | ID: mdl-32268389

ABSTRACT

BACKGROUND: With the consequences of inadequate dosing ranging from increased bleeding risk to excessive drug costs and undesirable administration regimens, the antihemophilic factors are uniquely suited to dose individualization. However, existing options for individualization are limited and exist outside the flow of care. We developed clinical decision support (CDS) software that is integrated with our electronic health record (EHR) and designed to streamline the process for our hematology providers. OBJECTIVES: The aim of this study is to develop and examine the usability of a CDS tool for antihemophilic factor dose individualization. METHODS: Our development strategy was based on the features associated with successful CDS tools and driven by a formal requirements analysis. The back-end code was based on algorithms developed for manual individualization and unit tested with 23,000 simulated patient profiles created from the range of patient-derived pharmacokinetic parameter estimates defined in children and adults. A 296-item heuristic checklist was used to guide design of the front-end user interface. Content experts and end-users were recruited to participate in traditional usability testing under an institutional review board approved protocol. RESULTS: CDS software was developed to systematically walk the point-of-care clinician through dose individualization after seamlessly importing the requisite patient data from the EHR. Classical and population pharmacokinetic approaches were incorporated with clearly displayed estimates of reliability and uncertainty. Users can perform simulations for prophylaxis and acute bleeds by providing two of four therapeutic targets. Testers were highly satisfied with our CDS and quickly became proficient with the tool. 
CONCLUSION: With early and broad stakeholder engagement, we developed a CDS tool for hematology providers that affords a seamless transition from patient assessment to pharmacokinetic modeling and simulation, and on to dose selection.

19.
Arq Gastroenterol ; 57(1): 31-38, 2020.
Article in English | MEDLINE | ID: mdl-32294733

ABSTRACT

BACKGROUND: Over the next 20 years, the number of patients on the waiting list for liver transplantation (LTx) is expected to increase by 23%, while pre-LTx costs should rise by 83%. OBJECTIVE: To evaluate direct medical costs of the pre-LTx period from the perspective of a tertiary care center. METHODS: The study included 104 adult patients wait-listed for deceased donor LTx between October 2012 and May 2016 whose treatment was fully provided at the study transplant center. Clinical and economic data were obtained from electronic medical records and from hospital management software. Outcomes of interest and costs of patients on the waiting list were compared through the Kruskal-Wallis test. A generalized linear model with logit link function was used for multivariate analysis. P-values <0.05 were considered statistically significant. RESULTS: The costs of patients who underwent LTx ($8,879.83; 95% CI 6,735.24-11,707.27; P<0.001) or who died while waiting ($6,464.73; 95% CI 3,845.75-10,867.28; P=0.04) were higher than those of patients who were excluded from the list for any reason except death ($4,647.78; 95% CI 2,469.35-8,748.04; P=0.254) or who remained on the waiting list at the end of follow-up. CONCLUSION: Although protocols for inclusion on the waiting list vary among transplant centers, similar approaches exist and common problems should be addressed. The results of this study may help centers with similar socioeconomic realities adjust their transplant policies.


Subject(s)
Health Care Costs/statistics & numerical data, Liver Failure/surgery, Liver Transplantation/economics, Aged, Female, Humans, Liver Transplantation/statistics & numerical data, Male, Middle Aged, Retrospective Studies, Waiting Lists
20.
JCO Oncol Pract ; : JOP1900697, 2020 Apr 08.
Article in English | MEDLINE | ID: mdl-32267798

ABSTRACT

PURPOSE: Guidelines recommend venous thromboembolism (VTE) risk assessment in outpatients with cancer and pharmacologic thromboprophylaxis in selected patients at high risk for VTE. Although validated risk stratification tools are available, < 10% of oncologists use a risk assessment tool, and rates of VTE prophylaxis in high-risk patients are low in practice. We hypothesized that implementation of a systems-based program that uses the electronic health record (EHR) and offers personalized VTE prophylaxis recommendations would increase VTE risk assessment rates in patients initiating outpatient chemotherapy. PATIENTS AND METHODS: Venous Thromboembolism Prevention in the Ambulatory Cancer Clinic (VTEPACC) was a multidisciplinary program implemented by nurses, oncologists, pharmacists, hematologists, advanced practice providers, and quality partners. We prospectively identified high-risk patients using the Khorana and Protecht scores (≥ 3 points) via an EHR-based risk assessment tool. Patients with a predicted high risk of VTE during treatment were offered a hematology consultation to consider VTE prophylaxis. Results of the consultation were communicated to the treating oncologist, and clinical outcomes were tracked. RESULTS: A total of 918 outpatients with cancer initiating cancer-directed therapy were evaluated. Monthly VTE risk assessment rates increased from < 5% before VTEPACC to 81.6% (standard deviation [SD], 11.9; range, 63.6%-97.7%) during the implementation phase and 94.7% (SD, 4.9; range, 82.1%-100%) in the full 2-year postimplementation phase. In the postimplementation phase, 213 patients (23.2%) were identified as being at high risk for developing a VTE. Referrals to hematology were offered to 151 patients (71%), with 141 patients (93%) being assessed and 93.8% receiving VTE prophylaxis. CONCLUSION: VTEPACC is a successful model for guideline implementation to provide VTE risk assessment and prophylaxis to prevent cancer-associated thrombosis in outpatients.
Methods applied can readily translate into practice and overcome the current implementation gaps between guidelines and clinical practice.
