Results 1 - 13 of 13
1.
Pain Med ; 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38518091

ABSTRACT

OBJECTIVE: To determine whether patients with chronic migraine continue onabotulinumtoxinA (onabotA) long-term. METHODS: We performed a retrospective cohort analysis using aggregated, de-identified patient data from the Stanford Headache Center. We included patients in California who received at least one prescription for onabotA during 2011-2021. The primary outcome was the number of onabotA treatments each patient received. Secondary outcomes included sex, age, race, ethnicity, body mass index (BMI), distance to the treatment facility, and zip code income quartile. RESULTS: A total of 1,551 patients received a mean of 7.60 ± 7.26 treatments and a median of 5 treatments, with 16.2% of patients receiving only one treatment and 10.6% receiving at least 19. Time-to-event survival analysis suggested that 26.0% of patients would complete at least 29 treatments if able. Younger age and female sex were associated with statistically significant differences across quartile groups of onabotA treatment number (p = 0.007 and p = 0.015, respectively). BMI, distance to the treatment facility, and zip code income quartile did not differ significantly between quartile groups (p > 0.500 for all). Prescriptions of both triptans and non-onabotA preventive medications showed a statistically significant increase with each higher quartile of onabotA treatment number (p < 0.001 for both). DISCUSSION: We show that long-term persistence with onabotA is high and that distance to the treatment facility and income are not factors in continuation. Our work also demonstrates that as patients continue onabotA over time, there may be an increased need for adjunctive or alternative treatments.
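The time-to-event survival analysis mentioned above (estimating what fraction of patients would persist through a given number of treatments if able) is typically done with a Kaplan-Meier estimator. A minimal sketch follows; the treatment counts and censoring flags are hypothetical toy data, not the study's patient-level records.

```python
from itertools import groupby

def kaplan_meier(durations, events):
    """Kaplan-Meier product-limit survival estimate.

    durations: number of treatments completed (toy analogue of follow-up time)
    events: 1 if the patient discontinued (event), 0 if still active (censored)
    Returns a list of (time, survival probability) pairs.
    """
    pairs = sorted(zip(durations, events))
    at_risk = len(pairs)
    survival = 1.0
    curve = []
    for t, grp in groupby(pairs, key=lambda p: p[0]):
        grp = list(grp)
        d = sum(e for _, e in grp)       # discontinuations at this time point
        if d:
            survival *= 1 - d / at_risk  # product-limit step
        curve.append((t, survival))
        at_risk -= len(grp)              # events and censored leave the risk set
    return curve

# Hypothetical example: five patients, with censoring for those still on treatment.
curve = kaplan_meier([1, 2, 2, 3, 5], [1, 1, 0, 1, 0])
# Estimated persistence after 3 treatments: 0.30.
```

Censored patients (still receiving treatment at the end of observation) reduce the risk set without forcing a drop in the survival curve, which is why the method can project completion rates beyond the observed window.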

2.
Headache ; 64(2): 188-194, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37882379

ABSTRACT

OBJECTIVE: To determine the effect of the 2018 introduction of the calcitonin gene-related peptide monoclonal antibodies (CGRP mAbs) on the prescribing of older medications for the prevention of chronic migraine. BACKGROUND: Prior to 2018, the preventive treatment of migraine borrowed medications intended to treat other illnesses; the most recent of these, onabotulinumtoxinA, received Food and Drug Administration (FDA) approval for the prevention of chronic migraine in 2010. The FDA approval of three CGRP mAbs in 2018 provided an ideal natural experiment to assess how the introduction of these medications, and a fourth in 2020, affected the generally stable migraine preventive medication market. METHODS: We performed a retrospective cohort analysis using the aggregated, de-identified data of 6,595 patients. The percentage of patients with chronic migraine who had been prescribed one of the ten most prescribed oral preventive medications, onabotulinumtoxinA, or any of the four CGRP mAbs was calculated relative to the total number of patients with chronic migraine who received a prescription for any medication from our clinic during the pre-CGRP mAb years of 2015-2017 and the post-approval years of 2019-2021. RESULTS: We observed a statistically significant decrease in the prescription of the top 10 most prescribed medications overall after the introduction of the CGRP mAbs (1456/3144, 46.3%, to 1995/4629, 43.1%; p = 0.001), as well as for most individual medications, including large decreases in verapamil (230/3144, 7.3%, to 125/4629, 2.7%; p < 0.001), the tricyclic antidepressants (494/3144, 15.7%, to 532/4629, 11.5%; p < 0.001), topiramate (566/3144, 18.0%, to 653/4629, 14.1%; p < 0.001), and onabotulinumtoxinA (861/3144, 27.4%, to 1134/4629, 24.5%; p = 0.001). CONCLUSION: The introduction of the CGRP mAbs during 2018 resulted in a decrease in utilization of most oral medications and onabotulinumtoxinA for the prevention of migraine. Future work should continue to observe how the prescription patterns of these medications evolve with time.


Subject(s)
Botulinum Toxins, Type A; Migraine Disorders; Humans; Calcitonin Gene-Related Peptide; Antibodies, Monoclonal/therapeutic use; Botulinum Toxins, Type A/therapeutic use; Retrospective Studies; Migraine Disorders/drug therapy; Migraine Disorders/prevention & control; Headache/drug therapy
3.
Front Aging ; 4: 1211571, 2023.
Article in English | MEDLINE | ID: mdl-37822457

ABSTRACT

Objectives: To investigate whether exposure history to two common loop diuretics, bumetanide and furosemide, affects the risk of developing Alzheimer's disease (AD) after accounting for socioeconomic status and congestive heart failure. Methods: Individuals exposed to bumetanide or furosemide were identified in the Stanford University electronic health record using the de-identified Observational Medical Outcomes Partnership platform. We matched the AD case cohort to a control cohort (1:20 case:control) on gender, race, ethnicity, and hypertension, and controlled for variables that could potentially be collinear with bumetanide exposure and/or AD diagnosis. Among individuals older than 65 years, 5,839 AD cases and 116,103 matched controls were included. A total of 1,759 patients (54 cases and 1,705 controls) were exposed to bumetanide. Results: After adjusting for socioeconomic status and other confounders, exposure to bumetanide and furosemide was significantly associated with reduced AD risk (bumetanide odds ratio [OR] = 0.23; 95% confidence interval [CI], 0.15-0.36; p = 4.0 × 10⁻¹¹; furosemide OR = 0.42; 95% CI, 0.38-0.47; p < 2.0 × 10⁻¹⁶). Discussion: Our study replicates in an independent sample that a history of bumetanide exposure is associated with reduced AD risk, while also highlighting an association of the most common loop diuretic (furosemide) with reduced AD risk. These associations need additional replication, and the mechanism of action remains to be investigated.
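The ORs reported above are adjusted estimates from the study's models. As a sketch of the underlying arithmetic only, a crude (unadjusted) odds ratio with a Wald 95% CI can be computed from the counts the abstract does give (54 exposed of 5,839 cases; 1,705 exposed of 116,103 controls). The crude value differs from the adjusted ORs above precisely because it ignores the confounders the study controlled for.

```python
import math

def crude_odds_ratio(a, b, c, d):
    """Crude odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases,    b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    """
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Counts from the abstract: bumetanide exposure vs. AD status.
a, b = 54, 5839 - 54           # cases: exposed / unexposed
c, d = 1705, 116103 - 1705     # controls: exposed / unexposed
or_, lo, hi = crude_odds_ratio(a, b, c, d)
# Crude OR ≈ 0.63 (95% CI ≈ 0.48-0.82), before any adjustment.
```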

4.
Am J Emerg Med ; 70: 171-174, 2023 08.
Article in English | MEDLINE | ID: mdl-37327683

ABSTRACT

OBJECTIVES: A majority of patients who experience acute coronary syndrome (ACS) initially receive care in the emergency department (ED). Guidelines for the care of patients experiencing ACS, specifically ST-segment elevation myocardial infarction (STEMI), are well defined. We examined the utilization of hospital resources by patients with non-ST-segment elevation myocardial infarction (NSTEMI) compared with STEMI and unstable angina (UA). We then make the case that, because NSTEMI patients constitute the majority of ACS cases, there is a great opportunity to risk stratify these patients in the ED. MATERIALS AND METHODS: We examined hospital resource utilization measures for patients with STEMI, NSTEMI, and UA, including hospital length of stay (LOS), any intensive care unit (ICU) care time, and in-hospital mortality. RESULTS AND CONCLUSIONS: The sample included 284,945 adult ED patients, of whom 1,195 experienced ACS. Among the latter, 978 (70%) were diagnosed with NSTEMI, 225 (16%) with STEMI, and 194 (14%) with UA. ICU care was received by 79.1% of STEMI patients, 14.4% of NSTEMI patients, and 9.3% of UA patients. NSTEMI patients' mean hospital LOS was 3.7 days, shorter than that of non-ACS patients (4.75 days) but longer than that of UA patients (2.99 days). In-hospital mortality for NSTEMI was 1.6%, compared with 4.4% for STEMI patients and 0% for UA patients. There are recommendations for risk stratification among NSTEMI patients to evaluate the risk of major adverse cardiac events (MACE) that can be used in the ED to guide admission decisions and the use of ICU care, thus optimizing care for the majority of ACS patients.


Subject(s)
Acute Coronary Syndrome; Non-ST Elevated Myocardial Infarction; ST Elevation Myocardial Infarction; Adult; Humans; Acute Coronary Syndrome/diagnosis; Acute Coronary Syndrome/therapy; Non-ST Elevated Myocardial Infarction/diagnosis; Non-ST Elevated Myocardial Infarction/therapy; ST Elevation Myocardial Infarction/therapy; ST Elevation Myocardial Infarction/diagnosis; Risk Assessment; Emergency Service, Hospital; Hospitals
5.
Diagnostics (Basel) ; 13(12)2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37370948

ABSTRACT

We compared four methods to screen emergency department (ED) patients for an early electrocardiogram (ECG) to diagnose ST-elevation myocardial infarction (STEMI) in a 5-year retrospective cohort through observed practice, objective application of screening protocol criteria, a predictive model, and a model augmenting human practice. We measured screening performance by sensitivity, missed acute coronary syndrome (ACS) and STEMI, and the number of ECGs required. Our cohort of 279,132 ED visits included 1397 patients who had a diagnosis of ACS. We found that screening by observed practice augmented with the model delivered the highest sensitivity for detecting ACS (92.9%, 95%CI: 91.4-94.2%) and showed little variation across sex, race, ethnicity, language, and age, demonstrating equity. Although it missed a few cases of ACS (7.6%) and STEMI (4.4%), it did require ECGs on an additional 11.1% of patients compared to current practice. Screening by protocol performed the worst, underdiagnosing young, Black, Native American, Alaskan or Hawaiian/Pacific Islander, and Hispanic patients. Thus, adding a predictive model to augment human practice improved the detection of ACS and STEMI and did so most equitably across the groups. Hence, combining human and model screening, rather than relying on either alone, may maximize ACS screening performance and equity.

7.
Circ Arrhythm Electrophysiol ; 16(6): e011143, 2023 06.
Article in English | MEDLINE | ID: mdl-37254747

ABSTRACT

BACKGROUND: With the advent of more intensive rhythm monitoring strategies, ventricular arrhythmias (VAs) are increasingly detected in Fontan patients. However, the prognostic implications of VA are poorly understood. We assessed the incidence of VA in Fontan patients and its implications for transplant-free survival. METHODS: Medical records of Fontan patients seen at a single center between 2002 and 2019 were reviewed to identify post-Fontan VA (nonsustained ventricular tachycardia >4 beats or sustained >30 seconds). Patients with pre-Fontan VA were excluded. Hemodynamically unstable VA was defined as malignant VA. The primary outcome was a composite of death or heart transplantation. Death, with censoring at transplant, was a secondary outcome. RESULTS: Of 431 Fontan patients, transplant-free survival was 82% at 15 years post-Fontan, with 64 (15%) meeting the primary outcome of either death (n=16, 3.7%), at a median of 4.6 (0.4-10.2) years post-Fontan, or transplant (n=48, 11%), at a median of 11.1 (5.9-16.2) years post-Fontan. Forty-eight (11%) patients were diagnosed with VA (90% nonsustained ventricular tachycardia, 10% sustained ventricular tachycardia). Malignant VA (n=9, 2.0%) was associated with younger age, worse systolic function, and valvular regurgitation. Risk for VA increased with time from Fontan, from 2.4% at 10 years to 19% at 20 years. A history of Stage 1 surgery with a right ventricular to pulmonary artery conduit and older age at Fontan were significant risk factors for VA. VA was strongly associated with an increased risk of transplant or death (HR, 9.2 [95% CI, 4.5-18.7]; P<0.001), with a transplant-free survival of 48% at 5 years post-VA diagnosis. CONCLUSIONS: Ventricular arrhythmias occurred in 11% of Fontan patients and were strongly associated with transplant or death, with a transplant-free survival of <50% at 5 years post-VA diagnosis. Risk factors for VA included older age at Fontan and a history of a right ventricular to pulmonary artery conduit. A diagnosis of VA in Fontan patients should prompt increased clinical surveillance.


Subject(s)
Fontan Procedure; Heart Defects, Congenital; Tachycardia, Ventricular; Humans; Fontan Procedure/adverse effects; Retrospective Studies; Arrhythmias, Cardiac/diagnosis; Arrhythmias, Cardiac/epidemiology; Arrhythmias, Cardiac/etiology; Pulmonary Artery/surgery; Tachycardia, Ventricular/diagnosis; Tachycardia, Ventricular/epidemiology; Tachycardia, Ventricular/etiology; Heart Defects, Congenital/complications; Heart Defects, Congenital/surgery; Heart Defects, Congenital/diagnosis; Treatment Outcome
8.
Crit Care Med ; 51(6): 731-741, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37010317

ABSTRACT

OBJECTIVES: To determine whether implementation of an Emergency Critical Care Program (ECCP) is associated with improved survival and early downgrade of critically ill medical patients in the emergency department (ED). DESIGN: Single-center, retrospective cohort study using ED-visit data between 2015 and 2019. SETTING: Tertiary academic medical center. PATIENTS: Adult medical patients presenting to the ED with a critical care admission order within 12 hours of arrival. INTERVENTIONS: Dedicated bedside critical care for medical ICU patients by an ED-based intensivist following initial resuscitation by the ED team. MEASUREMENTS AND MAIN RESULTS: Primary outcomes were in-hospital mortality and the proportion of patients downgraded to non-ICU status while in the ED within 6 hours of the critical care admission order (ED downgrade <6 hr). A difference-in-differences (DiD) analysis compared the change in outcomes for patients arriving during ECCP hours (2 pm to midnight, weekdays) between the preintervention period (2015-2017) and the intervention period (2017-2019) with the change in outcomes for patients arriving during non-ECCP hours (all other hours). Adjustment for severity of illness was performed using the emergency critical care Sequential Organ Failure Assessment (eccSOFA) score. The primary cohort included 2,250 patients. The DiD for eccSOFA-adjusted in-hospital mortality was a decrease of 6.0% (95% CI, -11.9 to -0.1), with the largest difference in the intermediate illness severity group (DiD, -12.2%; 95% CI, -23.1 to -1.3). The increase in ED downgrades within 6 hours was not statistically significant (DiD, 4.8%; 95% CI, -0.7 to 10.3%) except in the intermediate group (DiD, 8.8%; 95% CI, 0.2-17.4). CONCLUSIONS: The implementation of a novel ECCP was associated with a significant decrease in in-hospital mortality among critically ill medical ED patients, with the greatest decrease observed in patients with intermediate severity of illness. Early ED downgrades also increased, but the difference was statistically significant only in the intermediate illness severity group.
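The difference-in-differences design described above compares the pre-to-post change during ECCP hours against the change during non-ECCP hours, netting out secular trends. A minimal sketch follows; the four mortality rates are hypothetical values chosen only to reproduce the headline -6.0% estimate, since the paper reports the DiD rather than these underlying rates.

```python
def diff_in_diff(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Difference-in-differences: the change in the treated group
    minus the change in the control group."""
    return (post_treat - pre_treat) - (post_ctrl - pre_ctrl)

# Hypothetical in-hospital mortality rates (illustrative only):
#   ECCP hours:     20% pre-intervention -> 13% post-intervention
#   non-ECCP hours: 18% pre              -> 17% post
did = diff_in_diff(0.20, 0.13, 0.18, 0.17)
# did == -0.06, i.e. a 6-percentage-point greater reduction during ECCP hours.
```

Subtracting the control-hours change is what distinguishes this from a simple before/after comparison: any hospital-wide improvement over 2015-2019 affects both groups and cancels out.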


Subject(s)
Critical Care; Critical Illness; Adult; Humans; Retrospective Studies; Critical Illness/therapy; Emergency Service, Hospital; Hospitalization; Hospital Mortality; Intensive Care Units
9.
Res Sq ; 2023 Feb 28.
Article in English | MEDLINE | ID: mdl-36909637

ABSTRACT

Background: To investigate whether exposure history to two common loop diuretics affects the risk of developing Alzheimer's disease (AD) after accounting for socioeconomic status and congestive heart failure. Methods: Individuals exposed to bumetanide or furosemide were identified in the Stanford University electronic health record using the de-identified Observational Medical Outcomes Partnership platform. We matched the AD case cohort to a control cohort (1:20 case:control) on gender, race, ethnicity, and hypertension, and controlled for variables that could potentially be collinear with bumetanide exposure and/or AD diagnosis. Among individuals older than 65 years, 5,839 AD cases and 116,103 matched controls were included. A total of 1,759 patients (54 cases, 1,705 controls) were exposed to bumetanide. Results: After adjusting for socioeconomic status and other confounders, bumetanide exposure was significantly associated with reduced AD risk (odds ratio = 0.50; 95% confidence interval, 0.37-0.68; p = 9.9 × 10⁻⁶), while the most common loop diuretic, furosemide, was not associated with AD risk. Conclusion: Our study replicates in an independent sample that a history of bumetanide exposure is associated with reduced risk of AD and emphasizes that this association is not confounded by differences in socioeconomic status, an important caveat given the cost difference between bumetanide and furosemide.

10.
Am J Emerg Med ; 67: 70-78, 2023 05.
Article in English | MEDLINE | ID: mdl-36806978

ABSTRACT

BACKGROUND: Chest pain (CP) is the hallmark symptom of acute coronary syndrome (ACS) but is not reported by 20-30% of patients presenting to the emergency department (ED) with an ST-segment elevation myocardial infarction (STEMI), especially women, the elderly, and non-white patients. METHODS: We used a retrospective 5-year adult ED sample of 279,132 patients to explore using CP alone to predict ACS; we then incrementally added other ACS chief complaints, age, and sex in a series of multivariable logistic regression models. We evaluated each model's identification of ACS and STEMI. RESULTS: Using CP alone would recommend ECGs for 8% of patients (sensitivity, 61%; specificity, 92%) but missed 28.4% of STEMIs. The model with all variables identified ECGs for 22% of patients (sensitivity, 82%; specificity, 78%) but missed 14.7% of STEMIs. The model with CP and other ACS chief complaints had the highest sensitivity (93%) and a specificity of 55%, identified 45.1% of patients for ECG, and missed only 4.4% of STEMIs. CONCLUSION: CP alone had the highest specificity but lacked sensitivity. Adding other ACS chief complaints increased sensitivity but identified 2.2-fold more patients for ECGs. Achieving an ECG within 10 minutes for patients with ACS to identify all STEMIs will be challenging without introducing more complex risk calculation into clinical care.
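The reported ECG rates follow arithmetically from sensitivity, specificity, and disease prevalence. As a back-of-the-envelope check, the CP-alone figures (sensitivity 61%, specificity 92%) combined with the cohort size (279,132) and an assumed ACS count of 1,397 (taken from the companion screening analysis in this listing, not stated in this abstract) roughly reproduce the 8% ECG rate:

```python
def fraction_flagged(n_total, n_positive, sensitivity, specificity):
    """Expected fraction of patients flagged for an ECG by a screen
    with the given sensitivity and specificity."""
    true_flags = sensitivity * n_positive              # true positives
    false_flags = (1 - specificity) * (n_total - n_positive)  # false positives
    return (true_flags + false_flags) / n_total

# Chest pain alone: sensitivity 61%, specificity 92%.
frac = fraction_flagged(279_132, 1_397, 0.61, 0.92)
# frac ≈ 0.083, consistent with the ~8% of patients recommended for ECGs.
```

Because ACS prevalence is low (~0.5%), the flagged fraction is dominated by the false-positive term, which is why modest specificity losses in the richer models translate into the reported 2.2-fold jump in ECGs.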


Subject(s)
Acute Coronary Syndrome; ST Elevation Myocardial Infarction; Adult; Humans; Female; Aged; ST Elevation Myocardial Infarction/diagnosis; Retrospective Studies; Electrocardiography; Chest Pain/diagnosis; Chest Pain/etiology; Acute Coronary Syndrome/complications; Acute Coronary Syndrome/diagnosis; Emergency Service, Hospital
11.
J Am Coll Emerg Physicians Open ; 3(6): e12867, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36570369

ABSTRACT

Objectives: We report the clinical performance of COVID-19 curbside screening with triage to a drive-through care pathway versus main emergency department (ED) care for ambulatory COVID-19 testing during a pandemic. Patients were evaluated from their cars to prevent the demand for testing from spreading COVID-19 within the hospital. Methods: We examined the effectiveness of curbside screening in identifying patients who would be tested during evaluation, patient flow from screening to care team evaluation and testing, and the safety of drive-through care as measured by 7-day ED revisits and 14-day hospital admissions. We also compared main ED versus drive-through care efficiency using ED length of stay (EDLOS). Standardized mean differences (SMDs) >0.20 were considered significant. Results: Of 5,931 ED patients seen, 2,788 (47.0%) were walk-in patients. Of these, 1,111 (39.8%) screened positive for potential COVID symptoms, of whom 708 (63.7%) were triaged to drive-through care (96.3% tested) and 403 (36.3%) to the main ED (90.5% tested). The 1,677 (60.2%) patients who screened negative were seen in the main ED, with 440 (26.2%) tested. Curbside screening sensitivity and specificity for predicting who ultimately received testing were 70.3% and 94.5%. Compared with the main ED, drive-through patients had fewer 7-day ED revisits (3.8% vs 12.5%; SMD = 0.321), fewer 14-day hospital admissions (4.5% vs 15.6%; SMD = 0.37), and shorter EDLOS (0.56 vs 5.12 hours; SMD = 1.48). Conclusion: Curbside screening had high sensitivity, permitting early respiratory isolation precautions for most patients tested. Low ED revisit rates, low hospital admission rates, and short EDLOS suggest that drive-through care, with appropriate screening, is safe and efficient for future respiratory illness pandemics.
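The SMD threshold used above can be sanity-checked against the reported proportions. For two proportions, a common SMD variant divides their difference by the pooled binomial standard deviation; applying it to the 7-day ED revisit rates (3.8% vs 12.5%) closely reproduces the reported 0.321. The choice of formula is an assumption here, since the abstract does not state which variant the authors used.

```python
import math

def smd_proportions(p1, p2):
    """Standardized mean difference for two proportions,
    using the pooled binomial standard deviation."""
    pooled_sd = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / 2)
    return abs(p1 - p2) / pooled_sd

# 7-day ED revisits: 3.8% (drive-through) vs 12.5% (main ED).
smd = smd_proportions(0.038, 0.125)
# smd ≈ 0.32, close to the reported SMD of 0.321.
```

Unlike a p-value, the SMD does not grow with sample size, which is why it is often preferred for judging whether a difference is practically meaningful in large observational cohorts.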

12.
Transplant Cell Ther ; 28(10): 705.e1-705.e10, 2022 10.
Article in English | MEDLINE | ID: mdl-35872303

ABSTRACT

BACKGROUND: Bronchiolitis obliterans syndrome (BOS), chronic graft-versus-host disease (cGVHD) affecting the lungs, is an uncommon complication of allogeneic hematopoietic cell transplant (HCT). The epidemiology and complications of lower respiratory tract infections (LRTIs) and community-acquired respiratory viruses (CARVs) in these patients are poorly understood. OBJECTIVES: We aim to characterize the epidemiology of LRTIs in patients with BOS complicating HCT. We also aim to explore the association of LRTIs and CARV detection with lung function in BOS patients. STUDY DESIGN: Adult patients with BOS at Stanford Health Care between January 2010 and December 2019 were included in this retrospective cohort study. LRTI diagnosis was based on combined clinical, microbiologic, and radiographic criteria, using consensus criteria where available. RESULTS: Fifty-five patients with BOS were included. BOS was diagnosed at a median of 19.2 (IQR 12.5-24.7) months after HCT, and patients were followed for a median of 29.3 (IQR 9.9-53.2) months from BOS diagnosis. Twenty-two (40%) patients died after BOS diagnosis; 17 died from complications of cGVHD (including respiratory failure and infection) and 5 from relapsed disease. Thirty-four (61.8%) patients developed at least one LRTI. Viral LRTIs were most common, occurring in 29 (52.7%) patients, primarily due to rhinovirus. Bacterial LRTIs (excluding Nocardia and non-tuberculous mycobacteria [NTM]) were the second most common, occurring in 21 (38.2%) patients, mostly due to Pseudomonas aeruginosa. Fungal LRTIs, NTM infections, and nocardiosis occurred in 14 (25.5%), 10 (18.2%), and 4 (7.3%) patients, respectively. The median time to the first LRTI after BOS diagnosis was 15.3 (4.7-44.7) months. Twenty-six (76.5%) of the 34 patients who developed LRTIs had infections due to more than one type of organism (fungi, viruses, Nocardia, NTM, and other bacteria) over the observation period. Patients with at least one LRTI had a significantly lower forced expiratory volume in one second percent predicted (FEV1%) (37% vs. 53%, p = 0.0096) and diffusing capacity for carbon monoxide (DLCO) (45.5% predicted vs. 69% predicted, p = 0.0001). Patients with at least one LRTI trended toward lower overall survival (OS) (p = 0.0899) and higher non-relapse mortality (NRM) (p = 0.2707). Patients with a CARV detected or an LRTI diagnosed after BOS, compared with those without, were more likely to have a sustained drop in FEV1% from baseline of at least 10% (21 [61.8%] vs 7 [33.3%]) and a sustained drop in FEV1% of at least 30% (12 [36.4%] vs 2 [9.5%]). CONCLUSIONS: LRTIs are common in BOS and are associated with lower FEV1%, lower DLCO, and a trend toward decreased OS and higher NRM. Patients with LRTIs or CARVs (even absent lower respiratory tract involvement) were more likely to have substantial declines in FEV1% over time than those without. The array of organisms seen in BOS (including P. aeruginosa, mold, Nocardia, NTM, and CARVs) reflects the unique pathophysiology of this form of cGVHD, which involves both systemic immunodeficiency and structural lung disease. These patterns of LRTIs and their outcomes can be used to guide clinical decisions and inform future research.


Subject(s)
Bronchiolitis Obliterans; Graft vs Host Disease; Hematopoietic Stem Cell Transplantation; Respiratory Tract Infections; Adult; Bronchiolitis Obliterans/epidemiology; Carbon Monoxide; Graft vs Host Disease/complications; Hematopoietic Stem Cell Transplantation/adverse effects; Humans; Respiratory Tract Infections/epidemiology; Retrospective Studies; Rhinovirus; Syndrome
13.
Lifetime Data Anal ; 17(2): 175-94, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21153764

ABSTRACT

Case-control family data are now widely used to examine the role of gene-environment interactions in the etiology of complex diseases. In these types of studies, exposure levels are obtained retrospectively and, frequently, information on most risk factors of interest is available on the probands but not on their relatives. In this work we consider correlated failure time data arising from population-based case-control family studies with missing genotypes of relatives. We present a new method for estimating the age-dependent marginalized hazard function. The proposed technique has two major advantages: (1) it is based on the pseudo full likelihood function rather than a pseudo composite likelihood function, which usually suffers from substantial efficiency loss; (2) the cumulative baseline hazard function is estimated using a two-stage estimator instead of an iterative process. We assess the performance of the proposed methodology with simulation studies, and illustrate its utility on a real data example.


Subject(s)
BRCA2 Protein/genetics; Breast Neoplasms/genetics; Models, Genetic; Models, Statistical; Ubiquitin-Protein Ligases/genetics; Adult; Case-Control Studies; Computer Simulation; Family; Female; Genotype; Humans; Middle Aged