ABSTRACT
BACKGROUND: Postoperative delirium is frequent in older adults and is associated with postoperative neurocognitive disorder (PND). Studies evaluating perioperative medication use and delirium have generally evaluated medications in aggregate and been poorly controlled; the association between perioperative medication use and PND remains unclear. We sought to evaluate the association between medication use and postoperative delirium and PND in older adults undergoing major elective surgery. METHODS: This is a secondary analysis of a prospective cohort study of adults ≥70 years without dementia undergoing major elective surgery. Patients were interviewed preoperatively to determine home medication use. Postoperatively, daily hospital use of 7 different medication classes listed in guidelines as risk factors for delirium was collected; administration before delirium was verified. While hospitalized, patients were assessed daily for delirium using the Confusion Assessment Method and a validated chart review method. Cognition was evaluated preoperatively and 1 month after surgery using a neurocognitive battery. The association between prehospital medication use and postoperative delirium was assessed using a generalized linear model with a log link function, controlling for age, sex, type of surgery, Charlson comorbidity index, and baseline cognition. The association between daily postoperative medication use (when class exposure ≥5%) and time to delirium was assessed using time-varying Cox models adjusted for age, sex, surgery type, Charlson comorbidity index, Acute Physiology and Chronic Health Evaluation (APACHE)-II score, and baseline cognition. Mediation analysis was used to evaluate the association between medication use, delirium, and cognitive change from baseline to 1 month. RESULTS: Among 560 patients enrolled, 134 (24%) developed delirium during hospitalization. The multivariable analyses revealed no significant association between prehospital benzodiazepine (relative risk [RR], 1.44; 95% confidence interval [CI], 0.85-2.44), beta-blocker (RR, 1.38; 95% CI, 0.94-2.05), NSAID (RR, 1.12; 95% CI, 0.77-1.62), opioid (RR, 1.22; 95% CI, 0.82-1.82), or statin (RR, 1.34; 95% CI, 0.92-1.95) exposure and delirium. Postoperative hospital benzodiazepine use (adjusted hazard ratio [aHR], 3.23; 95% CI, 2.10-4.99) was associated with a greater risk of delirium. Neither postoperative hospital antipsychotic (aHR, 1.48; 95% CI, 0.74-2.94) nor opioid (aHR, 0.82; 95% CI, 0.62-1.11) use before delirium was associated with delirium. Antipsychotic use (either presurgery or postsurgery) was associated with a 0.34 point (standard error, 0.16) decrease in general cognitive performance at 1 month through its effect on delirium (P = .03), despite no total effect being observed. CONCLUSIONS: Administration of benzodiazepines to older adults hospitalized after major surgery is associated with increased postoperative delirium. No association between in-hospital postoperative medication use and cognition at 1 month, independent of delirium, was detected.
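
The time-varying Cox analysis described in METHODS can be sketched with Python's lifelines package. This is a minimal illustration, not the study's code: the file name, column names, and covariate list are hypothetical placeholders, and categorical covariates are assumed to be pre-encoded as numeric dummies.

```python
# Sketch: daily in-hospital medication exposure vs. time to postoperative
# delirium, using a time-varying Cox model (one row per patient-day).
import pandas as pd
from lifelines import CoxTimeVaryingFitter

df = pd.read_csv("patient_days.csv")  # hypothetical long-format data set

ctv = CoxTimeVaryingFitter()
ctv.fit(
    df[["patient_id", "day_start", "day_stop", "delirium",
        "benzo_today", "age", "sex", "surgery_type",
        "charlson", "apache_ii", "baseline_cognition"]],
    id_col="patient_id",
    start_col="day_start",
    stop_col="day_stop",
    event_col="delirium",
)
ctv.print_summary()  # exp(coef) for benzo_today approximates an adjusted HR
```
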
Subject(s)
Antipsychotic Agents; Delirium; Aged; Analgesics, Opioid; Benzodiazepines; Cognition; Delirium/chemically induced; Delirium/diagnosis; Delirium/epidemiology; Humans; Postoperative Complications/diagnosis; Postoperative Complications/epidemiology; Postoperative Complications/etiology; Prospective Studies; Risk Factors
ABSTRACT
Rationale: It is unclear whether opioid use increases the risk of ICU delirium. Prior studies have not accounted for confounding, including daily severity of illness, pain, and competing events that may preclude delirium detection. Objectives: To evaluate the association between ICU opioid exposure, opioid dose, and delirium occurrence. Methods: In consecutive adults admitted for more than 24 hours to the ICU, daily mental status was classified as awake without delirium, delirium, or unarousable. A first-order Markov model with multinomial logistic regression analysis considered four possible next-day outcomes (i.e., awake without delirium, delirium, unarousable, and ICU discharge or death) and 11 delirium-related covariables (baseline: admission type, age, sex, Acute Physiology and Chronic Health Evaluation IV score, and Charlson comorbidity score; daily: ICU day, modified Sequential Organ Failure Assessment, ventilation use, benzodiazepine use, and severe pain). This model was used to quantify the association between opioid use, opioid dose, and delirium occurrence the next day. Measurements and Main Results: The 4,075 adults had 26,250 ICU days; an opioid was administered on 57.0% (n = 14,975), severe pain occurred on 7.0% (n = 1,829), and delirium occurred on 23.5% (n = 6,176). Severe pain was inversely associated with a transition to delirium (odds ratio [OR], 0.72; 95% confidence interval [CI], 0.53-0.97). Any opioid administration in awake patients without delirium was associated with an increased risk for delirium the next day (OR, 1.45; 95% CI, 1.24-1.69). Each daily 10-mg intravenous morphine-equivalent dose was associated with a 2.4% increased risk for delirium the next day. Conclusions: The receipt of an opioid in the ICU increases the odds of transitioning to delirium in a dose-dependent fashion.
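
A rough sketch of the first-order Markov analysis (next-day mental status modeled with multinomial logistic regression) is shown below using statsmodels. Column names, state coding, and the covariate list are hypothetical stand-ins for the variables listed above, and the data frame is assumed to contain one row per patient-day with the next-day state already attached.

```python
# Sketch: multinomial logistic regression for next-day mental status.
# next_state is integer-coded, e.g. 0 = awake without delirium, 1 = delirium,
# 2 = unarousable, 3 = ICU discharge or death.
import pandas as pd
import statsmodels.formula.api as smf

days = pd.read_csv("icu_days.csv")  # hypothetical patient-day data set
model = smf.mnlogit(
    "next_state ~ opioid_today + morphine_eq_per_10mg + severe_pain + "
    "benzo_today + ventilated + msofa + icu_day + age + apache_iv + charlson",
    data=days,
).fit()
print(model.summary())  # exp(params) gives ORs for each transition vs. reference
```
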
Subject(s)
Analgesics, Opioid/adverse effects; Analgesics, Opioid/therapeutic use; Critical Illness/therapy; Delirium/chemically induced; Pain/drug therapy; Aged; Female; Humans; Male; Middle Aged; Netherlands; Odds Ratio; Prospective Studies; Risk Factors
ABSTRACT
OBJECTIVES: Haloperidol is commonly administered in the ICU to reduce the burden of delirium and its related symptoms despite no clear evidence showing haloperidol helps to resolve delirium or improve survival. We evaluated the association between haloperidol, when used to treat incident ICU delirium and its symptoms, and mortality. DESIGN: Post hoc cohort analysis of a randomized, double-blind, placebo-controlled, delirium prevention trial. SETTING: Fourteen Dutch ICUs between July 2013 and December 2016. PATIENTS: One thousand four hundred ninety-five critically ill adults free from delirium at ICU admission with an expected ICU stay greater than or equal to 2 days. INTERVENTIONS: Patients received preventive haloperidol or placebo for up to 28 days until delirium occurrence, death, or ICU discharge. If delirium occurred, treatment with open-label IV haloperidol 2 mg tid (up to 5 mg tid per delirium symptoms) was administered at clinician discretion. MEASUREMENTS AND MAIN RESULTS: Patients were evaluated tid for delirium and coma for 28 days. Time-varying Cox hazards models were constructed for 28-day and 90-day mortality, controlling for study arm, delirium and coma days, age, Acute Physiology and Chronic Health Evaluation-II score, sepsis, mechanical ventilation, and ICU length of stay. Among the 1,495 patients, 542 (36%) developed delirium within 28 days (median [interquartile range] time with delirium, 4 d [2-7 d]). A total of 477 of 542 (88%) received treatment haloperidol (2.1 mg [1.0-3.8 mg] daily) for 6 days (3-11 d). Each milligram of treatment haloperidol administered daily was associated with decreased mortality at 28 days (hazard ratio, 0.93; 95% CI, 0.91-0.95) and 90 days (hazard ratio, 0.97; 95% CI, 0.96-0.98). Treatment haloperidol administered later in the ICU course was less protective against death. Results were stable across prevention study arm, predelirium haloperidol exposure, and haloperidol treatment protocol adherence. CONCLUSIONS: Treatment of incident delirium and its symptoms with haloperidol may be associated with a dose-dependent improvement in survival. Future randomized trials need to confirm these results.
Subject(s)
Antipsychotic Agents/therapeutic use; Critical Care/methods; Critical Illness/therapy; Delirium/drug therapy; Haloperidol/therapeutic use; Adult; Aged; Critical Illness/mortality; Delirium/mortality; Female; Humans; Intensive Care Units; Length of Stay/statistics & numerical data; Male; Middle Aged; Netherlands; Survival Analysis
ABSTRACT
There were some errors in the variables in this paper.
ABSTRACT
BACKGROUND: Although previous research has demonstrated high rates of inappropriate diagnostic imaging, the potential influence of several physician-level characteristics is not well established. OBJECTIVE: To examine the influence of three types of physician characteristics on inappropriate imaging: experience, specialty training, and self-referral. DESIGN: A retrospective analysis of over 70,000 MRI claims submitted for commercially insured individuals. Physician characteristics were identified through a combination of administrative records and primary data collection. Multi-level modeling was used to assess relationships between physician characteristics and inappropriate MRIs. SETTING: Massachusetts. PARTICIPANTS: Commercially insured individuals who received an MRI between 2010 and 2013 for one of three conditions: low back pain, knee pain, and shoulder pain. MEASUREMENTS: Guidelines from the American College of Radiology were used to classify MRI referrals as appropriate or inappropriate. Experience was measured from the date of medical school graduation. Specialty training comprised three principal groups: general internal medicine, family medicine, and orthopedics. Two forms of self-referral were examined: (a) the same physician who ordered the procedure also performed it, and (b) the physicians who ordered and performed the procedure were members of the same group practice and the procedure was performed outside the hospital setting. RESULTS: Approximately 23% of claims were classified as inappropriate. Physicians with 10 or fewer years of experience had significantly higher odds of ordering inappropriate MRIs. Primary care physicians were almost twice as likely to order an inappropriate MRI as orthopedists. Self-referral was not associated with higher rates of inappropriate MRIs. LIMITATIONS: Classification of MRIs was conducted with claims data. Not all self-referred MRIs could be detected. CONCLUSIONS: Inappropriate imaging continues to be a driver of wasteful health care spending. Both physician experience and specialty training were strongly associated with inappropriate imaging.
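
The study used multi-level modeling; as a simplified stand-in, the sketch below fits an ordinary logistic regression with standard errors clustered on the ordering physician. It is illustrative only and not the authors' model; all file and column names are hypothetical.

```python
# Sketch: claim-level logistic regression for inappropriate MRI orders, with
# physician-clustered standard errors as a crude surrogate for a multi-level model.
import pandas as pd
import statsmodels.formula.api as smf

claims = pd.read_csv("mri_claims.csv")  # hypothetical claim-level data set
fit = smf.logit(
    "inappropriate ~ experience_le10_years + C(specialty) + self_referral",
    data=claims,
).fit(cov_type="cluster", cov_kwds={"groups": claims["physician_id"]})
print(fit.summary())  # exponentiate coefficients to read them as odds ratios
```
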
Subject(s)
Low Back Pain; Referral and Consultation; Humans; Magnetic Resonance Imaging; Massachusetts; Practice Patterns, Physicians'; Retrospective Studies
ABSTRACT
BACKGROUND: While delirium prevalence and duration are each associated with increased 30-day, 6-month, and 1-year mortality, the association between incident ICU delirium and mortality remains unclear. We evaluated the association between both incident ICU delirium and days spent with delirium in the 28 days after ICU admission and mortality within 28 and 90 days. METHODS: Secondary cohort analysis of a randomized, double-blind, placebo-controlled trial conducted among 1495 delirium-free, critically ill adults in 14 Dutch ICUs with an expected ICU stay ≥2 days where all delirium assessments were completed. In the 28 days after ICU admission, patients were evaluated for delirium and coma 3x daily; each day was coded as a delirium day [≥1 positive Confusion Assessment Method for the ICU (CAM-ICU)], a coma day [no delirium and ≥ 1 Richmond Agitation Sedation Scale (RASS) score ≤ - 4], or neither. Four Cox-regression models were constructed for 28-day mortality and 90-day mortality; each accounted for potential confounders (i.e., age, APACHE-II score, sepsis, use of mechanical ventilation, ICU length of stay, and haloperidol dose) and: 1) delirium occurrence, 2) days spent with delirium, 3) days spent in coma, and 4) days spent with delirium and/or coma. RESULTS: Among the 1495 patients, 28 day mortality was 17% and 90 day mortality was 21%. Neither incident delirium (28 day mortality hazard ratio [HR] = 1.02, 95%CI = 0.75-1.39; 90 day mortality HR = 1.05, 95%CI = 0.79-1.38) nor days spent with delirium (28 day mortality HR = 1.00, 95%CI = 0.95-1.05; 90 day mortality HR = 1.02, 95%CI = 0.98-1.07) were significantly associated with mortality. However, both days spent with coma (28 day mortality HR = 1.05, 95%CI = 1.02-1.08; 90 day mortality HR = 1.05, 95%CI = 1.02-1.08) and days spent with delirium or coma (28 day mortality HR = 1.03, 95%CI = 1.00-1.05; 90 day mortality HR = 1.03, 95%CI = 1.01-1.06) were significantly associated with mortality. CONCLUSIONS: This analysis suggests neither incident delirium nor days spent with delirium are associated with short-term mortality after ICU admission. TRIAL REGISTRATION: ClinicalTrials.gov, Identifier NCT01785290 Registered 7 February 2013.
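
One of the four Cox models described above (days spent with delirium vs. 28-day mortality, adjusted for the listed confounders) could be sketched as follows with lifelines. The data layout and every column name are hypothetical, and binary covariates are assumed to be coded 0/1.

```python
# Sketch: adjusted Cox model for 28-day mortality with delirium days as the exposure.
import pandas as pd
from lifelines import CoxPHFitter

pts = pd.read_csv("icu_patients.csv")  # hypothetical one-row-per-patient data set
cols = ["followup_days", "died_28d", "delirium_days", "age", "apache_ii",
        "sepsis", "mech_vent", "icu_los_days", "haloperidol_dose_mg"]

cph = CoxPHFitter()
cph.fit(pts[cols], duration_col="followup_days", event_col="died_28d")
cph.print_summary()  # HR per additional day spent with delirium, adjusted
```
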
Subject(s)
Delirium/complications; Mortality/trends; Aged; Cohort Studies; Critical Illness/epidemiology; Critical Illness/mortality; Delirium/epidemiology; Delirium/mortality; Double-Blind Method; Female; Humans; Intensive Care Units/organization & administration; Intensive Care Units/statistics & numerical data; Male; Middle Aged; Netherlands/epidemiology; Prevalence; Proportional Hazards Models
ABSTRACT
OBJECTIVES: Eliciting patient preferences within the context of shared decision making has been advocated for colorectal cancer screening. Risk stratification for advanced colorectal neoplasia (ACN) might facilitate more effective shared decision making when selecting an appropriate screening option. Our objective was to develop and validate a clinical index for estimating the probability of ACN at screening colonoscopy. METHODS: We conducted a cross-sectional analysis of 3,543 asymptomatic, mostly average-risk patients 50-79 years of age undergoing screening colonoscopy at two urban safety net hospitals. Predictors of ACN were identified using multiple logistic regression. Model performance was internally validated using bootstrapping methods. RESULTS: The final index consisted of five independent predictors of risk (age, smoking, alcohol intake, height, and a combined sex/race/ethnicity variable). Smoking was the strongest predictor (net reclassification improvement (NRI), 8.4%) and height the weakest (NRI, 1.5%). Using a simplified weighted scoring system based on 0.5 increments of the adjusted odds ratio, the risk of ACN ranged from 3.2% (95% confidence interval (CI), 2.6-3.9) for the low-risk group (score ≤2) to 8.6% (95% CI, 7.4-9.7) for the intermediate/high-risk group (score 3-11). The model had moderate to good overall discrimination (C-statistic, 0.69; 95% CI, 0.66-0.72) and good calibration (P=0.73-0.93). CONCLUSIONS: A simple 5-item risk index based on readily available clinical data accurately stratifies average-risk patients into low- and intermediate/high-risk categories for ACN at screening colonoscopy. Uptake into clinical practice could facilitate more effective shared decision-making for CRC screening, particularly in situations where patient and provider test preferences differ.
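
A minimal sketch of the development and internal-validation steps (logistic model plus a bootstrap optimism correction of the C-statistic) is shown below. Variable names, the input file, and the number of bootstrap replicates are assumptions, not the study's actual code.

```python
# Sketch: logistic risk index for advanced colorectal neoplasia (ACN) with a
# bootstrap optimism-corrected C-statistic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

df = pd.read_csv("screening_colonoscopy.csv")  # hypothetical data set
formula = "acn ~ age + smoking + alcohol + height_cm + C(sex_race_ethnicity)"
fit = smf.logit(formula, data=df).fit(disp=False)
apparent_c = roc_auc_score(df["acn"], fit.predict(df))

optimism = []
for _ in range(200):                       # bootstrap resamples
    boot = resample(df)
    bfit = smf.logit(formula, data=boot).fit(disp=False)
    optimism.append(roc_auc_score(boot["acn"], bfit.predict(boot))
                    - roc_auc_score(df["acn"], bfit.predict(df)))

print("apparent C:", round(apparent_c, 3))
print("optimism-corrected C:", round(apparent_c - np.mean(optimism), 3))
```
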
Subject(s)
Colonoscopy; Colorectal Neoplasms/diagnosis; Colorectal Neoplasms/epidemiology; Early Detection of Cancer/methods; Mass Screening/methods; Smoking/adverse effects; Aged; Colorectal Neoplasms/etiology; Colorectal Neoplasms/pathology; Cross-Sectional Studies; Female; Humans; Logistic Models; Male; Middle Aged; Odds Ratio; Predictive Value of Tests; Risk Assessment; Risk Factors
ABSTRACT
BACKGROUND: End-stage renal disease carries a prognosis similar to cancer yet only 20 % of end-stage renal disease patients are referred to hospice. Furthermore, conversations between dialysis team members and patients about end-of-life planning are uncommon. Lack of provider training about how to communicate prognostic data may contribute to the limited number of end-of-life care discussions that take place with this chronically ill population. In this study, we will test the Shared Decision-Making Renal Supportive Care communication intervention to systematically elicit patient and caretaker preferences for end-of-life care so that care concordant with patients' goals can be provided. METHODS/DESIGN: This multi-center study will deploy an intervention to improve end-of-life communication for hemodialysis patients who are at high risk of death in the ensuing six months. The intervention will be carried out as a prospective cohort with a retrospective cohort serving as the comparison group. Patients will be recruited from 16 dialysis units associated with two large academic centers in Springfield, Massachusetts and Albuquerque, New Mexico. Critical input from patient advisory boards, a stakeholder panel, and initial qualitative analysis of patient and caretaker experiences with advance care planning have informed the communication intervention. Rigorous communication training for hemodialysis social workers and providers will ensure that standardized study procedures are performed at each dialysis unit. Nephrologists and social workers will communicate prognosis and provide advance care planning in face-to-face encounters with patients and families using a social work-centered algorithm. Study outcomes including frequency and timing of hospice referrals, patient and caretaker satisfaction, quality of end-of-life discussions, and quality of death will be assessed over an 18 month period. DISCUSSION: The Shared Decision-Making Renal Supportive Care Communication intervention intends to improve discussions about prognosis and end-of-life care with end-stage renal disease patients. We anticipate that the intervention will help guide hemodialysis staff and providers to effectively participate in advance care planning for patients and caretakers to establish preferences and goals at the end of life. TRIAL REGISTRATION: NCT02405312.
Subject(s)
Advance Care Planning/organization & administration; Kidney Failure, Chronic/psychology; Renal Dialysis/psychology; Research Design; Terminal Care/organization & administration; Aged; Communication; Decision Making; Female; Humans; Kidney Failure, Chronic/therapy; Male; Middle Aged; Patient Participation; Physician-Patient Relations; Prognosis; Terminal Care/psychology
ABSTRACT
BACKGROUND: Experimental studies suggest that metabolic myocardial support by intravenous (IV) glucose, insulin, and potassium (GIK) reduces ischemia-induced arrhythmias, cardiac arrest, mortality, progression from unstable angina pectoris to acute myocardial infarction (AMI), and myocardial infarction size. However, trials of hospital administration of IV GIK to patients with ST-elevation myocardial infarction (STEMI) have generally not shown favorable effects, possibly because the GIK intervention took place many hours after ischemic symptom onset. A trial of GIK used in the very first hours of ischemia has been needed, consistent with the timing of benefit seen in experimental studies. OBJECTIVE: The IMMEDIATE Trial tested whether, if given very early, GIK could have the impact seen in experimental studies. Accordingly, distinct from prior trials, IMMEDIATE tested the impact of GIK (1) in patients with acute coronary syndromes (ACS), rather than only AMI or STEMI, and (2) administered in prehospital emergency medical service settings, rather than later, in hospitals, after emergency department evaluation. DESIGN: The IMMEDIATE Trial was an emergency medical service-based randomized placebo-controlled clinical effectiveness trial conducted in 13 cities across the United States that enrolled 911 participants. Eligible were patients 30 years or older for whom a paramedic performed a 12-lead electrocardiogram to evaluate chest pain or other symptoms suggestive of ACS and for whom the electrocardiograph-based acute cardiac ischemia time-insensitive predictive instrument indicated a ≥75% probability of ACS, and/or the thrombolytic predictive instrument indicated the presence of a STEMI, or local criteria for STEMI notification of receiving hospitals were met. Prehospital IV GIK or placebo was started immediately. Prespecified were the primary end point of progression of ACS to infarction and, as major secondary end points, the composite of cardiac arrest or in-hospital mortality, 30-day mortality, and the composite of cardiac arrest, 30-day mortality, or hospitalization for heart failure. Analyses were planned on an intent-to-treat basis, on a modified intent-to-treat group who were confirmed in emergency departments to have ACS, and for participants presenting with STEMI. CONCLUSION: The IMMEDIATE Trial tested whether GIK, when administered as early as possible in the course of ACS by paramedics using acute cardiac ischemia time-insensitive predictive instrument and thrombolytic predictive instrument decision support, would reduce progression to AMI, mortality, cardiac arrest, and heart failure. It was also designed to provide clinical and pathophysiologic information on GIK's biological mechanisms.
Subject(s)
Acute Coronary Syndrome/drug therapy; Emergency Medical Services/methods; Myocardium/metabolism; Acute Coronary Syndrome/diagnosis; Acute Coronary Syndrome/mortality; Adult; Cardioplegic Solutions; Dose-Response Relationship, Drug; Double-Blind Method; Electrocardiography; Follow-Up Studies; Glucose/administration & dosage; Humans; Infusions, Intravenous; Insulin/administration & dosage; Potassium/administration & dosage; Survival Rate/trends; Time Factors; Tomography, Emission-Computed, Single-Photon; Treatment Outcome; United States/epidemiology
ABSTRACT
CONTEXT: Laboratory studies suggest that in the setting of cardiac ischemia, immediate intravenous glucose-insulin-potassium (GIK) reduces ischemia-related arrhythmias and myocardial injury. Clinical trials have not consistently shown these benefits, possibly due to delayed administration. OBJECTIVE: To test out-of hospital emergency medical service (EMS) administration of GIK in the first hours of suspected acute coronary syndromes (ACS). DESIGN, SETTING, AND PARTICIPANTS: Randomized, placebo-controlled, double-blind effectiveness trial in 13 US cities (36 EMS agencies), from December 2006 through July 31, 2011, in which paramedics, aided by electrocardiograph (ECG)-based decision support, randomized 911 (871 enrolled) patients (mean age, 63.6 years; 71.0% men) with high probability of ACS. INTERVENTION: Intravenous GIK solution (n = 411) or identical-appearing 5% glucose placebo (n = 460) administered by paramedics in the out-of-hospital setting and continued for 12 hours. MAIN OUTCOME MEASURES: The prespecified primary end point was progression of ACS to myocardial infarction (MI) within 24 hours, as assessed by biomarkers and ECG evidence. Prespecified secondary end points included survival at 30 days and a composite of prehospital or in-hospital cardiac arrest or in-hospital mortality, analyzed by intent-to-treat and by presentation with ST-segment elevation. RESULTS: There was no significant difference in the rate of progression to MI among patients who received GIK (n = 200; 48.7%) vs those who received placebo (n = 242; 52.6%) (odds ratio [OR], 0.88; 95% CI, 0.66-1.13; P = .28). Thirty-day mortality was 4.4% with GIK vs 6.1% with placebo (hazard ratio [HR], 0.72; 95% CI, 0.40-1.29; P = .27). The composite of cardiac arrest or in-hospital mortality occurred in 4.4% with GIK vs 8.7% with placebo (OR, 0.48; 95% CI, 0.27-0.85; P = .01). Among patients with ST-segment elevation (163 with GIK and 194 with placebo), progression to MI was 85.3% with GIK vs 88.7% with placebo (OR, 0.74; 95% CI, 0.40-1.38; P = .34); 30-day mortality was 4.9% with GIK vs 7.7% with placebo (HR, 0.63; 95% CI, 0.27-1.49; P = .29). The composite outcome of cardiac arrest or in-hospital mortality was 6.1% with GIK vs 14.4% with placebo (OR, 0.39; 95% CI, 0.18-0.82; P = .01). Serious adverse events occurred in 6.8% (n = 28) with GIK vs 8.9% (n = 41) with placebo (P = .26). CONCLUSIONS: Among patients with suspected ACS, out-of-hospital administration of intravenous GIK, compared with glucose placebo, did not reduce progression to MI. Compared with placebo, GIK administration was not associated with improvement in 30-day survival but was associated with lower rates of the composite outcome of cardiac arrest or in-hospital mortality. TRIAL REGISTRATION: clinicaltrials.gov Identifier: NCT00091507.
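
As a worked example, the unadjusted odds ratio for progression to MI can be recomputed from the counts reported above (GIK 200/411 vs placebo 242/460) using statsmodels; the published OR of 0.88 may differ slightly from this crude estimate because of the trial's analysis population and modeling choices.

```python
# Worked example: crude odds ratio and 95% CI for progression to MI,
# GIK vs placebo, from the counts reported in the abstract.
import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

gik_mi, gik_n = 200, 411
plc_mi, plc_n = 242, 460
table = np.array([[gik_mi, gik_n - gik_mi],
                  [plc_mi, plc_n - plc_mi]])

t22 = Table2x2(table)
print("OR:", round(t22.oddsratio, 2))
print("95% CI:", [round(x, 2) for x in t22.oddsratio_confint()])
```
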
Subject(s)
Acute Coronary Syndrome/drug therapy; Cardioplegic Solutions/therapeutic use; Myocardial Infarction/prevention & control; Acute Coronary Syndrome/mortality; Aged; Allied Health Personnel; Angina, Unstable/complications; Angina, Unstable/drug therapy; Decision Support Techniques; Double-Blind Method; Electrocardiography; Emergency Medical Services; Female; Glucose/therapeutic use; Heart Arrest/prevention & control; Hospital Mortality; Humans; Insulin/therapeutic use; Male; Middle Aged; Myocardial Infarction/etiology; Odds Ratio; Potassium/therapeutic use; Survival Analysis; Treatment Outcome
ABSTRACT
BACKGROUND: Abnormalities in bone mineral metabolism parameters are common in patients with end-stage kidney disease on dialysis therapy. The National Kidney Foundation's Kidney Disease Outcomes Quality Initiative (KDOQI) guidelines propose targets for calcium, phosphate, and intact parathyroid hormone (iPTH) levels in patients undergoing dialysis. However, whether achievement of these targets improves survival is unknown. STUDY DESIGN: Retrospective cohort study. SETTING & PARTICIPANTS: Incident patients on hemodialysis or peritoneal dialysis therapy in the United Kingdom from 2000-2004 who survived at least 12 months. PREDICTOR: Achievement of KDOQI calcium, phosphate, and iPTH guideline targets during the first year of dialysis therapy. OUTCOMES: All-cause mortality in the subsequent 2 years. MEASUREMENTS: Calcium, phosphate, and iPTH at quarterly intervals, demographic and comorbid condition data at baseline. RESULTS: We included 7,076 incident patients (4,947 hemodialysis, 2,129 peritoneal dialysis) in our analysis. Approximately two thirds of patients were men and 21% had diabetes as the cause of kidney failure. Guideline target achievement for each quarter varied from 23%-26% for iPTH level, 43%-47% for calcium level, and 54%-62% for phosphate level targets. In adjusted Cox proportional hazards models, patients who achieved guideline targets in all 4 quarters did not have a survival advantage over patients who never achieved target (P > 0.1 for calcium, phosphate, and iPTH). LIMITATIONS: Missing information about medication use, vitamin D and alkaline phosphatase levels, and dialysate calcium content. CONCLUSIONS: Our findings do not support the use of KDOQI bone mineral guideline achievement as a quality measure for dialysis care. Prospective studies with longer term follow-up are needed to define the optimal cutoff values for calcium, phosphate, and iPTH and assess the effect of guideline implementation on patient survival.
Subject(s)
Bone Diseases, Metabolic/etiology; Kidney Failure, Chronic/epidemiology; Practice Guidelines as Topic; Registries; Renal Dialysis/standards; Aged; Bone Diseases, Metabolic/blood; Bone Diseases, Metabolic/prevention & control; Calcium/blood; Female; Follow-Up Studies; Humans; Incidence; Kidney Failure, Chronic/complications; Kidney Failure, Chronic/therapy; Male; Middle Aged; Parathyroid Hormone/blood; Phosphates/blood; Retrospective Studies; United Kingdom/epidemiology
ABSTRACT
BACKGROUND: The risk of death in dialysis patients is high, but varies significantly among patients. No prediction tool is used widely in current clinical practice. We aimed to predict long-term mortality in incident dialysis patients using easily obtainable variables. STUDY DESIGN: Prospective nationwide multicenter cohort study in the United Kingdom (UK Renal Registry); models were developed using Cox proportional hazards. SETTING & PARTICIPANTS: Patients initiating hemodialysis or peritoneal dialysis therapy in 2002-2004 who survived at least 3 months on dialysis treatment were followed up for 3 years. Analyses were restricted to participants for whom information for comorbid conditions and laboratory measurements were available (n = 5,447). The data set was divided into data sets for model development (n = 3,631; training) and validation (n = 1,816) using random selection. PREDICTORS: Basic patient characteristics, comorbid conditions, and laboratory variables. OUTCOMES: All-cause mortality censored for kidney transplant, recovery of kidney function, and loss to follow-up. RESULTS: In the training data set, 1,078 patients (29.7%) died within the observation period. The final model for the training data set included patient characteristics (age, race, primary kidney disease, and treatment modality), comorbid conditions (diabetes, history of cardiovascular disease, and smoking), and laboratory variables (hemoglobin, serum albumin, creatinine, and calcium levels); reached a C statistic of 0.75 (95% CI, 0.73-0.77); and could discriminate accurately among patients with low (6%), intermediate (19%), high (33%), and very high (59%) mortality risk. The model was applied further to the validation data set and achieved a C statistic of 0.73 (95% CI, 0.71-0.76). LIMITATIONS: Number of missing comorbidity data and lack of an external validation data set. CONCLUSIONS: Basic patient characteristics, comorbid conditions, and laboratory variables can predict 3-year mortality in incident dialysis patients with sufficient accuracy. Identification of subgroups of patients according to mortality risk can guide future research and subsequently target treatment decisions in individual patients.
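
The development/validation workflow above (random split, Cox model, C statistic in held-out data) might be sketched as follows with lifelines; the data set, split implementation, and predictor list are hypothetical simplifications of the variables named in RESULTS.

```python
# Sketch: fit a Cox model on a random two-thirds training split and report the
# concordance (C) statistic in the remaining validation third.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

df = pd.read_csv("incident_dialysis.csv")  # hypothetical data set
train = df.sample(frac=2 / 3, random_state=1)
valid = df.drop(train.index)

predictors = ["age", "diabetes", "cvd_history", "smoking",
              "hemoglobin", "albumin", "creatinine", "calcium"]
cph = CoxPHFitter()
cph.fit(train[predictors + ["time_years", "died"]],
        duration_col="time_years", event_col="died")

# Higher partial hazard means higher risk, so negate it for the C statistic.
c_valid = concordance_index(valid["time_years"],
                            -cph.predict_partial_hazard(valid),
                            valid["died"])
print("validation C statistic:", round(c_valid, 2))
```
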
Subject(s)
Kidney Failure, Chronic/therapy; Registries; Renal Dialysis/mortality; Aged; Cause of Death/trends; Female; Follow-Up Studies; Humans; Kidney Failure, Chronic/mortality; Male; Middle Aged; Prognosis; Retrospective Studies; Risk Factors; Survival Rate/trends; United Kingdom/epidemiology
ABSTRACT
BACKGROUND: A challenge for emergency medical service (EMS) is accurate identification of acute coronary syndromes (ACS) and ST-segment elevation myocardial infarction (STEMI) for immediate treatment and transport. The electrocardiograph-based acute cardiac ischemia time-insensitive predictive instrument (ACI-TIPI) and the thrombolytic predictive instrument (TPI) have been shown to improve diagnosis and treatment in emergency departments (EDs), but their use by paramedics in the community has been less studied. OBJECTIVE: To identify candidates for participation in the Immediate Myocardial Metabolic Enhancement During Initial Assessment and Treatment in Emergency Care (IMMEDIATE) Trial, we implemented EMS use of the ACI-TIPI and the TPI in out-of-hospital electrocardiographs and evaluated its impact on paramedic on-site identification of ACS and STEMI as a community-based approach to improving emergency cardiac care. METHODS: Ambulances in the study municipalities were outfitted with electrocardiographs with ACI-TIPI and TPI software. Using a before-after quasi-experimental design, in Phase 1, for seven months, paramedics were provided with the ACI-TIPI/TPI continuous 0-100% predictions automatically printed on electrocardiogram (ECG) text headers to supplement their identification of ACS; in Phase 2, for 11 months, paramedics were told to identify ACS based on an ACI-TIPI cutoff probability of ACS ≥ 75% and/or TPI detection of STEMI. In Phase 3, this cutoff approach was used in seven additional municipalities. Confirmed diagnoses of ACS, acute myocardial infarction (AMI), and STEMI were made by blinded physician review for 100% of patients. RESULTS: In Phase 1, paramedics identified 107 patients as having ACS; in Phase 2, 104. In Phase 1, 45.8% (49) of patients so identified had ACS confirmed, which increased to 76.0% (79) in Phase 2 (p < 0.001). Of those with ACS, 59.2% (29) had AMI in Phase 1 versus 84.8% (67) with AMI in Phase 2 (p < 0.01), and STEMI was confirmed in 40.8% (20) versus 68.4% (54), respectively (p < 0.01). In Phase 3, of 226 patients identified by paramedics as having ACS, 74.3% (168) had ACS confirmed, of whom 81.0% (136) had AMI and 65.5% (110) had STEMI. Among patients with ACS, the proportion who received percutaneous coronary intervention (PCI) was 30.6% (15) in Phase 1, increasing to 57.0% (45) in Phase 2 (p < 0.004) and 50.6% (85) in Phase 3, and the proportions of patients with STEMI receiving PCI rose from 75.0% (15) to 83.3% (45) (p < 0.4) and 82.7% (91). CONCLUSIONS: In a wide range of EMS systems, use of electrocardiographs with ACI-TIPI and TPI decision support using a 75% ACI-TIPI cutoff improves paramedic diagnostic performance for ACS, AMI, and STEMI and increases the proportions of patients who receive PCI.
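
As a small worked example, the Phase 1 vs Phase 2 improvement in the proportion of paramedic-identified ACS cases that were physician-confirmed (49/107 vs 79/104, as reported above) can be checked with a chi-square test; this reproduces the direction and significance of the reported comparison but is not the authors' exact analysis.

```python
# Worked example: chi-square test comparing ACS confirmation rates,
# Phase 1 (49/107) vs Phase 2 (79/104).
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[49, 107 - 49],   # Phase 1: confirmed, not confirmed
                  [79, 104 - 79]])  # Phase 2: confirmed, not confirmed
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # consistent with the reported p < 0.001
```
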
Subject(s)
Acute Coronary Syndrome/diagnosis; Diagnosis, Computer-Assisted/instrumentation; Electrocardiography/instrumentation; Emergency Medical Technicians/statistics & numerical data; Emergency Service, Hospital/statistics & numerical data; Myocardial Infarction/diagnosis; Acute Coronary Syndrome/drug therapy; Acute Coronary Syndrome/therapy; Chest Pain; Chi-Square Distribution; Decision Support Systems, Clinical; Diagnosis, Computer-Assisted/methods; Electrocardiography/methods; Female; Health Status Indicators; Humans; Male; Middle Aged; Myocardial Infarction/drug therapy; Myocardial Infarction/therapy; Predictive Value of Tests; Sensitivity and Specificity; Software; Time Factors
ABSTRACT
Social determinants of health may affect ICU outcome, but the association between social determinants of health and delirium remains unclear. We evaluated the association between three social determinants of health and delirium occurrence and duration in critically ill adults. DESIGN: Secondary, subgroup analysis of a cohort study. SETTING: Single, 36-bed mixed medical-surgical ICU in the Netherlands. PATIENTS: Nine hundred fifty-six adults consecutively admitted from July 2016 to February 2020. Patients admitted after elective surgery, residing in a nursing home, or not expected to survive greater than or equal to 48 hours were excluded. INTERVENTION: None. MEASUREMENTS AND MAIN RESULTS: Four factors related to three Centers for Disease Control and Prevention social determinants of health domains (social/community context [ethnicity], education access/quality [educational level], and economic stability [employment status and monthly income]) were collected at ICU admission from patients (or families). Well-trained ICU nurses evaluated patients without coma (Richmond Agitation Sedation Scale score of -4 or -5) using the Confusion Assessment Method-ICU (CAM-ICU); a delirium day was defined by greater than or equal to 1 positive CAM-ICU assessment and/or scheduled antipsychotic use. Multivariable logistic regression models controlling for ICU days and 10 delirium risk variables (before-ICU: age, Charlson, cognitive impairment, any antidepressant, antipsychotic, or benzodiazepine use; ICU baseline: Acute Physiology and Chronic Health Evaluation IV and admission type; daily ICU: Sequential Organ Failure Assessment, restraint use, coma, benzodiazepine, or opioid use) evaluated associations between each social determinant of health factor and both ICU delirium occurrence and duration. Delirium occurred in 393/956 patients (45.4%) for 2 days (1-5 d). Patients with low (vs high) income had more ICU delirium (p = 0.05). Multivariate analyses revealed no social determinants of health to be significantly associated with increased delirium occurrence or duration. Low (vs high) income was weakly associated with increased delirium occurrence (adjusted odds ratio, 1.83; 95% CI, 0.91-3.89). Low (vs high) education (adjusted relative risk, 1.21; 95% CI, 0.97-1.53) was weakly associated with a longer delirium duration. CONCLUSIONS: Social determinants of health did not affect ICU delirium in one Dutch region. Additional research across different countries/regions, and considering additional social determinants of health, is needed to define the association between social determinants of health and ICU delirium.
ABSTRACT
BACKGROUND: Both depression and cognitive impairment are common in hemodialysis patients, are associated with adverse clinical outcomes, and place an increased burden on health care resources. STUDY DESIGN: Cross-sectional cohort. SETTING & PARTICIPANTS: 241 maintenance hemodialysis patients in the Boston, MA, area. PREDICTOR: Depressive symptoms, defined as a Center for Epidemiological Studies Depression Scale (CES-D) score ≥16. OUTCOME: Performance on a detailed neurocognitive battery. RESULTS: Mean age was 63.8 years, 49.0% were women, 21.6% were African American, and median dialysis therapy duration was 13.8 months. There were 57 (23.7%) participants with significant depressive symptoms. In multivariable analysis adjusting for age, sex, education, and other comorbid conditions, participants with and without depressive symptoms performed similarly on the Mini-Mental State Examination (P = 0.4) and tests of memory. However, participants with greater depressive symptoms performed significantly worse on tests assessing processing speed, attention, and executive function, including Trail Making Test B (P = 0.02) and Digit-Symbol Coding (P = 0.01). Defining depression using a CES-D score ≥18 did not substantially change results. LIMITATIONS: Cross-sectional design, absence of brain imaging. CONCLUSIONS: Hemodialysis patients with a greater burden of depressive symptoms perform worse on tests of cognition related to processing speed and executive function. Further research is needed to assess the effects of treating depressive symptoms on cognitive performance in dialysis patients.
Subject(s)
Cognition Disorders/epidemiology; Depressive Disorder/epidemiology; Kidney Failure, Chronic/epidemiology; Kidney Failure, Chronic/therapy; Renal Dialysis/statistics & numerical data; Adult; Age Distribution; Aged; Analysis of Variance; Cognition Disorders/diagnosis; Cognition Disorders/therapy; Comorbidity; Cross-Sectional Studies; Depressive Disorder/diagnosis; Depressive Disorder/therapy; Female; Follow-Up Studies; Humans; Incidence; Kidney Failure, Chronic/diagnosis; Male; Middle Aged; Multivariate Analysis; Neuropsychological Tests; Psychiatric Status Rating Scales; Reference Values; Renal Dialysis/adverse effects; Renal Dialysis/psychology; Risk Assessment; Severity of Illness Index; Sex Distribution
ABSTRACT
OBJECTIVES: Methods for pharmacoepidemiologic studies of large-scale data repositories are established. Although clinical cohorts of older adults often contain critical information to advance our understanding of medication risk and benefit, the methods best suited to manage medication data in these samples are sometimes unclear and their degree of validation unknown. We sought to provide researchers, in the context of a clinical cohort study of delirium in older adults, with guidance on the methodological tools to use data from clinical cohorts to better understand medication risk factors and outcomes. DESIGN: Prospective cohort study. SETTING: The Successful Aging After Elective Surgery (SAGES) prospective cohort. PARTICIPANTS: A total of 560 older adults (aged ≥70 years) without dementia undergoing elective major surgery. MEASUREMENTS: Using the SAGES clinical cohort, methods used to characterize medications were identified, reviewed, analyzed, and distinguished by appropriateness and degree of validation for characterizing pharmacoepidemiologic data in smaller clinical data sets. RESULTS: Medication coding is essential; the American Hospital Formulary System, most often used in the United States, is not preferred over others. Use of equivalent dosing scales (e.g., morphine equivalents) for a single medication class (e.g., opioids) is preferred over multiclass analgesic equivalency scales. Medication aggregation from the same class (e.g., benzodiazepines) is well established; the optimal prevalence breakout for aggregation remains unclear. Validated scale(s) to combine structurally dissimilar medications (e.g., anticholinergics) should be used with caution; a lack of consensus exists regarding the optimal scale. Directed acyclic graph(s) are an accepted method to conceptualize causative frameworks when identifying potential confounders. Modeling-based strategies should be used with evidence-based, a priori variable-selection strategies. CONCLUSION: As highlighted in the SAGES cohort, the methods used to classify and analyze medication data in clinically rich cohort studies vary in the rigor by which they have been developed and validated.
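
For the single-class dose-aggregation approach discussed above (e.g., expressing all opioid exposure as morphine equivalents), a minimal sketch is shown below. The conversion factors and column names are illustrative placeholders only; a real analysis should take its factors from a validated equianalgesic reference.

```python
# Sketch: convert per-drug opioid doses to daily morphine-equivalent totals.
import pandas as pd

MORPHINE_EQ_FACTOR = {            # hypothetical oral morphine-equivalent factors
    "morphine_oral": 1.0,
    "oxycodone_oral": 1.5,
    "hydromorphone_oral": 4.0,
}

doses = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "hospital_day": [1, 1, 1],
    "drug": ["oxycodone_oral", "morphine_oral", "hydromorphone_oral"],
    "dose_mg": [10, 15, 2],
})
doses["morphine_eq_mg"] = doses["dose_mg"] * doses["drug"].map(MORPHINE_EQ_FACTOR)
daily = doses.groupby(["patient_id", "hospital_day"])["morphine_eq_mg"].sum()
print(daily.reset_index())
```
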
Subject(s)
Analgesics, Opioid; Analgesics/therapeutic use; Data Analysis; Medication Reconciliation; Pharmacoepidemiology; Research Design; Aged; Analgesics, Opioid/standards; Analgesics, Opioid/therapeutic use; Elective Surgical Procedures; Female; Humans; Male; Medication Reconciliation/classification; Medication Reconciliation/standards; Prospective Studies; United States/epidemiology
ABSTRACT
BACKGROUND: Conflicting results on associations between dietary quality and bone have been noted across populations, and this has been understudied in Puerto Ricans, a population at higher risk of osteoporosis than previously appreciated. OBJECTIVE: To compare cross-sectional associations between 3 dietary quality indices [Dietary Approaches to Stop Hypertension (DASH), Alternative Health Eating Index (AHEI-2010), and Mediterranean Diet Score (MeDS)] with bone outcomes. METHOD: Participants (n = 865-896) from the Boston Puerto Rican Osteoporosis Study (BPROS) with complete bone and dietary data were included. Indices were calculated from validated food frequency data. Bone mineral density (BMD) was measured using DXA. Associations between dietary indices (z-scores) and their individual components with BMD and osteoporosis were tested with ANCOVA and logistic regression, respectively, at the lumbar spine and femoral neck, stratified by male, premenopausal women, and postmenopausal women. RESULTS: Participants were 59.9 y ± 7.6 y and mostly female (71%). Among postmenopausal women not taking estrogen, DASH (score: 11-38) was associated with higher trochanter (0.026 ± 0.006 g/cm2, P <0.001), femoral neck (0.022 ± 0.006 g/cm2, P <0.001), total hip (0.029 ± 0.006 g/cm2, P <0.001), and lumbar spine BMD (0.025 ± 0.007 g/cm2, P = 0.001). AHEI (score: 25-86) was also associated with spine and all hip sites (P <0.02), whereas MeDS (0-9) was associated only with total hip (P = 0.01) and trochanter BMD (P = 0.007) in postmenopausal women. All indices were associated with a lower likelihood of osteoporosis (OR from 0.54 to 0.75). None of the results were significant for men or premenopausal women. CONCLUSIONS: Although all appeared protective, DASH was more positively associated with BMD than AHEI or MeDS in postmenopausal women not taking estrogen. Methodological differences across scores suggest that a bone-specific index that builds on existing indices and that can be used to address dietary differences across cultural and ethnic minority populations should be considered.
Subject(s)
Osteoporosis, Postmenopausal/diet therapy; Aged; Bone Density; Boston/ethnology; Cross-Sectional Studies; Diet, Healthy; Diet, Mediterranean; Dietary Approaches to Stop Hypertension; Female; Hispanic or Latino/statistics & numerical data; Humans; Male; Middle Aged; Osteoporosis, Postmenopausal/ethnology; Osteoporosis, Postmenopausal/metabolism; Osteoporosis, Postmenopausal/physiopathology; Postmenopause/metabolism
ABSTRACT
BACKGROUND: Albuminuria, a kidney marker of microvascular disease, may herald microvascular disease elsewhere, including in the brain. STUDY DESIGN: Cross-sectional. SETTING & PARTICIPANTS: Boston, MA, elders receiving home health services to maintain independent living who consented to brain magnetic resonance imaging. PREDICTOR: Urine albumin-creatinine ratio (ACR). OUTCOME: Performance on a cognitive battery assessing executive function and memory by using principal components analysis and white matter hyperintensity volume on brain imaging, evaluated in logistic and linear regression models. RESULTS: In 335 participants, mean age was 73.4 ± 8.1 years and 123 participants had microalbuminuria or macroalbuminuria. Each doubling of ACR was associated with worse executive function (beta = -0.05; P = 0.005 in univariate and beta = -0.07; P = 0.004 in multivariable analyses controlling for age, sex, race, education, diabetes, cardiovascular disease, hypertension, medications, and estimated glomerular filtration rate [eGFR]), but not with worse memory or working memory. Individuals with microalbuminuria or macroalbuminuria were more likely to be in the lower versus the highest tertile of executive functioning (odds ratio, 1.18; 95% confidence interval, 1.06 to 1.32; odds ratio, 1.19; 95% confidence interval, 1.05 to 1.35 per doubling of ACR in univariate and multivariable analyses, respectively). Albuminuria was associated with qualitative white matter hyperintensity grade (odds ratio, 1.13; 95% confidence interval, 1.02 to 1.25; odds ratio, 1.15; 95% confidence interval, 1.02 to 1.29 per doubling of ACR) in univariate and multivariable analyses and with quantitative white matter hyperintensity volume (beta = 0.11; P = 0.007; beta = 0.10; P = 0.01) in univariate and multivariable analyses of log-transformed data. Results were similar when excluding individuals with macroalbuminuria. LIMITATIONS: Single measurement of ACR, indirect creatinine calibration, and reliance on participant recall for elements of medical history. CONCLUSIONS: Albuminuria is associated with worse cognitive performance, particularly in executive functioning, as well as increased white matter hyperintensity volume. Albuminuria likely identifies greater brain microvascular disease burden.
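
"Per doubling of ACR" effects like those above are usually obtained by entering log2-transformed ACR into the regression, so one predictor unit equals one doubling. The sketch below illustrates that transformation; the file and column names and the covariate list are hypothetical.

```python
# Sketch: regress an executive-function score on log2(ACR) so the coefficient
# is interpreted per doubling of the albumin-creatinine ratio, with covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("home_health_cohort.csv")  # hypothetical data set
df["log2_acr"] = np.log2(df["acr_mg_g"])

fit = smf.ols(
    "executive_fn ~ log2_acr + age + sex + race + education + diabetes + "
    "cvd + hypertension + egfr",
    data=df,
).fit()
print(fit.params["log2_acr"])  # beta per doubling of ACR
```
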
Subject(s)
Albuminuria/complications; Cognition Disorders/etiology; Homebound Persons; Aged; Cross-Sectional Studies; Female; Humans; Male
ABSTRACT
BACKGROUND: The proliferation of efforts to assess physician performance underscores the need to improve the reliability of physician-level quality measures. OBJECTIVE: Using diabetes care as a model, to address 2 key issues in creating reliable physician-level quality performance scores: estimating the physician effect on quality and creating composite measures. DESIGN: Retrospective longitudinal observational study. SUBJECTS: A national sample of physicians (n = 210) and their patients with diabetes (n = 7574) participating in the National Committee on Quality Assurance-American Diabetes Association's Diabetes Provider Recognition Program. MEASURES: Using 11 diabetes process and intermediate outcome quality measures abstracted from the medical records of participants, we tested each measure for the magnitude of physician-level variation (the physician effect or "thumbprint"). We then combined measures with a substantial physician effect into a composite, physician-level diabetes quality score and tested its reliability. RESULTS: We identified the lowest target values for each outcome measure for which there was a recognizable "physician thumbprint" (i.e., intraclass correlation coefficient ≥0.30) to create a composite performance score. The internal consistency reliability (Cronbach's alpha) of the composite score, created by combining the process and outcome measures with an intraclass correlation coefficient ≥0.30, exceeded 0.80. The standard errors of the composite case-mix adjusted score were sufficiently small to discriminate those physicians scoring in the highest from those scoring in the lowest quartiles of the quality of care distribution with no overlap. CONCLUSIONS: We conclude that the aggregation of well-tested quality measures that maximize the "physician effect" into a composite measure yields reliable physician-level quality of care scores for patients with diabetes.
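
The internal-consistency step above (Cronbach's alpha over the measures retained in the composite) can be sketched directly from its definition. The physician-by-measure score matrix below is hypothetical; only the alpha formula itself is standard.

```python
# Sketch: Cronbach's alpha for a composite of physician-level quality measures.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: rows = physicians, columns = standardized quality measures."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

scores = pd.read_csv("physician_measure_scores.csv")  # hypothetical matrix
print("Cronbach's alpha:", round(cronbach_alpha(scores), 2))
```
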
Subject(s)
Practice Patterns, Physicians'/standards; Quality Assurance, Health Care/organization & administration; Quality Indicators, Health Care/standards; Reproducibility of Results; Adult; Aged; Diabetes Mellitus/physiopathology; Diabetes Mellitus/therapy; Female; Humans; Longitudinal Studies; Male; Medical Audit; Middle Aged; Outcome Assessment, Health Care/standards; Retrospective Studies; Total Quality Management/methods
ABSTRACT
Uric acid may mediate aspects of the relationship between hypertension and kidney disease via renal vasoconstriction and systemic hypertension. To investigate the relationship between uric acid and subsequent reduced kidney function, limited-access data of 13,338 participants with intact kidney function in two community-based cohorts, the Atherosclerosis Risk in Communities study and the Cardiovascular Health Study, were pooled. Mean baseline serum uric acid was 5.9 ± 1.5 mg/dl, mean baseline serum creatinine was 0.9 ± 0.2 mg/dl, and mean baseline estimated GFR was 90.4 ± 19.4 ml/min/1.73 m². During 8.5 ± 0.9 yr of follow-up, 712 (5.6%) had incident kidney disease defined by GFR decrease (≥15 ml/min/1.73 m² with final GFR <60 ml/min/1.73 m²), while 302 (2.3%) individuals had incident kidney disease defined by creatinine increase (≥0.4 mg/dl with final serum creatinine >1.4 mg/dl in men and 1.2 mg/dl in women). In GFR- and creatinine-based logistic regression models, baseline uric acid level was associated with increased risk for incident kidney disease (odds ratio 1.07 [95% confidence interval 1.01 to 1.14] and 1.11 [95% confidence interval 1.02 to 1.21] per 1-mg/dl increase in uric acid, respectively), after adjustment for age, gender, race, diabetes, systolic BP, hypertension, cardiovascular disease, left ventricular hypertrophy, smoking, alcohol use, education, lipids, albumin, hematocrit, baseline kidney function and cohort; therefore, elevated serum uric acid level is a modest, independent risk factor for incident kidney disease in the general population.