ABSTRACT
OBJECTIVES: Machine learning algorithms can outperform older methods in predicting clinical deterioration, but rigorous prospective data on their real-world efficacy are limited. We hypothesized that real-time machine learning-generated alerts sent directly to front-line providers would reduce escalations. DESIGN: Single-center prospective pragmatic nonrandomized clustered clinical trial. SETTING: Academic tertiary care medical center. PATIENTS: Adult patients admitted to four medical-surgical units. Assignment to intervention or control arms was determined by initial unit admission. INTERVENTIONS: Real-time alerts, stratified according to predicted likelihood of deterioration, were sent either to the primary team or directly to the rapid response team (RRT). Clinical care and interventions were at the providers' discretion. For the control units, alerts were generated but not sent, and standard RRT activation criteria were used. MEASUREMENTS AND MAIN RESULTS: The primary outcome was the rate of escalation per 1000 patient bed days. Secondary outcomes included the frequency of orders for fluids, medications, and diagnostic tests, and combined in-hospital and 30-day mortality. Propensity score modeling with stabilized inverse probability of treatment weighting (IPTW) was used to account for differences between groups. Data from 2740 patients enrolled between July 2019 and March 2020 were analyzed (1488 intervention, 1252 control). Average age was 66.3 years, and 1428 participants (52%) were female. The rate of escalation was 12.3 vs. 11.3 per 1000 patient bed days (difference, 1.0; 95% CI, -2.8 to 4.7), with an IPTW-adjusted incidence rate ratio of 1.43 (95% CI, 1.16-1.78; p < 0.001). Patients in the intervention group were more likely to receive cardiovascular medication orders (16.1% vs. 11.3%; difference, 4.7%; 95% CI, 2.1-7.4%), with an IPTW-adjusted relative risk (RR) of 1.74 (95% CI, 1.39-2.18; p < 0.001). Combined in-hospital and 30-day mortality was lower in the intervention group (7.0% vs. 9.3%; difference, -2.4%; 95% CI, -4.5% to -0.2%), with an IPTW-adjusted RR of 0.76 (95% CI, 0.58-0.99; p = 0.045). CONCLUSIONS: Real-time machine learning alerts did not reduce the rate of escalation but may reduce mortality.
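The study's IPTW adjustment rests on stabilized weights derived from a propensity model. As a minimal illustrative sketch, not the authors' code (the function name and inputs are assumptions), stabilized weights can be computed as:

```python
import numpy as np

def stabilized_iptw(treated, propensity):
    """Stabilized inverse-probability-of-treatment weights.

    treated: 0/1 indicator of intervention-arm membership
    propensity: model-estimated P(treated = 1 | covariates)
    Stabilization multiplies the usual 1/p weight by the marginal
    treatment probability, which reduces weight variance.
    """
    treated = np.asarray(treated, dtype=float)
    propensity = np.asarray(propensity, dtype=float)
    p_marginal = treated.mean()  # marginal probability of treatment
    return np.where(treated == 1,
                    p_marginal / propensity,
                    (1.0 - p_marginal) / (1.0 - propensity))
```

A weighted count regression of escalations with log patient bed days as an offset, using these weights, would then yield an IPTW-adjusted incidence rate ratio of the kind reported above.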
Subjects
Clinical Deterioration, Machine Learning, Humans, Female, Male, Prospective Studies, Middle Aged, Aged, Hospital Rapid Response Team/organization & administration, Hospital Rapid Response Team/statistics & numerical data, Hospital Mortality
ABSTRACT
OBJECTIVE: Self-rated health (SRH) is a predictor of poor health outcomes and cognition. Older adults with type 2 diabetes mellitus (T2D) have multi-morbidity and greater cognitive impairment. In the present study we investigated the association of SRH with cognitive decline and brain pathology in older adults with T2D. METHODS: Participants (n = 1122) were from the Israel Diabetes and Cognitive Decline study, and SRH was categorised as low (n = 202), moderate (n = 400) or high (n = 520). Cognition was measured by four cognitive domains: episodic memory, executive functions, language, and attention/working memory. Global cognition was the average of the cognitive domains. Statistical models adjusted for sociodemographic, cardiovascular, and clinical variables. In a randomly selected subsample (n = 230) that had magnetic resonance imaging, we examined relationships between baseline SRH and brain characteristics (white matter hyperintensities [WMHs], hippocampal, and total grey matter [GM] volumes). RESULTS: Low SRH was associated with a decline in executive functions, which accelerated over time when compared to high SRH (est = -0.0036; p < 0.001). Compared to high SRH, low SRH was associated with a faster decline in global cognition (est = -0.0024; p = 0.009). Low SRH at baseline was associated with higher volumes of WMHs (est = 9.8420; p < 0.0008). SRH was not associated with other cognitive domains, or with hippocampal and total GM volumes. CONCLUSIONS: Low SRH is associated with cognitive decline in older adults with T2D and may serve as a risk-assessment tool. WMHs may represent an underlying mechanism.
Subjects
Cognitive Dysfunction, Diabetes Mellitus Type 2, Vascular Diseases, Humans, Aged, Diabetes Mellitus Type 2/complications, Diabetes Mellitus Type 2/pathology, Cognitive Dysfunction/etiology, Cognitive Dysfunction/complications, Brain/pathology, Cognition, Vascular Diseases/pathology, Magnetic Resonance Imaging
ABSTRACT
BACKGROUND: Neuraxial opioids are commonly used after cesarean delivery (CD) but not after vaginal delivery (VD), although some studies have suggested they may be beneficial from a pain perspective. Those studies, however, did not evaluate other potential benefits, including patient satisfaction, impact on postpartum depression and breastfeeding (BF) success, or side effects such as pruritus. METHODS: Parturients who delivered vaginally with epidural analgesia were randomized to receive either 2 mg of preservative-free morphine (4 mL) or saline (4 mL) via the epidural catheter within 1 hour of VD. Routine analgesics were unchanged and included every-6-hour dosing of acetaminophen 975 mg orally and ketorolac 30 mg intravenously (IV). Hydromorphone 2 mg or oxycodone 10 mg was offered for breakthrough pain. Our primary outcome was opioid consumption in the first 24 hours after drug administration. Secondary outcomes included pain scores at 24 hours and 1 week postpartum, as well as opioid consumption up to 1 week postpartum. Additional end points, such as the obstetric quality of recovery score (OBS-QOR10), breastfeeding success, and the Edinburgh Postnatal Depression Scale (EPDS), were also obtained. RESULTS: Data were analyzed for 157 parturients, 80 in the morphine group and 77 in the saline group. No difference was observed in the EPDS score predelivery or in intention to BF. We found a statistically significant difference in the use of opioids in the first 24 hours, 3.8% (95% confidence interval [CI], 0.9%-11.3%) vs 14.3% (7.7%-24.5%) in the morphine and saline groups, respectively, and in total opioid dose, median (interquartile range, IQR [range]) morphine milligram equivalents of 0 (0-0 [0-47.5]) vs 0 (0-0 [0-72]), P = .023, in the morphine and saline groups, respectively. Verbal pain scores (0-10) at 24 hours were lower in the morphine group (median (IQR [range]): 2.0 (1-4 [0-10]) vs 3.0 (1.5-5.0 [0-10]), P = .043).
There was a greater incidence of pruritus in the morphine group versus the saline group, 37.5% (95% CI, 27.1%-49.1%) vs 18.2% (95% CI, 10.6%-29.0%), P = .008. We did not find any differences in the OBS-QOR10, BF success, or EPDS at 6 weeks postpartum (P > .05). CONCLUSIONS: A single epidural dose of 2 mg preservative-free morphine after VD was effective at decreasing pain and opioid use at 24 hours after VD but came at the cost of increased pruritus. We did not detect any differences in BF, recovery scores, or postpartum depression. Future studies should focus on elucidating the role of neuraxial preservative-free morphine after VD.
ABSTRACT
OBJECTIVES: To determine the effect of low-dose pregabalin in a perioperative enhanced recovery after cardiac surgery protocol. DESIGN: Pre-post observational study. SETTING: Tertiary care hospital. PARTICIPANTS: Patients undergoing off-pump coronary artery bypass graft procedures. INTERVENTIONS: Pregabalin 75 mg twice daily for 48 hours postoperatively versus no pregabalin in a perioperative setting. MEASUREMENTS AND MAIN RESULTS: Perioperative opioid use, pain scores, length of stay, time to extubation, and mortality were measured. Descriptive data were presented as mean (SD), median (IQR), or N (%). Ordinal and continuous data were compared using the t-test or Kruskal-Wallis test. Categorical data were compared between groups using the chi-square test or Fisher's exact test, as appropriate. Low-dose pregabalin administration (75 mg twice daily for 48 hours after surgery) was associated with a clinically significant 30.6% reduction in opioid consumption on postoperative day 0, with a median requirement of 318 (233, 397) morphine milligram equivalents (MME) in the pregabalin group compared with 458 (375, 526) MME in the control group (p < 0.001). There was no significant difference in pain scores between the groups, with the exception of the 0-to-12-hour period, during which the pregabalin group had greater pain scores (median 3.32 [1.65, 4.36] vs 2.0 [0, 3.25], p = 0.013) (Table 3). Moreover, there was no significant difference in pain scores on postoperative day 1 (p = 0.492), day 2 (p = 0.442), day 3 (p = 0.237), or day 4 (p = 0.649). The difference in average Richmond Agitation-Sedation Scale scores was also not statistically significant between groups at 12 hours (p = 0.954) or at 24 hours (p = 0.301). The pregabalin group had no increased incidence of adverse events and no significant differences in intensive care unit length of stay, time to extubation, or mortality.
CONCLUSIONS: In this evaluation of perioperative pregabalin administration for patients requiring cardiac surgery, pregabalin reduced postoperative opioid use, with significant reductions on postoperative day 0, and without any significant increase in adverse reactions. However, no differences in intensive care unit length of stay, time to extubation, or mortality were noted. The implementation of low-dose perioperative pregabalin within an Enhanced Recovery After Cardiac Surgery protocol may be effective at reducing postoperative opioid use in the immediate postoperative period and may be safe with regard to adverse events. Ideal dosing strategies have not been determined; thus, further randomized controlled trials with an emphasis on limiting confounding factors need to be conducted.
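The reported 30.6% reduction can be reproduced directly from the group medians; a quick arithmetic check using the values given above:

```python
# Median POD 0 opioid requirement in morphine milligram equivalents (MME),
# as reported for the pregabalin and control groups.
pregabalin_mme = 318
control_mme = 458

# Relative reduction of the pregabalin median versus the control median.
relative_reduction = (control_mme - pregabalin_mme) / control_mme
print(f"{relative_reduction:.1%}")  # matches the reported 30.6% reduction
```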
Subjects
Opioid Analgesics, Off-Pump Coronary Artery Bypass, Humans, Off-Pump Coronary Artery Bypass/adverse effects, Postoperative Pain/diagnosis, Postoperative Pain/drug therapy, Postoperative Pain/prevention & control, Pregabalin
ABSTRACT
COVID-19 led to unprecedented lockdowns and changes in older adults' lives, especially for those with type 2 diabetes, who are at high risk of complications and mortality. We investigated the associations of cognitive and motor function and gray matter volumes (GMVs) with COVID-19 lockdown-related emotional distress in older adults with type 2 diabetes participating in the Israel Diabetes and Cognitive Decline Study. We administered a questionnaire to obtain information about anxiety, depression, general well-being, and optimism during a mandated lockdown. Lower grip strength before lockdown was associated with increased sadness, increased anxiety, and less optimism. Slower gait speed was associated with greater sadness. Lower GMV was related to greater anxiety during the lockdown when compared with anxiety levels before the COVID-19 outbreak. Yet global cognition was not associated with any emotional distress measure. These results support the role of good motor function in emotional well-being during acute stress, with GMV as a potential underlying mechanism.
Subjects
COVID-19, Diabetes Mellitus Type 2, Psychological Distress, Humans, Aged, Quarantine/psychology, SARS-CoV-2, Depression/psychology, Communicable Disease Control, Anxiety/psychology, Brain
ABSTRACT
OBJECTIVES: To describe the trend in plasma renin activity over time in patients undergoing cardiac surgery on cardiopulmonary bypass, and to investigate whether increased plasma renin activity is associated with post-cardiopulmonary bypass vasoplegia. DESIGN: A prospective cohort study. SETTING: Patients were enrolled from June 2020 to May 2021 at a tertiary cardiac surgical institution. PATIENTS: A cohort of 100 adult patients undergoing cardiac surgery on cardiopulmonary bypass. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Plasma renin activity was measured at 5 time points: baseline, postoperatively, and at midnight on postoperative days 1, 2, and 3. Plasma renin activity and delta plasma renin activity were correlated with the incidence of vasoplegia and clinical outcomes. The median plasma renin activity increased approximately threefold from baseline immediately after cardiac surgery, remained elevated on postoperative days 0, 1, and 2, and began to downtrend on postoperative day 3. Plasma renin activity was approximately three times higher at all measured time points in patients who developed vasoplegia versus those who did not. CONCLUSIONS: In patients undergoing cardiac surgery on cardiopulmonary bypass, plasma renin activity increased postoperatively and remained elevated through postoperative day 2. Additionally, patients with vasoplegic syndrome after cardiac surgery on cardiopulmonary bypass had more robust elevations in plasma renin activity than nonvasoplegic patients. These findings support the need for randomized controlled trials to determine whether patients undergoing cardiac surgery with high plasma renin activity may benefit from targeted treatment with therapies such as synthetic angiotensin II.
Subjects
Cardiac Surgical Procedures, Vasoplegia, Adult, Humans, Vasoplegia/epidemiology, Vasoplegia/etiology, Vasoplegia/drug therapy, Renin/therapeutic use, Cardiopulmonary Bypass/adverse effects, Prospective Studies, Cardiac Surgical Procedures/adverse effects
ABSTRACT
BACKGROUND: Circumcision is a common surgical procedure performed in pediatric male patients. Ketorolac is an effective adjunct in multimodal regimens for postoperative pain control. However, many urologists and anesthesiologists refrain from administering ketorolac due to concern for postoperative bleeding. AIMS: To compare the risk of clinically significant bleeding after circumcision with and without intraoperative ketorolac administration. METHODS: A single-center, retrospective cohort study was conducted of pediatric patients 1-18 years of age who underwent isolated circumcision by one urologist from 2016 to 2020. Clinically significant bleeding was defined as bleeding requiring intervention within the first 24 h after circumcision. Interventions included use of absorbable hemostats, placement of sutures, or return to the operating room. RESULTS: Of 743 patients, 314 (42.3%) did not receive ketorolac and 429 (57.7%) received intraoperative ketorolac 0.5 mg/kg. Postoperative bleeding requiring intervention occurred in one patient (0.32%) in the non-ketorolac group versus four patients (0.93%) in the ketorolac group (difference 0.6%, 95% CI [-0.8%, 2.0%], p = 0.403). CONCLUSIONS: There was no statistically significant difference in postoperative bleeding requiring intervention between the non-ketorolac and ketorolac groups. Further studies of the association between ketorolac and postoperative bleeding are needed.
Subjects
Non-Steroidal Anti-Inflammatory Agents, Ketorolac, Child, Humans, Male, Ketorolac/therapeutic use, Non-Steroidal Anti-Inflammatory Agents/therapeutic use, Retrospective Studies, Postoperative Pain/drug therapy, Postoperative Hemorrhage/chemically induced, Postoperative Hemorrhage/epidemiology
ABSTRACT
BACKGROUND: A multidisciplinary Quality Assurance/Performance Improvement study to identify the incidence of "heparin rebound" in our adult cardiac surgical population instead detected a thromboelastometry pattern suggestive of initial protamine overdose in 34% of patients, despite Hepcon-guided anticoagulation management. Analysis of our practice led to an intervention that made an additional lower-range Hepcon cartridge available to the perfusionists. METHODS: One year later, an IRB-approved retrospective study was conducted in >500 patients to analyze the effects of the intervention, specifically focusing on the accuracy of the initial protamine dose and 18-h mediastinal chest tube drainage (MCTd). RESULTS: No differences were observed between groups in demographics, surgical procedures, duration of CPB, or perioperative blood product transfusion. Both groups were managed using the same perfusion and anesthesia equipment, strategies, and protocols. The median initial protamine dose decreased by 19% (p < .001) in the intervention group (170 [IQR 140-220] mg; n = 295) versus the control group (210 [180-250] mg; n = 257). Mean 18-h MCTd decreased by 13% (p < .001) in the intervention group (405.15 ± 231.54 mL; n = 295) versus the control group (466.13 ± 286.73 mL; n = 257). A covariate-adjusted mixed-effects model showed a significant reduction of MCTd in the intervention group, starting from hour 11 after surgery (group-by-time interaction p = .002). CONCLUSION: Though previous investigators have associated lower protamine doses with less MCTd, this study demonstrates that more accurately matching the initial protamine dose to the remaining circulating heparin concentration reduces postoperative bleeding.
ABSTRACT
Optimal perioperative care contributes to improved patient outcomes, as demonstrated in the field of liver transplant (LT). The evolution in perioperative care over the past two decades has been driven by research in areas such as preoperative testing, coagulation management, and intraoperative monitoring. However, much of this research is driven by local institutional pressures and practices, with a dearth of studies emanating from research consortia or other groups of experts within the field. To better characterize the top research questions in the field, we queried a group of 128 LT anesthesiologists representing 87 international liver transplant centers, with responses from 71 practitioners (59.2%). Three experts then codified the responses into the top 20 questions, which were sent to the survey recipients in a second survey for rank ordering. Seventy-five respondents (61.5%) provided responses, which were merged into a weighted, ranked priority list and analyzed by respondent location and center size. The highest-ranked question was, "What intraoperative anesthetic management/interventions affect graft outcome?" Most of the top research questions focused on preoperative risk factor management or optimization and intraoperative management techniques. In general, this priority list may serve as a guide for transplant anesthesiology researchers to focus future research endeavors on shared interests that improve patient care.
Subjects
Anesthesiology, Liver Transplantation, Anesthesiologists, Anesthesiology/methods, Humans, Research, Surveys and Questionnaires
ABSTRACT
BACKGROUND: Depression is highly prevalent among haemodialysis (HD) patients and is known to result in a series of adverse outcomes and poor quality of life (QoL). Although cognitive behavioural therapy (CBT) has been shown to improve depressive symptoms and QoL in other chronic illnesses, there is uncertainty about the effectiveness of CBT in HD patients with depression or depressive symptoms. METHODS: All randomised controlled trials relevant to the topic were retrieved from the following databases: CINAHL, MEDLINE, PubMed, PsycINFO and CENTRAL. The grey literature, specific journals, reference lists of included studies and trials register websites were also searched. Data were extracted or calculated from included studies that had measured depression and quality of life using valid and reliable tools; this included mean differences or standardised mean differences and 95% confidence intervals. The Cochrane risk of bias tool was used to assess the methodological quality of the included studies. RESULTS: Six RCTs were included, with varying methodological quality. Meta-analysis was undertaken for three studies that compared CBT with usual care. All studies showed that depressive symptoms improved significantly after CBT. Furthermore, CBT was more effective than usual care (MD = -5.28, 95% CI -7.9 to -2.65, P = 0.37) and counselling (MD = -2.39, 95% CI -3.49 to -1.29), while less effective than sertraline (MD = 2.2, 95% CI 0.43 to 3.97) in alleviating depressive symptoms. Additionally, CBT seems to have a beneficial effect on QoL when compared with usual care, while no significant difference in QoL score was found when comparing CBT with sertraline. CONCLUSIONS: CBT may improve depressive symptoms and QoL in HD patients with comorbid depressive symptoms. However, more rigorous studies are needed in this field, given the small number and varied methodological quality of the identified studies.
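The pooled mean differences above come from standard inverse-variance meta-analysis. A minimal fixed-effect sketch (illustrative only; whether the review used a fixed- or random-effects model is not stated in the abstract, and the function name is an assumption):

```python
import math

def pooled_mean_difference(mds, ses):
    """Fixed-effect inverse-variance pooling.

    mds: per-study mean differences
    ses: corresponding standard errors
    Returns the pooled MD and its 95% confidence interval.
    """
    weights = [1.0 / se ** 2 for se in ses]  # inverse-variance weights
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

With a single study, the pooled estimate reduces to that study's own MD and CI; additional studies pull the estimate toward the more precise (smaller-SE) results.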
Subjects
Cognitive Behavioral Therapy, Quality of Life, Depression/therapy, Humans, Maintenance, Renal Dialysis
ABSTRACT
STUDY OBJECTIVE: To determine the association of practitioner dashboard feedback on intraoperative glycemic and temperature control with maintenance of normoglycemia and normothermia. DESIGN: Retrospective review. SETTING: Single tertiary care institution. PATIENTS: Patients over the age of 18 undergoing cardiac surgery from February 17, 2021 through February 16, 2023. During the study interval, 15 anesthesiologists providing care during 2255 procedures were analyzed: 1114 procedures prior to individual faculty dashboard distribution and 1141 after commencement of dashboard distribution. INTERVENTIONS: On February 17, 2022, anesthesia faculty members began receiving monthly individualized dashboards indicating their personal intraoperative glycemic and temperature compliance rates. MEASUREMENTS: Baseline patient demographic characteristics, surgical and cardiopulmonary bypass times, perioperative temperature and glucose concentrations, and the incidence of sternal wound infections. Glycemic compliance was defined as a final serum glucose between 80 and 180 mg/dL. Temperature compliance was defined as an average temperature during the final 30 min of the surgical procedure between 35 and 37.3 °C inclusive. MAIN RESULTS: Dashboard distribution was associated with a significant decrease in the average glucose concentration (median location shift of -6 mg/dL [95% confidence interval (CI) -8, -4]; p < 0.001), from 157 mg/dL to 152 mg/dL, and in the final glucose concentration (median location shift of -17 mg/dL [95% CI -19, -14]; p < 0.001), from 161 mg/dL to 145 mg/dL. The intervention was associated with an improvement in glycemic compliance from 71.4% to 87.1% (odds ratio [OR] 2.71 [95% CI 2.19, 3.37]; p < 0.001). There were no significant differences in final temperature (36.3 °C [Q1, Q3: 36.0, 36.6] vs. 36.3 °C [Q1, Q3: 36.0, 36.7]; p = 0.232) with the intervention, nor were there any statistically significant differences in temperature compliance (93.9% vs. 92.9%; OR 0.79 [95% CI 0.55-1.14]; p = 0.25). There were no statistically significant changes in the incidence of superficial, deep, or any wound infections with the intervention. CONCLUSIONS: Individualized practitioner dashboard distribution may be an effective tool to increase intraoperative glycemic control.
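The unadjusted odds ratio for glycemic compliance can be reproduced from the reported rates; a back-of-envelope check (not the study's adjusted model):

```python
def odds(p):
    """Convert a proportion to odds."""
    return p / (1.0 - p)

# Glycemic compliance rates before and after dashboard distribution.
pre, post = 0.714, 0.871
unadjusted_or = odds(post) / odds(pre)
print(round(unadjusted_or, 2))  # close to the reported OR of 2.71
```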
Subjects
Blood Glucose, Body Temperature, Cardiac Surgical Procedures, Humans, Retrospective Studies, Cardiac Surgical Procedures/adverse effects, Female, Male, Middle Aged, Aged, Blood Glucose/analysis, Surgical Wound Infection/prevention & control, Surgical Wound Infection/epidemiology, Glycemic Control/methods, Anesthesiologists/statistics & numerical data, Feedback, Intraoperative Monitoring/methods
ABSTRACT
Background/Objectives: The use of neuraxial anesthesia versus general anesthesia for hip fracture surgery remains an active area of research, with recent studies demonstrating mixed findings supporting neuraxial over general anesthesia. The benefits of neuraxial anesthesia have been documented in related surgeries, including total joint arthroplasty. However, racial disparities in the administration of neuraxial anesthesia have been identified in numerous procedures. We aimed to examine the association of race/ethnicity with neuraxial anesthesia use and the effect of neuraxial anesthesia on length of stay, non-home discharge, 30-day severe adverse events, and rates of readmission among patients undergoing isolated hip and femoral shaft fracture operations. Methods: The American College of Surgeons National Surgical Quality Improvement Program database was queried for isolated hip or femoral shaft fractures from 2015 to 2019. Stepwise logistic regression was performed to assess the relationship between race/ethnicity and neuraxial anesthesia use. Within each sex-race stratum, neuraxial anesthesia recipients were propensity-matched to general anesthesia recipients in a 1:2 ratio. Logistic regression and negative binomial regression were performed on the propensity-matched cohort. Results: A total of 12,004 neuraxial and 64,250 general anesthesia hip and femoral shaft fracture patients were identified. Compared with White patients, Black and Hispanic patients were less likely to receive neuraxial anesthesia rather than general anesthesia (odds ratios of 0.64 and 0.61, respectively; p < 0.05). A total of 11,993 patients who received neuraxial anesthesia were propensity-matched to 23,946 patients who received general anesthesia.
Propensity-matched logistic regressions found that neuraxial anesthesia was associated with decreased length of stay, fewer 30-day severe adverse events, and less acute rehab/skilled nursing facility discharge for White patients (p < 0.05 for all), but only with decreased length of stay in Black and Hispanic patients (p = 0.01 and p = 0.02, respectively). Conclusions: Notable disparities exist in the administration of neuraxial anesthesia for isolated hip and femoral shaft fracture patients. Hispanic and Black race/ethnicity in particular influence the provision of neuraxial anesthesia. Further research is required to understand the degree of effect modification and the root causes of disparities in regional anesthesia access and benefits for this high-volume patient population.
ABSTRACT
Background: The risk of developing a persistent reduction in renal function after postoperative acute kidney injury (pAKI) is not well established. Objective: To perform a multi-center retrospective propensity-matched study evaluating whether patients who develop pAKI have a greater decline in long-term renal function than patients who do not. Design: Multi-center retrospective propensity-matched study. Setting: Anesthesia data warehouses at three tertiary care hospitals were queried. Patients: Adult patients undergoing surgery with available preoperative and postoperative creatinine results and without baseline hemodialysis requirements. Measurements: The primary outcome was a decline in follow-up glomerular filtration rate (GFR) of 40% relative to baseline, based on follow-up outpatient visits from 0 to 36 months after hospital discharge. A propensity score-matched sample was used in Kaplan-Meier analysis and in a piecewise Cox model to compare time to first 40% decline in GFR for patients with and without pAKI. Results: A total of 95,208 patients were included. The rate of pAKI ranged from 9.9% to 13.7% across institutions. In the piecewise Cox model, pAKI significantly increased the hazard of a 40% decline in GFR. The common-effect hazard ratio was 13.35 (95% CI: 10.79 to 16.51, p < 0.001) for 0-6 months, 7.07 (5.52 to 9.05, p < 0.001) for 6-12 months, 6.02 (4.69 to 7.74, p < 0.001) for 12-24 months, and 4.32 (2.65 to 7.05, p < 0.001) for 24-36 months. Limitations: Retrospective design; patients undergoing ambulatory surgery without postoperative lab tests drawn before discharge were not captured; certain variables, such as postoperative urine output, were not reliably available. Conclusion: Postoperative AKI significantly increases the risk of a 40% decline in GFR for up to 36 months after the index surgery across three institutions.
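The primary outcome is a simple relative threshold on follow-up GFR. A sketch of that outcome definition (function names are illustrative, not from the study):

```python
def relative_gfr_decline(baseline_gfr, followup_gfr):
    """Fractional decline in GFR relative to baseline (positive = decline)."""
    return (baseline_gfr - followup_gfr) / baseline_gfr

def meets_primary_outcome(baseline_gfr, followup_gfr, threshold=0.40):
    """True when follow-up GFR has fallen by at least `threshold` (40%)."""
    return relative_gfr_decline(baseline_gfr, followup_gfr) >= threshold
```

In the time-to-event analysis, the first follow-up visit at which this condition holds would mark the event time used by the Kaplan-Meier and piecewise Cox models.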
ABSTRACT
Malnutrition is a frequently underdiagnosed condition leading to increased morbidity, mortality, and healthcare costs. The Mount Sinai Health System (MSHS) deployed a machine learning model (MUST-Plus) to detect malnutrition upon hospital admission. However, in diverse patient groups, a poorly calibrated model may lead to misdiagnosis, exacerbating healthcare disparities. We explored the model's calibration across different variables and evaluated methods to improve it. Data from adult patients admitted to five MSHS hospitals from January 1, 2021 to December 31, 2022 were analyzed. We compared the MUST-Plus prediction to the registered dietitian's formal assessment. Hierarchical calibration was assessed and compared between the recalibration sample (N = 49,562) of patients admitted between January 1, 2021 and December 31, 2022, and the hold-out sample (N = 17,278) of patients admitted between January 1, 2023 and September 30, 2023. Statistical differences in calibration metrics were tested using bootstrapping with replacement. Before recalibration, the overall model calibration intercept was -1.17 (95% CI: -1.20, -1.14), the slope was 1.37 (95% CI: 1.34, 1.40), and the Brier score was 0.26 (95% CI: 0.25, 0.26). Both weak and moderate measures of calibration differed significantly between White and Black patients and between male and female patients. Logistic recalibration significantly improved the calibration of the model across race and gender in the hold-out sample. The original MUST-Plus model showed significant differences in calibration between White and Black patients. It also overestimated malnutrition in females compared to males. Logistic recalibration effectively reduced miscalibration across all patient subgroups. Continual monitoring and timely recalibration can improve model accuracy.
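Logistic recalibration of the kind described here fits a logistic model of the observed outcome on the logit of the original model's predicted probability: the fitted intercept and slope serve as weak calibration metrics (near 0 and 1 for a well-calibrated model), and the fitted model maps raw predictions to recalibrated ones. A minimal sketch, assuming scikit-learn and not reflecting the actual MUST-Plus implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def logit(p):
    return np.log(p / (1.0 - p))

def fit_logistic_recalibration(y_true, p_pred):
    """Fit outcome ~ logit(prediction); return (recalibrate, intercept, slope)."""
    x = logit(np.asarray(p_pred)).reshape(-1, 1)
    model = LogisticRegression(C=1e6)  # large C ~ effectively unpenalized
    model.fit(x, np.asarray(y_true))

    def recalibrate(p_new):
        # Map raw predicted probabilities through the fitted recalibration model.
        x_new = logit(np.asarray(p_new)).reshape(-1, 1)
        return model.predict_proba(x_new)[:, 1]

    return recalibrate, model.intercept_[0], model.coef_[0][0]

def brier_score(y_true, p_pred):
    """Mean squared error between predicted probability and binary outcome."""
    return float(np.mean((np.asarray(p_pred) - np.asarray(y_true)) ** 2))
```

Fitting this model separately within subgroups (or checking its intercept and slope per subgroup) is one way to surface the race- and sex-specific miscalibration the study reports.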
ABSTRACT
Providing adequate analgesia perioperatively during subcutaneous implantable cardioverter-defibrillator (S-ICD) implantation can be a challenge. The objective of our study was to assess the efficacy and safety of the erector spinae plane (ESP) block technique in providing analgesia and minimizing the risk of opioid use in high-risk patient populations. We prospectively enrolled consecutive patients >18 years of age undergoing S-ICD implantation from February 2020 to February 2022 at our center. Patients were randomly assigned to receive the ESP block or traditional wound infiltration. A total of 24 patients were enrolled, including 13 patients randomized to the ESP block and 11 controls who received only wound infiltration. The primary outcome assessed was the overall use of perioperative analgesic medications in the ESP block group versus the surgical wound infiltration group. A significant reduction in intraoperative fentanyl use (median [interquartile range]) was observed in the ESP block group (0 [0-50] µg) compared to the wound infiltration group (75 [50-100] µg) (P = .001). Overall postoperative day (POD) 0 fentanyl use was also significantly decreased in the ESP block group (75 [50-100] µg) compared to the surgical wound infiltration group (100 [87.5-150] µg) (P = .049). There was also a trend toward decreased POD 0 oxycodone-acetaminophen use. Finally, the number of days to discharge was lower in the ESP block group. These results indicate that the ESP block is an innovative, safe, and effective technique that decreases intraoperative and postoperative opioid consumption and may be a useful adjunct pain-management technique in these high-risk patients. Larger studies are needed to further validate its use.
ABSTRACT
Introduction: Depression and its components significantly impact dementia prediction and severity, necessitating reliable objective measures for quantification. Methods: We investigated associations between emotion-based speech measures (valence, arousal, and dominance) during picture descriptions and depression dimensions derived from the Geriatric Depression Scale (GDS): dysphoria, withdrawal-apathy-vigor (WAV), anxiety, hopelessness, and subjective memory complaint. Results: Higher WAV was associated with more negative valence (estimate = -0.133, p = 0.030). While interactions of apolipoprotein E (APOE) 4 status with depression dimensions on emotional valence did not reach significance, there was a trend toward more negative valence with higher dysphoria in those with at least one APOE4 allele (estimate = -0.404, p = 0.0846). Associations were similar irrespective of dementia severity. Discussion: Our study underscores the potential utility of speech biomarkers in characterizing depression dimensions. In future research, using emotionally charged stimuli may enhance the elicitation of emotional measures. The role of APOE in the interaction of speech markers and depression dimensions warrants further exploration with greater sample sizes. Highlights: Participants reporting higher apathy used more negative words to describe a neutral picture. Those with higher dysphoria and at least one APOE4 allele also tended to use more negative words. Our results suggest the potential use of speech biomarkers in characterizing depression dimensions.
RESUMO
INTRODUCTION: Racial disparities exist in maternal and neonatal care, including breastfeeding (BF). The purpose of this study was to assess factors associated with BF success by race, with a specific focus on the pre-birth BF plan and the time from birth to initiation of skin-to-skin contact and from birth to the first feed or breastfeed. METHODS: A database query of our electronic medical records was performed for all patients who had a vaginal delivery and met our study criteria. Demographic information, pre-delivery feeding plan (exclusive BF, exclusive formula, or mixed), time to first feed and first breastfeed, and time to skin-to-skin contact were compared among postpartum feeding practices (exclusive BF, exclusive formula, mixed) and across race/ethnic groups using ANOVA, Chi-square, and Fisher's exact tests as appropriate. Logistic regression was used to investigate the independent effect of each variable on exclusive BF. RESULTS: The study analyzed 12,578 deliveries. There was a significant difference in intended feeding plans among the racial groups: approximately 61% of Black patients intended to exclusively BF, compared to 79% of the other groups. Overall, 3994 (32%) patients breastfed exclusively, 872 (7%) exclusively used formula, and 7712 (61%) used a mix of breast and formula. White patients were most likely to exclusively BF (35%) and Black patients least likely (21%), p < 0.001. Our model found that self-identified race and pre-delivery feeding plan were the strongest predictors of exclusive BF. CONCLUSIONS: The main findings of this study are that self-identified race and intention to BF are the strongest predictors of exclusive BF. Black patients intend to BF at a significantly lower rate than other racial groups, for reasons not determined by this study, and this affects feeding practice. Our findings are notable because prehospital intention to BF can be modified by outreach, education, and changes to in-hospital practices.
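The logistic-regression step named in the methods above can be sketched on synthetic data. Everything below is a hypothetical illustration, not the study's data or model: the two predictors (BF intention and time to skin-to-skin contact), their coefficients, and the sample size are assumptions, and the fit uses plain Newton-Raphson rather than a statistics package.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical predictors: intention to exclusively BF (binary) and
# time from birth to skin-to-skin contact (hours, exponential draw).
intent = rng.integers(0, 2, n).astype(float)
skin_to_skin_hr = rng.exponential(1.0, n)
X = np.column_stack([np.ones(n), intent, skin_to_skin_hr])
true_beta = np.array([-1.5, 2.0, -0.3])  # assumed effects, for simulation only
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p).astype(float)  # 1 = exclusive BF

# Fit by Newton-Raphson (iteratively reweighted least squares).
beta = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    w = mu * (1.0 - mu)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * w[:, None])
    beta += np.linalg.solve(hess, grad)

odds_ratios = np.exp(beta[1:])  # adjusted odds ratio per predictor
```

In this simulated setup the fitted odds ratio for intention lands well above 1, mirroring how the study reports intention as a strong independent predictor of exclusive BF.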
ABSTRACT
BACKGROUND: We hypothesized that chronic opioid users would have worse outcomes with COVID-19 infection. METHODS: A retrospective review of electronic medical records was conducted for all COVID-19 patients admitted to two large academic hospitals in New York City from March 1, 2020 to June 30, 2020, during the onset of the COVID-19 pandemic. A total of 1,361 patients (1,289 opioid-naïve patients, 72 with chronic opioid use) were included. Propensity score matching was used to create a matched dataset. Logistic regression using the generalized estimating equations method was used to evaluate oxygen requirements, including bilevel positive airway pressure (BiPAP), high-flow nasal cannula (HFNC), and mechanical ventilation (MV). Cox models with random matched pairs were fitted for time to hospital discharge and in-hospital mortality. RESULTS: The propensity score matched analysis did not demonstrate a significant difference between the chronic opioid use group and the opioid-naïve group in the use of oxygen support (p = 0.439), BiPAP (p = 0.377), HFNC (p = 0.978), or MV (p = 0.080), or in length of stay (LOS) (p = 0.950). Trends toward a reduced need for MV (odds ratio 0.42, 95% CI: 0.16-1.11, p = 0.080) and lower in-hospital mortality (hazard ratio 0.75, 95% CI: 0.39-1.43, p = 0.378) in the chronic opioid use group did not reach statistical significance; larger studies will be needed. CONCLUSIONS: Our study did not demonstrate a significant difference in oxygen requirements, LOS, MV, or mortality between COVID-19 patients with preadmission chronic opioid use and opioid-naïve patients. Future studies are needed to further illustrate the impact of opioids on COVID-19 outcomes.
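The 1:1 propensity-score matching idea used above can be sketched as follows. The abstract does not specify the propensity model, matching algorithm, or caliper, so the simulated score distributions, the greedy nearest-neighbor strategy, and the 0.05 caliper below are all illustrative assumptions; only the group sizes (72 chronic opioid users, 1,289 opioid-naïve controls) come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated propensity scores for 72 "treated" (chronic opioid use)
# and 1,289 control (opioid-naïve) patients; distributions are invented.
ps_treated = rng.beta(2.0, 5.0, 72)
ps_control = rng.beta(1.5, 6.0, 1289)

def greedy_match(ps_t, ps_c, caliper=0.05):
    """1:1 greedy nearest-neighbor matching on the propensity score,
    discarding treated patients with no control within the caliper."""
    available = np.ones(len(ps_c), dtype=bool)
    pairs = []
    for i, p in enumerate(ps_t):
        dist = np.abs(ps_c - p)
        dist[~available] = np.inf  # each control may be used at most once
        j = int(np.argmin(dist))
        if dist[j] <= caliper:
            pairs.append((i, j))
            available[j] = False
    return pairs

pairs = greedy_match(ps_treated, ps_control)
```

Outcome comparisons (oxygen support, MV, LOS, mortality) would then be run on the matched pairs only, which is what makes the groups comparable on the modeled covariates.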
Subjects
Opioid Analgesics, COVID-19, Humans, Retrospective Studies, Opioid Analgesics/adverse effects, Pandemics, Oxygen
ABSTRACT
STUDY OBJECTIVE: Increased regulatory requirements for sterilization in recent years have prompted a widespread transition from reusable to single-use laryngoscopes. The purpose of this study was to determine whether the transition from metallic reusable to metallic single-use laryngoscopes affected the performance of direct laryngoscopy at an academic medical center. DESIGN: Single-site retrospective cohort study. SETTING: General anesthetic cases requiring tracheal intubation. PATIENTS: Adult patients undergoing non-emergent procedures. INTERVENTIONS: Data were collected two years before and two years after a transition from metallic reusable to metallic single-use laryngoscopes. MEASUREMENTS: The primary outcome was the need for intubation rescue with an alternate device. Secondary outcomes were difficult laryngeal view (modified Cormack-Lehane grade ≥ 2b) and hypoxemia (SpO2 < 90% for > 30 s) during direct laryngoscopy intubations. Subgroup analyses were performed for rapid sequence induction, Macintosh blades, Miller blades, and patients with difficult airway risk factors (obstructive sleep apnea, Mallampati ≥ 3, body mass index > 30 kg/m²). MAIN RESULTS: In total, 72,672 patients were included: 35,549 (48.9%) in the reusable laryngoscope cohort and 37,123 (51.1%) in the single-use laryngoscope cohort. Compared with reusable laryngoscopes, single-use laryngoscopes were associated with fewer rescue intubations with an alternate device (covariate-adjusted odds ratio [OR] 0.81; 95% CI 0.66-0.99). Single-use laryngoscopes were also associated with lower odds of difficult laryngeal view (OR 0.86; 95% CI 0.80-0.93). Single-use laryngoscopes were not associated with hypoxemia during the intubation attempt (OR 1.03; 95% CI 0.88-1.20). Similar results were observed in subgroup analyses, including rapid sequence induction, Macintosh blades, Miller blades, and patients with difficult airway risk factors.
CONCLUSIONS: Metallic single-use laryngoscopes were associated with a reduced need for rescue intubation with alternate devices and a lower incidence of difficult laryngeal view compared with reusable metallic laryngoscopes.
Subjects
Laryngoscopes, Adult, Humans, Laryngoscopes/adverse effects, Retrospective Studies, Laryngoscopy/methods, Intubation, Intratracheal/methods, Hypoxia/epidemiology, Hypoxia/etiology, Equipment Design
ABSTRACT
INTRODUCTION: Serum albumin's association with liver transplant outcomes has been investigated with mixed findings. This study aimed to evaluate perioperative albumin level, independently and as part of the albumin-bilirubin (ALBI) grade, as a predictor of post-liver transplant hospital and intensive care unit (ICU) length of stay (LOS). METHODS: Adult liver-only transplant recipients at our institution from September 2011 to May 2019 were included in this retrospective study. Repeat transplants were excluded. Demographic, laboratory, and hospital course data were extracted from an institutional data warehouse. Negative binomial regression was used to assess the association of LOS with ALBI grade, age, BMI, ASA score, Elixhauser comorbidity index, MELD-Na, warm ischemia time, units of platelets and cryoprecipitate transfused, and preoperative serum albumin. RESULTS: Six hundred sixty-three liver transplant recipients met inclusion criteria. The median preoperative serum albumin was 3.1 [2.6-3.6] g/dL. The median postoperative ICU and hospital LOS were 3.8 [2.4-6.8] and 12 [8-20] days, respectively. Preoperative serum albumin predicted hospital but not ICU LOS (ratio 0.9 [95% confidence interval (CI) 0.84-0.99], P = .03, for hospital LOS vs ratio 0.92 [95% CI 0.84-1.02], P = .10, for ICU LOS). For patients with MELD-Na ≤ 20, ALBI grade 3 predicted longer hospital and ICU LOS (ratio 1.40 [95% CI 1.18-1.66], P < .001, for hospital LOS vs ratio 1.62 [95% CI 1.32-1.99], P < .001, for ICU LOS). These associations were not significant for patients with MELD-Na > 20. CONCLUSIONS: Serum albumin predicted post-liver transplant hospital LOS. ALBI grade 3 predicted increased hospital and ICU LOS in low-MELD-Na recipients.
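The count-regression idea behind the LOS ratios above can be sketched as follows. Note the substitution: the study fit a negative binomial model, while for brevity this sketch fits the simpler log-link Poisson model (negative binomial adds an overdispersion parameter but yields rate ratios with the same interpretation). The covariates, effect sizes, and data are simulated assumptions; only the cohort size (663) echoes the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 663  # cohort size from the abstract, used only to size the simulation
albi3 = rng.integers(0, 2, n).astype(float)   # hypothetical ALBI grade-3 flag
meld_na = rng.normal(18.0, 6.0, n)            # hypothetical MELD-Na scores
X = np.column_stack([np.ones(n), albi3, (meld_na - 18.0) / 6.0])
# Assumed effects: baseline LOS ~12 days, ALBI grade 3 multiplies LOS by 1.4.
true_beta = np.array([np.log(12.0), np.log(1.4), 0.1])
y = rng.poisson(np.exp(X @ true_beta)).astype(float)  # simulated LOS in days

# Newton-Raphson for the log-link Poisson count model.
beta = np.zeros(3)
beta[0] = np.log(y.mean())  # sensible intercept start
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * mu[:, None])
    beta += np.linalg.solve(hess, grad)

los_ratio_albi3 = np.exp(beta[1])  # multiplicative effect on expected LOS
```

A fitted ratio of 1.40 for ALBI grade 3 reads as a 40% longer expected hospital LOS, which is exactly how the abstract's ratios should be interpreted.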