Results 1 - 20 of 47
1.
Circulation ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38860364

ABSTRACT

BACKGROUND: The majority of out-of-hospital cardiac arrests (OHCAs) occur among individuals in the general population, for whom there is no established strategy to identify risk. In this study, we assess the use of electronic health record (EHR) data to identify OHCA in the general population and define salient factors contributing to OHCA risk. METHODS: The analytical cohort included 2366 individuals with OHCA and 23 660 age- and sex-matched controls receiving health care at the University of Washington. Comorbidities, electrocardiographic measures, vital signs, and medication prescriptions were abstracted from the EHR. The primary outcome was OHCA. Secondary outcomes included shockable and nonshockable OHCA. Model performance, including area under the receiver operating characteristic curve and positive predictive value, was assessed and adjusted for the observed rate of OHCA across the health system. RESULTS: There were significant differences in demographic characteristics, vital signs, electrocardiographic measures, comorbidities, and medication distribution between individuals with OHCA and controls. In external validation, discrimination in machine learning models (area under the receiver operating characteristic curve 0.80-0.85) was superior to a baseline model with conventional cardiovascular risk factors (area under the receiver operating characteristic curve 0.66). At a specificity threshold of 99%, correcting for baseline OHCA incidence across the health system, positive predictive value was 2.5% to 3.1% in machine learning models compared with 0.8% for the baseline model. Longer corrected QT interval, substance abuse disorder, fluid and electrolyte disorder, alcohol abuse, and higher heart rate were identified as salient predictors of OHCA risk across all machine learning models. 
Established cardiovascular risk factors retained predictive importance for shockable OHCA, but demographic characteristics (minority race, single marital status) and noncardiovascular comorbidities (substance abuse disorder) also contributed to risk prediction. For nonshockable OHCA, a range of salient predictors, including comorbidities, habits, vital signs, demographic characteristics, and electrocardiographic measures, were identified. CONCLUSIONS: In a population-based case-control study, machine learning models incorporating readily available EHR data showed reasonable discrimination and risk enrichment for OHCA in the general population. Salient factors associated with OHCA risk spanned the cardiovascular and noncardiovascular spectrum. Public health and tailored strategies for OHCA prediction and prevention will require incorporation of this complexity.
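The positive predictive values above are re-weighted by the health system's observed OHCA rate rather than the 1:10 matched sampling ratio. A minimal sketch of that correction (Bayes' rule at a fixed operating point; the sensitivity and baseline rate below are assumed, illustrative values, not figures from the study):

```python
def corrected_ppv(sensitivity, specificity, prevalence):
    """PPV at a fixed operating point, re-weighted by the true outcome
    prevalence rather than the matched case-control sampling ratio."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers only: the abstract's 99% specificity threshold,
# with an assumed 40% sensitivity and an assumed 0.1% baseline OHCA rate.
ppv = corrected_ppv(sensitivity=0.40, specificity=0.99, prevalence=0.001)
```

Because true positives scale with prevalence while false positives scale with its complement, a model that looks strong in a 1:10 matched sample yields a much smaller PPV at population incidence, which is why the corrected values are in the low single digits.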

2.
Medicina (Kaunas) ; 60(2)2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38399591

ABSTRACT

Background and Objectives: We analyzed delirium testing, delirium prevalence, and the associations of delirium with critical care and outcomes at the time of hospital discharge in patients with acute brain injury (ABI) due to acute ischemic stroke (AIS), non-traumatic subarachnoid hemorrhage (SAH), non-traumatic intraparenchymal hemorrhage (IPH), and traumatic brain injury (TBI) admitted to an intensive care unit. Materials and Methods: We examined the frequency of assessment for delirium using the Confusion Assessment Method for the intensive care unit. We assessed delirium testing frequency, associated factors, positive test outcomes, and their correlations with clinical care, including nonpharmacological interventions and pain, agitation, and distress management. Results: Amongst 11,322 patients with ABI, delirium was tested in 8220 (72.6%). Compared to patients 18-44 years of age, patients 65-79 years (aOR 0.79 [0.69, 0.90]) and those 80 years and older (aOR 0.58 [0.50, 0.68]) were less likely to undergo delirium testing. Compared to English-speaking patients, non-English-speaking patients (aOR 0.73 [0.64, 0.84]) were less likely to undergo delirium testing. Amongst the 8220 patients tested, 2217 (27.2%) tested positive for delirium. For every day in the ICU, the odds of testing positive for delirium increased (aOR 1.11 [1.10, 1.12]). Delirium prevalence was highest in those 80 years and older (aOR 3.18 [2.59, 3.90]). Delirium was associated with critical care resource utilization and with significantly increased odds of mortality (aOR 7.26 [6.07, 8.70]) at the time of hospital discharge. Conclusions: We find that seven out of ten patients in the neurocritical care unit are tested for delirium, and approximately one out of every four patients tested is positive for delirium. We demonstrate disparities in delirium testing by age and preferred language, identify high-risk subgroups, and describe the association between delirium and critical care resource use, complications, discharge GCS, and disposition. 
Prioritizing equitable testing and diagnosis, especially for elderly and non-English-speaking patients, is crucial for delivering quality care to this vulnerable group.
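A per-day odds ratio like the 1.11 reported above compounds multiplicatively over an ICU stay. A small sketch of turning that into a probability (the baseline odds value is an assumption for illustration, not a figure from the study):

```python
def delirium_probability(baseline_odds, daily_aor, icu_days):
    """Convert baseline odds plus a per-day odds ratio into a probability;
    a per-day aOR compounds multiplicatively over the stay."""
    odds = baseline_odds * daily_aor ** icu_days
    return odds / (1.0 + odds)

# The reported per-day aOR of 1.11 with an assumed baseline odds of 0.1.
p_day7 = delirium_probability(0.1, 1.11, 7)
```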


Subjects
Brain Injuries , Delirium , Ischemic Stroke , Humans , Aged , Delirium/diagnosis , Delirium/epidemiology , Delirium/etiology , Patient Discharge , Ischemic Stroke/complications , Critical Care , Intensive Care Units , Brain Injuries/complications , Hospitals
3.
BMC Anesthesiol ; 23(1): 296, 2023 09 04.
Article in English | MEDLINE | ID: mdl-37667258

ABSTRACT

BACKGROUND: Electronic health records (EHR) contain large volumes of unstructured free-form text notes that richly describe a patient's health and medical comorbidities. It is unclear if perioperative risk stratification can be performed directly from these notes without manual data extraction. We conduct a feasibility study using natural language processing (NLP) to predict the American Society of Anesthesiologists Physical Status Classification (ASA-PS) as a surrogate measure for perioperative risk. We explore prediction performance using four different model types and compare the use of different note sections versus the whole note. We use Shapley values to explain model predictions and analyze disagreement between model and human anesthesiologist predictions. METHODS: Single-center retrospective cohort analysis of EHR notes from patients undergoing procedures with anesthesia care spanning all procedural specialties during a 5-year period who were not assigned ASA VI and also had a preoperative evaluation note filed within 90 days prior to the procedure. NLP models were trained for each combination of 4 model types and 8 text snippets from notes. Model performance was compared using area under the receiver operating characteristic curve (AUROC) and area under the precision recall curve (AUPRC). Shapley values were used to explain model predictions. Error analysis and model explanation using Shapley values were conducted for the best performing model. RESULTS: The final dataset included 38,566 patients undergoing 61,503 procedures with anesthesia care. Prevalence of ASA-PS was 8.81% for ASA I, 31.4% for ASA II, 43.25% for ASA III, and 16.54% for ASA IV-V. The best performing models were the BioClinicalBERT model on the truncated note task (macro-average AUROC 0.845) and the fastText model on the full note task (macro-average AUROC 0.865). Shapley values reveal human-interpretable model predictions. 
Error analysis reveals that some original ASA-PS assignments may be incorrect and the model is making a reasonable prediction in these cases. CONCLUSIONS: Text classification models can accurately predict a patient's illness severity using only free-form text descriptions of patients without any manual data extraction. They can be an additional patient safety tool in the perioperative setting and reduce manual chart review for medical billing. Shapley feature attributions produce explanations that logically support model predictions and are understandable to clinicians.
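The macro-average AUROC reported above treats the multiclass ASA-PS problem as one-vs-rest and averages the per-class areas with equal weight. A stdlib-only sketch of that metric (an illustration of the averaging scheme, not the authors' evaluation code):

```python
def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auroc(score_matrix, labels, classes):
    """One-vs-rest AUROC per class, averaged with equal class weight,
    so rare classes (e.g. ASA I at 8.81%) count as much as common ones."""
    per_class = []
    for k, c in enumerate(classes):
        binary = [1 if y == c else 0 for y in labels]
        per_class.append(auroc([row[k] for row in score_matrix], binary))
    return sum(per_class) / len(per_class)
```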


Subjects
Anesthesia , Anesthesiologists , Humans , Natural Language Processing , Retrospective Studies , United States
4.
J Cardiothorac Vasc Anesth ; 37(3): 374-381, 2023 03.
Article in English | MEDLINE | ID: mdl-36528501

ABSTRACT

OBJECTIVES: The clinical significance of hypophosphatemia in cardiac surgery has not been investigated extensively. The aim of this study was to evaluate the association of postoperative hypophosphatemia and lactic acidosis in cardiac surgery patients at the time of intensive care unit (ICU) admission. DESIGN: A retrospective cohort study. SETTING: At a single academic center. PARTICIPANTS: Patients who underwent nontransplant cardiac surgery with cardiopulmonary bypass between August 2009 and December 2020. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Serum phosphate and lactate levels were measured upon ICU admission in patients undergoing nontransplant cardiac surgery with cardiopulmonary bypass. There were 681 patients in the low-phosphate (<2.5 mg/dL) group and 2,579 patients in the normal-phosphate (2.5-4.5 mg/dL) group. A higher proportion of patients in the low-phosphate group (26%; 179 of 681; 95% CI: 23-30) had severe lactic acidosis compared to patients in the normal-phosphate group (16%; 417 of 2,579; 95% CI: 15-18). In an unadjusted logistic regression model, patients in the low-phosphate group had 1.9 times the odds of having severe lactic acidosis (serum lactate ≥4.0 mmol/L) compared to patients in the normal-phosphate group (95% CI: 1.5-2.3), and still 1.4 times the odds (95% CI: 1.1-1.7) after adjusting for several possible confounders. CONCLUSIONS: Hypophosphatemia is associated with lactic acidosis in the immediate postoperative period in cardiac surgery patients. Future studies will need to investigate hypophosphatemia as a potential treatment target for lactic acidosis.


Subjects
Lactic Acidosis , Cardiac Surgical Procedures , Hypophosphatemia , Humans , Lactic Acidosis/diagnosis , Lactic Acidosis/epidemiology , Lactic Acidosis/etiology , Retrospective Studies , Cardiopulmonary Bypass/adverse effects , Cardiac Surgical Procedures/adverse effects , Hypophosphatemia/diagnosis , Hypophosphatemia/epidemiology , Hypophosphatemia/etiology , Phosphates , Lactates
5.
Stroke ; 53(3): 904-912, 2022 03.
Article in English | MEDLINE | ID: mdl-34732071

ABSTRACT

BACKGROUND: Inhalational anesthetics have been associated with reduced incidence of angiographic vasospasm and delayed cerebral ischemia (DCI) in patients with aneurysmal subarachnoid hemorrhage (SAH). Whether intravenous anesthetics provide a similar level of protection is not known. METHODS: Anesthetic data were collected retrospectively for patients with SAH who received general anesthesia for aneurysm repair between January 1, 2014 and May 31, 2018, at 2 academic centers in the United States (one employing primarily inhalational and the other primarily intravenous anesthesia with propofol). We compared the outcomes of angiographic vasospasm, DCI, and neurological outcome (measured by disposition at hospital discharge) between the 2 sites, adjusting for potential confounders. RESULTS: We compared 179 patients with SAH receiving inhalational anesthetics at one institution to 206 patients with SAH receiving intravenous anesthetics at the second institution. The rates of angiographic vasospasm between the inhalational versus intravenous anesthetic groups were 32% versus 52% (odds ratio, 0.49 [CI, 0.32-0.75]; P=0.001), and the rates of DCI were 21% versus 40% (odds ratio, 0.47 [CI, 0.29-0.74]; P=0.001), adjusting for imbalances between sites/groups, Hunt-Hess and Fisher grades, type of aneurysm treatment, and American Society of Anesthesiology status. No effect of anesthetic type on neurological outcome at the time of discharge was noted; rates of good discharge outcome were similar between the inhalational and intravenous anesthetic groups (78% versus 72%, P=0.23). CONCLUSIONS: Our data suggest that those who received inhalational versus intravenous anesthetics for ruptured aneurysm repair had significant protection against SAH-induced angiographic vasospasm and DCI. 
Although we cannot fully disentangle site-specific versus anesthetic effects in this comparative study, these results, when coupled with preclinical data demonstrating a similar protective effect of inhalational anesthetics on vasospasm and DCI, suggest that inhalational anesthetics may be preferable for patients with SAH undergoing aneurysm repair. Additional investigations examining the effect of inhalational anesthetics on other SAH outcomes such as early brain injury and long-term neurological outcomes are warranted.


Subjects
Anesthetics, Intravenous/therapeutic use , Brain Ischemia/prevention & control , Propofol/therapeutic use , Subarachnoid Hemorrhage/complications , Adult , Aged , Anesthetics, Intravenous/administration & dosage , Brain Ischemia/diagnostic imaging , Brain Ischemia/etiology , Cerebral Angiography , Female , Humans , Male , Middle Aged , Propofol/administration & dosage , Retrospective Studies , Subarachnoid Hemorrhage/diagnostic imaging
6.
Br J Anaesth ; 128(4): 623-635, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34924175

ABSTRACT

BACKGROUND: Postoperative hypotension is associated with adverse outcomes, but intraoperative prediction of postanaesthesia care unit (PACU) hypotension is not routine in anaesthesiology workflow. Although machine learning models may support clinician prediction of PACU hypotension, clinician acceptance of prediction models is poorly understood. METHODS: We developed a clinically informed gradient boosting machine learning model using preoperative and intraoperative data from 88 446 surgical patients from 2015 to 2019. Nine anaesthesiologists each made 192 predictions of PACU hypotension using a web-based visualisation tool with and without input from the machine learning model. Questionnaires and interviews were analysed using thematic content analysis for model acceptance by anaesthesiologists. RESULTS: The model predicted PACU hypotension in 17 029 patients (area under the receiver operating characteristic [AUROC] 0.82 [95% confidence interval {CI}: 0.81-0.83] and average precision 0.40 [95% CI: 0.38-0.42]). On a random representative subset of 192 cases, anaesthesiologist performance improved from AUROC 0.67 (95% CI: 0.60-0.73) to AUROC 0.74 (95% CI: 0.68-0.79) with model predictions and information on risk factors. Anaesthesiologists perceived more value and expressed trust in the prediction model for prospective planning, informing PACU handoffs, and drawing attention to unexpected cases of PACU hypotension, but they doubted the model when predictions and associated features were not aligned with clinical judgement. Anaesthesiologists expressed interest in patient-specific thresholds for defining and treating postoperative hypotension. CONCLUSIONS: The ability of anaesthesiologists to predict PACU hypotension was improved by exposure to machine learning model predictions. Clinicians acknowledged value and trust in machine learning technology. 
Increasing familiarity with clinical use of model predictions is needed for effective integration into perioperative workflows.
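The study above reports average precision alongside AUROC because, for a minority outcome like PACU hypotension, precision-recall summaries are more sensitive to class imbalance. A stdlib-only sketch of the metric (an illustration of the standard definition, not the authors' code):

```python
def average_precision(scores, labels):
    """Average precision: precision evaluated at the rank of each true
    positive, averaged over all positives (area under the PR curve)."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    true_pos, precisions = 0, []
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            true_pos += 1
            precisions.append(true_pos / rank)
    return sum(precisions) / len(precisions)
```

Unlike AUROC, this value falls toward the outcome prevalence as positives become rare, which is why a model with AUROC 0.82 can have average precision of only 0.40.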


Subjects
Hypotension , Postoperative Complications , Humans , Hypotension/diagnosis , Hypotension/etiology , Machine Learning , Prospective Studies , ROC Curve
7.
Anesth Analg ; 135(5): 957-966, 2022 11 01.
Article in English | MEDLINE | ID: mdl-35417420

ABSTRACT

BACKGROUND: Nonalcoholic fatty liver disease (NAFLD) can progress to advanced fibrosis, which, in the nonsurgical population, is associated with poor hepatic and extrahepatic outcomes. Despite its high prevalence, NAFLD and related liver fibrosis may be overlooked during the preoperative evaluation, and the role of liver fibrosis as an independent risk factor for surgery-related mortality has yet to be tested. The aim of this study was to assess whether the fibrosis-4 (FIB-4) score, a validated marker of liver fibrosis computed from age, aspartate aminotransferase (AST), alanine aminotransferase (ALT), and platelet count, is associated with postoperative mortality in the general surgical population. METHODS: A historical cohort of patients undergoing general anesthesia at an academic medical center between 2014 and 2018 was analyzed. Exclusion criteria included known liver disease, acute liver disease or hepatic failure, and alcohol use disorder. FIB-4 score was categorized into 3 validated predefined categories: FIB-4 ≤1.3, ruling out advanced fibrosis; >1.3 and <2.67, inconclusive; and ≥2.67, suggesting advanced fibrosis. The primary analytic method was propensity score matching (FIB-4 was dichotomized to indicate advanced fibrosis), and a secondary analysis included a multivariable logistic regression. RESULTS: Of 19,861 included subjects, 1995 (10%) had advanced fibrosis per FIB-4 criteria. Mortality occurred intraoperatively in 15 patients (0.1%), during hospitalization in 272 patients (1.4%), and within 30 days of surgery in 417 patients (2.1%). FIB-4 ≥2.67 was associated with increased intraoperative mortality (odds ratio [OR], 3.63; 95% confidence interval [CI], 1.25-10.58), mortality during hospitalization (OR, 3.14; 95% CI, 2.37-4.16), and mortality within 30 days from surgery (OR, 2.46; 95% CI, 1.95-3.10), after adjusting for other risk factors. 
FIB-4 was related to increased mortality in a dose-dependent manner across the 3 FIB-4 categories (≤1.3 [reference], >1.3 and <2.67, and ≥2.67, respectively): during hospitalization (OR, 1.89; 95% CI, 1.34-2.65 and OR, 4.70; 95% CI, 3.27-6.76) and within 30 days from surgery (OR, 1.77; 95% CI, 1.36-2.31 and OR, 3.55; 95% CI, 2.65-4.77). In a 1:1 propensity-matched sample (N = 1994 per group), the differences in mortality remained. Comparing the FIB-4 ≥2.67 versus the FIB-4 <2.67 groups, respectively, mortality during hospitalization was 5.1% vs 2.2% (OR, 2.70; 95% CI, 1.81-4.02), and 30-day mortality was 6.6% vs 3.4% (OR, 2.26; 95% CI, 1.62-3.14). CONCLUSIONS: A simple liver fibrosis marker is strongly associated with perioperative mortality in a population without apparent liver disease and may aid in future surgical risk stratification and preoperative optimization.
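The FIB-4 score used above has a widely published closed form: (age × AST) / (platelet count × √ALT), with platelets in 10⁹/L and the transaminases in U/L. A sketch of the score and the study's three predefined categories (the formula transcription should be verified against the original FIB-4 publication):

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    """FIB-4 = (age x AST) / (platelet count x sqrt(ALT)),
    with platelets in 10^9/L and AST/ALT in U/L."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def fib4_category(score):
    """The three predefined categories used in the study."""
    if score <= 1.3:
        return "advanced fibrosis ruled out"
    if score < 2.67:
        return "inconclusive"
    return "advanced fibrosis suggested"
```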


Subjects
Non-alcoholic Fatty Liver Disease , Humans , Alanine Transaminase , Non-alcoholic Fatty Liver Disease/diagnosis , Non-alcoholic Fatty Liver Disease/complications , Non-alcoholic Fatty Liver Disease/pathology , Severity of Illness Index , Biopsy/adverse effects , Liver Cirrhosis/diagnosis , Liver Cirrhosis/surgery , Liver Cirrhosis/epidemiology , Aspartate Aminotransferases , Liver/pathology , Biomarkers
8.
Medicina (Kaunas) ; 59(1)2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36676652

ABSTRACT

Background and objective: Rates of opioid prescription at the time of hospital discharge have not been reported and may be associated with various patient- and procedure-related factors. This study examined the prevalence and factors associated with prescribing opioids for head/neck pain after elective craniotomy for tumor resection/vascular repair. Methods: We performed a retrospective cohort study on adults undergoing elective craniotomy for tumor resection/vascular repair at a large quaternary-care hospital. We used univariable and multivariable analysis to examine the prevalence and factors (preoperative, intraoperative, and postoperative) associated with prescribing opioids at the time of hospital discharge. We also examined the factors associated with discharge oral morphine equivalent use. Results: The study sample comprised 273 patients with a median age of 54 years [IQR 41-65]; 173 (63%) were female, 174 (63.7%) underwent tumor resection, and 99 (36.2%) underwent vascular repair. The majority (n = 264, 96.7%) received opioids postoperatively. The opioid prescription rates were 72% (n = 196/273) at hospital discharge, 23% (19/83) at neurosurgical clinical visits within 30 days of the procedure, and 2.4% (2/83) after 30 days from the procedure. The median oral morphine equivalent (OME) use at discharge was 300 [IQR 175-600]. Patients were discharged with a median supply of 5 days [IQR 3-7]. On multivariable analysis, opioid prescription at hospital discharge was associated with pre-existent chronic pain (adjusted odds ratio, aOR 1.87 [1.06, 3.29], p = 0.03) and with time from surgery to hospital discharge: compared to patients discharged on postoperative days 1-4, those discharged on days 5-12 (aOR 0.30, 95% CI [0.15, 0.59], p = 0.0005) and those discharged on day 12 or later (aOR 0.17, 95% CI [0.07, 0.39], p < 0.001) were less likely to receive an opioid prescription. There was a linear relationship between the first 24-h OME (p < 0.001), daily OME (p < 0.001), hospital OME (p < 0.001), and discharge OME. 
Conclusions: This single-center study finds that, at the time of hospital discharge, opioids are prescribed for head/neck pain in as many as seven out of ten patients after elective craniotomy. A history of chronic pain and time from surgery to discharge may be associated with opioid prescriptions. Discharge OME may be associated with first 24-h, daily, and hospital OME use. These findings need further evaluation in a large multicenter sample and are important to consider given the growing interest in early discharge after elective craniotomy.
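Oral morphine equivalents like the median of 300 above are computed by weighting each dose by a drug-specific conversion factor. A sketch of the accounting; the factors below are commonly cited values and the example prescription is invented for illustration, so verify both against your institution's conversion table:

```python
# Commonly cited oral MME conversion factors; institutional tables vary,
# so treat these values as illustrative assumptions.
MME_FACTOR = {"morphine": 1.0, "oxycodone": 1.5, "hydrocodone": 1.0, "tramadol": 0.1}

def total_ome(doses):
    """Sum oral morphine equivalents over (drug, milligrams) doses."""
    return sum(mg * MME_FACTOR[drug] for drug, mg in doses)

# Hypothetical discharge prescription: 40 tablets of oxycodone 5 mg.
ome = total_ome([("oxycodone", 5.0)] * 40)  # 40 x 5 mg x 1.5 = 300 OME
```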


Subjects
Chronic Pain , Neoplasms , Opiate Alkaloids , Adult , Female , Humans , Analgesics, Opioid/therapeutic use , Neck Pain/drug therapy , Retrospective Studies , Chronic Pain/drug therapy , Prevalence , Pain, Postoperative/drug therapy , Pain, Postoperative/epidemiology , Morphine/therapeutic use , Patient Discharge , Headache , Drug Prescriptions , Opiate Alkaloids/therapeutic use , Neoplasms/drug therapy
9.
J Card Surg ; 36(3): 879-885, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33442916

ABSTRACT

BACKGROUND AND AIM: Among patients receiving surgical bioprosthetic aortic valve replacement (bAVR), there is an elevated risk of thromboembolic events postoperatively. However, the risks and benefits of varying anticoagulation strategies remain controversial. The aim of this study was to compare the risks and benefits of aspirin monotherapy to aspirin plus warfarin ("concurrent therapy") in patients receiving bAVR. METHODS: A retrospective cohort study was conducted using patient data from Kaiser Permanente Northern California, including those who underwent bAVR with or without coronary artery bypass grafting between 2009 and 2018. Patients were identified as having been discharged with aspirin only or concurrent therapy. The outcomes were mortality, thromboembolic events, and clinically relevant bleeding during a 6-month follow-up. The event rates were compared using the Kaplan-Meier method. Multivariable survival analysis, incorporating propensity scores, was used to estimate adjusted hazard ratios (aHRs) for each outcome. RESULTS: The cohort consisted of 3047 patients. Approximately 58% of patients received aspirin only and 42% received concurrent therapy. Patients who received concurrent therapy were more likely to be older and to have hypertension, a previous stroke, and longer hospital stays. After adjustment using multivariable analysis, concurrent therapy was associated with a higher risk of clinically relevant bleeding (aHR, 2.33; 95% confidence interval, 1.67-3.25). There was no significant difference in the risk of thromboembolic events or mortality between the two groups. CONCLUSION: Patients who underwent bAVR and were discharged on concurrent therapy compared to aspirin only had a significantly increased risk of bleeding without a significant difference in thromboembolic events.


Subjects
Heart Valve Prosthesis Implantation , Heart Valve Prosthesis , Anticoagulants , Aortic Valve/surgery , Humans , Retrospective Studies , Risk Assessment , Treatment Outcome
10.
Anesth Analg ; 130(5): 1201-1210, 2020 05.
Article in English | MEDLINE | ID: mdl-32287127

ABSTRACT

BACKGROUND: Predictive analytics systems may improve perioperative care by enhancing preparation for, recognition of, and response to high-risk clinical events. Bradycardia is a fairly common and unpredictable clinical event with many causes; it may be benign or become associated with hypotension requiring aggressive treatment. Our aim was to build models to predict the occurrence of clinically significant intraoperative bradycardia at 3 time points during an operative course by utilizing available preoperative electronic medical record and intraoperative anesthesia information management system data. METHODS: The analyzed data include 62,182 scheduled noncardiac procedures performed at the University of Washington Medical Center between 2012 and 2017. The clinical event was defined as severe bradycardia (heart rate <50 beats per minute) followed by hypotension (mean arterial pressure <55 mm Hg) within a 10-minute window. We developed models to predict the presence of at least 1 event following 3 time points: induction of anesthesia (TP1), start of the procedure (TP2), and 30 minutes after the start of the procedure (TP3). Predictor variables were based on data available before each time point and included preoperative patient and procedure data (TP1), followed by intraoperative minute-to-minute patient monitor, ventilator, intravenous fluid, infusion, and bolus medication data (TP2 and TP3). Machine-learning and logistic regression models were developed, and their predictive abilities were evaluated using the area under the ROC curve (AUC). The contribution of the input variables to the models was evaluated. RESULTS: The number of events was 3498 (5.6%) after TP1, 2404 (3.9%) after TP2, and 1066 (1.7%) after TP3. Heart rate was the strongest predictor for events after TP1. Occurrence of a previous event, mean heart rate, and mean pulse rates before TP2 were the strongest predictors for events after TP2. 
Occurrence of a previous event, mean heart rate, mean pulse rates before TP2 (and their interaction), and 15-minute slopes in heart rate and blood pressure before TP2 were the strongest predictors for events after TP3. The best performing machine-learning models including all cases produced an AUC of 0.81 (TP1), 0.87 (TP2), and 0.89 (TP3) with positive predictive values of 0.30, 0.29, and 0.15 at 95% specificity, respectively. CONCLUSIONS: We developed models to predict unstable bradycardia leveraging preoperative and real-time intraoperative data. Our study demonstrates how predictive models may be utilized to predict clinical events across multiple time intervals, with a future goal of developing real-time, intraoperative, decision support.
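The event label above is a composite over minute-level data: severe bradycardia followed by hypotension within a 10-minute window. A minimal sketch of one reading of that definition (an interpretation for illustration, not the authors' labeling pipeline):

```python
def unstable_bradycardia_events(heart_rate, map_mmhg, window_min=10):
    """Flag minutes where heart rate < 50 bpm is followed within
    `window_min` minutes by mean arterial pressure < 55 mmHg.
    Both inputs are minute-by-minute samples of equal length."""
    events = []
    for t, rate in enumerate(heart_rate):
        if rate < 50:
            # Look from the bradycardic minute forward through the window.
            horizon = map_mmhg[t:t + window_min + 1]
            if any(m < 55 for m in horizon):
                events.append(t)
    return events
```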


Subjects
Bradycardia/diagnosis , Hypotension/diagnosis , Machine Learning/trends , Monitoring, Intraoperative/trends , Bradycardia/physiopathology , Forecasting , Humans , Hypotension/physiopathology , Monitoring, Intraoperative/methods , Predictive Value of Tests , Retrospective Studies
11.
J Cardiothorac Vasc Anesth ; 34(7): 1815-1821, 2020 Jul.
Article in English | MEDLINE | ID: mdl-31952905

ABSTRACT

OBJECTIVES: To investigate the opioid requirements and prevalence of chronic postsurgical pain (CPSP) in liver transplant (LT) recipients and to evaluate the association of opioid use with postoperative survival. DESIGN: Retrospective analysis. SETTING: A large academic medical center. PATIENTS: Cadaveric liver transplant recipients from 2008 to 2016. INTERVENTIONS: Analysis of demographic, perioperative, and outcome data. MEASUREMENTS AND MAIN RESULTS: This study measured the incidence and quantity of preoperative opioid use, postoperative opioid requirements, the incidence of CPSP, and survival in patients with and without CPSP. Opioid requirements were calculated in morphine milligram equivalents. In total, 322 LT recipients satisfied the inclusion criteria. The cohort of interest included 61 patients (18.9%) who were prescribed opioids before LT, compared to the control group of 261 patients. Postoperative opioid requirements were significantly higher in the cohort of interest in the first 24 hours (205.9 ± 318.5 v 60.4 ± 33.6 mg, p < 0.0001) and at 7 days after transplant (57.0 ± 70.6 mg v 19.2 ± 15.4 mg, p < 0.0001). Incidence of CPSP was significantly higher in the cohort of interest at 3 months (70.5% v 45.5%, p < 0.0001), at 2 years (38% v 12%), and at 5 years (29.8% v 6.9%) postoperatively. CPSP was a significant risk factor for patient mortality after transplantation (p = 0.038, HR 1.26). CONCLUSIONS: Opioid use is relatively frequent in patients waiting for LT. It significantly affects postoperative opioid requirements and the incidence of CPSP. CPSP may significantly affect survival after LT.


Subjects
Chronic Pain , Liver Transplantation , Opioid-Related Disorders , Analgesics, Opioid/therapeutic use , Chronic Pain/diagnosis , Chronic Pain/drug therapy , Chronic Pain/epidemiology , Humans , Liver Transplantation/adverse effects , Pain, Postoperative/diagnosis , Pain, Postoperative/drug therapy , Pain, Postoperative/epidemiology , Retrospective Studies
12.
Neurocrit Care ; 33(2): 499-507, 2020 10.
Article in English | MEDLINE | ID: mdl-31974871

ABSTRACT

BACKGROUND: The prevalence, characteristics, and outcomes related to ventilator-associated events (VAEs) in neurocritically ill patients are unknown and were examined in this study. METHODS: A retrospective study was performed on neurocritically ill patients at a 413-bed level 1 trauma and stroke center who received three or more days of mechanical ventilation to describe rates of VAEs, describe characteristics of patients with VAEs, and examine the association of VAEs with ventilator days, mortality, length of stay, and discharge to home. RESULTS: Over a 5-year period from 2014 through 2018, 855 neurocritically ill patients requiring mechanical ventilation were identified. A total of 147 VAEs occurred in 130 (15.2%) patients, an overall VAE rate of 13 per 1000 ventilator days, and occurred across age, sex, BMI, and admission Glasgow Coma Scores. The average time from the start of mechanical ventilation to a VAE was 5 (range 3-48) days. Using Centers for Disease Control and Prevention definitions, VAEs met criteria for a ventilator-associated condition in 58% of events (n = 85), infection-related VAE in 22% of events (n = 33), and possible ventilator-associated pneumonia in 20% of events (n = 29). The most common trigger for VAE was an increase in positive end-expiratory pressure (84%). Presence of a VAE was associated with an increased duration of mechanical ventilation (17.4 [IQR 20.5] vs. 7.9 [IQR 8.9] days, p < 0.001, 95% CI 7.86-13.92) and intensive care unit (ICU) length of stay (20.2 [1.1] vs. 12.5 [0.4] days, p < 0.001, 95% CI 5.3-10.02), but was not associated with in-patient mortality (34.1% vs. 31.3%, 95% CI 0.76-1.69) or discharge to home (12.7% vs. 16.3%, 95% CI 0.47-1.29). CONCLUSIONS: VAEs are prevalent in the neurocritically ill. They result in an increased duration of mechanical ventilation and ICU length of stay but may not be associated with in-hospital mortality or discharge to home.


Subjects
Pneumonia, Ventilator-Associated , Ventilators, Mechanical , Humans , Intensive Care Units , Pneumonia, Ventilator-Associated/epidemiology , Pneumonia, Ventilator-Associated/etiology , Prevalence , Respiration, Artificial/adverse effects , Retrospective Studies
13.
J Surg Res ; 238: 119-126, 2019 06.
Article in English | MEDLINE | ID: mdl-30769248

ABSTRACT

BACKGROUND: The Laboratory Risk Indicator for Necrotizing Fasciitis (LRINEC) score may distinguish necrotizing soft tissue infection (NSTI) from non-NSTI. The association of higher preoperative LRINEC scores with escalations of intraoperative anesthesia care in NSTI is unknown and may be useful in communicating illness severity during patient handoffs. MATERIALS AND METHODS: We conducted a retrospective cohort study of first operative debridement for suspected NSTI in a single referral center from 2013 to 2016. We assessed the association between LRINEC score and vasopressors, blood products, crystalloid, invasive monitoring, and minutes of operative and anesthesia care. RESULTS: We captured 332 patients undergoing their first operative debridement for suspected NSTI. For every 1-point higher LRINEC score, there was a higher risk of norepinephrine and vasopressin use (relative risk [RR] = 18%, 95% confidence interval [CI] [10%, 26%] and [10%, 27%], respectively), packed red blood cell use (RR = 28% [95% CI 13%, 45%]), and additional crystalloid (17.57 mL/h [95% CI 0.37, 34.76]). Each additional LRINEC point was associated with longer anesthesia (3.42 min, 95% CI 0.94, 5.91) and operative times (2.35 min, 95% CI 0.29, 4.40) and a higher risk of receiving invasive arterial monitoring (RR 1.11, 95% CI 1.05, 1.18). The negative predictive value for an LRINEC score < 6 was > 90% for use of vasopressors and packed red blood cells. CONCLUSIONS: Preoperative LRINEC scores were associated with escalations in intraoperative care in patients with NSTI. A low score may predict patients unlikely to require vasopressors or blood and may be useful in standardized handoff tools for patients with NSTI.
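The LRINEC score above is a sum of points over six routine labs (CRP, white cell count, hemoglobin, sodium, creatinine, glucose) as originally published by Wong et al.; the cutoffs transcribed below are from commonly circulated tables and should be verified against the original publication before any use:

```python
def lrinec(crp_mg_l, wbc_10e3, hgb_g_dl, na_mmol_l, cr_mg_dl, glucose_mg_dl):
    """LRINEC score (0-13). In the cohort above, a score < 6 had > 90%
    negative predictive value for vasopressor and blood product use.
    Cutoffs are an assumed transcription of the published score."""
    score = 0
    score += 4 if crp_mg_l >= 150 else 0                            # C-reactive protein, mg/L
    score += 2 if wbc_10e3 > 25 else (1 if wbc_10e3 >= 15 else 0)   # WBC, x10^3/uL
    score += 2 if hgb_g_dl < 11 else (1 if hgb_g_dl <= 13.5 else 0) # hemoglobin, g/dL
    score += 2 if na_mmol_l < 135 else 0                            # sodium, mmol/L
    score += 2 if cr_mg_dl > 1.6 else 0                             # creatinine, mg/dL
    score += 1 if glucose_mg_dl > 180 else 0                        # glucose, mg/dL
    return score
```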


Subjects
Anesthesia/methods, Intraoperative Care/methods, Severity of Illness Index, Soft Tissue Infections/diagnosis, Adult, Blood Component Transfusion/statistics & numerical data, Debridement/adverse effects, Diagnosis, Differential, Fasciitis, Necrotizing/diagnosis, Female, Humans, Male, Middle Aged, Necrosis/diagnosis, Necrosis/surgery, Operative Time, Preoperative Period, ROC Curve, Retrospective Studies, Risk Factors, Soft Tissue Infections/surgery
14.
J Cardiothorac Vasc Anesth; 33(10): 2728-2734, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31072702

ABSTRACT

OBJECTIVES: To analyze preoperative tumor thrombus progression and occurrence of perioperative pulmonary embolism (PE) in patients with inferior vena cava tumor thrombus resection. DESIGN: Retrospective analysis. SETTINGS: University of Washington Medical Center. PARTICIPANTS: Patients who had undergone inferior vena cava tumor resection with thrombectomy from 2014 to 2017. INTERVENTIONS: Analysis of demographic, perioperative, and outcome data. Variables were compared between groups according to the level of tumor thrombus, the timing of the preoperative imaging, and the occurrence of perioperative PE. MEASUREMENTS AND MAIN RESULTS: Incidence, outcomes, and variables associated with perioperative PE and sensitivity/specificity analyses for optimized preoperative imaging timing, broken into 7-day increments, were assessed. Fifty-six patients were included in this analysis. Perioperative PE was observed in 6 (11%) patients, intraoperatively in 5 patients and in the early postoperative period in 1 patient. Of the 5 patients with intraoperative PE, 2 died intraoperatively. Perioperative PE occurred in 1 patient with tumor thrombus level I, in 2 patients with level II, in 2 patients with level III, and in 1 patient with level IV. Risks of preoperative tumor thrombus progression were minimized if the imaging study was performed within 3 weeks for level I and II tumor thrombi and within 1 week for level III tumor thrombus. CONCLUSIONS: Perioperative PE was observed in patients with all levels of tumor thrombus. Fifty percent of perioperative PE were observed in patients with infrahepatic tumor thrombus. Post-imaging progression of tumor thrombus was unlikely if the surgery was performed within 3 weeks in patients with levels I or II tumor thrombus or within 1 week in patients with level III tumor thrombus.
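The sensitivity/specificity analysis over imaging-timing thresholds in 7-day increments can be sketched as follows; the patient data below are hypothetical and only illustrate the thresholding procedure:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity of a binary classification rule."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical data: did the tumor thrombus progress between imaging
# and surgery, and how many days elapsed in between?
progressed = [True, False, False, True]
days_since_imaging = [28, 10, 20, 35]

# Flag cases whose imaging-to-surgery interval exceeds each candidate
# threshold, in 7-day increments, and score against observed progression.
for threshold in range(7, 36, 7):
    flagged = [d > threshold for d in days_since_imaging]
    sens, spec = sensitivity_specificity(progressed, flagged)
    print(threshold, sens, spec)
```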


Subjects
Kidney Neoplasms/surgery, Perioperative Care/trends, Pulmonary Embolism/etiology, Thrombectomy/adverse effects, Vena Cava, Inferior/surgery, Venous Thrombosis/surgery, Adult, Aged, Female, Humans, Intraoperative Complications/diagnostic imaging, Intraoperative Complications/etiology, Kidney Neoplasms/diagnostic imaging, Male, Middle Aged, Perioperative Care/methods, Pulmonary Embolism/diagnostic imaging, Retrospective Studies, Thrombectomy/trends, Vena Cava, Inferior/diagnostic imaging, Venous Thrombosis/diagnostic imaging
15.
J Am Soc Nephrol; 26(6): 1434-42, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25475746

ABSTRACT

The capacity of risk prediction to guide management of CKD in underserved health settings is unknown. We conducted a retrospective cohort study of 28,779 adults with nondialysis-requiring CKD who received health care in two large safety net health systems during 1996-2009 and were followed for ESRD through September of 2011. We developed and evaluated the performance of ESRD risk prediction models using recently proposed criteria designed to inform population health approaches to disease management: proportion of cases followed and proportion that needs to be followed. Overall, 1730 persons progressed to ESRD during follow-up (median follow-up=6.6 years). ESRD risk for time frames up to 5 years was highly concentrated among relatively few individuals. A predictive model using five common variables (age, sex, race, eGFR, and dipstick proteinuria) performed similarly to more complex models incorporating extensive sociodemographic and clinical data. Using this model, 80% of individuals who eventually developed ESRD were among the 5% of cohort members at the highest estimated risk for ESRD at 1 year. Similarly, a program that followed 8% and 13% of individuals at the highest ESRD risk would have included 80% of those who eventually progressed to ESRD at 3 and 5 years, respectively. In this underserved health setting, a simple five-variable model accurately predicts most cases of ESRD that develop within 5 years. Applying risk prediction using a population health approach may improve CKD surveillance and management of vulnerable groups by directing resources to a small subpopulation at highest risk for progressing to ESRD.
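The "proportion of cases followed" criterion described above can be computed directly from predicted risks and observed outcomes; a minimal sketch with toy data:

```python
import math

def proportion_of_cases_followed(risk, event, top_fraction):
    """Share of eventual cases captured when following only the
    top_fraction of the cohort ranked by predicted risk."""
    n_follow = math.ceil(top_fraction * len(risk))
    order = sorted(range(len(risk)), key=lambda i: risk[i], reverse=True)
    captured = sum(event[i] for i in order[:n_follow])
    return captured / sum(event)

# Toy data: following the top 25% (1 of 4 people) captures 1 of the
# 2 eventual cases. In the study above, following the top 5% captured
# 80% of individuals who developed ESRD at 1 year.
risk = [0.9, 0.8, 0.3, 0.1]
event = [1, 1, 0, 0]
print(proportion_of_cases_followed(risk, event, 0.25))  # prints 0.5
```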


Subjects
Disease Progression, Kidney Failure, Chronic/diagnosis, Kidney Failure, Chronic/epidemiology, Poverty, Adult, Age Distribution, Aged, Cohort Studies, Female, Glomerular Filtration Rate, Humans, Incidence, Kidney Failure, Chronic/therapy, Male, Middle Aged, Predictive Value of Tests, Prognosis, Proportional Hazards Models, Renal Insufficiency, Chronic/diagnosis, Renal Insufficiency, Chronic/epidemiology, Reproducibility of Results, Retrospective Studies, Risk Assessment, Sex Distribution, Survival Rate, Urban Population, Washington/epidemiology
16.
Proc Natl Acad Sci U S A; 109(35): E2343-52, 2012 Aug 28.
Article in English | MEDLINE | ID: mdl-22837397

ABSTRACT

Genome-wide association studies can identify common differences that contribute to human phenotypic diversity and disease. When genome-wide association studies are combined with approaches that test how variants alter physiology, biological insights can emerge. Here, we used such an approach to reveal regulation of cell death by the methionine salvage pathway. A common SNP associated with reduced expression of a putative methionine salvage pathway dehydratase, apoptotic protease activating factor 1 (APAF1)-interacting protein (APIP), was associated with increased caspase-1-mediated cell death in response to Salmonella. The role of APIP in methionine salvage was confirmed by growth assays with methionine-deficient media and quantitation of the methionine salvage substrate, 5'-methylthioadenosine. Reducing expression of APIP or exogenous addition of 5'-methylthioadenosine increased Salmonellae-induced cell death. Consistent with APIP originally being identified as an inhibitor of caspase-9-dependent apoptosis, the same allele was also associated with increased sensitivity to the chemotherapeutic agent carboplatin. Our results show that common human variation affecting expression of a single gene can alter susceptibility to two distinct cell death programs. Furthermore, the same allele that promotes cell death is associated with improved survival of individuals with systemic inflammatory response syndrome, suggesting a possible evolutionary pressure that may explain the geographic pattern observed for the frequency of this SNP. Our study shows that in vitro association screens of disease-related traits can not only reveal human genetic differences that contribute to disease but also provide unexpected insights into cell biology.


Subjects
Apoptosis Regulatory Proteins/genetics, Apoptosis/physiology, Caspase 1/genetics, Methionine/metabolism, Salmonella Infections, Salmonella typhimurium/immunology, Adult, Aged, Aged, 80 and over, Animals, Apoptosis/genetics, Apoptosis Regulatory Proteins/metabolism, Bone Marrow Cells/cytology, Caspase 1/metabolism, Caspase 9/metabolism, Deoxyadenosines/metabolism, Genetic Predisposition to Disease/genetics, Genetic Variation, HEK293 Cells, HapMap Project, Humans, Mice, Mice, Inbred C57BL, Middle Aged, Polymorphism, Single Nucleotide/genetics, Quantitative Trait Loci, Salmonella Infections/genetics, Salmonella Infections/metabolism, Salmonella Infections/pathology, Thionucleosides/metabolism, Young Adult
17.
BMC Genomics; 15: 355, 2014 May 10.
Article in English | MEDLINE | ID: mdl-24886041

ABSTRACT

BACKGROUND: Shigella dysenteriae type 1 (Sd1) causes recurrent epidemics of dysentery associated with high mortality in many regions of the world. Sd1 infects humans at very low infectious doses (10 CFU), and treatment is complicated by the rapid emergence of antibiotic resistant Sd1 strains. Sd1 is only detected in the context of human infections, and the circumstances under which epidemics emerge and regress remain unknown. RESULTS: Phylogenomic analyses of 56 isolates collected worldwide over the past 60 years indicate that the Sd1 clone responsible for the recent pandemics emerged at the turn of the 20th century, and that the two world wars likely played a pivotal role for its dissemination. Several lineages remain ubiquitous and their phylogeny indicates several recent intercontinental transfers. Our comparative genomics analysis reveals that isolates responsible for separate outbreaks, though closely related to one another, have independently accumulated antibiotic resistance genes, suggesting that there is little or no selection to retain these genes in-between outbreaks. The genomes appear to be subjected to genetic drift that affects a number of functions currently used by diagnostic tools to identify Sd1, which could lead to the potential failure of such tools. CONCLUSIONS: Taken together, the Sd1 population structure and pattern of evolution suggest a recent emergence and a possible human carrier state that could play an important role in the epidemic pattern of infections of this human-specific pathogen. This analysis highlights the important role of whole-genome sequencing in studying pathogens for which epidemiological or laboratory investigations are particularly challenging.


Subjects
Dysentery, Bacillary/epidemiology, Shigella dysenteriae/genetics, Anti-Bacterial Agents/pharmacology, Disease Outbreaks, Drug Resistance, Bacterial/drug effects, Dysentery, Bacillary/history, Evolution, Molecular, Genetic Variation, Genome, Bacterial, Genomics, High-Throughput Nucleotide Sequencing, History, 20th Century, Humans, Phylogeny, Sequence Analysis, DNA, Shigella dysenteriae/classification, Shigella dysenteriae/isolation & purification
19.
J Clin Med; 13(4), 2024 Feb 11.
Article in English | MEDLINE | ID: mdl-38398345

ABSTRACT

BACKGROUND: To examine the association between external ventricular drain (EVD) placement, critical care utilization, complications, and clinical outcomes in hospitalized adults with spontaneous subarachnoid hemorrhage (SAH). METHODS: A single-center retrospective study included SAH patients 18 years and older, admitted between 1 January 2014 and 31 December 2022. The exposure variable was EVD. The primary outcomes of interest were (1) early mortality (<72 h), (2) overall mortality, (3) improvement in modified-World Federation of Neurological Surgeons (m-WFNS) grade between admission and discharge, and (4) discharge to home at the end of the hospital stay. We adjusted for admission m-WFNS grade, age, sex, race/ethnicity, intraventricular hemorrhage, aneurysmal cause of SAH, mechanical ventilation, critical care utilization, and complications within a multivariable analysis. We reported adjusted odds ratios (aORs) and 95% confidence intervals (CIs). RESULTS: The study sample included 1346 patients: 18% (n = 243) were between the ages of 18 and 44 years, 48% (n = 645) were between 45 and 64 years, and 34% (n = 458) were 65 years and older; 56% (n = 756) were female, 57% (n = 762) were m-WFNS I-III, 43% (n = 584) were m-WFNS IV-V, 51% were mechanically ventilated, 76% (n = 680) were White, and 86% (n = 1158) were English-speaking. Early mortality occurred in 11% (n = 142). Overall mortality was 21% (n = 278), 53% (n = 707) were discharged home, and 25% (n = 331) improved their m-WFNS grade between admission and discharge. Altogether, 54% (n = 731) received EVD placement. After adjusting for covariates, the multivariable analysis demonstrated that EVD placement was associated with reduced early mortality (aOR 0.21 [0.14, 0.33]) and improvement in m-WFNS grade (aOR 2.06 [1.42, 2.99]) but not with overall mortality (aOR 0.69 [0.47, 1.00]) or discharge home at the end of the hospital stay (aOR 1.00 [0.74, 1.36]).
EVD was associated with a higher rate of ventilator-associated pneumonia (aOR 2.32 [1.03, 5.23]), delirium (aOR 1.56 [1.05, 2.32]), and longer ICU (aOR 1.33 [1.29, 1.36]) and hospital lengths of stay (aOR 1.09 [1.07, 1.10]). Critical care utilization was also higher in patients with EVD than in those without. CONCLUSIONS: The study suggests that EVD placement in hospitalized adults with spontaneous SAH is associated with reduced early mortality and improved neurological recovery, albeit with higher critical care utilization and complications. These findings emphasize the potential clinical benefits of EVD placement in managing SAH. However, further research and prospective studies may be necessary to validate these results and provide a more comprehensive understanding of the factors influencing clinical outcomes in SAH.
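The adjusted odds ratios reported above come from exponentiating multivariable logistic-regression coefficients; a small sketch of that conversion with a Wald confidence interval. The coefficient and standard error below are assumed values, chosen to roughly reproduce the reported early-mortality association:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Turn a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% Wald confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient and SE; with these values the function
# recovers roughly the reported early-mortality aOR 0.21 [0.14, 0.33].
aor, lo, hi = odds_ratio_ci(-1.56, 0.23)
print(round(aor, 2), round(lo, 2), round(hi, 2))  # prints 0.21 0.13 0.33
```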

20.
JAMA Surg; 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38837145

ABSTRACT

Importance: General-domain large language models may be able to perform risk stratification and predict postoperative outcome measures using a description of the procedure and a patient's electronic health record notes. Objective: To examine predictive performance on 8 different tasks: prediction of American Society of Anesthesiologists Physical Status (ASA-PS), hospital admission, intensive care unit (ICU) admission, unplanned admission, hospital mortality, postanesthesia care unit (PACU) phase 1 duration, hospital duration, and ICU duration. Design, Setting, and Participants: This prognostic study included task-specific datasets constructed from 2 years of retrospective electronic health records data collected during routine clinical care. Case and note data were formatted into prompts and given to the large language model GPT-4 Turbo (OpenAI) to generate a prediction and explanation. The setting included a quaternary care center comprising 3 academic hospitals and affiliated clinics in a single metropolitan area. Patients who had a surgery or procedure with anesthesia and at least 1 clinician-written note filed in the electronic health record before surgery were included in the study. Data were analyzed from November to December 2023. Exposures: Original notes, note summaries, few-shot prompting, and chain-of-thought prompting strategies were compared. Main Outcomes and Measures: F1 score for binary and categorical outcomes. Mean absolute error for numerical duration outcomes. Results: Study results were measured on task-specific datasets, each with 1000 cases with the exception of unplanned admission, which had 949 cases, and hospital mortality, which had 576 cases. The best results for each task included an F1 score of 0.50 (95% CI, 0.47-0.53) for ASA-PS, 0.64 (95% CI, 0.61-0.67) for hospital admission, 0.81 (95% CI, 0.78-0.83) for ICU admission, 0.61 (95% CI, 0.58-0.64) for unplanned admission, and 0.86 (95% CI, 0.83-0.89) for hospital mortality prediction.
Performance on duration prediction tasks was universally poor across all prompt strategies: the large language model achieved a mean absolute error of 49 minutes (95% CI, 46-51 minutes) for PACU phase 1 duration, 4.5 days (95% CI, 4.2-5.0 days) for hospital duration, and 1.1 days (95% CI, 0.9-1.3 days) for ICU duration prediction. Conclusions and Relevance: Current general-domain large language models may assist clinicians in perioperative risk stratification on classification tasks but are inadequate for numerical duration predictions. Their ability to produce high-quality natural language explanations for the predictions may make them useful tools in clinical workflows and may be complementary to traditional risk prediction models.
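The two evaluation metrics used above, F1 for the classification tasks and mean absolute error for the duration tasks, can be sketched as:

```python
def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mean_absolute_error(y_true, y_pred):
    """Average absolute difference, e.g. in minutes or days."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

print(f1_score([1, 1, 0, 0], [1, 0, 1, 0]))       # prints 0.5
print(mean_absolute_error([49, 120], [60, 100]))  # prints 15.5
```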
