Results 1 - 20 of 67
2.
J Cardiothorac Vasc Anesth ; 38(5): 1181-1189, 2024 May.
Article in English | MEDLINE | ID: mdl-38472029

ABSTRACT

OBJECTIVE: This study assessed the efficacy of palonosetron, alone or with dexamethasone, in reducing postoperative nausea and/or vomiting (PONV) and its impact on hospitalization duration in adult patients undergoing cardiothoracic surgery (CTS) under general anesthesia. DESIGN: This retrospective analysis involved 540 adult patients who underwent CTS in a single-center cohort, spanning surgeries between September 2021 and March 2023. Sensitivity, logistic, and Cox regression analyses evaluated antiemetic effects, PONV risk factors, and outcomes. SETTING: Virginia Mason Medical Center (VMMC), Seattle, WA. PARTICIPANTS: Adults undergoing cardiothoracic surgery at VMMC during the specified period. INTERVENTIONS: Patients were categorized into the following 4 groups based on antiemetic treatment: dexamethasone, palonosetron, dexamethasone with palonosetron, and no antiemetic. MEASUREMENTS AND MAIN RESULTS: The primary outcome was PONV incidence within 96 hours postoperatively. Secondary outcomes included intensive care unit stay duration and postoperative opioid use. Palonosetron recipients showed a significantly lower PONV rate of 42% (vs 63% in controls). The combined dexamethasone and palonosetron group also demonstrated a lower rate of 40%. Sensitivity analysis revealed a notably lower 0- to 12-hour PONV rate for palonosetron recipients (9% vs 28% in controls). Logistic regression found decreased PONV risk (palonosetron odds ratio [OR]: 0.24; dexamethasone and palonosetron OR: 0.26). Cox regression identified varying PONV hazard ratios related to female sex, PONV history, and lower body mass index. CONCLUSIONS: This single-center retrospective study underscored palonosetron's efficacy, alone or combined with dexamethasone, in managing PONV among adult patients undergoing CTS. These findings contribute to evolving antiemetic strategies in cardiothoracic surgery, potentially improving patient outcomes and satisfaction.


Subject(s)
Antiemetics , Postoperative Nausea and Vomiting , Adult , Humans , Female , Palonosetron , Postoperative Nausea and Vomiting/epidemiology , Postoperative Nausea and Vomiting/prevention & control , Postoperative Nausea and Vomiting/drug therapy , Antiemetics/therapeutic use , Retrospective Studies , Anesthesia, General/adverse effects , Dexamethasone/therapeutic use
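The reported odds ratios can be made concrete with a short arithmetic sketch (the helper name is hypothetical, not code from the study): converting the 63% control PONV rate and the palonosetron OR of 0.24 into the event rate that OR implies.

```python
def rate_from_odds_ratio(baseline_rate, odds_ratio):
    """Convert a baseline event rate plus an odds ratio into the implied
    event rate for the exposed group: probability -> odds -> scaled odds
    -> probability."""
    baseline_odds = baseline_rate / (1.0 - baseline_rate)
    exposed_odds = baseline_odds * odds_ratio
    return exposed_odds / (1.0 + exposed_odds)

# Control PONV rate (63%) combined with the reported palonosetron OR of 0.24
implied_rate = rate_from_odds_ratio(0.63, 0.24)  # ~0.29
```

The implied ~29% is lower than the observed 42% in palonosetron recipients because the reported OR is adjusted for other risk factors; the sketch only illustrates how odds ratios map back to probabilities.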
3.
Br J Anaesth ; 131(5): 796-801, 2023 11.
Article in English | MEDLINE | ID: mdl-37879776

ABSTRACT

Commercial aviation practices including the role of the pilot monitoring, the sterile flight deck rule, and computerised checklists have direct applicability to anaesthesia care. The pilot monitoring performs specific tasks that complement the pilot flying who is directly controlling the aircraft flight path. The anaesthesia care team, with two providers, can be organised in a manner that is analogous to the two-pilot flight deck. However, solo providers, such as solo pilots, can emulate the pilot monitoring role by reading checklists aloud, and utilise non-anaesthesia providers to fulfil some of the functions of pilot monitoring. The sterile flight deck rule states that flight crew members should not engage in any non-essential or distracting activity during critical phases of flight. The application of the sterile flight deck rule in anaesthesia practice entails deliberately minimising distractions during critical phases of anaesthesia care. Checklists are commonly used in the operating room, especially the World Health Organization surgical safety checklist. However, the use of aviation-style computerised checklists offers additional benefits. Here we discuss how these commercial aviation practices may be applied in the operating room.


Subject(s)
Anesthesia , Anesthesiology , Aviation , Humans , Checklist , Operating Rooms , Aircraft
4.
J Neurosurg Anesthesiol ; 35(2): 215-223, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-34759236

ABSTRACT

BACKGROUND: Traumatic brain injury (TBI) is a major cause of death and disability. Episodes of hypotension are associated with worse TBI outcomes. Our aim was to model the real-time risk of intraoperative hypotension in TBI patients, compare machine learning and traditional modeling techniques, and identify key contributory features from the patient monitor and medical record for the prediction of intraoperative hypotension. METHODS: The data included neurosurgical procedures in 1005 TBI patients at an academic level 1 trauma center. The clinical event was intraoperative hypotension, defined as mean arterial pressure <65 mm Hg for 5 or more consecutive minutes. Two types of models were developed: one based on preoperative patient-level predictors and one based on intraoperative predictors measured per minute. For each of these models, we took 2 approaches to predict the occurrence of a hypotensive event: a logistic regression model and a gradient boosting tree model. RESULTS: The area under the receiver operating characteristic curve for the intraoperative logistic regression model was 0.80 (95% confidence interval [CI]: 0.78-0.83), and for the gradient boosting model was 0.83 (95% CI: 0.81-0.85). The area under the precision-recall curve for the intraoperative logistic regression model was 0.16 (95% CI: 0.12-0.20), and for the gradient boosting model was 0.19 (95% CI: 0.14-0.24). Model performance based on preoperative predictors was poor. Features derived from the recent trend of mean arterial pressure emerged as dominantly predictive in both intraoperative models. CONCLUSIONS: This study developed a model for real-time prediction of intraoperative hypotension in TBI patients, which can use computationally efficient machine learning techniques and a streamlined feature-set derived from patient monitor data.


Subject(s)
Brain Injuries, Traumatic , Hypotension , Humans , Hypotension/diagnosis , Hypotension/etiology , Hypotension/epidemiology , Machine Learning , Arterial Pressure , Brain Injuries, Traumatic/complications , Brain Injuries, Traumatic/surgery , ROC Curve
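The event definition above (MAP <65 mm Hg for 5 or more consecutive minutes) is straightforward to operationalize on minute-by-minute monitor data. A minimal sketch, assuming a plain per-minute list of MAP values (function and variable names are illustrative, not from the study):

```python
def hypotensive_episodes(map_per_minute, threshold=65, min_duration=5):
    """Return (start, end) index pairs for runs where MAP stays below
    `threshold` for at least `min_duration` consecutive minutes."""
    episodes, run_start = [], None
    for i, value in enumerate(map_per_minute):
        if value < threshold:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_duration:
                episodes.append((run_start, i))
            run_start = None
    # Close out a run that extends to the end of the trace
    if run_start is not None and len(map_per_minute) - run_start >= min_duration:
        episodes.append((run_start, len(map_per_minute)))
    return episodes

# One qualifying 5-minute run (minutes 1-5); the later 2-minute dip is too short
trace = [70, 64, 63, 62, 61, 60, 70, 64, 64, 70]
episodes = hypotensive_episodes(trace)
```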
5.
J Clin Monit Comput ; 37(1): 155-163, 2023 02.
Article in English | MEDLINE | ID: mdl-35680771

ABSTRACT

Machine Learning (ML) models have been developed to predict perioperative clinical parameters. The objective of this study was to determine if ML models can serve as decision aids to improve anesthesiologists' prediction of peak intraoperative glucose values and postoperative opioid requirements. A web-based tool was used to present actual surgical case and patient information to 10 practicing anesthesiologists. They were asked to predict peak glucose levels and postoperative opioid requirements for 100 surgical patients, with and without ML model estimations of peak glucose and opioid requirements being presented. The accuracies of the anesthesiologists' estimates with and without ML estimates as reference were compared. A questionnaire was also sent to the participating anesthesiologists to obtain their feedback on ML decision support. The accuracy of peak glucose level estimates by the anesthesiologists increased from 79.0 ± 13.7% without ML assistance to 84.7 ± 11.5% (p < 0.001) when ML estimates were provided as reference. The accuracy of opioid requirement estimates increased from 18% without ML assistance to 42% (p < 0.001) when ML estimates were provided as reference. When ML estimates were provided, predictions of peak glucose improved for 8 of the 10 anesthesiologists, and predictions of opioid requirements improved for 7 of the 10. Feedback questionnaire responses revealed that the anesthesiologists primarily used the ML estimates as reference to modify their clinical judgement. ML models can improve anesthesiologists' estimation of clinical parameters. ML predictions primarily served as reference information that modified an anesthesiologist's clinical estimate.


Subject(s)
Analgesics, Opioid , Anesthesiologists , Humans , Analgesics, Opioid/therapeutic use , Machine Learning , Glucose , Decision Support Techniques
6.
AANA J ; 90(4): 263-270, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35943751

ABSTRACT

The effectiveness of propofol infusion on postoperative nausea and vomiting (PONV) is poorly understood in relation to various patient and procedure characteristics. This retrospective cohort study aimed to quantify the effectiveness of propofol infusion, administered either as total intravenous anesthesia (TIVA) or as combined intravenous anesthesia (CIVA) with inhalational agents, on PONV. The relationship between propofol infusion and PONV was characterized controlling for patient demographics, procedure characteristics, PONV risk factors, and antiemetic drugs in adult patients (age ≥18 years) undergoing general anesthesia. Learned coefficients from multivariate regression models were reported as "lift," which represents the percentage change in the base likelihood of observing PONV when a variable is present versus absent. In a total of 41,490 patients, the models showed that propofol infusion has a naive effect on PONV, with a lift of -41% (P < .001) when using TIVA and -17% (P < .001) when using CIVA. Adding interaction terms to the model resulted in the loss of statistical significance in these relationships (lift of -30%, P = .23, when using TIVA, and -42%, P = .36, when using CIVA). It was further found that CIVA/TIVA are ineffective in short cases (CIVA * short surgery duration: lift = 49%, P < .001, and TIVA * short surgery duration: lift = 56%, P < .001).


Subject(s)
Postoperative Nausea and Vomiting , Propofol , Adolescent , Adult , Anesthesia, Intravenous , Anesthetics, Intravenous/adverse effects , Data Science , Humans , Postoperative Nausea and Vomiting/prevention & control , Propofol/adverse effects , Retrospective Studies
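The "lift" metric defined above has a direct arithmetic form. A minimal sketch with hypothetical numbers (the study derives lift from regression coefficients; this only illustrates the definition):

```python
def lift(base_rate, rate_with_variable):
    """'Lift' as defined in the abstract: the percentage change in the
    base likelihood of PONV when a variable is present versus absent."""
    return 100.0 * (rate_with_variable - base_rate) / base_rate

# A hypothetical base PONV rate of 30% falling to 17.7% corresponds
# to a lift of -41%, matching the magnitude reported for TIVA
example_lift = lift(0.30, 0.177)
```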
7.
J Neurosurg Anesthesiol ; 34(1): e34-e39, 2022 Jan 01.
Article in English | MEDLINE | ID: mdl-32149890

ABSTRACT

INTRODUCTION: The exposure of anesthesiologists to organ recovery procedures and the anesthetic technique used during organ recovery have not been systematically studied in the United States. METHODS: A retrospective cohort study was conducted on all adult and pediatric patients who were declared brain dead between January 1, 2008, and June 30, 2019, and who progressed to organ donation at Harborview Medical Center. We describe the frequency with which attending anesthesiologists directed anesthetic care, the anesthetic technique, and donor management targets during organ recovery. RESULTS: In a cohort of 327 patients (286 adults and 41 children), the most common cause of brain death was traumatic brain injury (51.1%). Kidneys (94.4%) and liver (87.4%) were the most common organs recovered. On average, each year, an attending anesthesiologist cared for 1 (range: 1 to 7) brain-dead donor during organ retrieval. The average anesthetic time was 127±53.5 (mean±SD) minutes. Overall, 90% of patients received a neuromuscular blocker, 63.3% an inhaled anesthetic, and 33.9% an opioid. Donor management targets were achieved as follows: mean arterial pressure ≥70 mm Hg (93%), normothermia (96%), normoglycemia (84%), urine output >1 to 3 mL/kg/h (61%), and lung-protective ventilation (58%). CONCLUSIONS: During organ recovery from brain-dead organ donors, anesthesiologists commonly administer neuromuscular blockers, inhaled anesthetics, and opioids, and strive to achieve donor management targets. Although individual anesthesiologists are infrequently exposed to these cases, all are expected to be cognizant of the physiological perturbations in brain-dead donors and to achieve the physiological targets that preserve end-organ function. These findings warrant further examination in a larger multi-institutional cohort.


Subject(s)
Anesthetics , Brain Death , Adult , Brain , Child , Humans , Retrospective Studies , Tissue Donors , United States
8.
Br J Anaesth ; 128(4): 623-635, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34924175

ABSTRACT

BACKGROUND: Postoperative hypotension is associated with adverse outcomes, but intraoperative prediction of postanaesthesia care unit (PACU) hypotension is not routine in anaesthesiology workflow. Although machine learning models may support clinician prediction of PACU hypotension, clinician acceptance of prediction models is poorly understood. METHODS: We developed a clinically informed gradient boosting machine learning model using preoperative and intraoperative data from 88 446 surgical patients from 2015 to 2019. Nine anaesthesiologists each made 192 predictions of PACU hypotension using a web-based visualisation tool with and without input from the machine learning model. Questionnaires and interviews were analysed using thematic content analysis for model acceptance by anaesthesiologists. RESULTS: The model predicted PACU hypotension in 17 029 patients (area under the receiver operating characteristic [AUROC] 0.82 [95% confidence interval {CI}: 0.81-0.83] and average precision 0.40 [95% CI: 0.38-0.42]). On a random representative subset of 192 cases, anaesthesiologist performance improved from AUROC 0.67 (95% CI: 0.60-0.73) to AUROC 0.74 (95% CI: 0.68-0.79) with model predictions and information on risk factors. Anaesthesiologists perceived more value and expressed trust in the prediction model for prospective planning, informing PACU handoffs, and drawing attention to unexpected cases of PACU hypotension, but they doubted the model when predictions and associated features were not aligned with clinical judgement. Anaesthesiologists expressed interest in patient-specific thresholds for defining and treating postoperative hypotension. CONCLUSIONS: The ability of anaesthesiologists to predict PACU hypotension was improved by exposure to machine learning model predictions. Clinicians acknowledged value and trust in machine learning technology. 
Increasing familiarity with clinical use of model predictions is needed for effective integration into perioperative workflows.


Subject(s)
Hypotension , Postoperative Complications , Humans , Hypotension/diagnosis , Hypotension/etiology , Machine Learning , Prospective Studies , ROC Curve
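The AUROC reported above has a useful probabilistic reading: the chance that a randomly chosen hypotensive case receives a higher model score than a randomly chosen non-hypotensive case. A minimal pure-Python sketch of that rank identity (not the authors' evaluation pipeline):

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney rank identity: the probability that a
    random positive case outscores a random negative case (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: 3 of the 4 positive/negative pairs are correctly ordered
example = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

The quadratic pairwise loop is fine for illustration; production implementations use a sort-based O(n log n) formulation.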
9.
J Clin Monit Comput ; 35(3): 607-616, 2021 05.
Article in English | MEDLINE | ID: mdl-32405801

ABSTRACT

Critical patient care information is often omitted or misunderstood during handoffs, which can lead to inefficiencies, delays, and sometimes patient harm. We implemented an aviation-style post-anesthesia care unit (PACU) handoff checklist displayed on a tablet computer to improve PACU handoff communication. We developed an aviation-style computerized checklist system for use in procedural rooms and adapted it for tablet computers to facilitate the performance of PACU handoffs. We then compared the proportion of PACU handoff items communicated before and after the implementation of the PACU handoff checklist on a tablet computer. A trained observer recorded the proportion of PACU handoff information items communicated, any resistance during the performance of the checklist, the type of provider participating in the handoff, and the time required to perform the handoff. We also obtained these patient outcomes: PACU length of stay, respiratory events, post-operative nausea and vomiting, and pain. A total of 209 PACU handoffs were observed before and 210 after the implementation of the tablet-based PACU handoff checklist. The average proportion of PACU handoff items communicated increased from 49.3% (95% CI 47.7-51.0%) before checklist implementation to 72.0% (95% CI 69.2-74.9%) after checklist implementation (p < 0.001). A tablet-based aviation-style handoff checklist resulted in an increase in PACU handoff items communicated, but did not have an effect on patient outcomes.


Subject(s)
Anesthesia , Aviation , Patient Handoff , Checklist , Communication , Computers, Handheld , Humans
10.
PLoS One ; 15(7): e0236833, 2020.
Article in English | MEDLINE | ID: mdl-32735604

ABSTRACT

Opioids play a critical role in acute postoperative pain management. Our objective was to develop machine learning models to predict postoperative opioid requirements in patients undergoing ambulatory surgery. To develop the models, we used a perioperative dataset of 13,700 patients (≥18 years) undergoing ambulatory surgery between 2016 and 2018. The data, comprising patient, procedure, and provider factors that could influence postoperative pain and opioid requirements, were randomly split into training (80%) and validation (20%) datasets. Machine learning models of different classes were developed to predict categorized levels of postoperative opioid requirements using the training dataset and then evaluated on the validation dataset. Prediction accuracy was used to differentiate model performances. The five types of models that were developed returned the following accuracies at two different stages of surgery: 1) Prior to surgery-Multinomial Logistic Regression: 71%, Naïve Bayes: 67%, Neural Network: 30%, Random Forest: 72%, Extreme Gradient Boosting: 71% and 2) End of surgery-Multinomial Logistic Regression: 71%, Naïve Bayes: 63%, Neural Network: 32%, Random Forest: 72%, Extreme Gradient Boosting: 70%. Analyzing the sensitivities of the best performing Random Forest model showed that lower opioid requirements are predicted with better accuracy (89%) than higher opioid requirements (43%). Feature importance (% relative importance) of model predictions showed that the type of procedure (15.4%), medical history (12.9%), and procedure duration (12.0%) were the top three features contributing to model predictions. Overall, the contributions of patient and procedure features toward model predictions were 65% and 35%, respectively. Machine learning models could be used to predict postoperative opioid requirements in ambulatory surgery patients and could potentially assist in better management of their postoperative acute pain.


Subject(s)
Ambulatory Surgical Procedures , Analgesics, Opioid/therapeutic use , Machine Learning , Pain, Postoperative/drug therapy , Aged , Female , Humans , Male , Middle Aged , Models, Theoretical , Pain Management/methods
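The per-class sensitivities quoted above (89% for low vs 43% for high opioid requirements) are per-class recall values. A minimal sketch of that computation on categorical predictions (illustrative labels, not the study's data):

```python
def per_class_sensitivity(y_true, y_pred):
    """Per-class recall (sensitivity): of the cases truly in each class,
    the fraction the model predicted correctly."""
    out = {}
    for c in sorted(set(y_true)):
        predictions_for_c = [p for t, p in zip(y_true, y_pred) if t == c]
        out[c] = sum(1 for p in predictions_for_c if p == c) / len(predictions_for_c)
    return out

# Toy example: 2 of 3 true "low" cases and 1 of 2 true "high" cases correct
recall = per_class_sensitivity(
    ["low", "low", "low", "high", "high"],
    ["low", "low", "high", "high", "low"],
)
```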
11.
Nat Mach Intell ; 2(1): 56-67, 2020 Jan.
Article in English | MEDLINE | ID: mdl-32607472

ABSTRACT

Tree-based machine learning models such as random forests, decision trees, and gradient boosted trees are popular non-linear predictive models, yet comparatively little attention has been paid to explaining their predictions. Here, we improve the interpretability of tree-based models through three main contributions: 1) The first polynomial time algorithm to compute optimal explanations based on game theory. 2) A new type of explanation that directly measures local feature interaction effects. 3) A new set of tools for understanding global model structure based on combining many local explanations of each prediction. We apply these tools to three medical machine learning problems and show how combining many high-quality local explanations allows us to represent global structure while retaining local faithfulness to the original model. These tools enable us to i) identify high magnitude but low frequency non-linear mortality risk factors in the US population, ii) highlight distinct population sub-groups with shared risk characteristics, iii) identify non-linear interaction effects among risk factors for chronic kidney disease, and iv) monitor a machine learning model deployed in a hospital by identifying which features are degrading the model's performance over time. Given the popularity of tree-based machine learning models, these improvements to their interpretability have implications across a broad set of domains.
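The "optimal explanations based on game theory" described here are Shapley values. For a model with only two features the exact computation is small enough to write out; this toy value function and brute-force average over coalitions is an illustration of the concept, not the paper's polynomial-time tree algorithm:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value_fn):
    """Exact Shapley values: each player's marginal contribution to
    value_fn, weighted over all coalitions of the other players."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for size in range(len(others) + 1):
            for coalition in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_p = value_fn(frozenset(coalition) | {p})
                without_p = value_fn(frozenset(coalition))
                phi[p] += weight * (with_p - without_p)
    return phi

# Toy two-feature "model": its output for each subset of known features
outputs = {frozenset(): 0, frozenset({"a"}): 10,
           frozenset({"b"}): 20, frozenset({"a", "b"}): 50}

def toy_model(subset):
    return outputs[frozenset(subset)]

phi = shapley_values(["a", "b"], toy_model)  # attributions sum to toy_model({"a","b"})
```

The brute force is exponential in the number of features; the paper's contribution is computing the same quantities for tree ensembles in polynomial time.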

13.
Anesth Analg ; 130(5): 1201-1210, 2020 05.
Article in English | MEDLINE | ID: mdl-32287127

ABSTRACT

BACKGROUND: Predictive analytics systems may improve perioperative care by enhancing preparation for, recognition of, and response to high-risk clinical events. Bradycardia is a fairly common and unpredictable clinical event with many causes; it may be benign or become associated with hypotension requiring aggressive treatment. Our aim was to build models to predict the occurrence of clinically significant intraoperative bradycardia at 3 time points during an operative course by utilizing available preoperative electronic medical record and intraoperative anesthesia information management system data. METHODS: The analyzed data include 62,182 scheduled noncardiac procedures performed at the University of Washington Medical Center between 2012 and 2017. The clinical event was defined as severe bradycardia (heart rate <50 beats per minute) followed by hypotension (mean arterial pressure <55 mm Hg) within a 10-minute window. We developed models to predict the presence of at least 1 event following 3 time points: induction of anesthesia (TP1), start of the procedure (TP2), and 30 minutes after the start of the procedure (TP3). Predictor variables were based on data available before each time point and included preoperative patient and procedure data (TP1), followed by intraoperative minute-to-minute patient monitor, ventilator, intravenous fluid, infusion, and bolus medication data (TP2 and TP3). Machine-learning and logistic regression models were developed, and their predictive abilities were evaluated using the area under the ROC curve (AUC). The contributions of the input variables to the models were evaluated. RESULTS: The number of events was 3498 (5.6%) after TP1, 2404 (3.9%) after TP2, and 1066 (1.7%) after TP3. Heart rate was the strongest predictor for events after TP1. Occurrence of a previous event, mean heart rate, and mean pulse rates before TP2 were the strongest predictors for events after TP2. 
Occurrence of a previous event, mean heart rate, mean pulse rates before TP2 (and their interaction), and 15-minute slopes in heart rate and blood pressure before TP2 were the strongest predictors for events after TP3. The best performing machine-learning models including all cases produced an AUC of 0.81 (TP1), 0.87 (TP2), and 0.89 (TP3) with positive predictive values of 0.30, 0.29, and 0.15 at 95% specificity, respectively. CONCLUSIONS: We developed models to predict unstable bradycardia leveraging preoperative and real-time intraoperative data. Our study demonstrates how predictive models may be utilized to predict clinical events across multiple time intervals, with a future goal of developing real-time, intraoperative, decision support.


Subject(s)
Bradycardia/diagnosis , Hypotension/diagnosis , Machine Learning/trends , Monitoring, Intraoperative/trends , Bradycardia/physiopathology , Forecasting , Humans , Hypotension/physiopathology , Monitoring, Intraoperative/methods , Predictive Value of Tests , Retrospective Studies
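The compound event definition above (heart rate <50 bpm followed by MAP <55 mm Hg within a 10-minute window) can be labeled directly on minute-by-minute data. A minimal sketch, with hypothetical variable names and toy traces:

```python
def bradycardia_events(heart_rate, map_values, window=10):
    """Return the minute indices where HR < 50 bpm is followed, within
    `window` minutes, by MAP < 55 mm Hg (both sampled per minute)."""
    events = []
    for i, hr in enumerate(heart_rate):
        if hr < 50:
            # Look at MAP from this minute up to `window` minutes ahead
            horizon = map_values[i : i + window + 1]
            if any(m < 55 for m in horizon):
                events.append(i)
    return events

# Bradycardia at minute 5, followed by a MAP dip to 50 at minute 8
hr_trace = [60] * 5 + [45] + [60] * 10
map_trace = [70] * 8 + [50] + [70] * 7
events = bradycardia_events(hr_trace, map_trace)
```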
14.
Br J Anaesth ; 124(6): 712-717, 2020 06.
Article in English | MEDLINE | ID: mdl-32228867

ABSTRACT

BACKGROUND: Train-of-four twitch monitoring can be performed using palpation of thumb movement, or by the use of a more objective quantitative monitor, such as mechanomyography, acceleromyography, or electromyography. The relative performance of palpation and quantitative monitoring for determination of the train-of-four ratio has been studied extensively, but the relative performance of palpation and quantitative monitors for counting train-of-four twitch responses has not been completely described. METHODS: We compared train-of-four counts by palpation to mechanomyography, acceleromyography (StimPod™), and electromyography (TwitchView Monitor™) in anaesthetised patients using 1691 pairs of measurements obtained from 46 subjects. RESULTS: There was substantial agreement between palpation and electromyography (kappa = 0.80), mechanomyography (kappa = 0.67), or acceleromyography (kappa = 0.63). Electromyography with TwitchView and mechanomyography most closely resembled palpation, whereas acceleromyography with StimPod often underestimated train-of-four count. With palpation as the comparator, acceleromyography was more likely to measure a lower train-of-four count, with 36% of counts less than palpation, and 3% more than palpation. For mechanomyography, 31% of train-of-four counts were greater than palpation, and 9% were less. For electromyography, 15% of train-of-four counts were greater than palpation, and 12% were less. The agreement between acceleromyography and electromyography was fair (kappa = 0.38). For acceleromyography, 39% of train-of-four counts were less than electromyography, and 5% were more. CONCLUSIONS: Acceleromyography with the StimPod frequently underestimated train-of-four count in comparison with electromyography with TwitchView.


Subject(s)
Accelerometry/methods , Myography/methods , Palpation/methods , Adult , Aged , Electromyography/methods , Female , Humans , Male , Middle Aged , Reproducibility of Results
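The kappa statistics above measure agreement between two monitoring methods corrected for chance agreement. A minimal pure-Python sketch of Cohen's kappa on paired categorical measurements (toy data, not the study's):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters (or monitors)
    corrected for the agreement expected by chance from each rater's
    marginal category frequencies."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(1 for x, y in zip(rater_a, rater_b) if x == y) / n
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Toy paired train-of-four counts from two monitors: 3 of 4 agree
example = cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0])  # 0.5
```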
15.
Anesth Analg ; 130(2): 382-390, 2020 02.
Article in English | MEDLINE | ID: mdl-31306243

ABSTRACT

BACKGROUND: Many hospitals have implemented surgical safety checklists based on the World Health Organization surgical safety checklist, which has been associated with improved outcomes. However, the execution of these checklists is frequently incomplete. We reasoned that an aviation-style computerized checklist, displayed on a large, centrally located screen and operated by the anesthesia provider, would improve the performance of the surgical safety checklist. METHODS: We performed a prospective before-and-after observational study to evaluate the effect of a computerized surgical safety checklist system on checklist performance. We created checklist software and translated our 4-part surgical safety checklist from a wall poster into an aviation-style computerized format displayed on a large, centrally located screen and operated by the anesthesia provider. Direct observers recorded performance of the first part of the surgical safety checklist, initiated before anesthetic induction, including completion of each checklist item, provider participation and distraction level, resistance to use of the checklist, and the time required for checklist completion, before and after checklist system implementation. We compared trends in the proportion of cases with 100% surgical safety checklist completion over time between the pre- and postintervention periods and assessed for a jump at the start of the intervention using a segmented logistic regression model while controlling for potential confounding variables. RESULTS: A total of 671 cases were observed before and 547 cases after implementation of the computerized surgical safety checklist system. The proportion of cases in which all items of the surgical safety checklist were completed increased significantly, from 2.1% to 86.3%, after the computerized checklist system implementation (P < .001). 
Before implementation, 488 of 671 (72.7%) cases had <75% of checklist items completed, whereas after implementation only 3 of 547 (0.5%) cases did. CONCLUSIONS: The implementation of a computerized surgical safety checklist system resulted in an improvement in checklist performance.


Subject(s)
Anesthesia/standards , Checklist/standards , Clinical Competence/standards , Health Personnel/standards , Surgical Procedures, Operative/standards , Therapy, Computer-Assisted/standards , Adult , Aged , Anesthesia/methods , Aviation/standards , Checklist/methods , Female , Humans , Male , Middle Aged , Operating Rooms/methods , Operating Rooms/standards , Prospective Studies , Surgical Procedures, Operative/methods , Therapy, Computer-Assisted/methods
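The headline jump from 2.1% to 86.3% completion is large enough that even a simple unadjusted two-proportion comparison is overwhelming; a minimal sketch of that comparison (the study itself used segmented logistic regression with confounder adjustment, which this does not reproduce):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Normal-approximation z statistic and two-sided p-value for
    comparing two independent proportions (e.g. pre- vs post-checklist
    completion rates)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Reported completion proportions and case counts from the abstract
z_stat, p_val = two_proportion_z(0.021, 671, 0.863, 547)
```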
16.
Anesthesiology ; 132(3): 461-475, 2020 03.
Article in English | MEDLINE | ID: mdl-31794513

ABSTRACT

BACKGROUND: Despite the significant healthcare impact of acute kidney injury, little is known regarding prevention. Single-center data have implicated hypotension in developing postoperative acute kidney injury. The generalizability of this finding and the interaction between hypotension and baseline patient disease burden remain unknown. The authors sought to determine whether the association between intraoperative hypotension and acute kidney injury varies by preoperative risk. METHODS: Major noncardiac surgical procedures performed on adult patients across eight hospitals between 2008 and 2015 were reviewed. Derivation and validation cohorts were used, and cases were stratified into preoperative risk quartiles based upon comorbidities and surgical procedure. After preoperative risk stratification, associations between intraoperative hypotension and acute kidney injury were analyzed. Hypotension was defined as the lowest mean arterial pressure range achieved for more than 10 min; ranges were defined as absolute (mmHg) or relative (percentage of decrease from baseline). RESULTS: Among 138,021 cases reviewed, 12,431 (9.0%) developed postoperative acute kidney injury. Major risk factors included anemia, estimated glomerular filtration rate, surgery type, American Society of Anesthesiologists Physical Status, and expected anesthesia duration. Using such factors and others for risk stratification, patients with low baseline risk demonstrated no associations between intraoperative hypotension and acute kidney injury. Patients with medium risk demonstrated associations between severe-range intraoperative hypotension (mean arterial pressure less than 50 mmHg) and acute kidney injury (adjusted odds ratio, 2.62; 95% CI, 1.65 to 4.16 in validation cohort). In patients with the highest risk, mild hypotension ranges (mean arterial pressure 55 to 59 mmHg) were associated with acute kidney injury (adjusted odds ratio, 1.34; 95% CI, 1.16 to 1.56). 
Compared with absolute hypotension, relative hypotension demonstrated weak associations with acute kidney injury not replicable in the validation cohort. CONCLUSIONS: Adult patients undergoing noncardiac surgery demonstrate varying associations with distinct levels of hypotension when stratified by preoperative risk factors. Specific levels of absolute hypotension, but not relative hypotension, are an important independent risk factor for acute kidney injury.


Subject(s)
Acute Kidney Injury/complications , Acute Kidney Injury/epidemiology , Hypotension/complications , Hypotension/epidemiology , Postoperative Complications/epidemiology , Adolescent , Adult , Aged , Aged, 80 and over , Anemia/complications , Arterial Pressure , Cohort Studies , Female , Humans , Intraoperative Complications/epidemiology , Male , Middle Aged , Preoperative Period , Retrospective Studies , Risk Assessment , Risk Factors , Treatment Outcome , Young Adult
18.
Methods Inf Med ; 58(2-03): 79-85, 2019 09.
Article in English | MEDLINE | ID: mdl-31398727

ABSTRACT

BACKGROUND: Hyperglycemia, or high blood glucose, during surgery is associated with poor postoperative outcome. Knowing in advance which patients may develop hyperglycemia allows optimal assignment of resources and earlier initiation of a glucose management plan. OBJECTIVE: To develop predictive models to estimate peak glucose levels in surgical patients and to implement the best performing model as a point-of-care clinical tool to assist the surgical team to optimally manage glucose levels. METHODS: Using a large perioperative dataset (6,579 patients) of patient- and surgery-specific parameters, we developed and validated linear regression and machine learning models (random forest, extreme gradient boosting [XGBoost], classification and regression trees [CART], and neural network) to predict peak glucose levels during surgery. The model performances were compared in terms of mean absolute percentage error (MAPE), logarithm of the ratio of the predicted to actual value (log ratio), median prediction error, and interquartile error range. The best performing model was implemented as part of a web-based application for optimal decision-making toward glucose management during surgery. RESULTS: The accuracies of the machine learning models were higher (MAPE = 17%, log ratio = 0.029 for XGBoost) than that of the linear regression model (MAPE = 22%, log ratio = 0.041). The XGBoost model had the smallest median prediction error (5.4 mg/dL) and the narrowest interquartile error range (-17 to 24 mg/dL) of the models compared. The best performing model, XGBoost, was implemented as a web application, Hyper-G, which perioperative providers can use at the point of care to estimate peak glucose levels during surgery. CONCLUSIONS: Machine learning models are able to accurately predict peak glucose levels during surgery. 
Implementation of such a model as a web-based application can facilitate optimal decision-making and advance planning of glucose management strategies.
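The evaluation metrics named in the abstract (MAPE and the predicted-to-actual log ratio) can be sketched in a few lines. The glucose values below are hypothetical, and the choice of a base-10 logarithm and a median aggregate for the log ratio are assumptions; the abstract does not specify either.

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def median_log_ratio(actual, predicted):
    """Median log10 of the predicted/actual ratio (0 = perfectly calibrated).
    Median and base 10 are assumptions, not stated in the abstract."""
    ratios = sorted(math.log10(p / a) for a, p in zip(actual, predicted))
    n = len(ratios)
    mid = n // 2
    return ratios[mid] if n % 2 else 0.5 * (ratios[mid - 1] + ratios[mid])

# Hypothetical peak-glucose values (mg/dL), for illustration only
actual    = [140, 180, 210, 160, 250]
predicted = [150, 170, 230, 150, 240]
print(round(mape(actual, predicted), 1))  # → 6.5
```

Lower values of both metrics indicate better agreement between predicted and observed peak glucose; the paper's reported MAPE of 17% for XGBoost would correspond to an average relative error of about one-sixth of the observed peak.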


Subject(s)
Artificial Intelligence , Blood Glucose/analysis , Decision Making , Surgical Procedures, Operative , Data Analysis , Female , Humans , Male , Models, Theoretical , User-Computer Interface
19.
J Am Coll Surg ; 229(4): 346-354.e3, 2019 10.
Article in English | MEDLINE | ID: mdl-31310851

ABSTRACT

BACKGROUND: Accurate estimation of operative case-time duration is critical for optimizing operating room use. Current estimates are inaccurate, and earlier models include data not available at the time of scheduling. Our objective was to develop statistical models on a large retrospective data set to improve estimation of case-time duration relative to current standards. STUDY DESIGN: We developed models to predict case-time duration using linear regression and supervised machine learning. For each of these approaches, we generated an all-inclusive model, service-specific models, and surgeon-specific models; in the latter 2 approaches, individual models were created for each surgical service and surgeon, respectively. Our data set included 46,986 scheduled operations performed at a large academic medical center from January 2014 to December 2017, with 80% used for training and 20% for testing/validation. Predictions from each model were compared with our institutional standard of using average historic procedure times and surgeon estimates. Models were evaluated on accuracy, overage (case duration > predicted + 10%), underage (case duration < predicted - 10%), and the proportion of cases falling within the 10% tolerance threshold. RESULTS: The machine learning algorithm had the highest predictive capability. The surgeon-specific model was superior to the service-specific model, with higher accuracy, lower percentages of overage and underage, and a higher percentage of cases within the 10% threshold. The ability to predict cases within 10% improved from 32% with our institutional standard to 39% with the machine learning surgeon-specific model. CONCLUSIONS: Our study is a notable advancement toward statistical modeling of case-time duration across all surgical departments in a large tertiary medical center.
Machine learning approaches can improve case-duration estimates, enabling better operating room scheduling, greater efficiency, and reduced costs.
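The overage/underage/within-10% evaluation described in the abstract amounts to a simple banding rule around the predicted duration. A minimal sketch follows, with hypothetical case durations in minutes; the exact handling of boundary ties is an assumption.

```python
def classify_case(actual_min, predicted_min, tol=0.10):
    """Label a case relative to a +/-10% tolerance band around the
    predicted duration (minutes), mirroring the abstract's definitions:
    overage  -> actual > predicted + 10%
    underage -> actual < predicted - 10%
    within   -> otherwise (boundary convention assumed)."""
    if actual_min > predicted_min * (1 + tol):
        return "overage"
    if actual_min < predicted_min * (1 - tol):
        return "underage"
    return "within"

# Hypothetical scheduled cases: (actual, predicted) durations in minutes
cases = [(120, 110), (95, 100), (150, 120), (60, 75)]
labels = [classify_case(a, p) for a, p in cases]
print(labels)  # → ['within', 'within', 'overage', 'underage']
```

Under this rule, the study's headline metric is simply the fraction of test cases labeled "within", which rose from 32% (institutional standard) to 39% (surgeon-specific machine learning model).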


Subject(s)
Efficiency, Organizational , Machine Learning , Models, Organizational , Operating Rooms/organization & administration , Operative Time , Adolescent , Adult , Aged , Aged, 80 and over , Algorithms , Female , Humans , Linear Models , Male , Middle Aged , Retrospective Studies , Young Adult
20.
Otolaryngol Head Neck Surg ; 161(5): 787-795, 2019 11.
Article in English | MEDLINE | ID: mdl-31335269

ABSTRACT

OBJECTIVE: To examine whether attending surgeon presence at the preinduction briefing is associated with a shorter time to incision. STUDY DESIGN: Retrospective cohort study and survey. SETTING: Tertiary academic medical center. SUBJECTS AND METHODS: A retrospective cohort study was conducted of 22,857 operations by 141 attending surgeons across 12 specialties between August 3, 2016, and June 21, 2018. The independent variable was attending surgeon presence at the preinduction briefing. Linear regression models compared time from room entry to incision overall, by service line, and by surgeon. We hypothesized a shorter time to incision when the attending surgeon was present, with a larger effect for cases requiring complex surgical equipment or positioning. A survey was administered to evaluate attending surgeons' perceptions of the briefing, with a response rate of 68% (64 of 94 attending surgeons). RESULTS: Cases for which the attending surgeon was present at the preinduction briefing had a statistically significant yet operationally minor reduction in mean time to incision compared with cases in which the attending surgeon was absent. After covariate adjustment, attending presence was associated with an efficiency gain in mean time to incision of 1.8 ± 0.5 minutes (mean ± SD; P < .001). There were no statistically significant differences in the subgroups of complex surgical equipment and complex positioning, or in a secondary analysis comparing service lines. The surgeon was the strongest confounding variable. Survey results demonstrated mild support: 55% of attending surgeons highly prioritized attending the preinduction briefing. CONCLUSION: Attending surgeon presence at the preinduction briefing has only a minor effect on efficiency as measured by time to incision.
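The covariate-adjusted linear regression described in the abstract can be illustrated with ordinary least squares on a binary presence indicator plus a covariate. The data below are hypothetical and noise-free (so the fitted coefficients are exact), not the study's, and the single `complex_case` covariate stands in for the study's full adjustment set.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations, solved with
    Gaussian elimination (partial pivoting). X: rows with a leading 1
    for the intercept; y: outcomes. Returns the coefficient vector."""
    k = len(X[0])
    # Build X^T X and X^T y
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Forward elimination
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

# Hypothetical data: columns are [intercept, attending_present, complex_case],
# outcome is minutes from room entry to incision
X = [[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
y = [30.0, 32.0, 28.2, 30.2]
intercept, present_effect, complex_effect = ols(X, y)
print(round(present_effect, 1))  # → -1.8  (adjusted effect of attending presence)
```

A negative coefficient on the presence indicator corresponds to the study's finding: a reduction in time to incision on the order of a couple of minutes, statistically detectable in a large sample yet operationally minor.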


Subject(s)
Efficiency , Operating Rooms , Otorhinolaryngologic Surgical Procedures , Physician's Role , Preoperative Period , Adolescent , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Operative Time , Retrospective Studies , Surgeons , Young Adult