ABSTRACT
Commercial aviation practices including the role of the pilot monitoring, the sterile flight deck rule, and computerised checklists have direct applicability to anaesthesia care. The pilot monitoring performs specific tasks that complement the pilot flying who is directly controlling the aircraft flight path. The anaesthesia care team, with two providers, can be organised in a manner that is analogous to the two-pilot flight deck. However, solo providers, such as solo pilots, can emulate the pilot monitoring role by reading checklists aloud, and utilise non-anaesthesia providers to fulfil some of the functions of pilot monitoring. The sterile flight deck rule states that flight crew members should not engage in any non-essential or distracting activity during critical phases of flight. The application of the sterile flight deck rule in anaesthesia practice entails deliberately minimising distractions during critical phases of anaesthesia care. Checklists are commonly used in the operating room, especially the World Health Organization surgical safety checklist. However, the use of aviation-style computerised checklists offers additional benefits. Here we discuss how these commercial aviation practices may be applied in the operating room.
Subject(s)
Anesthesia , Anesthesiology , Aviation , Humans , Checklist , Operating Rooms , Aircraft
ABSTRACT
Machine Learning (ML) models have been developed to predict perioperative clinical parameters. The objective of this study was to determine if ML models can serve as decision aids to improve anesthesiologists' prediction of peak intraoperative glucose values and postoperative opioid requirements. A web-based tool was used to present actual surgical case and patient information to 10 practicing anesthesiologists. They were asked to predict peak glucose levels and postoperative opioid requirements for 100 surgical patients with and without presenting ML model estimations of peak glucose and opioid requirements. The accuracies of the anesthesiologists' estimates with and without ML estimates as reference were compared. A questionnaire was also sent to the participating anesthesiologists to obtain their feedback on ML decision support. The accuracy of peak glucose level estimates by the anesthesiologists increased from 79.0 ± 13.7% without ML assistance to 84.7 ± 11.5% (p < 0.001) when ML estimates were provided as reference. The accuracy of opioid requirement estimates increased from 18% without ML assistance to 42% (p < 0.001) when ML estimates were provided as reference. When ML estimates were provided, predictions of peak glucose improved for 8 out of the 10 anesthesiologists, while predictions of opioid requirements improved for 7 of the 10 anesthesiologists. Feedback questionnaire responses revealed that the anesthesiologists primarily used the ML estimates as a reference to modify their clinical judgement. ML models can improve anesthesiologists' estimation of clinical parameters. ML predictions primarily served as reference information that modified an anesthesiologist's clinical estimate.
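The abstract does not specify how estimate "accuracy" was scored; the sketch below assumes a tolerance-band definition (an estimate counts as accurate if it falls within ±20% of the actual value) and a paired comparison across the 10 raters. The variable names, tolerance, test, and simulated data are illustrative assumptions, not the study's actual analysis.

```python
# Hedged sketch: scoring clinician estimates against actual values with and
# without an ML reference. The +/-20% tolerance band, the paired t-test, and
# the simulated numbers are assumptions for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_raters, n_cases = 10, 100
actual = rng.normal(160, 30, n_cases)                      # e.g., peak glucose (mg/dL)
unaided = actual + rng.normal(0, 35, (n_raters, n_cases))  # estimates without ML reference
aided = actual + rng.normal(0, 20, (n_raters, n_cases))    # estimates with ML reference

def accuracy(estimates, actual, tol=0.20):
    """Fraction of a rater's estimates within +/-tol of the actual value."""
    return np.mean(np.abs(estimates - actual) <= tol * actual, axis=-1)

acc_unaided = accuracy(unaided, actual)          # one accuracy value per rater
acc_aided = accuracy(aided, actual)
t, p = stats.ttest_rel(acc_aided, acc_unaided)   # paired across the 10 raters
print(f"accuracy {acc_unaided.mean():.1%} -> {acc_aided.mean():.1%} (p = {p:.4f})")
```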
Subject(s)
Analgesics, Opioid , Anesthesiologists , Humans , Analgesics, Opioid/therapeutic use , Machine Learning , Glucose , Decision Support Techniques
ABSTRACT
Critical patient care information is often omitted or misunderstood during handoffs, which can lead to inefficiencies, delays, and sometimes patient harm. We implemented an aviation-style post-anesthesia care unit (PACU) handoff checklist displayed on a tablet computer to improve PACU handoff communication. We developed an aviation-style computerized checklist system for use in procedural rooms and adapted it for tablet computers to facilitate the performance of PACU handoffs. We then compared the proportion of PACU handoff items communicated before and after the implementation of the PACU handoff checklist on a tablet computer. A trained observer recorded the proportion of PACU handoff information items communicated, any resistance during the performance of the checklist, the type of provider participating in the handoff, and the time required to perform the handoff. We also obtained these patient outcomes: PACU length of stay, respiratory events, post-operative nausea and vomiting, and pain. A total of 209 PACU handoffs were observed before and 210 after the implementation of the tablet-based PACU handoff checklist. The average proportion of PACU handoff items communicated increased from 49.3% (95% CI 47.7-51.0%) before checklist implementation to 72.0% (95% CI 69.2-74.9%) after checklist implementation (p < 0.001). A tablet-based aviation-style handoff checklist resulted in an increase in PACU handoff items communicated, but did not have an effect on patient outcomes.
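One way to reproduce the kind of before/after comparison reported above is to treat each handoff's proportion of items communicated as an observation and compare group means. The sketch below is illustrative only: the total item count, the simulated data, and the use of a Welch t-test are assumptions, not the study's exact analysis.

```python
# Hedged sketch: comparing the mean per-handoff proportion of checklist items
# communicated before vs. after implementation. Simulated counts and the
# Welch t-test are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
items_total = 20                                              # assumed number of handoff items
before = rng.binomial(items_total, 0.49, 209) / items_total   # 209 observed handoffs
after = rng.binomial(items_total, 0.72, 210) / items_total    # 210 observed handoffs

def mean_ci(x, z=1.96):
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(len(x))
    return m, (m - z * se, m + z * se)

for label, grp in [("before", before), ("after", after)]:
    m, ci = mean_ci(grp)
    print(f"{label}: {m:.1%} (95% CI {ci[0]:.1%}-{ci[1]:.1%})")
t, p = stats.ttest_ind(after, before, equal_var=False)
print(f"Welch t-test p = {p:.3g}")
```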
Subject(s)
Anesthesia , Aviation , Patient Handoff , Checklist , Communication , Computers, Handheld , Humans
ABSTRACT
BACKGROUND: Many hospitals have implemented surgical safety checklists based on the World Health Organization surgical safety checklist, which was associated with improved outcomes. However, the execution of the checklists is frequently incomplete. We reasoned that an aviation-style computerized checklist displayed on a large, centrally located screen and operated by the anesthesia provider would improve the performance of the surgical safety checklist. METHODS: We performed a prospective before-and-after observational study to evaluate the effect of a computerized surgical safety checklist system on checklist performance. We created checklist software and translated our 4-part surgical safety checklist from a wall poster into an aviation-style computerized format displayed on a large, centrally located screen and operated by the anesthesia provider. Direct observers recorded performance of the first part of the surgical safety checklist that was initiated before anesthetic induction, including completion of each checklist item, provider participation and distraction level, resistance to use of the checklist, and the time required for checklist completion before and after checklist system implementation. We compared trends in the proportions of cases with 100% surgical safety checklist completion over time between the pre- and postintervention periods and assessed for a jump at the start of the intervention using a segmented logistic regression model while controlling for potential confounding variables. RESULTS: A total of 671 cases were observed before and 547 cases were observed after implementation of the computerized surgical safety checklist system. The proportion of cases in which all of the items of the surgical safety checklist were completed increased significantly from 2.1% to 86.3% after the computerized checklist system implementation (P < .001). Before computerized checklist system implementation, 488 of 671 (72.7%) cases had <75% of checklist items completed, whereas after implementation, only 3 of 547 (0.5%) cases had <75% of checklist items completed. CONCLUSIONS: The implementation of a computerized surgical safety checklist system resulted in an improvement in checklist performance.
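A segmented (interrupted time series) logistic regression of the kind described above models the pre-existing trend, the level jump at implementation, and any post-implementation change in slope. The sketch below is a minimal illustration: the column names, week-based time scale, and simulated data are hypothetical, and confounders would enter as additional covariates.

```python
# Hedged sketch of a segmented logistic regression on a binary outcome
# (all checklist items completed, 0/1). Variable names and data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1218                                        # 671 pre + 547 post observed cases
week = rng.integers(0, 52, n)                   # study week of each case
post = (week >= 26).astype(int)                 # intervention assumed to start at week 26
weeks_since = np.where(post == 1, week - 26, 0)
lp = -3.5 + 0.01 * week + 4.0 * post + 0.02 * weeks_since
complete = rng.binomial(1, 1 / (1 + np.exp(-lp)))

df = pd.DataFrame({"complete": complete, "week": week,
                   "post": post, "weeks_since": weeks_since})
# 'week' captures the preintervention trend, 'post' the jump at implementation,
# and 'weeks_since' any change in slope after implementation.
model = smf.logit("complete ~ week + post + weeks_since", data=df).fit(disp=False)
print(np.exp(model.params))                     # odds ratios for trend, jump, and slope change
```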
Subject(s)
Anesthesia/standards , Checklist/standards , Clinical Competence/standards , Health Personnel/standards , Surgical Procedures, Operative/standards , Therapy, Computer-Assisted/standards , Adult , Aged , Anesthesia/methods , Aviation/standards , Checklist/methods , Female , Humans , Male , Middle Aged , Operating Rooms/methods , Operating Rooms/standards , Prospective Studies , Surgical Procedures, Operative/methods , Therapy, Computer-Assisted/methods
ABSTRACT
BACKGROUND: Predictive analytics systems may improve perioperative care by enhancing preparation for, recognition of, and response to high-risk clinical events. Bradycardia is a fairly common and unpredictable clinical event with many causes; it may be benign or become associated with hypotension requiring aggressive treatment. Our aim was to build models to predict the occurrence of clinically significant intraoperative bradycardia at 3 time points during an operative course by utilizing available preoperative electronic medical record and intraoperative anesthesia information management system data. METHODS: The analyzed data include 62,182 scheduled noncardiac procedures performed at the University of Washington Medical Center between 2012 and 2017. The clinical event was defined as severe bradycardia (heart rate <50 beats per minute) followed by hypotension (mean arterial pressure <55 mm Hg) within a 10-minute window. We developed models to predict the presence of at least 1 event following 3 time points: induction of anesthesia (TP1), start of the procedure (TP2), and 30 minutes after the start of the procedure (TP3). Predictor variables were based on data available before each time point and included preoperative patient and procedure data (TP1), followed by intraoperative minute-to-minute patient monitor, ventilator, intravenous fluid, infusion, and bolus medication data (TP2 and TP3). Machine-learning and logistic regression models were developed, and their predictive abilities were evaluated using the area under the receiver operating characteristic curve (AUC). The contributions of the input variables to the models were evaluated. RESULTS: The number of events was 3498 (5.6%) after TP1, 2404 (3.9%) after TP2, and 1066 (1.7%) after TP3. Heart rate was the strongest predictor for events after TP1. Occurrence of a previous event, mean heart rate, and mean pulse rates before TP2 were the strongest predictors for events after TP2. Occurrence of a previous event, mean heart rate, mean pulse rates before TP2 (and their interaction), and 15-minute slopes in heart rate and blood pressure before TP2 were the strongest predictors for events after TP3. The best performing machine-learning models including all cases produced an AUC of 0.81 (TP1), 0.87 (TP2), and 0.89 (TP3) with positive predictive values of 0.30, 0.29, and 0.15 at 95% specificity, respectively. CONCLUSIONS: We developed models to predict unstable bradycardia leveraging preoperative and real-time intraoperative data. Our study demonstrates how predictive models may be utilized to predict clinical events across multiple time intervals, with a future goal of developing real-time, intraoperative decision support.
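The performance metrics reported above (AUC and positive predictive value at a fixed 95% specificity) can be computed from a model's predicted risk scores as in the sketch below. The scores are simulated and the event rate is only loosely matched to the abstract; the study's actual models and features are not reproduced here.

```python
# Hedged sketch: evaluating a risk model with AUC and the positive predictive
# value (PPV) at 95% specificity. Scores and labels are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)
y = rng.binomial(1, 0.056, 20000)                # ~5.6% event rate, as after TP1
score = rng.normal(0, 1, y.size) + 1.5 * y       # toy predicted risk score

auc = roc_auc_score(y, score)
fpr, tpr, thresholds = roc_curve(y, score)
idx = np.where(fpr <= 0.05)[0].max()             # highest sensitivity with specificity >= 95%
pred = score >= thresholds[idx]
tp, fp = np.sum(pred & (y == 1)), np.sum(pred & (y == 0))
print(f"AUC = {auc:.2f}, PPV at 95% specificity = {tp / (tp + fp):.2f}")
```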
Subject(s)
Bradycardia/diagnosis , Hypotension/diagnosis , Machine Learning/trends , Monitoring, Intraoperative/trends , Bradycardia/physiopathology , Forecasting , Humans , Hypotension/physiopathology , Monitoring, Intraoperative/methods , Predictive Value of Tests , Retrospective Studies
ABSTRACT
BACKGROUND: Climacteric symptoms are a variety of disturbing complaints occurring during the menopausal transition, many of which may be influenced by hormonal abnormalities other than those related to sex steroids. AIM OF THE STUDY: In this study, we investigated the association between the intensity of climacteric symptoms, measured with the Kupperman index, and thyroid status. MATERIAL AND METHODS: We evaluated 202 euthyroid women admitted to the Department of Gynecological Endocrinology, Poznan University of Medical Sciences, because of climacteric symptoms by measuring serum thyrotropin (TSH) and free thyroxine (fT4). Patients were either perimenopausal (n = 74) or postmenopausal (n = 128), with no history of thyroid disorders. RESULTS: Results, presented as mean and standard deviation, were as follows: age 54.2 ± 4.9 years, BMI 26.8 ± 4.6 kg/m2, Kupperman index 26 ± 13.1 points, TSH 2.4 ± 2.6 mU/l, fT4 1.2 ± 0.37 ng/dl. We observed a negative correlation between fT4 and the time since the last menses (R = - 0.38; p = 0.02), as well as between serum TSH concentration and sweating (R = - 0.18; p = 0.03), general weakness (R = - 0.17; p = 0.03), and palpitations (R = - 0.18; p = 0.02), and a positive correlation between fT4 and nervousness (R = 0.34; p = 0.007) and palpitations (R = 0.25; p = 0.04). In the perimenopausal subgroup, there was a positive correlation between fT4 and general weakness (R = 0.42; p = 0.03), palpitations (R = 0.50; p = 0.009), and paresthesia (R = 0.46; p = 0.01). In the postmenopausal subgroup, there was a negative correlation between TSH and sweating (R = - 0.21; p = 0.03). CONCLUSIONS: Menopausal symptoms are related to thyroid status in euthyroid menopausal women.
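The correlations reported above (R with p-values) can be computed as in the sketch below. The abstract does not state whether Pearson or Spearman coefficients were used; Spearman is shown here as an assumption, and the data are simulated for illustration.

```python
# Hedged sketch: correlating a thyroid parameter with a symptom severity score.
# Spearman correlation is an assumed choice; data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 202
tsh = rng.lognormal(mean=0.7, sigma=0.5, size=n)                        # serum TSH (mU/l)
sweating = np.clip(3 - 0.3 * tsh + rng.normal(0, 1, n), 0, 4).round()   # toy 0-4 severity score

rho, p = stats.spearmanr(tsh, sweating)
print(f"R = {rho:.2f}, p = {p:.3f}")
```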
Subject(s)
Climacteric , Menopause/blood , Thyroid Diseases/blood , Thyroid Gland/metabolism , Thyrotropin/blood , Thyroxine/blood , Triiodothyronine/blood , Adult , Aged , Biomarkers/analysis , Female , Follow-Up Studies , Humans , Middle Aged , Prognosis , Retrospective Studies , Thyroid Diseases/pathology
ABSTRACT
BACKGROUND: Recent studies have found that kisspeptin/neurokinin B/dynorphin neurons (KNDy neurons) in the infundibular nucleus play a crucial role in the reproductive axis. Analogs, both agonists and antagonists, of kisspeptin and neurokinin B (NKB) are particularly important in explaining the physiological role of KNDy neurons in the reproductive axis in animals. The use of kisspeptin and NKB analogs has helped elucidate the regulators of the hypothalamic reproductive axis. PURPOSE: This review describes therapeutic uses of Kiss-1 and NKB agonists, most notably the use of kisspeptin agonists in the treatment of infertility and the induction of ovulation. Kisspeptin antagonists may have potential clinical applications in patients suffering from diseases associated with enhanced LH pulse frequency, such as polycystic ovary syndrome (PCOS) or menopause. The inhibition of pubertal development using kisspeptin antagonists may be used as a therapeutic option in precocious puberty. Kisspeptin antagonists have been found capable of inhibiting ovulation and have been proposed as novel contraceptives. Hypothalamic amenorrhea and delayed puberty are conditions in which normalization of LH secretion may potentially be achieved by treatment with both kisspeptin and NKB agonists. NKB antagonists are used to treat vasomotor symptoms in postmenopausal women, providing rapid relief of symptoms without the need for exogenous estrogen exposure. CONCLUSIONS: There is a wide spectrum of therapeutic uses of Kiss-1 and NKB agonists, including the management of infertility, treatment of PCOS, functional hypothalamic amenorrhea, or postmenopausal vasomotor symptoms, as well as contraceptive issues. Nevertheless, further research is needed before kisspeptin and NKB analogs are fully incorporated into clinical practice.
Subject(s)
Infertility, Female/drug therapy , Kisspeptins/agonists , Neurokinin B/agonists , Ovulation Induction/methods , Puberty/drug effects , Female , Humans , Kisspeptins/metabolism , Neurokinin B/metabolism , Neurons/drug effects , Neurons/metabolism , Puberty/metabolism
ABSTRACT
BACKGROUND: Although intraoperative epidural analgesia improves postoperative pain control, a recent quality improvement project demonstrated that only 59% of epidural infusions are started in the operating room before patient arrival in the postanesthesia care unit. We evaluated the combined effect of process and digital quality improvement efforts on provider compliance with starting continuous epidural infusions during surgery. METHODS: In October 2014, we instituted 2 process improvement initiatives: (1) an electronic order queue to assist the operating room pharmacy with infusate preparation; and (2) a designated workspace for the storage of equipment related to epidural catheter placement and drug infusion delivery. In addition, we implemented a digital quality improvement initiative, an Anesthesia Information Management System-mediated clinical decision support tool, to prompt anesthesia providers to start and document epidural infusions in pertinent patients. We assessed anesthesia provider compliance with epidural infusion initiation in the operating room and postoperative pain-related outcomes before (PRE: October 1, 2012 to September 30, 2014) and after (POST: January 1, 2015 to December 31, 2016) implementation of the quality improvement initiatives. RESULTS: Compliance with starting intraoperative epidural infusions was 59% in the PRE group and 85% in the POST group. After adjustment for confounders and preintervention time trends, segmented regression analysis demonstrated a statistically significant increase in compliance with the intervention in the POST phase (odds ratio, 2.78; 95% confidence interval, 1.73-4.49; P < .001). In the PRE and POST groups, cumulative postoperative intravenous opioid use (geometric mean) was 62 and 34 mg oral morphine equivalents, respectively. A segmented regression analysis did not demonstrate a statistically significant difference (P = .38) after adjustment for preintervention time trends. CONCLUSIONS: Process workflow optimization along with Anesthesia Information Management System-mediated digital quality improvement efforts increased compliance with intraoperative epidural infusion initiation. Adjusted for preintervention time trends, these findings coincided with a statistically nonsignificant decrease in postoperative opioid use in the postanesthesia care unit during the POST phase.
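Opioid use above is summarized as a geometric mean, which is computed on the log scale as in the sketch below. The data are simulated and the confidence interval construction is an illustrative assumption; the study's segmented regression itself is not reproduced here.

```python
# Hedged sketch: geometric mean (and 95% CI) of cumulative postoperative opioid
# use in oral morphine equivalents (OME), computed on the log scale. Simulated data.
import numpy as np

rng = np.random.default_rng(5)
ome = rng.lognormal(mean=np.log(34), sigma=1.0, size=500)   # toy per-patient OME (mg)

logs = np.log(ome)
gm = np.exp(logs.mean())
se = logs.std(ddof=1) / np.sqrt(logs.size)
ci = np.exp(logs.mean() - 1.96 * se), np.exp(logs.mean() + 1.96 * se)
print(f"geometric mean = {gm:.0f} mg OME (95% CI {ci[0]:.0f}-{ci[1]:.0f})")
```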
Subject(s)
Anesthesia, Epidural/standards , Outcome and Process Assessment, Health Care , Pain Management/standards , Pain, Postoperative/therapy , Quality Improvement , Adult , Aged , Analgesia, Epidural , Analgesics, Opioid/administration & dosage , Anesthetics, Local/administration & dosage , Female , Humans , Infusions, Intravenous , Intraoperative Period , Male , Middle Aged , Operating Rooms , Pain Measurement , Regression Analysis , Treatment Outcome
ABSTRACT
BACKGROUND: Traumatic brain injury anesthesia care is complex. The use of clinical decision support to improve pediatric trauma care has not been examined. AIMS: The aim of this study was to examine feasibility, reliability, and key performance indicators for traumatic brain injury anesthesia care using clinical decision support. METHODS: Clinical decision support was activated for patients under 19 years undergoing craniotomy for suspected traumatic brain injury. Anesthesia providers were prompted to adhere to process measures via on-screen alerts and notified in real time of abnormal monitor data or laboratory results (unwanted key performance indicator events). Process measures pertained to arterial line placement and blood gas draws, neuromuscular blockade, hypotension, anemia, coagulopathy, hyperglycemia, and intracranial hypertension. Unwanted key performance indicators were hypotension, hypoxia, hypocarbia, hypercarbia, hypothermia, hyperthermia, anesthetic agent overdose, hypoxemia, coagulopathy, anemia, and hyperglycemia. Anesthesia records, vital signs, and alert logs were reviewed for 39 anesthetic cases (19 without clinical decision support and 20 with clinical decision support). RESULTS: Data from 35 patients (aged 11 months to 17 years; 77% male) were examined. Clinical decision support reliably identified 39 of 46 eligible anesthetic cases (85% sensitivity and 100% specificity) and was highly sensitive for events, detecting 89% of monitor key performance indicator events and 100% of reported laboratory key performance indicator events. There were no false-positive alerts. Median event duration was lower in the "with clinical decision support" group for 4 of 7 key performance indicators. Second insult duration was lower for hypocarbia (by 44%), hypotension (29%), hypothermia (12%), and hyperthermia (15%). CONCLUSION: Use of clinical decision support in pediatric traumatic brain injury anesthesia care is feasible, reliable, and may have the potential to improve key performance indicator outcomes. This observational study suggests the possibility of clinical decision support as a strategy to reduce second insults and improve traumatic brain injury guideline adherence during pediatric anesthesia care.
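The case-identification sensitivity and specificity above follow directly from detection counts. In the sketch below, the 39/46 eligible cases come from the abstract, while the number of non-eligible cases screened is an assumed placeholder, since the abstract does not report it.

```python
# Hedged sketch: case-identification sensitivity and specificity from counts.
# tp/fn are taken from the abstract; tn is an assumed placeholder value.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

tp, fn = 39, 46 - 39      # eligible cases detected vs. missed
tn, fp = 500, 0           # no false-positive activations reported; tn count assumed
sens, spec = sens_spec(tp, fn, tn, fp)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```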
Subject(s)
Anesthesia/methods , Brain Injuries, Traumatic/surgery , Decision Support Systems, Clinical , Anesthesia/standards , Brain Injuries, Traumatic/physiopathology , Child , Feasibility Studies , Female , Humans , Male , Retrospective Studies
ABSTRACT
With increasing adoption of anesthesia information management systems (AIMS), there is growing interest in utilizing AIMS data for intraoperative clinical decision support (CDS). CDS for anesthesia has the potential for improving quality of care, patient safety, billing, and compliance. Intraoperative CDS can range from passive and post hoc systems to active real-time systems that can detect ongoing clinical issues and deviations from best practice care. Real-time CDS holds the most promise because real-time alerts and guidance can drive provider behavior toward evidence-based standardized care during the ongoing case. In this review, we describe the different types of intraoperative CDS systems with specific emphasis on real-time systems. The technical considerations in developing and implementing real-time CDS are systematically covered. This includes the functional modules of a CDS system, development and execution of decision rules, and modalities to alert anesthesia providers concerning clinical issues. We also describe the regulatory aspects that affect development, implementation, and use of intraoperative CDS. Methods and measures to assess the effectiveness of intraoperative CDS are discussed. Last, we outline areas of future development of intraoperative CDS, particularly the possibility of providing predictive and prescriptive decision support.
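To make the "functional modules" idea concrete, the sketch below shows one possible shape of a real-time CDS loop: decision rules expressed as predicates evaluated against the latest snapshot of AIMS data, each producing an alert message when triggered. The rule names, thresholds, and snapshot fields are hypothetical examples, not any particular system's rule set.

```python
# Hedged sketch of a real-time CDS rule engine over AIMS data snapshots.
# Rule names, thresholds, and snapshot fields are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

Snapshot = Dict[str, float]   # latest values pulled from the AIMS data stream

@dataclass
class Rule:
    name: str
    condition: Callable[[Snapshot], bool]
    message: str

RULES: List[Rule] = [
    Rule("hypotension", lambda s: s["sbp"] < 80, "SBP < 80 mm Hg"),
    Rule("hyperglycemia", lambda s: s["glucose"] > 180, "Glucose > 180 mg/dL - consider insulin"),
    Rule("missing_abx", lambda s: s["minutes_since_incision"] > 0 and not s["antibiotic_given"],
         "Antibiotic not documented before incision"),
]

def evaluate(snapshot: Snapshot) -> List[str]:
    """Return alert messages for every rule whose condition is met."""
    return [f"{r.name}: {r.message}" for r in RULES if r.condition(snapshot)]

print(evaluate({"sbp": 72, "glucose": 150, "minutes_since_incision": 10, "antibiotic_given": 0}))
```

In a production system, each triggered rule would be routed to an alerting modality (on-screen message, page, or dashboard) and logged for later effectiveness review, as discussed in the review above.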
Subject(s)
Anesthesia/standards , Anesthesiology/standards , Decision Support Systems, Clinical , Computer Systems , Decision Support Systems, Clinical/standards , Humans , Intraoperative Care
ABSTRACT
BACKGROUND: The objective of this study was to assess the relationship between exposure to methylprednisolone (MP) and improvements in motor function among patients with acute traumatic spinal cord injury (TSCI). MP therapy for patients with TSCI is controversial because of the current conflicting evidence documenting its benefits and risks. METHODS: We conducted a retrospective cohort study from September 2007 to November 2014 of 311 patients with acute TSCI who were enrolled into a model systems database of a regional, level I trauma center. We linked outcomes and covariate data from the model systems database with MP exposure data from the electronic medical record. The primary outcomes were American Spinal Injury Association (ASIA) motor scores at rehabilitation discharge (sum of 10 key muscles bilaterally per the International Standards for Neurological Classification of Spinal Cord Injury; range 0-100) and Functional Independence Measure (FIM) motor scores (range 13-91). Secondary outcomes were infection risk and gastrointestinal (GI) complications among MP recipients. For the primary outcomes, multivariable linear regression was used. RESULTS: There were 160 MP recipients and 151 nonrecipients. Adjusting for age, sex, weight, race, respective baseline motor score, surgical intervention, injury level, ASIA Impairment Scale (AIS) grade, education, and insurance status, there was no association with improvement in discharge ASIA motor function or FIM motor score among MP recipients: -0.34 (95% CI, -2.8 to 2.1) and 0.75 (95% CI, -2.8 to 4.3), respectively. Adjusting for age, sex, race, weight, injury level, and receipt of surgery, no association with increased risk of infection or GI complications was observed. CONCLUSIONS: This retrospective cohort study involving patients with acute TSCI observed no short-term improvements in motor function among MP recipients compared with nonrecipients. Our findings support current recommendations that MP use in this population should be limited.
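The primary analysis above is a covariate-adjusted linear regression of discharge motor score on MP exposure. The sketch below illustrates that structure with a reduced, hypothetical covariate set and simulated data; it is not the study's model.

```python
# Hedged sketch: multivariable linear regression of discharge motor score on
# MP exposure with covariate adjustment. Column names and data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 311
df = pd.DataFrame({
    "mp": rng.binomial(1, 0.5, n),                 # MP recipient (1) vs. nonrecipient (0)
    "age": rng.normal(45, 15, n),
    "baseline_motor": rng.uniform(0, 100, n),
    "surgery": rng.binomial(1, 0.6, n),
})
df["discharge_motor"] = (5 + 0.8 * df.baseline_motor - 0.1 * df.age
                         + 0.0 * df.mp + rng.normal(0, 10, n))   # simulated null MP effect

fit = smf.ols("discharge_motor ~ mp + age + baseline_motor + surgery", data=df).fit()
coef, ci = fit.params["mp"], fit.conf_int().loc["mp"]
print(f"adjusted MP effect = {coef:.2f} (95% CI {ci[0]:.1f} to {ci[1]:.1f})")
```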
Subject(s)
Anti-Inflammatory Agents/therapeutic use , Databases, Factual , Methylprednisolone/therapeutic use , Recovery of Function/physiology , Spinal Cord Injuries/diagnosis , Spinal Cord Injuries/drug therapy , Adult , Anti-Inflammatory Agents/pharmacology , Cohort Studies , Female , Humans , Linear Models , Male , Methylprednisolone/pharmacology , Middle Aged , Recovery of Function/drug effects , Retrospective Studies , Spinal Cord Injuries/physiopathology
ABSTRACT
BACKGROUND: Current guidelines recommend routine clamping of external ventricular drains (EVD) for intrahospital transport (IHT). The aim of this project was to describe intracranial hemodynamic complications associated with routine EVD clamping for IHT in neurocritically ill cerebrovascular patients. METHODS: We conducted a retrospective review of cerebrovascular adult patients with indwelling EVD admitted to the neurocritical care unit (NICU) during the months of September to December 2015 at a tertiary care center. All IHTs from the NICU of the included patients were examined. Main outcomes were incidence and risk factors for an alteration in intracranial pressure (ICP) and cerebral perfusion pressure after IHT. RESULTS: Nineteen cerebrovascular patients underwent 178 IHTs (79.8 % diagnostic and 20.2 % therapeutic) with clamped EVD. Twenty-one IHTs (11.8 %) were associated with post-IHT ICP ≥ 20 mmHg, and 33 IHTs (18.5 %) were associated with escalation of ICP category. Forty IHTs (26.7 %) in patients with open EVD status in the NICU prior to IHT were associated with IHT complications, whereas no IHT complications occurred in IHTs with clamped EVD status in the NICU. Risk factors for post-IHT ICP ≥ 20 mmHg were IHT for therapeutic procedures (adjusted relative risk [aRR] 5.82; 95 % CI, 1.76-19.19), pre-IHT ICP 15-19 mmHg (aRR 3.40; 95 % CI, 1.08-10.76), pre-IHT ICP ≥ 20 mmHg (aRR 12.94; 95 % CI, 4.08-41.01), and each 1 mL of hourly cerebrospinal fluid (CSF) drained prior to IHT (aRR 1.11; 95 % CI, 1.01-1.23). CONCLUSIONS: Routine clamping of EVD for IHT in cerebrovascular patients is associated with post-IHT ICP complications. Pre-IHT ICP ≥ 15 mmHg, increasing hourly CSF output, and IHT for therapeutic procedures are risk factors.
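The adjusted relative risks reported above can be estimated for a binary outcome with, for example, modified Poisson regression using robust standard errors. That choice is an assumption here (the abstract does not name the exact method), and the variable names and data below are simulated.

```python
# Hedged sketch: adjusted relative risk (aRR) of post-IHT ICP >= 20 mmHg via
# modified Poisson regression with robust (HC0) standard errors. Simulated data;
# the method choice is a common approach assumed for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 178                                              # number of observed transports
df = pd.DataFrame({
    "therapeutic": rng.binomial(1, 0.2, n),          # IHT for a therapeutic procedure
    "pre_icp": rng.normal(12, 4, n),                 # pre-transport ICP (mmHg)
    "csf_ml_per_hr": rng.gamma(2.0, 2.0, n),         # hourly CSF drainage before IHT
})
risk = 1 / (1 + np.exp(-(-4 + 1.2 * df.therapeutic + 0.15 * df.pre_icp)))
df["icp_event"] = rng.binomial(1, risk)

fit = smf.glm("icp_event ~ therapeutic + pre_icp + csf_ml_per_hr",
              data=df, family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params).round(2))                   # exponentiated coefficients ~ aRR
```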
Subject(s)
Catheters, Indwelling , Cerebrovascular Circulation , Critical Illness/therapy , Drainage/methods , Intracranial Hemorrhages/therapy , Intracranial Pressure , Transportation of Patients/methods , Ventriculostomy/methods , Aged , Female , Humans , Male , Middle Aged , Retrospective Studies , Risk Factors
ABSTRACT
PURPOSE: Continuous intraoperative epidural analgesia may improve post-operative pain control and decrease opioid requirements. We investigated the effect of epidural infusion initiation before or after arrival in the post-anesthesia care unit (PACU) on recovery room duration and post-operative opioid use. METHODS: We performed a retrospective chart review of abdominal, thoracic and orthopedic surgeries where an epidural catheter was placed prior to surgery at the University of Washington Medical Center during a 24-month period. RESULTS: Patients whose epidural infusions were started prior to PACU arrival (Group 2: n = 540) exhibited a shorter PACU length of stay (p = .004) and were less likely to receive intravenous opioids in the recovery room (34 vs. 48%; p < .001) compared to patients whose infusions were started after surgery (Group 1: n = 374). Although the highest patient-reported pain scores were lower in Group 2 (5.3 vs. 6.0; p = .030), no differences in the pain scores prior to PACU discharge were observed. CONCLUSION: Intraoperative continuous epidural infusions decrease PACU length of stay, as discharge criteria for patient-reported numeric rating scale (NRS) pain scores are met earlier.
Subject(s)
Analgesia, Epidural/methods , Analgesics/administration & dosage , Anesthesia, Epidural/methods , Pain, Postoperative/drug therapy , Adult , Aged , Analgesics/therapeutic use , Analgesics, Opioid/therapeutic use , Female , Humans , Length of Stay , Male , Middle Aged , Pain Measurement , Recovery Room , Retrospective Studies
ABSTRACT
BACKGROUND: Postoperative hyperglycemia has been associated with poor surgical outcome. The effect of intraoperative glucose management on postoperative glucose levels and the optimal glycemic threshold for initiating insulin are currently unknown. METHODS: We performed a retrospective cohort study of surgery patients who required intraoperative glucose management with data extracted from electronic medical records. In patients who required glucose management, intraoperative glucose levels and insulin therapy were compared against postoperative glucose levels during 3 periods: first postoperative level within 1 hour, within the first 12 hours, and 24 hours of the postoperative period. Logistic regression models that adjusted for patient and surgical factors were used to determine the association between intraoperative glucose management and postoperative glucose levels. RESULTS: In 2440 patients who required intraoperative glucose management, an increase in mean intraoperative glucose level by 10 mg/dL was associated with an increase in postoperative glucose levels by 4.7 mg/dL (confidence interval [CI], 4.1-5.3; P < 0.001) for the first postoperative glucose measurement, 2.6 mg/dL (CI, 2.1-3.1; P < 0.001) for the mean first 12-hour postoperative glucose, and 2.4 mg/dL (CI, 2.0-2.9; P < 0.001) for the mean first 24-hour postoperative glucose levels (univariate analysis). Multivariate analysis showed that these effects depended on (interacted with) body mass index and diabetes status of the patient. Both diabetes status (regression coefficient = 12.2; P < 0.001) and intraoperative steroid use (regression coefficient = 10.2; P < 0.001) had a positive effect on elevated postoperative glucose levels. Intraoperative hyperglycemia (>180 mg/dL) was associated with postoperative hyperglycemia during the first 12 hours and the first 24 hours. However, interaction with procedure duration meant that this association was stronger for shorter surgeries. When compared with starting insulin for an intraoperative glucose threshold of 140 mg/dL thus avoiding hyperglycemia, initiation of insulin for a hyperglycemia threshold of 180 mg/dL was associated with an increase in postoperative glucose level (7 mg/dL; P < 0.001) and postoperative hyperglycemia incidence (odds ratio = 1.53; P = 0.01). CONCLUSIONS: A higher intraoperative glucose level is associated with a higher postoperative glucose level. Intraoperative hyperglycemia increases the odds for postoperative hyperglycemia. Adequate intraoperative glucose management by initiating insulin infusion when glucose level exceeds 140 mg/dL to prevent hyperglycemia is associated with lower postoperative glucose levels and fewer incidences of postoperative hyperglycemia. However, patient- and procedure-specific variable interactions make the relationship between intraoperative and postoperative glucose levels complicated.
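The abstract notes that the effect of intraoperative glucose on postoperative hyperglycemia interacted with diabetes status and body mass index; the sketch below shows how such an interaction can be encoded in a logistic regression. The variable names, covariate set, and simulated data are illustrative assumptions, not the study's actual model.

```python
# Hedged sketch: logistic regression for postoperative hyperglycemia with an
# interaction between mean intraoperative glucose and diabetes status. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 2440
df = pd.DataFrame({
    "intraop_glucose": rng.normal(160, 30, n),       # mean intraoperative glucose (mg/dL)
    "diabetes": rng.binomial(1, 0.4, n),
    "bmi": rng.normal(30, 6, n),
    "steroid": rng.binomial(1, 0.2, n),
})
lp = (-10 + 0.04 * df.intraop_glucose + 1.0 * df.diabetes
      + 0.02 * df.intraop_glucose * df.diabetes + 0.5 * df.steroid)
df["postop_hyper"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))   # postop glucose > 180 mg/dL (toy)

fit = smf.logit("postop_hyper ~ intraop_glucose * diabetes + bmi + steroid",
                data=df).fit(disp=False)
print(np.exp(fit.params).round(2))                   # odds ratios, including the interaction term
```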
Subject(s)
Blood Glucose/metabolism , Hyperglycemia/drug therapy , Hyperglycemia/etiology , Intraoperative Care/methods , Postoperative Care/methods , Surgical Procedures, Operative , Adult , Aged , Aged, 80 and over , Cohort Studies , Diabetes Mellitus/blood , Diabetes Mellitus/drug therapy , Female , Humans , Hyperglycemia/blood , Hypoglycemic Agents/therapeutic use , Insulin/therapeutic use , Male , Middle Aged , Postoperative Complications/blood , Postoperative Complications/drug therapy , Retrospective Studies , Steroids/therapeutic use , Treatment Outcome , Young Adult
ABSTRACT
Poor perioperative glycemic management can lead to negative surgical outcomes. Improved compliance with a glucose control protocol could lead to better glucose management. An Anesthesia Information Management System-based decision support system, Smart Anesthesia Manager™ (SAM), was used to generate real-time reminders to anesthesia providers to closely adhere to our institutional glucose management protocol. Compliance with hourly glucose measurements and correct insulin dose adjustments was compared for the baseline period (12 months) without SAM and the intervention period (12 months) with SAM decision support. Additionally, glucose management parameters were compared for the baseline and intervention periods. A total of 1587 cases during baseline and 1997 cases during intervention met the criteria for glucose management (diabetic patients or non-diabetic patients with glucose level >140 mg/dL). Among the intervention cases, anesthesia providers chose to use SAM reminders 48.7% of the time, primarily for patients who had diabetes, higher HbA1c, or higher body mass index, while disabling the system for the remaining cases. Compliance with hourly glucose measurement and correct insulin doses increased significantly during the intervention period when compared with the baseline (from 52.6 to 71.2% and from 13.5 to 24.4%, respectively). Despite improved compliance with the institutional protocol, the mean glucose levels and other glycemic management parameters did not show significant improvement with SAM reminders. Real-time electronic reminders improved intraoperative compliance with the institutional glucose management protocol, although glycemic parameters did not improve even with the greater compliance.
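A compliance reminder of the kind described above reduces to a simple check of elapsed time since the last glucose measurement once a patient qualifies for glucose management. The sketch below is a minimal illustration; the 60-minute interval matches the protocol's hourly cadence, but the field names and reminder text are hypothetical.

```python
# Hedged sketch: a SAM-style check that an hourly glucose measurement is due.
# Field names, qualification logic, and message wording are illustrative.
from datetime import datetime, timedelta
from typing import List, Optional

def glucose_reminder(now: datetime,
                     glucose_times: List[datetime],
                     qualifies: bool,
                     interval: timedelta = timedelta(minutes=60)) -> Optional[str]:
    """Return a reminder message if an hourly glucose check is overdue."""
    if not qualifies:
        return None
    if not glucose_times:
        return "Glucose management active: no intraoperative glucose measured yet."
    overdue = now - max(glucose_times)
    if overdue > interval:
        return f"Last glucose measured {int(overdue.total_seconds() // 60)} min ago; hourly check due."
    return None

now = datetime(2015, 6, 1, 10, 30)
print(glucose_reminder(now, [datetime(2015, 6, 1, 9, 5)], qualifies=True))
```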
Subject(s)
Blood Glucose/metabolism , Decision Support Systems, Clinical , Monitoring, Intraoperative/methods , Adult , Aged , Computer Systems , Diabetes Mellitus/blood , Diabetes Mellitus/drug therapy , Female , Humans , Hyperglycemia/blood , Hyperglycemia/drug therapy , Infusions, Intravenous , Insulin/administration & dosage , Intraoperative Complications/blood , Intraoperative Complications/drug therapy , Male , Middle Aged , Monitoring, Intraoperative/statistics & numerical data , Point-of-Care Systems , Prospective Studies
ABSTRACT
Land-use changes since the start of the industrial era account for nearly one-third of the cumulative anthropogenic CO2 emissions. In addition to the greenhouse effect of CO2 emissions, changes in land use also affect climate via changes in surface physical properties such as albedo, evapotranspiration and roughness length. Recent modelling studies suggest that these biophysical components may be comparable with biochemical effects. With regard to climate change, the effects of these two distinct processes may counterbalance one another both regionally and, possibly, globally. In this article, through hypothetical large-scale deforestation simulations using a global climate model, we contrast the implications of afforestation for ameliorating or enhancing anthropogenic contributions from previously converted (agricultural) land surfaces. Based on our review of past studies on this subject, we conclude that the sum of both biophysical and biochemical effects should be assessed when large-scale afforestation is used for countering global warming, and that the net effect on global mean temperature change depends on the location of deforestation/afforestation. Further, although biochemical effects trigger global climate change, biophysical effects often cause strong local and regional climate change. The implications of the biophysical effects for adaptation and mitigation of climate change in the agriculture and agroforestry sectors are discussed.
Subject(s)
Biochemical Phenomena , Biophysical Phenomena , Conservation of Natural Resources , Models, Theoretical , Agriculture , Atmosphere , Carbon Dioxide/analysis , Climate Change , Crops, Agricultural , Plant Transpiration , Temperature
ABSTRACT
BACKGROUND: The inverse relationship between age and dose requirement for potent volatile anesthetics is well established, but the question of whether anesthetic providers consider this relationship in practice remains unanswered. We sought to determine whether there is an association between patient age and the mean dose of volatile anesthetic delivered during maintenance of anesthesia. METHODS: This was a retrospective cross-sectional study of patients receiving a single potent volatile anesthetic at 2 academic hospitals using data recorded in an anesthesia information management system. Multivariate linear models were constructed at each hospital to examine the relationship between age and mean minimum alveolar concentration (MAC) fraction delivered during the maintenance of anesthesia. RESULTS: A total of 7878 cases at the 2 hospitals were included for analysis. For patients aged <65 years, we observed decreasing doses of volatile anesthetics as age increased. Per decade, mean delivered MAC fraction decreased by an estimated 1.8% (95% confidence interval, 1.5-2.2, P < 0.0001), smaller than the 6.7% decrease suggested by previous studies of human anesthetic requirements. At age >65 years, the magnitude of the inverse association between age and MAC fraction was higher (3.8% decrease per decade; 95% confidence interval, 2.9-4.7). CONCLUSIONS: Increasing age is associated with decreased absolute doses of potent volatile anesthetics, an association that seems to strengthen as patients enter the geriatric age range. The observed decreases in absolute anesthetic dose were less than those predicted by previous research and therefore represent an overall increase in "age-adjusted dose" as patients grow older.
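The percent decrease in delivered MAC fraction per decade reported above can be obtained directly from a log-linear model of MAC fraction on age. The model form, absence of other covariates, and simulated data in the sketch below are assumptions; the study's actual multivariate models are not reproduced here.

```python
# Hedged sketch: percent change in mean delivered MAC fraction per decade of
# age from a log-linear regression. Simulated data; illustrative model form only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 7878
df = pd.DataFrame({"age": rng.uniform(18, 90, n)})
df["mac_fraction"] = np.exp(-0.002 * df.age + rng.normal(0, 0.1, n))   # toy delivered MAC fraction
df["log_mac"] = np.log(df.mac_fraction)

fit = smf.ols("log_mac ~ age", data=df).fit()
pct_per_decade = (np.exp(fit.params["age"] * 10) - 1) * 100
print(f"change in delivered MAC fraction: {pct_per_decade:.1f}% per decade")
```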
Subject(s)
Academic Medical Centers/methods , Aging/drug effects , Anesthetics, Inhalation/administration & dosage , Adult , Aged , Aging/physiology , Cross-Sectional Studies , Dose-Response Relationship, Drug , Female , Humans , Male , Middle Aged , Retrospective Studies
ABSTRACT
BACKGROUND: Many anesthetic drug errors result from vial or syringe swaps. Scanning the barcodes on vials before drug preparation, creating syringe labels that include barcodes, and scanning the syringe label barcodes before drug administration may help to prevent errors. However, hand-making syringe labels that comply with the recommendations of regulatory agencies and standards-setting bodies is tedious and time-consuming. A computerized system that uses vial barcodes and generates barcoded syringe labels could address both safety issues and labeling recommendations. METHODS: We measured compliance of syringe labels in multiple operating rooms (ORs) with the recommendations of regulatory agencies and standards-setting bodies before and after the introduction of the Codonics Safe Label System (SLS). The Codonics SLS was then combined with Smart Anesthesia Manager software to create an anesthesia barcode drug administration system, which allowed us to measure the rate of scanning syringe label barcodes at the time of drug administration in 2 cardiothoracic ORs before and after introducing a coffee card incentive. Twelve attending cardiothoracic anesthesiologists and the OR satellite pharmacy participated. RESULTS: The use of the Codonics SLS drug labeling system resulted in >75% compliant syringe labels (95% confidence interval, 75%-98%). All syringe labels made using the Codonics SLS system were compliant. The average rate of scanning barcodes on syringe labels using Smart Anesthesia Manager was 25% (730 of 2976) over 13 weeks but increased to 58% (956 of 1645) over 8 weeks after introduction of a simple (coffee card) incentive (P < 0.001). CONCLUSIONS: An anesthesia barcode drug administration system resulted in a moderate rate of scanning syringe label barcodes at the time of drug administration. Further adaptation of the system will be required to achieve a higher utilization rate.
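The before/after scan-rate comparison above uses the counts reported in the abstract (730 of 2976 vs. 956 of 1645); the sketch below reproduces the comparison with a two-proportion chi-square test, which is an assumed (standard) choice since the abstract does not name the test.

```python
# Sketch: syringe-barcode scan rates before vs. after the coffee-card incentive,
# using counts from the abstract. The chi-square test is an assumed test choice.
from scipy.stats import chi2_contingency

scanned_before, total_before = 730, 2976
scanned_after, total_after = 956, 1645
table = [[scanned_before, total_before - scanned_before],
         [scanned_after, total_after - scanned_after]]

chi2, p, dof, _ = chi2_contingency(table)
print(f"before: {scanned_before/total_before:.0%}, after: {scanned_after/total_after:.0%}, p = {p:.2g}")
```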
Subject(s)
Anesthesia Department, Hospital , Anesthesia , Anesthetics/administration & dosage , Drug Labeling/instrumentation , Medication Errors/prevention & control , Medication Systems, Hospital , Pharmacy Service, Hospital , Anesthesia/adverse effects , Anesthesia/methods , Anesthesia/standards , Anesthesia Department, Hospital/methods , Anesthesia Department, Hospital/standards , Anesthetics/adverse effects , Anesthetics/standards , Drug Labeling/methods , Drug Labeling/standards , Equipment Design , Equipment Failure , Guideline Adherence , Humans , Materials Testing , Medication Systems, Hospital/standards , Pharmacy Service, Hospital/methods , Pharmacy Service, Hospital/standards , Practice Guidelines as Topic , Software Design , Treatment Outcome
ABSTRACT
BACKGROUND: Intraoperative hypotension and hypertension are associated with adverse clinical outcomes and morbidity. Clinical decision support mediated through an anesthesia information management system (AIMS) has been shown to improve quality of care. We hypothesized that an AIMS-based clinical decision support system could be used to improve management of intraoperative hypotension and hypertension. METHODS: A near real-time AIMS-based decision support module, Smart Anesthesia Manager (SAM), was used to detect selected scenarios contributing to hypotension and hypertension. Specifically, hypotension (systolic blood pressure <80 mm Hg) with a concurrent high concentration (>1.25 minimum alveolar concentration [MAC]) of inhaled drug and hypertension (systolic blood pressure >160 mm Hg) with concurrent phenylephrine infusion were detected, and anesthesia providers were notified via "pop-up" computer screen messages. AIMS data were retrospectively analyzed to evaluate the effect of SAM notification messages on hypotensive and hypertensive episodes. RESULTS: For anesthetic cases 12 months before (N = 16913) and after (N = 17132) institution of SAM messages, the median duration of hypotensive episodes with concurrent high MAC decreased with notifications (Mann Whitney rank sum test, P = 0.031). However, the reduction in the median duration of hypertensive episodes with concurrent phenylephrine infusion was not significant (P = 0.47). The frequency of prolonged episodes that lasted >6 minutes (sampling period of SAM), represented in terms of the number of cases with episodes per 100 surgical cases (or percentage occurrence), declined with notifications for both hypotension with >1.25 MAC inhaled drug episodes (δ = -0.26% [confidence interval, -0.38% to -0.11%], P < 0.001) and hypertension with phenylephrine infusion episodes (δ = -0.92% [confidence interval, -1.79% to -0.04%], P = 0.035). For hypotensive events, the anesthesia providers reduced the inhaled drug concentrations to <1.25 MAC 81% of the time with notifications compared with 59% without notifications (P = 0.003). For hypertensive episodes, although the anesthesia providers' reduction or discontinuation of the phenylephrine infusion increased from 22% to 37% (P = 0.030) with notification messages, the overall response was less consistent than the response to hypotensive episodes. CONCLUSIONS: With automatic acquisition of arterial blood pressure and inhaled drug concentration variables in an AIMS, near real-time notification was effective in reducing the duration and frequency of hypotension with concurrent >1.25 MAC inhaled drug episodes. However, since phenylephrine infusion is manually documented in an AIMS, the impact of notification messages was less pronounced in reducing episodes of hypertension with concurrent phenylephrine infusion. Automated data capture and a higher frequency of data acquisition in an AIMS can improve the effectiveness of an intraoperative clinical decision support system.
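Episode detection of the kind described above amounts to finding contiguous runs in minute-by-minute AIMS data where both conditions hold (SBP < 80 mm Hg and inhaled agent > 1.25 MAC) and measuring their durations against the 6-minute sampling window. The sketch below is illustrative: the signal names and simulated data are assumptions, not the SAM implementation.

```python
# Hedged sketch: detecting hypotension-with-high-MAC episodes from minute-by-minute
# data and flagging episodes lasting > 6 minutes. Signals and data are simulated.
import numpy as np

def episode_durations(condition: np.ndarray) -> list:
    """Lengths (in samples) of each contiguous run where condition is True."""
    runs, length = [], 0
    for flag in condition:
        if flag:
            length += 1
        elif length:
            runs.append(length)
            length = 0
    if length:
        runs.append(length)
    return runs

rng = np.random.default_rng(10)
sbp = rng.normal(100, 18, 240)            # one SBP value per minute over a 4-hour case
mac = rng.normal(1.1, 0.2, 240)           # concurrent MAC of inhaled agent

episodes = episode_durations((sbp < 80) & (mac > 1.25))
prolonged = [d for d in episodes if d > 6]
print(f"{len(episodes)} episodes, {len(prolonged)} lasting > 6 min")
```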