Results 1 - 14 of 14
1.
Cureus ; 15(5): e39534, 2023 May.
Article in English | MEDLINE | ID: mdl-37366460

ABSTRACT

Background Compared to traditional breathing circuits, low-volume anesthesia machines utilize a lower-volume breathing circuit paired with needle injection vaporizers that supply volatile agents into the circuit mainly during inspiration. We aimed to assess whether low-volume anesthesia machines, such as the Maquet Flow-i C20 anesthesia workstation (MQ), deliver volatile anesthetics more efficiently than traditional anesthesia machines, such as the GE Aisys CS2 anesthesia machine (GE), and, secondarily, whether any difference was economically or environmentally meaningful. Methodology Participants enrolled in the study (Institutional Review Board Identifier: 2014-1248) met the following inclusion criteria: 18-65 years old, scheduled for surgery requiring general anesthesia at the University of California Irvine Health, and expected to receive sevoflurane for the duration of the procedure. Exclusion criteria included age <18 years old, a history of chronic obstructive pulmonary disease, cardiovascular disease, sevoflurane sensitivity, body mass index >30 kg/m2, American Society of Anesthesiologists physical status >2, pregnancy, or surgery scheduled for <120 minutes. We calculated the total amount of sevoflurane delivered and consumption rates during induction and maintenance periods and compared the groups using one-sided parametric testing (Student's t-test). One-sided testing was appropriate because we had no reason to suspect that the low-volume circuit could consume more sevoflurane, and such an outcome would not have answered our research question; it also provided greater power to detect smaller differences. Results In total, 103 subjects (MQ: n = 52, GE: n = 51) were analyzed. Seven subjects were lost to various forms of attrition. Overall, the MQ group consumed significantly less sevoflurane (95.5 ± 49.3 g) than the GE group (118.3 ± 62.4 g) (p = 0.043), corresponding to an approximately 20% efficiency improvement in overall agent delivery. 
When accounting for the fresh gas flow setting, agent concentration, and length of induction, the MQ delivered the volatile agent at a significantly lower rate than the GE (7.4 ± 3.2 L/minute vs. 9.1 ± 4.1 L/minute; p = 0.017). Based on these results, we estimate that the MQ can save an average of $239,440 over the expected 10-year machine lifespan. The 20% decrease in CO2-equivalent emissions corresponds to 201 metric tons less greenhouse gas emitted over a decade compared to the GE, which is equivalent to 491,760 miles driven by an average passenger vehicle or 219,881 pounds of coal burned. Conclusions Overall, our results suggest that the MQ delivers significantly less (~20%) volatile agent during routine elective surgery under a standardized anesthetic protocol, with inclusion/exclusion criteria designed to minimize patient and provider heterogeneity effects on the results. The results demonstrate an opportunity for economic and environmental benefits.
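The group comparison above can be reproduced from the reported summary statistics alone. Below is a minimal sketch of a one-sided Welch t-statistic computed from the abstract's means, SDs, and group sizes; the paper's exact test (e.g. a pooled-variance Student's t-test) may differ.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch t-statistic for the difference between two group means."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

# Summary statistics reported in the abstract (sevoflurane consumed, grams)
t = welch_t(118.3, 62.4, 51,   # GE group
            95.5, 49.3, 52)    # MQ group
print(round(t, 2))  # a positive t supports higher consumption in the GE group
```

With these summary values the statistic comes out near 2.06, consistent in direction with the reported one-sided p = 0.043.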

2.
J Clin Monit Comput ; 36(1): 227-237, 2022 02.
Article in English | MEDLINE | ID: mdl-33523353

ABSTRACT

In critically ill and high-risk surgical patients, an invasive arterial catheter is often inserted to continuously measure arterial pressure (AP). The arterial waveform pressure measurement, however, may be compromised by damping or inappropriate placement of the pressure transducer relative to its reference level. Clinicians, decision support systems, or closed-loop applications that rely on such information would benefit from the ability to detect error from the waveform alone. In the present study, we hypothesized that machine-learning-trained algorithms could discriminate three types of transducer error from accurate monitoring with receiver operating characteristic (ROC) curve areas greater than 0.9. After obtaining written consent, patient arterial line waveform data were collected in the operating room in real time during routine surgery requiring arterial pressure monitoring. Three deliberate error conditions were introduced during monitoring: Damping, Transducer High, and Transducer Low. The waveforms were split into 10-second clips that were featurized. The data were either calibrated against the patient's own baseline or left uncalibrated. The data were then split into training and validation sets, and machine-learning algorithms were run in a Monte Carlo fashion on the training data with variable-sized training sets and hyperparameters. The algorithms with the highest balanced accuracy were pruned; the highest-performing algorithm in the training set for each error state (High, Low, Damped), for both calibrated and uncalibrated data, was then tested against the validation set, and the ROC and precision-recall area under the curve (AUC) values were calculated. Thirty-eight patients were enrolled in the study, with a mean age of 52 ± 15 years. A total of 40 h of monitoring time was recorded, with approximately 120,000 heart beats featurized. 
For all error states, ROC AUCs for algorithm performance on classification of the state were greater than 0.9; when using patient-specific calibrated data AUCs were 0.94, 0.95, and 0.99 for the transducer low, transducer high, and damped conditions respectively. Machine-learning trained algorithms were able to discriminate arterial line transducer error states from the waveform alone with a high degree of accuracy.
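The ROC AUC reported above has a simple probabilistic reading: the chance that a randomly chosen error clip is scored higher than a randomly chosen normal clip. A self-contained sketch of that computation follows (toy labels and scores, not the study's data):

```python
def roc_auc(labels, scores):
    """ROC AUC computed directly as the probability that a positive
    example outranks a negative one (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = transducer-error clip, 0 = accurate monitoring
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(labels, scores))  # 0.75
```

An AUC above 0.9, as in the study, means an error clip outranks a normal clip more than 90% of the time.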


Subject(s)
Arterial Pressure , Machine Learning , Adult , Aged , Algorithms , Arteries , Heart Rate , Humans , Middle Aged
3.
J Med Internet Res ; 23(5): e25079, 2021 05 28.
Article in English | MEDLINE | ID: mdl-34047710

ABSTRACT

BACKGROUND: There is a strong demand for an accurate and objective means of assessing acute pain among hospitalized patients to help clinicians provide pain medications at a proper dosage and in a timely manner. Heart rate variability (HRV) comprises changes in the time intervals between consecutive heartbeats, which can be measured through acquisition and interpretation of electrocardiography (ECG) captured from bedside monitors or wearable devices. Because increased sympathetic activity affects HRV, an index of autonomic regulation of heart rate, ultra-short-term HRV analysis can provide a reliable source of information for acute pain monitoring. In this study, widely used HRV time- and frequency-domain measurements are used to assess acute pain among postoperative patients. Existing approaches have focused only on stimulated pain in healthy subjects; to the best of our knowledge, no prior work has built models using real pain data from postoperative patients. OBJECTIVE: The objective of our study was to develop and evaluate an automatic and adaptable pain assessment algorithm based on ECG features for assessing acute pain in postoperative patients likely experiencing mild to moderate pain. METHODS: The study used a prospective observational design. The sample consisted of 25 patient participants aged 18 to 65 years. In part 1 of the study, a transcutaneous electrical nerve stimulation unit was employed to obtain baseline discomfort thresholds for the patients. In part 2, a multichannel biosignal acquisition device was used while patients were engaging in non-noxious activities. At all times, pain intensity was measured using patient self-reports based on the Numerical Rating Scale. A weak supervision framework was adopted for rapid training data creation. The collected labels were then transformed from 11 intensity levels to 5 intensity levels. Prediction models were developed using 5 different machine learning methods. 
Mean prediction accuracy was calculated using leave-one-out cross-validation. We compared the performance of these models with the results of a previously published research study. RESULTS: Five different machine learning algorithms were applied to perform binary classification of baseline (BL) versus 4 distinct pain levels (PL1 through PL4). Using 3 time-domain HRV features from the BioVid research paper, the highest validation accuracy for baseline versus any other pain level was achieved by a support vector machine (SVM), ranging from 62.72% (BL vs PL4) to 84.14% (BL vs PL2). Similar results were achieved for the top 8 features selected by the Gini index using the SVM method, with accuracy ranging from 63.86% (BL vs PL4) to 84.79% (BL vs PL2). CONCLUSIONS: We propose a novel pain assessment method for postoperative patients using the ECG signal. Weak supervision applied to labeling and feature extraction improves the robustness of the approach. Our results show the viability of using a machine learning algorithm to accurately and objectively assess acute pain among hospitalized patients. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/17783.
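The leave-one-out cross-validation used above holds each sample out once and trains on the rest. A minimal sketch follows, with a trivial 1-nearest-neighbor classifier standing in for the study's SVM and a toy one-dimensional feature standing in for the HRV features:

```python
def loocv_accuracy(features, labels, classify):
    """Leave-one-out CV: hold out each sample once, train on the rest,
    and report the fraction of held-out samples classified correctly."""
    hits = 0
    for i in range(len(features)):
        train_x = features[:i] + features[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        hits += classify(train_x, train_y, features[i]) == labels[i]
    return hits / len(features)

def nn1(train_x, train_y, x):
    """1-nearest-neighbor on scalar features (illustrative stand-in)."""
    j = min(range(len(train_x)), key=lambda k: abs(train_x[k] - x))
    return train_y[j]

# Toy HRV-like feature: low values ~ baseline (0), high values ~ pain (1)
feats = [0.1, 0.2, 0.15, 0.9, 0.8, 0.85]
labs = [0, 0, 0, 1, 1, 1]
print(loocv_accuracy(feats, labs, nn1))  # 1.0 on this separable toy set
```

With only 25 patients, this per-sample scheme is a natural choice because every observation contributes to both training and validation.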


Subject(s)
Acute Pain , Wearable Electronic Devices , Acute Pain/diagnosis , Electrocardiography , Humans , Machine Learning , Support Vector Machine
4.
JMIR Mhealth Uhealth ; 9(5): e25258, 2021 05 05.
Article in English | MEDLINE | ID: mdl-33949957

ABSTRACT

BACKGROUND: Accurate, objective pain assessment is required in the health care domain and clinical settings for appropriate pain management. Automated, objective pain detection from physiological data provides valuable information to hospital staff and caregivers to better manage pain, particularly for patients who are unable to self-report. Galvanic skin response (GSR) is a physiologic signal reflecting changes in sweat gland activity, which can reveal features of emotional states and anxiety induced by varying pain levels. This study used different statistical features extracted from GSR data collected from postoperative patients to detect their pain intensity. To the best of our knowledge, this is the first work building pain models using postoperative adult patients instead of healthy subjects. OBJECTIVE: The goal of this study was to present an automatic pain assessment tool using GSR signals to predict different pain intensities in noncommunicative, postoperative patients. METHODS: The study was designed to collect biomedical data from postoperative patients reporting moderate to high pain levels. We recruited 25 participants aged 23-89 years. First, a transcutaneous electrical nerve stimulation (TENS) unit was employed to obtain patients' baseline data. In the second part, the Empatica E4 wristband was worn by patients while they performed low-intensity activities. Patient self-report based on the numeric rating scale (NRS) was used to record pain intensities, which were correlated with the objectively measured data. The labels were down-sampled from 11 pain levels to 5 different pain intensities, including the baseline. We used 2 different machine learning algorithms to construct the models. The mean decrease impurity method was used to find the top features for pain prediction and improve accuracy. We compared our results with a previously published research study to estimate the true performance of our models. 
RESULTS: Four different binary classification models were constructed using each machine learning algorithm to classify the baseline and other pain intensities (Baseline [BL] vs Pain Level [PL] 1, BL vs PL2, BL vs PL3, and BL vs PL4). Our models achieved higher accuracy than the BioVid paper's approach for the first 3 pain models, despite the challenges of analyzing real patient data. For BL vs PL1, BL vs PL2, and BL vs PL4, the highest prediction accuracies were achieved using a random forest classifier (86.0%, 70.0%, and 61.5%, respectively). For BL vs PL3, we achieved an accuracy of 72.1% using a k-nearest-neighbor classifier. CONCLUSIONS: We are the first to propose and validate a pain assessment tool to predict different pain levels in real postoperative adult patients using GSR signals. We also exploited feature selection algorithms to find the top features related to different pain intensities. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/17783.
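The label down-sampling step described above (an 11-point NRS collapsed to baseline plus four pain levels) can be sketched as a simple binning function. The bin edges below are hypothetical, chosen only to illustrate the idea; the paper's published mapping may differ.

```python
def nrs_to_level(nrs):
    """Collapse a 0-10 Numeric Rating Scale score into 5 classes:
    'BL' (baseline) plus pain levels 'PL1'..'PL4'. These bin edges
    are an illustrative assumption, not the study's actual mapping."""
    if not 0 <= nrs <= 10:
        raise ValueError("NRS score must be between 0 and 10")
    if nrs == 0:
        return "BL"
    return "PL" + str(min((nrs + 1) // 2, 4))

print([nrs_to_level(n) for n in [0, 2, 5, 10]])  # ['BL', 'PL1', 'PL3', 'PL4']
```

Grouping adjacent NRS scores like this trades label resolution for more examples per class, which helps when only 25 patients are available.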


Subject(s)
Galvanic Skin Response , Machine Learning , Adult , Aged , Aged, 80 and over , Algorithms , Humans , Middle Aged , Pain , Pain Measurement , Young Adult
5.
Reg Anesth Pain Med ; 46(1): 41-48, 2021 01.
Article in English | MEDLINE | ID: mdl-33106278

ABSTRACT

INTRODUCTION: OnabotulinumtoxinA (OBTA) is approved for treating chronic headaches and migraines in adults, but there is limited scientific literature on outcomes in pediatric patients. The aim of this study was to determine whether subjects treated with OBTA reported a statistically significant improvement in the primary features (frequency, intensity, duration, and disability scoring) associated with migraines compared with placebo at follow-up visits. METHODS: After obtaining approval from the appropriate local (HS# 2016-3108) and federal institutions, the principal investigator enrolled candidates aged 8 to 17 years old diagnosed with chronic migraines (at least 6 months) and 15 or more headache days in a 4-week baseline period. This randomized controlled trial consisted of two phases: double-blind and open-label for the first two and last two sets of treatments, respectively. Subjects were randomly assigned to receive a treatment protocol (155 units at 31 injection sites) at 3-month intervals, with follow-up visits every 6 weeks. Non-parametric testing (Wilcoxon signed-rank test) was performed using widely available open-source statistical software ('R'). RESULTS: From February 2017 to November 2018, 17 subjects presented for a screening visit; 15 met eligibility criteria. Subjects who received OBTA reported a statistically significant decrease from baseline compared with placebo at the 6-week post-treatment visit in frequency (20 (7 to 17) vs 28 (23 to 28); p=0.038), intensity (5 (3 to 7) vs 7 (5 to 9); p=0.047), and PedMIDAS (Pediatric Migraine Disability Score) (3 (2 to 4) vs 4 (4 to 4); p=0.047). There was no statistically significant difference in the duration (10 (2 to 24) vs 24 (4 to 24); p=0.148) of migraines between the two groups. DISCUSSION: OnabotulinumtoxinA showed a statistically significant decrease in the frequency and intensity of migraines compared with placebo. 
No adverse effects or serious adverse events related to the use of OBTA were reported. In the future, we aim to evaluate the specific nature of migraines, for example, quality/location of pain presented during an initial consult to predict the likelihood of OBTA being a truly effective modality of pain management for pediatric migraineurs. TRIAL REGISTRATION NUMBER: NCT03055767.
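The non-parametric comparison above (Wilcoxon signed-rank test, run in R) works on ranked paired differences. A minimal pure-Python sketch of the W statistic follows, on toy pre/post headache counts; it drops zero differences and ignores tie handling, and a p-value would still require a reference distribution:

```python
def wilcoxon_w(pre, post):
    """Signed-rank W statistic: rank the absolute paired differences,
    then take the smaller of the positive- and negative-rank sums."""
    d = [b - a for a, b in zip(pre, post) if b != a]  # drop zero diffs
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    for r, i in enumerate(order, start=1):
        ranks[i] = r  # (tied magnitudes would need mid-ranks; omitted)
    w_pos = sum(r for r, x in zip(ranks, d) if x > 0)
    w_neg = sum(r for r, x in zip(ranks, d) if x < 0)
    return min(w_pos, w_neg)

# Toy paired data: headache days per month before vs after treatment
pre  = [20, 18, 25, 15, 22]
post = [15, 15, 27,  8, 23]
print(wilcoxon_w(pre, post))  # 3.0
```

Because the test uses only ranks, it suits the small sample (n = 15) and skewed count data reported in this trial better than a paired t-test would.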


Subject(s)
Botulinum Toxins, Type A , Migraine Disorders , Adolescent , Botulinum Toxins, Type A/adverse effects , Child , Cross-Over Studies , Double-Blind Method , Humans , Migraine Disorders/diagnosis , Migraine Disorders/drug therapy , Pain , Treatment Outcome
6.
Pain Physician ; 23(4S): S271-S282, 2020 08.
Article in English | MEDLINE | ID: mdl-32942787

ABSTRACT

BACKGROUND: Burnout has been a commonly discussed issue among physicians and other health care workers for the past ten years. A survey of interventional pain physicians published in 2016 reported high levels of emotional exhaustion, often considered the most taxing aspect of burnout. Job dissatisfaction appeared to be the leading driver of burnout in pain medicine physicians in the United States. The COVID-19 pandemic has drastically affected the entire health care workforce, and interventional pain management, along with other surgical specialties, has been affected significantly. The pandemic has placed several physical and emotional stressors on interventional pain management physicians, which may lead to increased physician burnout. OBJECTIVE: To assess the presence of burnout specific to the COVID-19 pandemic among practicing interventional pain physicians. METHODS: The American Society of Interventional Pain Physicians (ASIPP) administered a 32-question survey to its members via a commercially available online marketing platform (www.constantcontact.com). RESULTS: Of 179 surveys sent, 100 responses were obtained. The survey data demonstrated that 98% of physician practices were affected by COVID-19 and 91% of physicians felt it had a significant financial impact. Sixty-seven percent of the physicians responded that in-house billing was responsible for their increased level of burnout, whereas 73% responded that electronic medical records (EMRs) were one of the causes. Overall, 78% were very concerned. Almost all respondents have been affected by a reduction in interventional procedures. Sixty percent had a negative opinion about the future of their practice, whereas 66% were negative about the entire health care industry. LIMITATIONS: The survey included only a small number of member physicians. Consequently, it may not generalize to other specialties or even to pain medicine as a whole. 
However, it does represent the sentiment and present status of interventional pain management. CONCLUSION: The COVID-19 pandemic has put interventional pain practices throughout the United States under considerable financial and psychological stress. It is essential to quantify the extent of economic loss, offer strategies to actively manage provider practice/wellbeing, and minimize risk to personnel to keep patients safe.


Subject(s)
Burnout, Professional/epidemiology , Coronavirus Infections , Pain Management/psychology , Pandemics , Pneumonia, Viral , Betacoronavirus , COVID-19 , Humans , Job Satisfaction , Middle Aged , Physicians/psychology , SARS-CoV-2 , Stress, Psychological/epidemiology , Stress, Psychological/etiology , Surveys and Questionnaires , United States
7.
JMIR Res Protoc ; 9(7): e17783, 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-32609091

ABSTRACT

BACKGROUND: Assessment of pain is critical to its optimal treatment. There is a high demand for accurate objective pain assessment for effectively optimizing pain management interventions. However, pain is a multivalent, dynamic, and ambiguous phenomenon that is difficult to quantify, particularly when the patient's ability to communicate is limited. The criterion standard of pain intensity assessment is self-reporting. However, this unidimensional model is disparaged for its oversimplification and limited applicability in several vulnerable patient populations. Researchers have attempted to develop objective pain assessment tools through analysis of physiological pain indicators, such as electrocardiography, electromyography, photoplethysmography, and electrodermal activity. However, pain assessment by using only these signals can be unreliable, as various other factors alter these vital signs and the adaptation of vital signs to pain stimulation varies from person to person. Objective pain assessment using behavioral signs such as facial expressions has recently gained attention. OBJECTIVE: Our objective is to further the development and research of a pain assessment tool for use with patients who are likely experiencing mild to moderate pain. We will collect observational data through wearable technologies, measuring facial electromyography, electrocardiography, photoplethysmography, and electrodermal activity. METHODS: This protocol focuses on the second phase of a larger study of multimodal signal acquisition through facial muscle electrical activity, cardiac electrical activity, and electrodermal activity as indicators of pain and for building predictive models. We used state-of-the-art standard sensors to measure bioelectrical electromyographic signals and changes in heart rate, respiratory rate, and oxygen saturation. 
Based on the results, we further developed the pain assessment tool and reconstituted it with modern wearable sensors, devices, and algorithms. In this second phase, we will test the smart pain assessment tool in communicative patients after elective surgery in the recovery room. RESULTS: Our human research protections application for institutional review board review was approved for this part of the study. We expect to have the pain assessment tool developed and available for further research in early 2021. Preliminary results will be ready for publication during fall 2020. CONCLUSIONS: This study will help to further the development of and research on an objective pain assessment tool for monitoring patients likely experiencing mild to moderate pain. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/17783.

8.
BMC Anesthesiol ; 19(1): 191, 2019 10 27.
Article in English | MEDLINE | ID: mdl-31656163

ABSTRACT

BACKGROUND: Goal Directed Fluid Therapy (GDFT) represents an objective fluid replacement algorithm. The effect of provider variability remains a confounder. Overhydration worsens perioperative morbidity and mortality; therefore, routinely replacing the calculated NPO deficit prior to the operating room may cause harm. METHODS: A retrospective single-institution study analyzed patients receiving GDFT in the UC Irvine Medical Center main operating rooms from September 1, 2013 through September 1, 2015. The primary study question was whether GDFT suggested different fluid delivery after different NPO periods while reducing inter-provider variability. We created two patient groups distinguished by a 07:15 surgical start time or a start time after 12:00. We analyzed fluid administration totals with either a 1:1 or a 3:1 crystalloid-to-colloid ratio. We performed direct group-wise testing on total administered volume expressed as total mL, total mL/hr, and total mL/kg/hr between the first-case-start (AM) and afternoon-case (PM) groups. A linear regression model included all baseline covariates that differed between groups as well as plausible confounding factors for differing fluid needs. Finally, we combined all patients from both groups and created scatterplots of NPO time against total administered fluid to assess the effect of patient-reported NPO time on fluid administration. RESULTS: Whether reported by total administered volume or net fluid volume, and whether the sum was expressed as mL, mL/hr, or mL/kg/hr, the AM group received more fluid on average than the PM group in all cases. In the general linear models, AM vs PM case start did not reach significance in either model (p = 0.64 and p = 0.19, respectively). In the scatterplots of NPO time against fluid volumes, absolute adjusted and unadjusted R2 values were < 0.01 for each plot, indicating virtually non-existent correlations between uncorrected NPO time and the fluid volumes measured. 
CONCLUSIONS: This study showed that NPO periods do not influence a patient's volume status just prior to presentation to the operating room for surgical intervention. We hope these data will influence the practice of providers who routinely replace the calculated NPO-period volume deficit, particularly for patients presenting with later surgical case start times.
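The near-zero R2 values cited above are simply the squared Pearson correlation between NPO time and administered volume. A self-contained sketch follows, with toy numbers chosen to show an uncorrelated case (not the study's data):

```python
def r_squared(x, y):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Toy data: fasting hours vs administered fluid (mL), with no trend
npo_hours = [1, 2, 3, 4, 5]
fluid_ml  = [1200, 900, 1400, 900, 1200]
print(r_squared(npo_hours, fluid_ml))  # 0.0: NPO time explains nothing
```

An R2 below 0.01, as reported, means NPO time explained less than 1% of the variance in fluid administered.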


Subject(s)
Fluid Therapy/methods , Preoperative Care/methods , Adult , Aged , Algorithms , Colloids/administration & dosage , Crystalloid Solutions/administration & dosage , Fasting/physiology , Female , Fluid Therapy/statistics & numerical data , Goals , Humans , Male , Middle Aged , Preoperative Care/statistics & numerical data , Retrospective Studies , Time Factors
9.
Anaesth Crit Care Pain Med ; 38(1): 69-71, 2019 02.
Article in English | MEDLINE | ID: mdl-30513357

ABSTRACT

Blood pressure management in operating rooms (OR) and intensive care units (ICU) frequently involves vasopressor therapy manually titrated to an optimal range of mean arterial pressure (MAP). Ideally, changes in vasopressor infusion rates should quickly follow variations in blood pressure measurements. However, such a tightly controlled feedback loop is difficult to achieve. Few studies have examined blood pressure control when vasopressor therapy is administered manually in OR and ICU patients. We extracted MAP data from 3623 patients (2530 from the ORs and 1093 from the ICU) on vasopressors from our electronic medical records. The coefficient of variation ((standard deviation/mean) × 100) was calculated, and the values were additionally categorized into different MAP ranges (MAP < 60 mmHg, 60 < MAP < 80 mmHg, and MAP > 80 mmHg). There was no statistically significant difference between the two centres for MAP across all time points (80 ± 12 vs. 80 ± 16 mmHg, P = 0.996, 95% CI -6 to 6). The coefficients of variation of MAP were 13.7 ± 5.4% in the OR and 18.4 ± 9.8% in the ICU. Patients on vasopressors spent 48.8% of treatment time with a MAP between 60 and 80 mmHg (11.2% of the time with MAP < 60 mmHg, and 40% with MAP > 80 mmHg). These results provide a reasonable baseline from which to establish whether 'reduced variability' may be achieved with a closed-loop vasopressor administration system.
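The two summary measures above, coefficient of variation and time in range, are straightforward to compute. A minimal sketch on toy MAP samples follows (population SD is used here; the paper may have used the sample SD):

```python
import math

def coeff_variation(values):
    """Coefficient of variation: (population SD / mean) * 100."""
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / len(values))
    return sd / m * 100

def time_in_range(maps, lo=60, hi=80):
    """Fraction of MAP samples lying within [lo, hi] mmHg."""
    return sum(lo <= v <= hi for v in maps) / len(maps)

# Toy MAP trace (mmHg), not the study's data
maps = [55, 62, 70, 75, 78, 85, 90, 65]
print(round(coeff_variation(maps), 1))  # spread relative to the mean, %
print(time_in_range(maps))              # 5 of 8 samples in 60-80 mmHg
```

A closed-loop system would aim to lower the first number and raise the second relative to the manual-titration baseline reported here.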


Subject(s)
Arterial Pressure/drug effects , Critical Care , Vasoconstrictor Agents/administration & dosage , Arterial Pressure/physiology , Blood Pressure Determination , Humans , Hypertension/drug therapy , Hypotension/drug therapy , Intensive Care Units/statistics & numerical data , Operating Rooms/statistics & numerical data , Retrospective Studies , Surgical Procedures, Operative , Time Factors
10.
J Clin Monit Comput ; 33(5): 795-802, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30539349

ABSTRACT

Initial feasibility of a novel closed-loop controller created by our group for closed-loop control of vasopressor infusions has been previously described. In clinical practice, vasopressor potency may be affected by a variety of factors, including other pharmacologic agents, organ dysfunction, and vasoplegic states. The purpose of this study was therefore to evaluate the effectiveness of our controller in the face of large variations in drug potency, where 'effective' was defined as convergence on the target pressure over time. We hypothesized that the controller would remain effective in the face of up to tenfold variability in drug response. To perform the robustness study, our physiologic simulator was used to create randomized simulated septic patients. A total of 250 simulated patients were managed by the closed-loop controller in each of 7 norepinephrine responsiveness conditions: 0.1 ×, 0.2 ×, 0.5 ×, 1 ×, 2 ×, 5 ×, and 10 × the expected population response to drug dose. Controller performance was evaluated for each level of norepinephrine response using Varvel's criteria as well as time out of target. Median performance error and median absolute performance error were less than 5% at all response levels. Wobble was below 3% and divergence remained negative (i.e., the controller tended to converge towards the target over time) at all norepinephrine response levels, but at the highest response level of 10 × the value approached zero, suggesting the controller may be approaching instability. Response levels of 0.1 × and 0.2 × exhibited significantly more time out of target in the lower ranges (p < 0.001) compared to the 1 × response level, as the controller was slower to correct the initial hypotension. In this simulation study, the closed-loop vasopressor controller remained effective in simulated patients exhibiting 0.1 to 10 × the expected population drug response.
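The Varvel controller-performance criteria cited above are defined from percentage performance errors. A minimal sketch of MDPE (bias), MDAPE (inaccuracy), and wobble on a toy MAP trace follows; divergence, the trend of absolute error over time, is omitted for brevity:

```python
from statistics import median

def varvel(measured, target):
    """Varvel performance measures for a series of MAP samples."""
    pe = [(m - target) / target * 100 for m in measured]  # performance error, %
    mdpe = median(pe)                            # median PE: bias
    mdape = median(abs(e) for e in pe)           # median |PE|: inaccuracy
    wobble = median(abs(e - mdpe) for e in pe)   # variability around the bias
    return mdpe, mdape, wobble

# Toy trace: MAP samples around a 64 mmHg target (values chosen for
# exact arithmetic, not taken from the study)
mdpe, mdape, wobble = varvel([48, 64, 80, 64], target=64)
print(mdpe, mdape, wobble)  # 0.0 12.5 12.5
```

In the study, MDPE and MDAPE under 5% and wobble under 3% across a 100-fold potency range are what justify the word 'robust'.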


Subject(s)
Computer Simulation , Hypotension/prevention & control , Sepsis/drug therapy , Vasoconstrictor Agents/administration & dosage , Algorithms , Blood Pressure/drug effects , Humans , Monte Carlo Method , Norepinephrine/administration & dosage , Random Allocation , Software
11.
J Child Neurol ; 33(9): 580-586, 2018 08.
Article in English | MEDLINE | ID: mdl-29877131

ABSTRACT

BACKGROUND: The use of onabotulinumtoxinA in the pediatric population has not been evaluated for chronic migraine in a longitudinal study. This retrospective study sought to determine the efficacy and safety of onabotulinumtoxinA in the prophylactic treatment of chronic migraine in the pediatric population. METHODS: The authors retrospectively evaluated pediatric patients who had been treated with onabotulinumtoxinA in the outpatient pain clinic for chronic migraine. Demographic data and pre- and posttreatment migraine days (frequency), pain scores (intensity), and duration of migraine episodes were collected from patient records. RESULTS: Ten patients were included. Median headache frequency decreased from 15.5 [8, 29.5] to 4 [2, 10] days/month (P < .0001), duration from 8 [0, 24] to 1 [0, 7] hours (P = .025), and intensity from 6 [4, 8] to 4 [2, 5] (P = .0063). No serious adverse events were reported. CONCLUSIONS: This review over a 5-year longitudinal period demonstrates statistically significant improvement from baseline.


Subject(s)
Botulinum Toxins, Type A/therapeutic use , Migraine Disorders/drug therapy , Neuromuscular Agents/therapeutic use , Adolescent , Child , Female , Humans , Longitudinal Studies , Male , Retrospective Studies , Treatment Outcome
12.
J Clin Monit Comput ; 32(1): 5-11, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28124225

ABSTRACT

Blood pressure management is a central concern in critical care patients. For a variety of reasons, titration of vasopressor infusions may be an ideal use case for computer assistance. Using our previous experience gained in the bench-to-bedside development of a computer-assisted fluid management system, we have developed a novel controller for this purpose. The aim of this preliminary study was to assess the feasibility of using this controller in simulated patients to maintain a target blood pressure in both stable and variable blood-pressure scenarios. We tested the controller in two sets of simulation scenarios: one with stable underlying blood pressure and a second with variable underlying blood pressure. In addition, in the variable phase of the study, we tested infusion-line delays of 8-60 s. The primary outcome for both testing conditions (stable and variable) was the percentage of case time in the target range. We determined a priori that acceptable performance would require greater than 95% of case time in target for the first phase of the protocol, given its simple nature, and 80% or greater for the second phase, given the erratic nature of the blood pressure changes taking place. A total of 250 distinct cases for each simulation condition, both managed and unmanaged, were run over 4 days. In the stable hemodynamic conditions, the unmanaged group had a mean arterial pressure (MAP) of 57.5 ± 4.6 mmHg and spent only 5.6% of case time in target. The managed group had a MAP of 70.3 ± 2.6 mmHg and spent 99.5% of case time in target (p < 0.00001 for both comparisons between groups). In the variable hemodynamic conditions, the unmanaged group had a MAP of 53.1 ± 5.0 mmHg and spent 0% of case time in target. The managed group had a MAP of 70.5 ± 3.2 mmHg (p < 0.00001 compared to the unmanaged group) and spent 88.6% of case time in target (p < 0.00001 compared to the unmanaged group), with 6.4% of case time over and 5.1% under target. 
Increasing infusion lag increased the coefficient of variation by about 10% per 15 s of lag (p = 0.001). This study demonstrated that this novel controller for vasopressor administration is able to maintain a target mean arterial pressure in a simulated physiologic model in the face of random disturbances and infusion-line lag.
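The closed-loop behavior described above can be illustrated with a deliberately simple proportional controller acting on a linear pressure model. This is a sketch of the closed-loop idea only; the authors' actual control algorithm is not specified in the abstract, and the gains and pressure model here are assumptions:

```python
def simulate(target=70.0, base_map=55.0, sensitivity=2.0, kp=0.2, steps=50):
    """Toy closed loop: MAP = base + sensitivity * infusion rate,
    with the infusion rate nudged proportionally to the error."""
    infusion = 0.0
    current = base_map
    for _ in range(steps):
        error = target - current
        infusion += kp * error              # proportional correction
        current = base_map + sensitivity * infusion
    return current

final_map = simulate()
print(round(final_map, 1))  # settles at the 70 mmHg target
```

In this toy model a tenfold change in `sensitivity` (the drug-response analogue of the study's 0.1-10 × conditions) changes how fast the loop converges, not where it converges, which mirrors the robustness result reported.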


Subject(s)
Monitoring, Physiologic/instrumentation , Vasoconstrictor Agents/therapeutic use , Algorithms , Arterial Pressure , Automation , Blood Pressure/physiology , Computer Systems , Critical Care , Equipment Design , Feasibility Studies , Fluid Therapy/methods , Heart Rate/physiology , Hemodynamics , Humans , Intensive Care Units , Monitoring, Physiologic/methods
13.
Open Orthop J ; 10: 505-511, 2016.
Article in English | MEDLINE | ID: mdl-27990189

ABSTRACT

BACKGROUND: A Perioperative Surgical Home (PSH) care model applies a standardized multidisciplinary approach to patient care, using evidence-based medicine to modify and improve protocols. Analysis of patient outcome measures, such as postoperative nausea and vomiting (PONV), allows for refinement of existing protocols to improve patient care. We aimed to compare the incidence of PONV in patients who underwent primary total joint arthroplasty before and after modification of our PSH pain protocol. METHODS: All total joint replacement PSH (TJR-PSH) patients who underwent primary THA (n=149) or TKA (n=212) in the study period were included. The modified protocol added a single dose of intravenous (IV) ketorolac given in the operating room and oral immediate-release oxycodone in place of IV hydromorphone in the Post Anesthesia Care Unit (PACU). The outcomes were (1) incidence of PONV and (2) average pain score in the PACU. We also examined the effect of primary anesthetic (spinal vs. GA) on these outcomes. The groups were compared using chi-square tests of proportions. RESULTS: The incidence of postoperative nausea in the PACU decreased significantly with the modified protocol (27.4% vs. 38.1%, p=0.0442). There was no difference in PONV based on choice of anesthetic or procedure. Average PACU pain scores did not differ significantly between the two protocols. CONCLUSION: Simple modifications to the TJR-PSH multimodal pain management protocol, with a decrease in IV narcotic use, resulted in a lower incidence of postoperative nausea without compromising average PACU pain scores. This report demonstrates the need for continuous monitoring of PSH pathways and implementation of revisions as needed.
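A chi-square test of two proportions, as used to compare the PONV rates above, can be computed directly. The abstract gives only the rates (38.1% before, 27.4% after), not the before/after group sizes, so the counts below are illustrative assumptions chosen to roughly match those rates; the resulting statistic will not reproduce the paper's exact p-value.

```python
def chi2_two_proportions(x1, n1, x2, n2):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for comparing two independent proportions x1/n1 vs x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)          # pooled rate under H0
    se2 = p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)
    return (p1 - p2) ** 2 / se2

# Hypothetical group sizes; only the rates are reported in the abstract.
chi2 = chi2_two_proportions(69, 181, 49, 180)  # ~38.1% vs ~27.2%
significant = chi2 > 3.841                     # 0.05 critical value, df = 1
```

The statistic exceeding 3.841 corresponds to p < 0.05, consistent with the reported significance of the drop in nausea incidence.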

14.
World J Orthop ; 7(6): 376-82, 2016 Jun 18.
Article in English | MEDLINE | ID: mdl-27335813

ABSTRACT

AIM: To determine the impact of different characteristics on postoperative outcomes for patients in a joint arthroplasty Perioperative Surgical Home (PSH) program. METHODS: A retrospective review was performed for patients enrolled in a joint arthroplasty PSH program who had undergone primary total hip arthroplasty (THA) or total knee arthroplasty (TKA). Patients were preoperatively stratified based on specific procedure performed, age, gender, body mass index (BMI), American Society of Anesthesiologists Physical Classification System (ASA) score, and Charlson Comorbidity Index (CCI) score. The primary outcome criterion was hospital length of stay (LOS). Secondary criteria included operating room (OR) duration, transfusion rate, Post-Anesthesia Care Unit (PACU) stay, readmission rate, post-operative complications, and discharge disposition. For each outcome, the predictor variables were entered into a generalized linear model with appropriate response and assessed for predictive relationship to the dependent variable. The significance level was set to 0.05. RESULTS: A total of 337 patients, 200 in the TKA cohort and 137 in the THA cohort, were eligible for the study. Nearly two-thirds of patients were female. Patient age averaged 64 years and preoperative BMI averaged 29 kg/m2. The majority of patients were ASA score III and CCI score 0. After analysis, ASA score was the only variable predictive for LOS (P = 0.0011), and each increase in ASA score above 2 increased LOS by approximately 0.5 days. ASA score was also the only variable predictive for readmission rate (P = 0.0332). BMI was the only variable predictive for PACU duration (P = 0.0136). Specific procedure performed, age, gender, and CCI score were not predictive for any of the outcome criteria. OR duration, transfusion rate, post-operative complications, and discharge disposition were not significantly associated with any of the predictor variables. 
CONCLUSION: The joint arthroplasty PSH model reduces postoperative outcome variability for patients with different preoperative characteristics and medical comorbidities.
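The reported ASA-to-LOS relationship (~0.5 days longer LOS per ASA point above 2) can be illustrated with a single-predictor least-squares fit. This is a deliberately simplified stand-in for the study's generalized linear model over several predictors, and the synthetic data below merely encode the reported effect size; none of these numbers come from the actual cohort.

```python
import random

def ols(xs, ys):
    """Slope and intercept of the least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic cohort of 337 patients (matching the study's n) in which
# LOS = 2.0 days at ASA 2, rising 0.5 days per ASA point, plus noise.
rng = random.Random(0)
asa = [rng.choice([2, 3, 4]) for _ in range(337)]
los = [2.0 + 0.5 * (a - 2) + rng.gauss(0, 0.3) for a in asa]
intercept, slope = ols(asa, los)   # slope should recover roughly 0.5
```

With a cohort of this size the fitted slope lands close to the 0.5 days-per-ASA-point effect that was built into the data, mirroring how the study's model isolated ASA score as the predictor of LOS.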
