ABSTRACT
The diagnosis of bloodborne viral infections (viremia) is currently relegated to central laboratories because of the complex procedures required to detect viruses in blood samples. The development of point-of-care diagnostics for viremia would enable patients to receive a diagnosis and begin treatment immediately instead of waiting days for results. Point-of-care systems for viremia have been limited by the challenges of integrating multiple precise steps into a fully automated (i.e., sample-to-answer), compact, low-cost system. We recently reported the development of thermally responsive alkane partitions (TRAPs), which enable the complete automation of diagnostic assays with complex samples. Here we report the use of TRAPs for the sample-to-answer detection of viruses in blood using a low-cost portable device and easily manufacturable cassettes. Specifically, we demonstrate the detection of SARS-CoV-2 in spiked blood samples, and we show that our system detects viremia in COVID-19 patient samples with good agreement with conventional RT-qPCR. We anticipate that our sample-to-answer system can be used to rapidly diagnose SARS-CoV-2 viremia at the point of care, leading to better health outcomes for patients with severe COVID-19 disease, and that our system can be applied to the diagnosis of other life-threatening bloodborne viral diseases, including hepatitis C and HIV.
Subject(s)
Alkanes , COVID-19 , SARS-CoV-2 , Viremia , Viremia/diagnosis , Viremia/virology , Humans , SARS-CoV-2/isolation & purification , COVID-19/diagnosis , COVID-19/virology , COVID-19/blood , Alkanes/chemistry , Temperature , Point-of-Care Systems , RNA, Viral/analysis
ABSTRACT
BACKGROUND: Postoperative hematoma after carotid endarterectomy (CEA) is a devastating complication and may be more likely in patients with uncontrolled hypertension and coughing on emergence from anesthesia. We sought to determine whether intubation with a nasal endotracheal tube (ETT), instead of an oral ETT, is associated with "smoother" (i.e., less hemodynamically unstable) emergence from general anesthesia for CEA. METHODS: Patients receiving CEA between December 2015 and September 2021 at a single tertiary academic medical center were included. We examined the electronic anesthesia records for 323 patients who underwent CEA during the 6-year study period and recorded consecutive systolic blood pressure (SBP) values during the 10 minutes before extubation as a surrogate for "smoothness" of emergence. RESULTS: Intubation with a nasal ETT, compared with intubation with an oral ETT, was not associated with any difference in the maximum, minimum, average, median, or standard deviation of serial SBP values in the 10 minutes before extubation. The average SBP on emergence was 141 mm Hg for patients with an oral ETT and 144 mm Hg for those with a nasal ETT (P = 0.562). The maximum SBP for patients with oral and nasal ETTs was 170 mm Hg and 174 mm Hg, respectively (P = 0.491). There were also no differences in the qualitative "smoothness" of emergence or in the percentage of patients who required an intravenous dose of 1 or more antihypertensive medications. The incidence of postoperative complications was similar between the 2 groups. CONCLUSIONS: When SBP is used as a surrogate for smoothness of emergence from general anesthesia for CEA, intubation with a nasal ETT was not associated with better hemodynamic stability than intubation with an oral ETT.
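As an illustration of the surrogate metrics above, the following Python sketch summarizes serial pre-extubation SBP readings per patient and compares group means; the readings, group assignment, and the use of a two-sample t-test are all assumptions for illustration, not the authors' actual analysis.

```python
# Minimal sketch: per-patient summaries of consecutive SBP readings in the
# 10 minutes before extubation, compared between ETT groups (made-up data).
import numpy as np
from scipy.stats import ttest_ind

def sbp_summary(readings):
    """Metrics used as surrogates for 'smoothness' of emergence."""
    v = np.asarray(readings, dtype=float)
    return {"max": v.max(), "min": v.min(), "mean": v.mean(),
            "median": float(np.median(v)), "sd": v.std(ddof=1)}

# One list of consecutive SBP values per patient (hypothetical)
oral = [[138, 145, 152, 149, 141], [150, 162, 170, 158, 147]]
nasal = [[140, 148, 155, 151, 144], [152, 160, 174, 161, 150]]

oral_means = [sbp_summary(p)["mean"] for p in oral]
nasal_means = [sbp_summary(p)["mean"] for p in nasal]
t, p = ttest_ind(oral_means, nasal_means)  # one reasonable choice of test
print(f"mean SBP, oral vs nasal: t = {t:.2f}, P = {p:.3f}")
```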
Subject(s)
Endarterectomy, Carotid , Humans , Endarterectomy, Carotid/adverse effects , Cohort Studies , Treatment Outcome , Intubation, Intratracheal/adverse effects , Anesthesia, General/adverse effects
ABSTRACT
Our objective was to characterize platelet surface glycoprotein (GP)Ibα, activated GPIIb-IIIa, and P-selectin levels during and after extracorporeal membrane oxygenation (ECMO). We performed a single center cohort study of 10 adult patients on ECMO for cardiogenic shock. Patients had blood samples drawn on ECMO day 1 or 2, day 3, day 5, and 48-72 hours after ECMO decannulation. Platelets from untreated blood samples and samples treated with either adenosine diphosphate (ADP) or thrombin receptor agonist peptide (TRAP) had surface GPIbα, activated GPIIb-IIIa, and P-selectin levels measured using flow cytometry. Platelet surface GPIbα levels varied significantly by time on ECMO (p = .002) and were significantly higher on ECMO day 5 compared to ECMO day 1 (p = .01). GPIbα levels during ECMO did not differ significantly from levels after ECMO decannulation (p = .14). Activated GPIIb-IIIa levels did not change significantly during ECMO, but were significantly higher after ECMO decannulation (p = .04). There were no significant differences in P-selectin levels during ECMO (p = .87) or after ECMO decannulation (p = .41). Platelet surface GPIbα and P-selectin levels were similar during and after ECMO whereas activated GPIIb-IIIa levels were lower during ECMO, particularly in response to TRAP stimulation, potentially contributing to ECMO-induced coagulopathy.
Subject(s)
Extracorporeal Membrane Oxygenation/methods , P-Selectin/metabolism , Platelet Glycoprotein GPIIb-IIIa Complex/metabolism , Platelet Glycoprotein GPIb-IX Complex/metabolism , Humans
ABSTRACT
BACKGROUND: There are conflicting data on the effects of antipsychotic medications on delirium in patients in the intensive care unit (ICU). METHODS: In a randomized, double-blind, placebo-controlled trial, we assigned patients with acute respiratory failure or shock and hypoactive or hyperactive delirium to receive intravenous boluses of haloperidol (maximum dose, 20 mg daily), ziprasidone (maximum dose, 40 mg daily), or placebo. The volume and dose of a trial drug or placebo were halved or doubled at 12-hour intervals on the basis of the presence or absence of delirium, as detected with the use of the Confusion Assessment Method for the ICU, and of side effects of the intervention. The primary end point was the number of days alive without delirium or coma during the 14-day intervention period. Secondary end points included 30-day and 90-day survival, time to freedom from mechanical ventilation, and time to ICU and hospital discharge. Safety end points included extrapyramidal symptoms and excessive sedation. RESULTS: Written informed consent was obtained from 1183 patients or their authorized representatives. Delirium developed in 566 patients (48%), of whom 89% had hypoactive delirium and 11% had hyperactive delirium. Of the 566 patients, 184 were randomly assigned to receive placebo, 192 to receive haloperidol, and 190 to receive ziprasidone. The median duration of exposure to a trial drug or placebo was 4 days (interquartile range, 3 to 7). The median number of days alive without delirium or coma was 8.5 (95% confidence interval [CI], 5.6 to 9.9) in the placebo group, 7.9 (95% CI, 4.4 to 9.6) in the haloperidol group, and 8.7 (95% CI, 5.9 to 10.0) in the ziprasidone group (P=0.26 for overall effect across trial groups). The use of haloperidol or ziprasidone, as compared with placebo, had no significant effect on the primary end point (odds ratios, 0.88 [95% CI, 0.64 to 1.21] and 1.04 [95% CI, 0.73 to 1.48], respectively). There were no significant between-group differences with respect to the secondary end points or the frequency of extrapyramidal symptoms. CONCLUSIONS: The use of haloperidol or ziprasidone, as compared with placebo, in patients with acute respiratory failure or shock and hypoactive or hyperactive delirium in the ICU did not significantly alter the duration of delirium. (Funded by the National Institutes of Health and the VA Geriatric Research Education and Clinical Center; MIND-USA ClinicalTrials.gov number, NCT01211522.).
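The protocol's titration rule is essentially a small algorithm: every 12 hours the dose is halved or doubled according to CAM-ICU status and side effects. A minimal sketch, in which the dose floors and the side-effect handling are assumptions not specified above:

```python
# Sketch of the 12-hourly halve-or-double titration described in the text.
# The maximum daily doses come from the abstract; the minimum floors and the
# rule that side effects force a halving are illustrative assumptions.
MAX_DAILY = {"haloperidol": 20.0, "ziprasidone": 40.0}  # mg (from the text)
MIN_DAILY = {"haloperidol": 2.5, "ziprasidone": 5.0}    # mg (assumed)

def next_dose(drug, current_mg, delirium_present, side_effects):
    """Next 12-hourly dose: double while delirium persists, halve otherwise."""
    if side_effects or not delirium_present:
        proposed = current_mg / 2.0
    else:
        proposed = current_mg * 2.0
    return min(max(proposed, MIN_DAILY[drug]), MAX_DAILY[drug])

print(next_dose("haloperidol", 10.0, delirium_present=True, side_effects=False))   # 20.0
print(next_dose("ziprasidone", 20.0, delirium_present=False, side_effects=False))  # 10.0
```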
Subject(s)
Antipsychotic Agents/therapeutic use , Critical Illness/psychology , Delirium/drug therapy , Dopamine Antagonists/therapeutic use , Haloperidol/therapeutic use , Piperazines/therapeutic use , Thiazoles/therapeutic use , Aged , Antipsychotic Agents/adverse effects , Critical Illness/mortality , Critical Illness/therapy , Double-Blind Method , Female , Haloperidol/administration & dosage , Haloperidol/adverse effects , Humans , Kaplan-Meier Estimate , Male , Middle Aged , Piperazines/administration & dosage , Piperazines/adverse effects , Respiratory Insufficiency/psychology , Shock/psychology , Thiazoles/administration & dosage , Thiazoles/adverse effects , Treatment Failure
ABSTRACT
BACKGROUND: Hyperglycemia is associated with mortality after trauma; however, few studies have simultaneously investigated the association of depth of shock and acute hyperglycemia. We evaluated the associations of lactate, as a surrogate measure for depth of shock, and glucose levels with mortality following severe blunt trauma. We hypothesized that measurements of both lactate and glucose are associated with mortality when considered simultaneously. METHODS: This is a retrospective cohort study at a single academic trauma center. Inclusion criteria were age 18-89 years, blunt trauma, injury severity score (ISS) ≥15, and transfer from the scene of injury. All serum blood glucose and lactate values were analyzed within the first 24 hours of admission. Multiple metrics of glucose and lactate were calculated: first glucose (Glucadm) and lactate (Lacadm) at hospital admission, mean 24-hour after hospital admission glucose (Gluc24-hMean) and lactate (Lac24-hMean), maximum 24-hour after hospital admission glucose (Gluc24-hMax) and lactate (Lac24-hMax), and time-weighted 24-hour after hospital admission glucose (Gluc24-hTW) and lactate (Lac24-hTW). The primary outcome was in-hospital mortality. Multivariable logistic regression modeling assessed the odds ratio (OR) of mortality, after adjusting for confounding variables. RESULTS: A total of 1439 trauma patients were included. When metrics of both glucose and lactate were analyzed, after adjusting for age, ISS, and admission shock index, only lactate remained significantly associated with mortality: Lacadm (OR, 1.28; 95% confidence interval [CI], 1.13-1.44); Lac24-hMean (OR, 1.86; 95% CI, 1.52-2.28); Lac24-hMax (OR, 1.39; 95% CI, 1.23-1.56); and Lac24-hTW (OR, 1.86; 95% CI, 1.53-2.26). CONCLUSIONS: Lactate is associated with mortality in severely injured blunt trauma patients, after adjusting for injury severity, age, and shock index. However, we did not find evidence for an association of glucose with mortality after adjusting for lactate.
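The abstract does not spell out how the time-weighted metrics were computed; a common construction is the trapezoidal time average of serial values, sketched below with hypothetical draw times. Treat the formula as one plausible definition rather than the authors' exact method.

```python
# Sketch: admission, mean, maximum, and (assumed) time-weighted metrics for
# serial glucose or lactate values over the first 24 hours.
import numpy as np

def metrics_24h(times_h, values):
    """times_h: hours since admission, sorted; values: paired measurements."""
    t = np.asarray(times_h, dtype=float)
    v = np.asarray(values, dtype=float)
    # Trapezoidal integral of the value-time curve divided by elapsed time
    tw = np.sum((v[1:] + v[:-1]) / 2.0 * np.diff(t)) / (t[-1] - t[0])
    return {"adm": v[0], "mean": v.mean(), "max": v.max(), "tw": tw}

# Hypothetical lactate draws (mmol/L) in the first 24 hours of admission
print(metrics_24h([0, 4, 10, 18, 24], [4.2, 3.1, 2.4, 1.9, 1.6]))
```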
Subject(s)
Blood Glucose/metabolism , Hospital Mortality , Hyperglycemia/blood , Hyperglycemia/mortality , Lactic Acid/blood , Wounds, Nonpenetrating/blood , Wounds, Nonpenetrating/mortality , Adolescent , Adult , Aged , Aged, 80 and over , Biomarkers/blood , Female , Humans , Hyperglycemia/diagnosis , Injury Severity Score , Male , Middle Aged , Patient Admission , Predictive Value of Tests , Prognosis , Retrospective Studies , Risk Assessment , Risk Factors , Time Factors , Wounds, Nonpenetrating/diagnosis , Young Adult
ABSTRACT
BACKGROUND: Over 6 million esophagogastroduodenoscopy (EGD) procedures are performed in the United States each year. Patients having anesthesia for advanced EGD procedures, such as interventional procedures, are at high risk for hypoxemia. METHODS: Our primary study aim was to evaluate whether high-flow nasal cannula (HFNC) oxygen reduces the incidence of hypoxemia during anesthesia for advanced EGD. Secondarily, we studied whether HFNC oxygen reduces hypercarbia or hypotension. After obtaining written informed consent, adults having anesthesia for advanced EGD, expected to last longer than 15 minutes, were randomly assigned to receive HFNC oxygen or standard nasal cannula (SNC) oxygen. The primary outcome was occurrence of one or more hypoxemia events during anesthesia, defined by arterial oxygen saturation <92% for at least 15 consecutive seconds. Secondary outcomes were occurrence of one or more hypercarbia or hypotension events. A hypercarbia event was defined by a transcutaneous CO2 measurement 20 mm Hg or more above baseline, and a hypotension event was defined by a mean arterial blood pressure measurement 25% or more below baseline. RESULTS: Two hundred seventy-one adult patients were enrolled and randomized, and 262 patients completed study procedures. Eight randomized patients did not complete study procedures due to changes in their anesthesia or endoscopy plan. One patient was excluded from analysis because their procedure was aborted after 1 minute. Patients who received HFNC oxygen (N = 132) had a significantly lower incidence of hypoxemia than those who received SNC oxygen (N = 130; 21.2% vs 33.1%; hazard ratio [HR] = 0.59 [95% confidence interval {CI}, 0.36-0.95]; P = .03). There was no difference in the incidence of hypercarbia or hypotension between the groups. The HR for hypercarbia with HFNC oxygen was 1.29 (95% CI, 0.89-1.88; P = .17), and the HR for hypotension was 1.25 (95% CI, 0.86-1.82; P = .25). CONCLUSIONS: HFNC oxygen reduces the incidence of hypoxemia during anesthesia for advanced EGD and may offer an opportunity to enhance patient safety during these procedures.
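The hypoxemia definition above (saturation <92% for at least 15 consecutive seconds) amounts to a run-length test on the saturation trace. A minimal sketch, assuming a 1 Hz sample stream:

```python
# Sketch: detect a hypoxemia event (SpO2 < 92% for >= 15 consecutive
# seconds) in a regularly sampled trace; the sampling rate is an assumption.
import numpy as np

def has_hypoxemia_event(spo2, sample_hz=1.0, threshold=92.0, min_seconds=15.0):
    below = np.asarray(spo2, dtype=float) < threshold
    needed = int(min_seconds * sample_hz)
    run = 0
    for flag in below:
        run = run + 1 if flag else 0
        if run >= needed:
            return True
    return False

trace = [96] * 30 + [90] * 16 + [95] * 30  # 16 s below threshold -> event
print(has_hypoxemia_event(trace))           # True
```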
Subject(s)
Anesthesia, Intravenous , Cannula , Endoscopy, Digestive System , Hypoxia/prevention & control , Oxygen Inhalation Therapy/instrumentation , Oxygen/administration & dosage , Administration, Inhalation , Aged , Anesthesia, Intravenous/adverse effects , Baltimore , Female , Humans , Hypoxia/blood , Hypoxia/etiology , Male , Middle Aged , Oxygen/adverse effects , Oxygen/blood , Oxygen Inhalation Therapy/adverse effects , Protective Factors , Risk Factors , Time Factors , Treatment Outcome
ABSTRACT
INTRODUCTION: On 1 October 2015, the USA transitioned from the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM). Considering the major changes to drug overdose coding, we examined how using different approaches to define all-drug overdose and opioid overdose morbidity indicators in ICD-9-CM impacts longitudinal analyses that span the transition, using emergency department (ED) and hospitalisation data from six states' hospital discharge data systems. METHODS: We calculated monthly all-drug and opioid overdose ED visit rates and hospitalisation rates (per 100 000 population) by state, starting in January 2010. We applied three ICD-9-CM indicator definitions that included identical all-drug or opioid-related codes but restricted the number of fields searched to varying degrees. Under ICD-10-CM, all fields were searched for relevant codes. Adjusting for seasonality and autocorrelation, we used interrupted time series models with level and slope change parameters in October 2015 to compare trend continuity when employing different ICD-9-CM definitions. RESULTS: Most states observed consistent or increased capture of all-drug and opioid overdose cases in ICD-10-CM coded hospital discharge data compared with ICD-9-CM. More inclusive ICD-9-CM indicator definitions reduced the magnitude of significant level changes, but the effect of the transition was not eliminated. DISCUSSION: The coding change appears to have introduced systematic differences in measurement of drug overdoses before and after 1 October 2015. When using hospital discharge data for drug overdose surveillance, researchers and decision makers should be aware that trends spanning the transition may not reflect actual changes in drug overdose rates.
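The model described, a level and slope change at October 2015 with adjustment for seasonality and autocorrelation, is a segmented (interrupted time series) regression. The sketch below uses synthetic monthly rates, month-of-year dummies for seasonality, and Newey-West (HAC) standard errors; the data, lag choice, and statsmodels tooling are assumptions for illustration.

```python
# Sketch: segmented regression with a level and slope change at the
# October 2015 coding transition, month dummies for seasonality, and
# Newey-West (HAC) errors for autocorrelation. Rates are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = pd.date_range("2010-01-01", "2018-12-01", freq="MS")
t = np.arange(len(months), dtype=float)
post = (months >= "2015-10-01").astype(float)
shift = int(post.argmax())  # index of October 2015
rate = (10 + 0.05 * t + 2.0 * post + 0.03 * post * (t - shift)
        + rng.normal(0, 0.5, len(t)))  # synthetic monthly ED visit rate

X = pd.DataFrame({
    "t": t,                        # pre-existing trend
    "level": post,                 # step change at the transition
    "slope": post * (t - shift),   # trend change after the transition
})
season = pd.get_dummies(months.month, prefix="m", drop_first=True, dtype=float)
X = pd.concat([X, season.reset_index(drop=True)], axis=1)

fit = sm.OLS(rate, sm.add_constant(X)).fit(cov_type="HAC",
                                           cov_kwds={"maxlags": 12})
print(fit.params[["level", "slope"]])  # estimated level and slope change
```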
Subject(s)
Drug Overdose , International Classification of Diseases , Analgesics, Opioid , Drug Overdose/epidemiology , Humans , Interrupted Time Series Analysis , Morbidity
ABSTRACT
BACKGROUND: In October 2015, discharge data coding in the USA shifted to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), necessitating new indicator definitions for drug overdose morbidity. Amid the drug overdose crisis, characterising discharge records that have ICD-10-CM drug overdose codes can inform the development of standardised drug overdose morbidity indicator definitions for epidemiological surveillance. METHODS: Eight states submitted aggregated data involving hospital and emergency department (ED) discharge records with ICD-10-CM codes starting with T36-T50, for visits occurring from October 2015 to December 2016. Frequencies were calculated for (1) the position within the diagnosis billing fields where the drug overdose code occurred; (2) primary diagnosis code grouped by ICD-10-CM chapter; (3) encounter types; and (4) intents, underdosing and adverse effects. RESULTS: Among all records with a drug overdose code, the primary diagnosis field captured 70.6% of hospitalisations (median=69.5%, range=66.2%-76.8%) and 79.9% of ED visits (median=80.7%; range=69.8%-88.0%) on average across participating states. The most frequent primary diagnosis chapters included injury and mental disorder chapters. Among visits with codes for drug overdose initial encounters, subsequent encounters and sequelae, on average 94.6% of hospitalisation records (median=98.3%; range=68.8%-98.8%) and 95.5% of ED records (median=99.5%; range=79.2%-99.8%), represented initial encounters. Among records with drug overdose of any intent, adverse effect and underdosing codes, adverse effects comprised an average of 74.9% of hospitalisation records (median=76.3%; range=57.6%-81.1%) and 50.8% of ED records (median=48.9%; range=42.3%-66.8%), while unintentional intent comprised an average of 11.1% of hospitalisation records (median=11.0%; range=8.3%-14.5%) and 28.2% of ED records (median=25.6%; range=20.8%-40.7%). CONCLUSION: Results highlight considerations for adapting and standardising drug overdose indicator definitions in ICD-10-CM.
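Operationally, the first step is flagging records whose diagnosis fields contain a code starting with T36-T50 and noting the billing-field position of the first such code. A pandas sketch with hypothetical column names (dx1, dx2) and example codes:

```python
# Sketch: flag discharge records containing an ICD-10-CM drug overdose code
# (T36-T50) and tabulate the position of the first such code. The column
# names and example codes are hypothetical.
import pandas as pd

def is_overdose_code(code):
    """True for ICD-10-CM codes whose category is T36-T50."""
    return (isinstance(code, str) and code.startswith("T")
            and code[1:3].isdigit() and 36 <= int(code[1:3]) <= 50)

records = pd.DataFrame({
    "dx1": ["T402X1A", "J189", "F1120"],   # primary diagnosis field
    "dx2": [None, "T50901A", "T40601A"],   # secondary field
})
dx_cols = ["dx1", "dx2"]

def first_overdose_position(row):
    for position, col in enumerate(dx_cols, start=1):
        if is_overdose_code(row[col]):
            return position
    return None

records["od_pos"] = records.apply(first_overdose_position, axis=1)
print(records["od_pos"].value_counts(dropna=False))
flagged = records["od_pos"].dropna()
print("captured by primary field:", (flagged == 1).mean())
```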
Subject(s)
Drug Overdose , International Classification of Diseases , Drug Overdose/epidemiology , Emergency Service, Hospital , Hospitals , Humans , Morbidity , Patient Discharge
ABSTRACT
PURPOSE: The association between intraoperative hypotension and perioperative acute ischemic stroke is not well described. We hypothesized that intraoperative hypotension would be associated with perioperative acute ischemic stroke. METHODS: Four-year retrospective cohort study of elective non-cardiovascular, non-neurological surgical patients. Characteristics of patients who had perioperative acute ischemic stroke were compared against those of patients who did not have acute ischemic stroke. Multivariable logistic regression was used to determine whether hypotension was independently associated with increased odds of perioperative acute ischemic stroke. RESULTS: Thirty-four of 9816 patients (0.3%) who met study inclusion criteria had perioperative acute ischemic stroke. Stroke patients were older and had more comorbidities including hypertension, coronary artery disease, diabetes mellitus, active tobacco use, chronic obstructive pulmonary disease, cerebral vascular disease, atrial fibrillation, and peripheral vascular disease (all P < 0.05). MAP < 65 mmHg was not associated with increased odds of acute ischemic stroke when modeled as a continuous or categorical variable. MAP < 60 mmHg for more than 20 min was independently associated with increased odds of acute ischemic stroke, OR = 2.67 [95% CI = 1.21 to 5.88, P = 0.02]. CONCLUSION: Our analysis suggests that when MAP is less than 60 mmHg for more than 20 min, the odds of acute ischemic stroke are increased. Further studies are needed to determine what MAP should be targeted during surgery to optimize cerebral perfusion and limit ischemic stroke risk.
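The exposure "MAP <60 mmHg for more than 20 min" reduces to accumulating time spent below a threshold across timestamped readings. A minimal sketch, assuming each reading represents the interval until the next (the study's exact rule is not stated):

```python
# Sketch: total minutes with MAP below a threshold from timestamped
# intraoperative readings; each reading is assumed to "cover" the interval
# until the next reading, which is a simplification.
import numpy as np

def minutes_below(times_min, map_values, threshold=60.0):
    t = np.asarray(times_min, dtype=float)
    m = np.asarray(map_values, dtype=float)
    intervals = np.diff(t)  # duration attributed to each reading
    return float(np.sum(intervals[m[:-1] < threshold]))

# Hypothetical readings every 5 minutes
times = [0, 5, 10, 15, 20, 25, 30, 35]
maps = [72, 58, 57, 59, 63, 55, 58, 70]
exposed = minutes_below(times, maps, threshold=60.0)
print(exposed, "min below 60 mmHg ->", "flag" if exposed > 20 else "no flag")
```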
Subject(s)
Brain Ischemia , Hypotension , Ischemic Stroke , Stroke , Brain Ischemia/epidemiology , Humans , Hypotension/complications , Hypotension/epidemiology , Postoperative Complications , Retrospective Studies , Risk Factors , Stroke/epidemiology , Stroke/etiology
ABSTRACT
PURPOSE: Noise in the postanesthesia care unit (PACU) is a significant source of postoperative patient discomfort and can affect patient sleep and recovery. Interventions involving structural alterations in the environment reduce noise and improve patient satisfaction; however, there are no studies focusing on staff education as a method to reduce PACU noise. DESIGN: We designed and implemented a prospective PACU noise reduction program using education and training to minimize staff contributions to noise. METHODS: Noise levels (measured hourly with a decibel meter), patient satisfaction, and patient rest were assessed before and after implementation. FINDINGS: We found statistically significant decreases in noise levels and increases in patient satisfaction scores after the implementation of our noise reduction project. CONCLUSIONS: These findings demonstrate that an inexpensive and easily implemented noise reduction program can effectively reduce environmental noise, increase patient satisfaction, and potentially improve recovery.
Subject(s)
Noise , Patient Satisfaction , Recovery Room , Humans , Noise/prevention & control , Patient Satisfaction/statistics & numerical data , Program Evaluation , Prospective Studies
ABSTRACT
State and local health departments in the United States are using various indicators to identify differences in rates of reported coronavirus disease 2019 (COVID-19) and severe COVID-19 outcomes, including hospitalizations and deaths. To inform mitigation efforts, on May 19, 2020, the Kentucky Department for Public Health (KDPH) implemented a reporting system to monitor five indicators of state-level COVID-19 status to assess the ability to safely reopen: 1) composite syndromic surveillance data, 2) the number of new COVID-19 cases, 3) the number of COVID-19-associated deaths, 4) health care capacity data, and 5) public health capacity for contact tracing (contact tracing capacity). Using standardized methods, KDPH compiles an indicator monitoring report (IMR) to provide daily analysis of these five indicators, which are combined with publicly available data into a user-friendly composite status that KDPH and local policy makers use to assess state-level COVID-19 hazard status. During May 19-July 15, 2020, Kentucky reported 12,742 COVID-19 cases and 299 COVID-19-related deaths (1). The mean composite state-level hazard status during May 19-July 15 was 2.5 (fair to moderate). IMR review led to county-level hotspot identification (identification of counties meeting criteria for temporal increases in number of cases and incidence) and facilitated collaboration among KDPH and local authorities on decisions regarding mitigation efforts. Kentucky's IMR might easily be adopted by state and local health departments in other jurisdictions to guide decision-making for COVID-19 mitigation, response, and reopening.
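The composite hazard status can be thought of as an average of the five indicator ratings; a value of 2.5 falling between "fair" and "moderate" suggests an ordered scale. The sketch below assumes a 1 (good) to 5 (critical) scale with those labels, which is an illustrative reconstruction rather than KDPH's published rubric.

```python
# Sketch: composite hazard status as the mean of five indicator ratings.
# The 1-5 scale and its labels are assumed for illustration.
SCALE = {1: "good", 2: "fair", 3: "moderate", 4: "poor", 5: "critical"}

def composite_status(scores):
    value = sum(scores.values()) / len(scores)
    lo = SCALE[int(value)]
    hi = SCALE[min(int(value) + 1, 5)]
    label = lo if value == int(value) else f"{lo} to {hi}"
    return value, label

scores = {"syndromic": 2, "new_cases": 3, "deaths": 2,
          "healthcare_capacity": 3, "contact_tracing": 2}  # hypothetical
print(composite_status(scores))  # (2.4, 'fair to moderate')
```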
Subject(s)
Coronavirus Infections/epidemiology , Coronavirus Infections/prevention & control , Epidemiological Monitoring , Pandemics/prevention & control , Pneumonia, Viral/epidemiology , Pneumonia, Viral/prevention & control , COVID-19 , Coronavirus Infections/mortality , Coronavirus Infections/therapy , Hospitalization/statistics & numerical data , Humans , Kentucky/epidemiology , Mortality/trends , Pneumonia, Viral/mortality , Pneumonia, Viral/therapy , Public Health Practice
ABSTRACT
OBJECTIVES: Modern critical care amasses unprecedented amounts of clinical data, so-called "big data," on a minute-by-minute basis. Innovative processing of these data has the potential to revolutionize clinical prognostics and decision support in the care of the critically ill but also forces clinicians to depend on new and complex tools of which they may have limited understanding and over which they have little control. This concise review aims to provide bedside clinicians with ways to think about common methods being used to extract information from clinical big datasets and to judge the quality and utility of that information. DATA SOURCES: We searched the free-access search engines PubMed and Google Scholar using the MeSH terms "big data", "prediction", and "intensive care" with iterations of a range of additional potentially associated factors, along with published bibliographies, to find papers illustrating key points in the structuring and analysis of clinical "big data," with special focus on outcomes prediction and major clinical concerns in critical care. STUDY SELECTION: Three reviewers independently screened preliminary citation lists. DATA EXTRACTION: Summary data were tabulated for review. DATA SYNTHESIS: To date, most relevant big data research has focused on development of and attempts to validate patient outcome scoring systems and has yet to fully make use of the potential for automation and novel uses of continuous data streams such as those available from clinical care monitoring devices. CONCLUSIONS: Realizing the potential for big data to improve critical care patient outcomes will require unprecedented team building across disparate competencies. It will also require clinicians to develop statistical awareness and thinking as yet another critical judgment skill they bring to their patients' bedsides and to the array of evidence presented to them about their patients over the course of care.
Subject(s)
Big Data , Critical Care , Critical Illness , Data Accuracy , Decision Support Systems, Clinical , Humans , Patient Outcome Assessment
ABSTRACT
BACKGROUND: Surgical patients receive platelet concentrates (PCs) for a variety of indications. However, there is limited evidence for efficacy or dosing of PCs. STUDY DESIGN AND METHODS: We performed a retrospective cohort study of surgical patients receiving isolated PC transfusion at a single academic tertiary medical center during 1 year. The primary outcome was reoperation for a bleeding complication. Bleeding complication rates were compared in patients transfused for different indications, and multivariable logistic regression was performed to determine variables associated with bleeding complications. RESULTS: Approximately 1% of surgical patients (n = 205), including 7% of cardiac surgery patients, received an isolated PC transfusion. Cardiac surgery patients accounted for 47% of isolated PC transfusions, followed by neurosurgery (19%) and gastrointestinal surgery (13%). Most patients (81%) received a single apheresis unit of PC. Common indications were antiplatelet drugs (50%), thrombocytopenia (19%), congenital platelet disorders (2%), and both thrombocytopenia and antiplatelet drugs (12%). Bleeding complications occurred in 23% of patients, with the lowest bleeding complication rate observed in patients transfused for antiplatelet drugs (13%) and the highest rate in patients transfused for thrombocytopenia with or without antiplatelet drugs (40% and 38%, respectively). Bleeding complications were more common in noncardiac surgery but had no association with transfusion indication. CONCLUSION: Despite transfusion for conventionally accepted indications, patients who received an isolated PC transfusion experienced a high rate of bleeding complications, particularly noncardiac surgery patients. Further studies are needed to establish optimal dosing, timing, and indications for perioperative PC transfusion.
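The multivariable step can be sketched with a logistic regression that reports odds ratios with 95% CIs; the predictors and simulated data below are hypothetical stand-ins for whatever covariates the study adjusted for.

```python
# Sketch: multivariable logistic regression for reoperation-for-bleeding,
# reporting odds ratios with 95% CIs (hypothetical predictors and data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 205
X = pd.DataFrame({
    "thrombocytopenia": rng.integers(0, 2, n).astype(float),
    "antiplatelet_drug": rng.integers(0, 2, n).astype(float),
    "cardiac_surgery": rng.integers(0, 2, n).astype(float),
})
# Simulated outcome: reoperation for a bleeding complication
logits = -1.5 + 1.0 * X["thrombocytopenia"] - 0.8 * X["cardiac_surgery"]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
table = pd.DataFrame({"OR": np.exp(fit.params)})
ci = np.exp(fit.conf_int().to_numpy())
table["2.5%"], table["97.5%"] = ci[:, 0], ci[:, 1]
print(table.round(2))
```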
Subject(s)
Platelet Transfusion/methods , Thrombocytopenia/therapy , Adult , Aged , Blood Transfusion/methods , Cohort Studies , Female , Hemorrhage/therapy , Humans , Logistic Models , Male , Middle Aged , Platelet Aggregation Inhibitors/therapeutic use , Retrospective Studies
ABSTRACT
Resuscitative endovascular balloon occlusion of the aorta (REBOA) is a temporizing maneuver for noncompressible torso hemorrhage. To our knowledge, this single-center brief report provides the most extensive anesthetic data published to date on patients who received REBOA. As anticipated, patients were critically ill, exhibiting lactic acidosis, hypotension, hyperglycemia, hypothermia, and coagulopathy. All patients received blood products during their index operations and received less inhaled anesthetic gas than normally required for healthy patients of the same age. This study serves as an important starting point for clinician education and research into anesthetic management of patients undergoing REBOA.
Subject(s)
Anesthesia/methods , Aorta/surgery , Balloon Occlusion/methods , Endovascular Procedures/methods , Hemorrhage/therapy , Wounds and Injuries/surgery , Adult , Humans , Injury Severity Score , Middle Aged , Resuscitation , Retrospective Studies
ABSTRACT
BACKGROUND: Acute traumatic coagulopathy is common in trauma patients. Prompt diagnosis of hypofibrinogenemia allows for early treatment with cryoprecipitate or fibrinogen concentrate. At present, optimal cutoffs for diagnosing hypofibrinogenemia with kaolin thrombelastography (TEG) have not been established. We hypothesized that kaolin-TEG parameters, such as kinetic time (K-time), α-angle, and maximum amplitude (MA), would accurately diagnose hypofibrinogenemia (fibrinogen <200 mg/dL) and severe hypofibrinogenemia (fibrinogen <100 mg/dL). METHODS: Adult trauma patients (injury severity score >15) presenting to our trauma center between October 2015 and October 2017 were identified retrospectively. All patients had a traditional plasma fibrinogen measurement and kaolin-TEG performed within 15 minutes of each other and within 1 hour of admission. Some patients had additional paired measurements after the first hour. Receiver operating characteristic (ROC) curve analysis was performed to evaluate whether K-time, α-angle, and MA could diagnose hypofibrinogenemia and severe hypofibrinogenemia. Area under the ROC curve (AUROC) was calculated for each TEG parameter with a bootstrapped 99% confidence interval (CI). Further, ROC analysis was used to estimate ideal cutoffs for diagnosing hypofibrinogenemia and severe hypofibrinogenemia by maximizing sensitivity and specificity. Likelihood ratios were also calculated for different TEG variable cutoffs to diagnose hypofibrinogenemia and severe hypofibrinogenemia. RESULTS: Seven hundred twenty-two pairs of TEGs and traditional plasma fibrinogen measurements were performed in 623 patients, with 99 patients having additional pairs of tests after the first hour. MA (AUROC = 0.84) and K-time (AUROC = 0.83) better diagnosed hypofibrinogenemia than α-angle (AUROC = 0.8; P = .03 and P < .001 for AUROC comparisons, respectively). AUROCs statistically improved for each parameter when severe hypofibrinogenemia was modeled as the outcome (P < .001). No differences were found between parameters for diagnosing severe hypofibrinogenemia (P > .05 for all comparisons). The estimated optimal cutoffs for diagnosing hypofibrinogenemia were 1.5 minutes for K-time (95% CI, 1.4-1.6), 70.0° for α-angle (95% CI, 69.8-71.0), and 60.9 mm for MA (95% CI, 59.2-61.8). The estimated optimal cutoffs for diagnosing severe hypofibrinogenemia were 2.4 minutes for K-time (95% CI, 1.7-2.8), 60.6° for α-angle (95% CI, 57.2-67.3), and 51.2 mm for MA (95% CI, 49.0-56.2). Currently recommended K-time and α-angle cutoffs from the American College of Surgeons had low sensitivity for diagnosing hypofibrinogenemia (3%-29%), but sensitivity improved to 74% when using optimal cutoffs. CONCLUSIONS: Kaolin-TEG parameters can accurately diagnose hypofibrinogenemia and severe hypofibrinogenemia in trauma patients. Currently recommended cutoffs for the treatment of hypofibrinogenemia are skewed toward high specificity and low sensitivity. Many patients are likely to be undertreated for hypofibrinogenemia using current national guidelines.
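The cutoff-selection approach, maximizing sensitivity plus specificity, is the Youden index on the ROC curve, and the AUROC interval can come from a bootstrap. A sketch with simulated MA values, using scikit-learn as a stand-in for whatever software the authors used:

```python
# Sketch: AUROC with a bootstrapped 99% CI and a cutoff chosen by the
# Youden index (sensitivity + specificity - 1), on simulated MA values.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
n = 500
hypofib = rng.random(n) < 0.3                   # fibrinogen < 200 mg/dL
ma = np.where(hypofib, rng.normal(55, 6, n), rng.normal(65, 6, n))

fpr, tpr, thresholds = roc_curve(hypofib, -ma)  # low MA predicts disease
best = int(np.argmax(tpr - fpr))                # maximize Youden J
print("optimal MA cutoff (mm):", -thresholds[best])

aucs = [roc_auc_score(hypofib[i], -ma[i])
        for i in (rng.integers(0, n, n) for _ in range(1000))]
print("AUROC %.2f (99%% CI %.2f-%.2f)"
      % (roc_auc_score(hypofib, -ma),
         np.percentile(aucs, 0.5), np.percentile(aucs, 99.5)))
```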
Subject(s)
Afibrinogenemia/diagnosis , Afibrinogenemia/therapy , Blood Coagulation , Fibrinogen/analysis , Resuscitation , Thrombelastography , Wounds and Injuries/diagnosis , Wounds and Injuries/therapy , Adult , Afibrinogenemia/blood , Afibrinogenemia/complications , Biomarkers/blood , Clinical Decision-Making , Early Diagnosis , Female , Humans , Injury Severity Score , Male , Middle Aged , Patient Selection , Predictive Value of Tests , Reproducibility of Results , Retrospective Studies , Time Factors , Wounds and Injuries/blood , Wounds and Injuries/complications , Young Adult
ABSTRACT
Improved prehospital methods for assessing the need for lifesaving interventions (LSIs) are needed to gain critical lead time in the care of the injured. We hypothesized that threshold values using prehospital handheld tissue oximetry would detect occult shock and predict LSI requirements. This was a prospective observational study of adult trauma patients emergently transported by helicopter. Patients were monitored with a handheld tissue oximeter (InSpectra Spot Check; Hutchinson Technology Inc, Hutchinson, MN), continuous vital signs, and 21 laboratory measurements obtained both in the field with a portable analyzer and at the time of admission. Shock was defined as base excess ≥ 4 or lactate > 3 mmol/L. Eighty-eight patients were enrolled with a median Injury Severity Score of 16 (interquartile range, 5-29). The median hemoglobin saturation in the capillaries, venules, and arterioles (StO2) value for all patients was 82% (interquartile range, 76%-87%; range, 42%-98%). StO2 was abnormal (< 75%) in 18 patients (20%). Eight were hypotensive (9%) and had laboratory-confirmed evidence of occult shock. StO2 correlated poorly with shock threshold laboratory values (r = -0.17; 95% confidence interval, -0.33 to 1.0; P = .94). The area under the receiver operating curve was 0.51 (95% confidence interval, 0.39-0.63) for StO2 < 75% and laboratory-confirmed shock. StO2 was not associated with LSI need on admission when adjusted for multiple covariates, nor was it independently associated with death. Handheld tissue oximetry was not sensitive or specific for identifying patients with prehospital occult shock. These results do not support prehospital StO2 monitoring despite its inclusion in several published guidelines.
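Evaluating the binary screen StO2 <75% against laboratory-confirmed shock reduces to 2x2-table metrics. A short sketch; the cell counts below are a hypothetical split consistent with the reported marginals (18 abnormal screens, 8 patients with occult shock), not the study's raw data:

```python
# Sketch: sensitivity, specificity, and likelihood ratios for the binary
# screen "StO2 < 75%" against laboratory-confirmed shock (made-up counts).
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sensitivity": sens, "specificity": spec,
            "LR+": sens / (1 - spec) if spec < 1 else float("inf"),
            "LR-": (1 - sens) / spec if spec > 0 else float("inf")}

# Hypothetical split of 88 patients: 18 abnormal screens, 8 with occult shock
print(diagnostic_metrics(tp=3, fp=15, fn=5, tn=65))
```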
Subject(s)
Oximetry/instrumentation , Oxygen/blood , Shock/diagnosis , Acid-Base Imbalance/blood , Adolescent , Adult , Aged , Aged, 80 and over , Area Under Curve , Emergency Medical Services , Female , Hemoglobins/metabolism , Humans , Lactic Acid/blood , Male , Middle Aged , Prospective Studies , ROC Curve , Shock/etiology , Wounds and Injuries/blood , Wounds and Injuries/complications , Young Adult
Subject(s)
Fentanyl , Maternal-Fetal Exchange , Infant, Newborn , Humans , Pregnancy , Female , Analgesics, Opioid , Central Nervous System
ABSTRACT
BACKGROUND: Early hyperglycemia is associated with multiple organ failure (MOF) after traumatic injury; however, few studies have considered the contribution of depth of clinical shock. We hypothesize that when considered simultaneously, glucose and lactate are associated with MOF in severely injured blunt trauma patients. METHODS: We performed a retrospective investigation at a single tertiary care trauma center. Inclusion criteria were patient age ≥18 years, injury severity score (ISS) >15, blunt mechanism of injury, and an intensive care unit length of stay >48 hours. Patients with a history of diabetes or who did not survive the initial 48 hours were excluded. Demographics, injury severity, and physiologic data were recorded. Blood glucose and lactate values were collected from admission through the initial 24 hours of hospitalization. Multiple metrics of glucose and lactate were calculated: the first glucose (Glucadm, mg/dL) and lactate (Lacadm, mmol/L) at hospital admission, the mean initial 24-hour glucose (Gluc24hMean, mg/dL) and lactate (Lac24hMean, mmol/L), and the time-weighted initial 24-hour glucose (Gluc24hTW) and lactate (Lac24hTW). These metrics were divided into quartiles. The primary outcome was MOF. Separate Cox proportional hazard models were generated to assess the association of each individual glucose and lactate metric on MOF, after controlling for ISS, admission shock index, and disposition to the operating room after hospital admission. We assessed the interaction between glucose and lactate metrics in the multivariable models. Results are reported as hazard ratios (HRs) for an increase in the quartile level of glucose and lactate measurements, with 95% confidence intervals (CIs). RESULTS: A total of 507 severely injured blunt trauma patients were evaluated. MOF occurred in 46 of 507 (9.1%) patients and was associated with a greater median ISS (33.5, interquartile range [IQR]: 22-41 vs 27, IQR: 21-34; P < .001) and a greater median admission shock index (0.82, IQR: 0.68-1.1 vs 0.73, IQR: 0.60-0.91; P = .02). Patients who were transferred to the operating room after the initial trauma resuscitation were also more likely to develop MOF (20 of 119, 14.4% vs 26 of 369, 7.1%; P = .01). Three separate Cox proportional regression models demonstrated the following HR for an increase in the individual glucose metric quartile and MOF, while controlling for confounding variables: Glucadm HR: 1.35, 95% CI, 1.02-1.80; Gluc24hMean HR: 1.63, 95% CI, 1.14-2.32; Gluc24hTW HR: 1.14, 95% CI, 0.86-1.50. Three separate Cox proportional hazards models also demonstrated the following HR for each individual lactate metric quartile while controlling for the same confounders, with MOF again representing the dependent variable: Lacadm HR: 1.94, 95% CI, 1.38-2.96; Lac24hMean HR: 1.68, 95% CI, 1.22-2.31; Lac24hTW HR: 1.49, 95% CI, 1.10-2.02. When metrics of both glucose and lactate were entered into the same model only lactate remained significantly associated with MOF: Lacadm HR: 1.86, 95% CI, 1.29-2.69, Lac24hMean HR: 1.54, 95% CI, 1.11-2.12, and Lac24hTW HR: 1.48, 95% CI, 1.08-2.01. There was no significant interaction between lactate and glucose variables in relation to the primary outcome. CONCLUSIONS: When glucose and lactate are considered simultaneously, only lactate remained significantly associated with MOF in severely injured blunt trauma patients.
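One way to reproduce the modeling step is a Cox proportional hazards fit of MOF on the quartile level of a lactate metric, adjusted for ISS, shock index, and operative disposition. The sketch below uses the lifelines package with simulated data; the follow-up times and censoring scheme are assumptions, since the abstract does not describe them.

```python
# Sketch: Cox proportional hazards model of MOF on the quartile level of
# admission lactate, adjusted for ISS, shock index, and operative
# disposition. Data, follow-up times, and censoring are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 507
df = pd.DataFrame({
    "lac_adm": rng.gamma(2.0, 1.2, n),                  # mmol/L
    "iss": rng.integers(16, 50, n).astype(float),
    "shock_index": rng.normal(0.75, 0.15, n),
    "to_or": rng.integers(0, 2, n).astype(float),       # to OR after admission
    "time_days": rng.integers(2, 30, n).astype(float),  # days to MOF or censoring
    "mof": rng.integers(0, 2, n),
})
df["lac_q"] = pd.qcut(df["lac_adm"], 4, labels=False) + 1  # quartile level 1-4

cph = CoxPHFitter()
cph.fit(df[["lac_q", "iss", "shock_index", "to_or", "time_days", "mof"]],
        duration_col="time_days", event_col="mof")
print(cph.hazard_ratios_["lac_q"])  # HR per one-quartile increase
```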
Subject(s)
Hyperglycemia/blood , Injury Severity Score , Lactic Acid/blood , Multiple Organ Failure/blood , Wounds, Nonpenetrating/blood , Adult , Biomarkers/blood , Blood Glucose/metabolism , Cohort Studies , Female , Humans , Hyperglycemia/diagnosis , Hyperglycemia/epidemiology , Male , Middle Aged , Multiple Organ Failure/diagnosis , Multiple Organ Failure/epidemiology , Retrospective Studies , Wounds, Nonpenetrating/diagnosis , Wounds, Nonpenetrating/epidemiology
ABSTRACT
OBJECTIVE: To review rates of permanent paraplegia and lumbar drain-related complications in patients undergoing thoracic endovascular aortic repair (TEVAR) surgery with prophylactic cerebrospinal fluid (CSF) drainage at the authors' institution. DESIGN: Retrospective cohort study. SETTING: Tertiary care, academic medical center. PARTICIPANTS: Patients who underwent TEVAR with a high risk for ischemic spinal cord injury and prophylactic lumbar CSF drainage over a 5-year period. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: One hundred and two patients underwent TEVAR with lumbar CSF drainage. Thirty-day mortality was 5.9%, and the rate of permanent paraplegia was 2%. Drain complications occurred in 4 (3.9%) patients, but no patient experienced permanent injury related to CSF drainage. Two patients in the cohort had complete resolution of paraplegia with increased CSF drainage and mean arterial blood pressure increases aimed at increasing spinal cord perfusion pressure by 25%. A third patient experienced improvement in lower extremity strength but remained paraplegic, and a fourth patient demonstrated no improvement in symptoms. The 6 patients taking clopidogrel experienced no bleeding complications, and there were no apparent risk factors for bleeding in the 5 patients who had bloody drain output or in the 1 patient who developed an epidural hematoma. CONCLUSION: Prophylactic CSF drainage was associated with low paraplegia and drain-related complication rates. These data further support the safety of prophylactic CSF drainage in patients undergoing TEVAR with a high risk for ischemic spinal cord injury.
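The hemodynamic target described, raising spinal cord perfusion pressure (SCPP) by 25%, follows from the common approximation SCPP = MAP - lumbar CSF pressure; whether the authors used exactly this formula is an assumption. A worked example:

```python
# Sketch: target MAP for a 25% increase in spinal cord perfusion pressure,
# using the common approximation SCPP = MAP - lumbar CSF pressure.
def target_map(map_mmhg, csf_mmhg, scpp_increase=0.25):
    scpp = map_mmhg - csf_mmhg
    return csf_mmhg + scpp * (1.0 + scpp_increase)

# Hypothetical values: MAP 80 mmHg, CSF pressure 10 mmHg -> SCPP 70 mmHg;
# a 25% increase targets SCPP 87.5 mmHg, so MAP must reach 97.5 mmHg.
print(target_map(80, 10))  # 97.5
```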
Subject(s)
Aorta, Thoracic/surgery , Cerebrospinal Fluid , Drainage/adverse effects , Endovascular Procedures/methods , Spinal Cord Injuries/prevention & control , Spinal Cord Ischemia/prevention & control , Aged , Female , Humans , Male , Middle Aged , Retrospective Studies , Risk , Spinal Cord Injuries/epidemiology , Spinal Cord Ischemia/epidemiology
ABSTRACT
BACKGROUND: Kidney transplant patients are frequently anemic and at risk for red blood cell (RBC) transfusion. Previous studies suggest that pre-transplant RBC transfusion may improve kidney transplant outcomes; however, RBC transfusion is also associated with infection. The purpose of our study was to characterize the relationships between intraoperative RBC transfusion, delayed graft function (DGF), postoperative surgical site infection (SSI), and sepsis. METHODS: Analysis was performed on a historical cohort of adult kidney transplant patients from a single medical center during a two-year period. Crude odds ratios for DGF, superficial and deep SSI, and sepsis were calculated for transfused patients, and multivariable regression was used to control for potential confounders when significant relationships were identified. RESULTS: Four hundred forty-one patients underwent kidney transplant during the study period; 27.0% had RBC transfusion, 38.8% had DGF, 7.0% had superficial SSI, 7.9% had deep SSI, and 1.8% had sepsis. High-dose RBC transfusion was associated with improved graft function, but this association was negated after adjusting for confounders (OR = 0.86, 95% CI 0.26 to 2.88). There was no association between RBC transfusion and SSI. RBC transfusion was independently associated with sepsis (OR = 8.98, 95% CI 1.52 to 53.22), but the confidence interval was wide. CONCLUSIONS: Intraoperative RBC transfusion during kidney transplant is not associated with improved allograft function or incisional SSI, but is associated with postoperative sepsis. RBCs should not be liberally transfused during kidney transplant surgery to improve graft outcomes.
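The crude odds ratios above come from 2x2 tables; with the usual log-scale normal (Woolf) approximation the 95% CI is exp(ln OR ± 1.96·SE). A short sketch with hypothetical cell counts:

```python
# Sketch: crude odds ratio and 95% CI from a 2x2 table using the Woolf
# (log-scale normal) approximation. Cell counts are hypothetical.
import math

def crude_or(a, b, c, d):
    """a: exposed cases, b: exposed non-cases, c: unexposed cases, d: unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo, hi = (math.exp(math.log(or_) + z * se) for z in (-1.96, 1.96))
    return or_, lo, hi

# e.g., sepsis by intraoperative RBC transfusion (hypothetical counts
# consistent with 119 of 441 transfused and 8 sepsis cases overall)
print(crude_or(a=6, b=113, c=2, d=320))
```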