2.
Mil Med ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38913446

ABSTRACT

INTRODUCTION: Hemorrhage is assessed, at least in part, via hematocrit testing. To differentiate unexpected drops in hematocrit because of ongoing hemorrhage from expected drops as a result of known hemorrhage and intravenous fluid administration, we model expected post-operative hematocrit values accounting for fluid balance and intraoperative estimated blood loss (EBL) among patients without substantial post-operative bleeding. MATERIALS AND METHODS: We reviewed patient-level data from the electronic health record of an academic medical center for all non-pregnant adults admitted for elective knee or hip arthroplasty from November 2013 to September 2022 who did not require blood products. We used linear regression to evaluate the association between post-operative hematocrit and predictor variables including pre-operative hematocrit, intraoperative net fluid intake, blood volume, time from surgery to lab testing, EBL, patient height, and patient weight. RESULTS: We included 6,648 cases. Mean (SD) estimated blood volume was 4,804 mL (1,023), mean net fluid intake was 1,121 mL (792), and mean EBL was 144 mL (194). Each 100 mL of EBL and each 1,000 mL of net positive fluid intake were associated with decreases of 0.52 units (95% CI, 0.51-0.53) and 2.4 units (95% CI, 2.2-2.7), respectively, in post-operative hematocrit. Pre-operative hematocrit was the strongest predictor of post-operative hematocrit: each 1-unit increase in pre-operative hematocrit was associated with a 0.70-unit increase (95% CI, 0.67-0.73) in post-operative hematocrit. Our estimates were robust to sensitivity analyses, and all variables included in the model were statistically significant with P < .005. CONCLUSION: Patient-specific data, including fluid received since the time of initial hemorrhage, can aid in estimating expected post-hemorrhage hematocrit values, and thus in assessing for ongoing hemorrhage.
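The modeling step the abstract names (linear regression of post-operative hematocrit on the listed predictors) can be sketched as an ordinary least-squares fit. This is a minimal illustration, not the paper's exact specification; the file name, column names, and the example case are all assumptions.

```python
# Hypothetical sketch of the post-operative hematocrit model described above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("arthroplasty_cases.csv")  # hypothetical EHR extract

model = smf.ols(
    "postop_hct ~ preop_hct + net_fluid_ml + ebl_ml + blood_volume_ml"
    " + hours_to_lab + height_cm + weight_kg",
    data=df,
).fit()
print(model.summary())

# Expected post-op hematocrit for one new case; a measured value well below
# the prediction would raise concern for ongoing hemorrhage.
new_case = pd.DataFrame([{
    "preop_hct": 40.0, "net_fluid_ml": 1100, "ebl_ml": 140,
    "blood_volume_ml": 4800, "hours_to_lab": 12,
    "height_cm": 170, "weight_kg": 80,
}])
print(model.predict(new_case))
```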

3.
Surg Endosc ; 37(6): 4321-4327, 2023 06.
Article in English | MEDLINE | ID: mdl-36729231

ABSTRACT

BACKGROUND: Surgical video recording provides the opportunity to acquire intraoperative data that can subsequently be used for a variety of quality improvement, research, and educational applications. Various recording devices are available for standard operating room camera systems. Some allow for collateral data acquisition including activities of the OR staff, kinematic measurements (motion of surgical instruments), and recording of the endoscopic video streams. Additional analysis through computer vision (CV), which allows software to understand and perform predictive tasks on images, can allow for automatic phase segmentation, instrument tracking, and derivative performance-geared metrics. With this survey, we summarize available surgical video acquisition technologies and associated performance analysis platforms. METHODS: In an effort promoted by the SAGES Artificial Intelligence Task Force, we surveyed the available video recording technology companies. Of thirteen companies approached, nine were interviewed, each over an hour-long video conference. A standard set of 17 questions was administered. Questions spanned data acquisition capacity and quality, synchronization of video with other data, availability of analytic tools, privacy, and access. RESULTS: Most platforms (89%) store video in full-HD (1080p) resolution at a frame rate of 30 fps. Most (67%) available platforms store data in a cloud-based databank rather than on institutional hard drives. CV-powered analysis is featured in some platforms: phase segmentation in 44% of platforms, out-of-body blurring or tool tracking in 33%, and suture time in 11%. Kinematic data are provided by 22% of platforms, and perfusion imaging by one device. CONCLUSION: Video acquisition platforms on the market allow for in-depth performance analysis through manual and automated review. Most of these devices will be integrated in upcoming robotic surgical platforms. Platform analytic supplementation, including CV, may offer more refined performance analysis for surgeons and trainees. Most current AI features are related to phase segmentation, instrument tracking, and video blurring.


Subjects
Artificial Intelligence, Robotic Surgical Procedures, Humans, Endoscopy, Software, Privacy, Video Recording
4.
Surg Endosc ; 37(3): 2260-2268, 2023 03.
Article in English | MEDLINE | ID: mdl-35918549

ABSTRACT

BACKGROUND: Many surgical adverse events, such as bile duct injuries during laparoscopic cholecystectomy (LC), occur due to errors in visual perception and judgment. Artificial intelligence (AI) can potentially improve the quality and safety of surgery, such as through real-time intraoperative decision support. GoNoGoNet is a novel AI model capable of identifying safe ("Go") and dangerous ("No-Go") zones of dissection on surgical videos of LC. Yet, it is unknown how GoNoGoNet performs in comparison to expert surgeons. This study aims to evaluate GoNoGoNet's ability to identify Go and No-Go zones compared to an external panel of expert surgeons. METHODS: A panel of high-volume surgeons from the SAGES Safe Cholecystectomy Task Force was recruited to draw free-hand annotations on frames of prospectively collected videos of LC to identify the Go and No-Go zones. Expert consensus on the location of Go and No-Go zones was established using Visual Concordance Test pixel agreement. Identification of Go and No-Go zones by GoNoGoNet was compared to expert-derived consensus using mean F1 Dice score, and pixel accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). RESULTS: A total of 47 frames from 25 LC videos, procured from 3 countries and 9 surgeons, were annotated simultaneously by an expert panel of 6 surgeons and GoNoGoNet. Mean (± standard deviation) F1 Dice scores were 0.58 (0.22) and 0.80 (0.12) for Go and No-Go zones, respectively. Mean (± standard deviation) accuracy, sensitivity, specificity, PPV and NPV for the Go zones were 0.92 (0.05), 0.52 (0.24), 0.97 (0.03), 0.70 (0.21), and 0.94 (0.04), respectively. For No-Go zones, these metrics were 0.92 (0.05), 0.80 (0.17), 0.95 (0.04), 0.84 (0.13) and 0.95 (0.05), respectively. CONCLUSIONS: AI can be used to identify safe and dangerous zones of dissection within the surgical field, with high specificity/PPV for Go zones and high sensitivity/NPV for No-Go zones. Overall, model prediction was better for No-Go zones compared to Go zones. This technology may eventually be used to provide real-time guidance and minimize the risk of adverse events.
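For readers unfamiliar with the reported metrics, a minimal sketch of the pixel-level computation for one zone of one frame follows; mask shapes and the toy data are illustrative assumptions, not the study's pipeline.

```python
# Pixel-level agreement metrics (Dice/F1, accuracy, sensitivity, specificity,
# PPV, NPV) between a model mask and an expert-consensus mask for one zone.
import numpy as np

def zone_metrics(pred: np.ndarray, expert: np.ndarray) -> dict:
    """pred/expert: boolean pixel masks for a Go or No-Go zone."""
    tp = np.logical_and(pred, expert).sum()
    tn = np.logical_and(~pred, ~expert).sum()
    fp = np.logical_and(pred, ~expert).sum()
    fn = np.logical_and(~pred, expert).sum()
    return {
        "dice_f1": 2 * tp / (2 * tp + fp + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Toy usage on random masks, stand-ins for a prediction and a consensus mask.
rng = np.random.default_rng(0)
print(zone_metrics(rng.random((480, 854)) > 0.5, rng.random((480, 854)) > 0.5))
```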


Subjects
Laparoscopic Cholecystectomy, Surgeons, Humans, Laparoscopic Cholecystectomy/adverse effects, Artificial Intelligence, Data Collection, Cholecystectomy
5.
BMJ Case Rep ; 15(8)2022 Aug 30.
Article in English | MEDLINE | ID: mdl-36041774

ABSTRACT

Gastric pneumatosis, the presence of air within the stomach wall, is a very rare occurrence with poor outcomes. One of the most common mechanisms for gastric pneumatosis is gastric ischaemia, also a rare entity. Although patients with gastric ischaemia may require surgical intervention, they can often be treated with conservative measures such as a proton pump inhibitor, broad-spectrum antibiotics, nasogastric tube decompression, fluid resuscitation and total parenteral nutrition. We report a rare case of gastric ischaemia and pneumatosis following therapeutic left gastric artery argon plasma coagulation that was treated with conservative measures.


Subjects
Conservative Treatment, Gastric Artery, Humans, Gastrointestinal Intubation/adverse effects, Ischemia, Stomach/blood supply
6.
BMJ Case Rep ; 15(6)2022 Jun 22.
Article in English | MEDLINE | ID: mdl-35732358

ABSTRACT

Sodium-glucose cotransporter-2 (SGLT2) inhibitors are glucose-lowering drugs with proven efficacy in treating type 2 diabetes mellitus and, more recently, have been shown to improve heart failure outcomes in patients without diabetes. A rare complication of SGLT2 inhibitor use is the development of euglycaemic diabetic ketoacidosis (EDKA), characterised by euglycaemia (blood glucose level <250 mg/dL), metabolic acidosis (arterial pH <7.3 and serum bicarbonate <18 mEq/L), and ketonaemia. Because patients with EDKA do not present with the typical manifestations of diabetic ketoacidosis, including marked hyperglycaemia and dehydration, the diagnosis of EDKA may be missed and initiation of treatment delayed. We present the case of a man with recent SGLT2 inhibitor use and multiple other risk factors who developed EDKA.
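The diagnostic thresholds quoted above translate directly into a screening rule; a small sketch using the abstract's criteria (function and variable names are illustrative):

```python
# EDKA screen per the criteria above: euglycemia, metabolic acidosis, ketonemia.
def meets_edka_criteria(glucose_mg_dl: float, arterial_ph: float,
                        bicarbonate_meq_l: float, ketonemia: bool) -> bool:
    euglycemia = glucose_mg_dl < 250
    metabolic_acidosis = arterial_ph < 7.3 and bicarbonate_meq_l < 18
    return euglycemia and metabolic_acidosis and ketonemia

print(meets_edka_criteria(180, 7.21, 12, True))  # True: consistent with EDKA
print(meets_edka_criteria(420, 7.21, 12, True))  # False: hyperglycemic DKA
```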


Subjects
Type 2 Diabetes Mellitus, Diabetic Ketoacidosis, Pancreatitis, Sodium-Glucose Transporter 2 Inhibitors, Benzhydryl Compounds, Type 2 Diabetes Mellitus/complications, Type 2 Diabetes Mellitus/drug therapy, Diabetic Ketoacidosis/chemically induced, Diabetic Ketoacidosis/diagnosis, Diabetic Ketoacidosis/drug therapy, Glucosides, Humans, Male, Pancreatitis/complications, Sodium-Glucose Transporter 2 Inhibitors/adverse effects
7.
Ann Glob Health ; 88(1): 25, 2022.
Article in English | MEDLINE | ID: mdl-35509431

ABSTRACT

Background: Cleft lip/palate (CLP) is a congenital orofacial anomaly appearing in approximately one in 700 births worldwide. While in high-income countries CLP is normally addressed surgically during infancy, in developing countries CLP is often left unoperated, potentially impacting multiple dimensions of life quality. Previous research has frequently compared CLP outcomes to those of the general population. But because local environmental and genetic factors contribute to the risk of CLP and may also influence life outcomes, such studies may downwardly bias estimates of the effects of both CLP status and correction. Objectives: This research represents the first study to use causal econometric methods to estimate the effects of both CLP status and CLP correction on the physical, social, and mental well-being of children. Methods: Data were collected first-hand from 1,118 Indian children, including height, weight, grip strength, cognitive ability, and reading and math ability. A professional speech therapist reviewed digital recordings of speech taken at the interview to obtain four measures of speech quality. Using these data, the household fixed-effects model we employ jointly estimates the effects of CLP status and CLP surgical intervention. Findings: Our results indicate that adolescents with median-level CLP severity show statistically significant losses in indices of speech quality (-1.59σ), physical well-being (0.32σ), academic and cognitive ability (-0.37σ), and social integration (-0.32σ). We find strong evidence that CLP surgery significantly restores speech if performed before five years of age. The first surgeries performed on less-severe CLP cases significantly restore social integration, psychological well-being, academic/cognitive ability, and a general index of human flourishing. Conclusions: Children born with CLP in India face statistically significant losses in speech, physical health, mental health, and social inclusion. CLP surgical intervention significantly restores speech quality if carried out at an early age. Surgeries with the most significant impact on life outcomes are the first surgeries performed on less-severe CLP cases.
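One way to read the "household fixed-effects model" is a regression with household dummies, so that within-household variation identifies the CLP effects. A hedged sketch under assumed column names (this is not the paper's exact estimator):

```python
# Hypothetical household fixed-effects specification: an outcome index on CLP
# severity and surgery, absorbing shared household environment/genetics.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("clp_children.csv")  # hypothetical survey extract

fe = smf.ols(
    "speech_index ~ clp_severity + had_surgery + age_years + C(household_id)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["household_id"]})
print(fe.params[["clp_severity", "had_surgery"]])  # effects net of household
```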


Subjects
Cleft Lip, Cleft Palate, Musculoskeletal Diseases, Adolescent, Child, Cleft Lip/surgery, Cleft Palate/surgery, Face, Humans, Quality of Life
8.
JCO Oncol Pract ; 18(6): e849-e856, 2022 06.
Article in English | MEDLINE | ID: mdl-35254868

ABSTRACT

PURPOSE: Recent literature suggests an increasing use of systemic treatment in patients with advanced cancer near the end of life (EOL), partially driven by the increasing adoption of immune checkpoint inhibitors (ICIs). While studies have identified this trend, data on additional variables associated with ICI use at the EOL are limited. Our aim was to characterize a population of patients who received a dose of an ICI in the last 30 days of life. METHODS: We performed a manual retrospective chart review of patients ≥ 18 years of age who died within 30 days of receiving a dose of an ICI. Metrics such as Eastern Cooperative Oncology Group performance status (ECOG PS), number of ICI doses, need for hospitalization, and numerous other variables were evaluated. RESULTS: Over a 4-year period, 97 patients received an ICI at the EOL. For 40% of patients, the ICI given in the 30 days before death was their only dose. Over 50% of patients had an ECOG PS of ≥ 2, including 17% with an ECOG PS of 3. Over 60% were hospitalized, 65% visited the emergency department, 20% required intensive care unit admission, and 25% died in the hospital. CONCLUSION: Our study contributes to the ongoing literature regarding the risks and benefits of ICI use in patients with advanced cancer near the EOL. While accurate predictions regarding the EOL are challenging, oncologists may routinely use clinical factors such as ECOG PS along with patient preferences to guide recommendations and shared decision making. Ultimately, further follow-up studies to better characterize and prognosticate this population of patients are needed.


Subjects
Immune Checkpoint Inhibitors, Neoplasms, Death, Humans, Immune Checkpoint Inhibitors/pharmacology, Immune Checkpoint Inhibitors/therapeutic use, Neoplasms/therapy, Patient Preference, Retrospective Studies
9.
Transplant Direct ; 8(2): e1280, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35047662

ABSTRACT

BACKGROUND: Donor liver biopsy (DLBx) in liver transplantation provides information on allograft quality; however, predicting outcomes from these allografts remains difficult. METHODS: Between 2006 and 2015, 16,691 transplants with DLBx were identified from the Standard Transplant Analysis and Research database. Cox proportional hazard regression analyses identified donor and recipient characteristics associated with 30-day, 90-day, 1-year, and 3-year graft survival. A composite model, the Liver Transplant After Biopsy (LTAB) score, was created. The Mini-LTAB was then derived, consisting of only donor age, macrosteatosis on DLBx, recipient Model for End-Stage Liver Disease (MELD) score, and cold ischemic time. Risk groups were identified for each score and graft survival was evaluated. P values <0.05 were considered significant. RESULTS: The LTAB model used 14 variables and identified 5 risk groups: low, mild, moderate, high, and severe. Compared with moderate-risk recipients, severe-risk recipients had increased risk of graft loss at 30 days (hazard ratio, 3.270; 95% confidence interval, 2.568-4.120) and at 1 year (2.258; 1.928-2.544). The Mini-LTAB model identified low-, moderate-, and high-risk groups. Graft survival in Mini-LTAB high-risk transplants was significantly lower than in moderate- or low-risk transplants at all time points. CONCLUSIONS: The LTAB and Mini-LTAB scores represent guiding principles and provide clinically useful tools for the successful selection and utilization of marginal allografts in liver transplantation.
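The survival-modeling step behind a score like this can be sketched with the lifelines package, shown here for only the four Mini-LTAB predictors the abstract names; the file and column names, and the choice of lifelines itself, are assumptions.

```python
# Hypothetical Cox proportional hazards fit on the Mini-LTAB predictors:
# donor age, macrosteatosis on biopsy, recipient MELD, and cold ischemic time.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("star_dlbx_cohort.csv")  # hypothetical STAR extract

cph = CoxPHFitter()
cph.fit(
    df[["graft_days", "graft_loss", "donor_age", "macrosteatosis_pct",
        "recipient_meld", "cold_ischemia_hr"]],
    duration_col="graft_days",
    event_col="graft_loss",
)
cph.print_summary()  # hazard ratios that a composite risk score would combine
```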

10.
Am J Surg ; 221(1): 227-232, 2021 01.
Article in English | MEDLINE | ID: mdl-32778397

ABSTRACT

BACKGROUND: This study investigates the impact of standing electric scooter-related injuries within an entire integrated hospital system. METHODS: We performed a retrospective review of patients involved in standing electric scooter incidents presenting throughout an urban hospital network over a 10-month period. Rates of Google searches of scooter-related terms performed locally were used as a surrogate for ride frequency. Injury, mechanism, and cost data were analyzed. RESULTS: Data on 248 patients were reviewed. Twenty-three (9%) were under 18 years old. Loss of balance was the most common cause of injury, accounting for nearly half of cases, while tripping over a scooter (14 patients, 6%) disproportionately affected the elderly. Eight (3%) riders wore helmets. All traumatic brain injuries (TBI) and closed head injuries occurred in unhelmeted patients. Most incidents occurred in the street; only one occurred in a bicycle lane. Facilities costs were greater for patients under the influence of alcohol and marijuana. CONCLUSION: Policies related to the use of mandated safety equipment, dedicated bicycle lanes, and the proper storage of empty vehicles should be further investigated.


Subjects
Accidental Injuries/epidemiology, Off-Road Motor Vehicles, Adolescent, Adult, Aged, Aged 80 and Over, Child, Preschool Child, Female, Humans, Male, Middle Aged, Retrospective Studies, United States, Young Adult
11.
Endosc Int Open ; 8(11): E1717-E1724, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33140030

ABSTRACT

Background and study aims Endoscopic ultrasound (EUS) has been used for portal vein sampling in patients with pancreaticobiliary cancers for enumerating circulating tumor cells but is not yet a standard procedure. Further evaluation is needed to refine the methodology. Therefore, we evaluated the feasibility and safety of 19-gauge (19G) versus 22-gauge (22G) EUS fine-needle aspiration needles for portal vein sampling in a swine model. Methods Celiotomy was performed on two farm pigs. Portal vein sampling occurred transhepatically. We compared 19G and 22G needles coated interiorly with saline, heparin, or ethylenediaminetetraacetic acid (EDTA). Small- (10 mL) and large-volume (25 mL) blood collections were evaluated. Two different collection methods were tested: direct-to-vial and suction syringe. A bleeding risk trial for saline-coated 19G and 22G needles was performed by puncturing the portal vein 20 times. Persistent bleeding after 3 minutes was considered significant. Results All small-volume collection trials were successful except for 22G saline-coated needles with the direct-to-vial method. All large-volume collection trials were successful when using the suction syringe; the direct-to-vial method was unsuccessful for both 19G and 22G needles. Collection times were shorter for 19G vs. 22G needles for both small- and large-volume collections (P < 0.05). Collection times for saline-coated 22G needles were longer compared to heparin- or EDTA-coated needles (P < 0.05). Bleeding occurred in 10% of punctures with 19G needles compared to 0% with 22G needles. Conclusion The results of this animal study demonstrate the feasibility and safety of using 22G needles for portal vein sampling and can form the basis for a pilot study in patients.

12.
PLoS One ; 15(4): e0230995, 2020.
Article in English | MEDLINE | ID: mdl-32240235

ABSTRACT

BACKGROUND: Historically, liver allografts with >30% macrosteatosis (MaS) on donor biopsy have been associated with early allograft dysfunction and worse graft survival; however, successful outcomes have been reported in small cohorts. This study proposes an elevated MaS threshold for organ utilization without detriment to graft survival. METHODS: The UNOS Standard Transplant Analysis and Research database was evaluated for transplants between 2006 and 2015. Graft survival up to 1 year was evaluated by Kaplan-Meier (KM) survival analyses, and by univariate and multivariable logistic regression analyses, including donor and recipient characteristics. Odds ratios (OR) with 95% confidence intervals (CI) for risk of graft loss are reported. RESULTS: Thirty-day risk of graft loss was increased with MaS as low as 10-19% (OR [95% CI] 1.301 [1.055-1.605], p<0.0001) and peaked with MaS 50-59% (2.921 [1.672-5.103]). At 1 year, risk of graft loss remained elevated with MaS 40-49% (1.465 [1.002-2.142]) and MaS 50-59% (1.978 [1.281-3.056], p = 0.0224). Multivariable models were created for Lower and Higher MELD recipients and MaS cutoffs were established. In Lower MELD recipients, organs with ≥50% MaS had increased risk of graft loss at 30 days (2.451 [1.541-3.897], p = 0.0008) and 1 year post-transplant (1.720 [1.224-2.418], p = 0.0125). Higher MELD recipients had increased risk of graft loss at 30 days with allografts showing MaS ≥40% (4.204 [1.440-5.076], p = 0.0016). At 1 year the risk remained elevated, but MaS was not a significant predictor of graft loss (2.048 [1.131-3.710], p = 0.0616). In both MELD cohorts, organs with MaS levels below threshold had similar survival to those transplanted without a donor biopsy. CONCLUSIONS: In conjunction with recipient selection, organs with MaS up to 50% may be safely used without detriment to outcomes.
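A hedged sketch of the 30-day logistic regression by MaS band (the bands follow the abstract; the dataset and column names are assumptions, and the real models adjusted for donor and recipient characteristics):

```python
# Hypothetical logistic model of 30-day graft loss across macrosteatosis bands,
# reported as odds ratios against the lowest-MaS reference band.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("star_biopsied_grafts.csv")  # hypothetical STAR extract
df["mas_band"] = pd.cut(df["macrosteatosis_pct"],
                        bins=[0, 10, 20, 30, 40, 50, 60, 100], right=False)

logit = smf.logit("graft_loss_30d ~ C(mas_band)", data=df).fit()
print(np.exp(logit.params))      # odds ratios
print(np.exp(logit.conf_int()))  # 95% confidence intervals
```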


Subjects
Allografts/surgery, Graft Survival/physiology, Liver Transplantation/mortality, Adult, Factual Databases, Donor Selection/methods, Female, Humans, Kaplan-Meier Estimate, Male, Middle Aged, Odds Ratio, Retrospective Studies, Risk Factors, Tissue Donors, Homologous Transplantation/mortality, United States, Young Adult
13.
Prostate Cancer Prostatic Dis ; 23(2): 295-302, 2020 06.
Article in English | MEDLINE | ID: mdl-31719663

ABSTRACT

BACKGROUND: Genomic classifiers (GC) have been shown to improve risk stratification post prostatectomy. However, their clinical benefit has not been prospectively demonstrated. We sought to determine the impact of GC testing on postoperative management in men with prostate cancer post prostatectomy. METHODS: Two prospective registries of prostate cancer patients treated between 2014 and 2019 were included. All men underwent Decipher tumor testing for adverse features post prostatectomy (Decipher Biosciences, San Diego, CA). The clinical utility cohort, which measured the change in treatment decision-making, captured pre- and postgenomic treatment recommendations from urologists across diverse practice settings (n = 3455). The clinical benefit cohort, which examined the difference in outcome, was from a single academic institution whose tumor board predefined "best practices" based on GC results (n = 135). RESULTS: In the clinical utility cohort, providers' recommendations before genomic testing were primarily observation (69%). GC testing changed recommendations for 39% of patients, translating to a number needed to test of 3 to change one treatment decision. In the clinical benefit cohort, 61% of patients had genomic high-risk tumors; those who received the recommended adjuvant radiation therapy (ART) had a 2-year PSA recurrence rate of 3% vs. 25% for those who did not (HR 0.1 [95% CI 0.0-0.6], p = 0.013). For the genomic low/intermediate-risk patients, 93% followed recommendations for observation, with similar 2-year PSA recurrence rates compared with those who received ART (p = 0.93). CONCLUSIONS: The use of GC substantially altered treatment decision-making, with a number needed to test of only 3. Implementing best practices to routinely recommend ART for genomic high-risk patients led to larger than expected improvements in early biochemical endpoints, without jeopardizing outcomes for genomic low/intermediate-risk patients.
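The "number needed to test" above is simple arithmetic on the reported 39% change rate; a one-line check:

```python
# Number needed to test = 1 / fraction of decisions changed by the test.
changed_fraction = 0.39
print(1 / changed_fraction)  # ≈ 2.6, i.e., about 3 tests per changed decision
```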


Subjects
Tumor Biomarkers/genetics, Decision Making, Patient Selection, Prostatectomy/methods, Prostatic Neoplasms/genetics, Prostatic Neoplasms/therapy, Risk Assessment/methods, Adult, Aged, Aged 80 and Over, Algorithms, Follow-Up Studies, Gene Expression Profiling, Genomics, Humans, Male, Middle Aged, Prognosis, Prostatic Neoplasms/classification, Prostatic Neoplasms/pathology, Survival Rate
14.
Am J Surg ; 219(1): 54-57, 2020 01.
Article in English | MEDLINE | ID: mdl-31400811

ABSTRACT

BACKGROUND: The Warkentin 4-T scoring system for determining the pretest probability of heparin-induced thrombocytopenia (HIT) has been shown to be inaccurate in the ICU and does not take into account body mass index (BMI). METHODS: Prospectively collected data on patients in the surgical and cardiac ICU between January 2007 and February 2016 who were presumed to have HIT by clinical suspicion were reviewed. Patients were categorized into three BMI groups and assigned scores: normal weight, overweight, and obese. Multivariate analyses were used to identify independent predictors of HIT. RESULTS: A total of 523 patients met inclusion criteria. Multivariate analysis showed that only the BMI, Timing, and oTher variables were independently associated with HIT. This new 3-T model performed better than a five-component model consisting of the entire 4-T scoring system plus BMI (AUC = 0.791). CONCLUSIONS: Incorporating patient 'T'hickness into a pretest probability model along with platelet 'T'iming and the exclusion of o'T'her causes of thrombocytopenia yields a simplified "3-T" scoring system with increased predictive accuracy in the ICU.
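A minimal sketch of the kind of model comparison implied above, scoring a 3-T-style logistic model by ROC AUC; the predictor operationalizations and file name are assumptions, not the study's definitions.

```python
# Hypothetical 3-T model: BMI ('Thickness') group, platelet-count Timing, and
# exclusion of oTher causes, scored by area under the ROC curve.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("icu_hit_cohort.csv")  # hypothetical ICU cohort extract

three_t = smf.logit("hit ~ C(bmi_group) + timing_score + other_causes_score",
                    data=df).fit()
print("3-T AUC:", roc_auc_score(df["hit"], three_t.predict(df)))
```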


Subjects
Body Mass Index, Fibrinolytic Agents/adverse effects, Heparin/adverse effects, Theoretical Models, Thrombocytopenia/chemically induced, Adult, Aged, Female, Forecasting, Humans, Intensive Care Units, Male, Middle Aged, Retrospective Studies
15.
J Am Coll Surg ; 228(4): 437-450.e8, 2019 04.
Article in English | MEDLINE | ID: mdl-30594593

ABSTRACT

BACKGROUND: The Share 35 policy for liver allocation prioritizes patients with Model for End-Stage Liver Disease (MELD) scores ≥ 35 for regional sharing of liver allografts. To better assess donor-recipient interactions and inform expectations, this study identified factors affecting graft survival independent of MELD score and derived a risk index for transplantation in the MELD ≥ 35 population. STUDY DESIGN: The United Network for Organ Sharing (UNOS) STAR database was evaluated for deceased donor liver transplants with recipient MELD ≥ 35 between January 2006 and June 2016. Data were randomly split into test and validation cohorts. Four individual models of graft survival spanning 90 days to 5 years were evaluated with univariate and multivariate Cox proportional hazards analyses against donor- and recipient-specific characteristics. Significant factors were compiled to generate the Liver Transplant Survival Index (LTSI-35), and survival analyses were performed. RESULTS: Five risk groups (very low, low, moderate, high, and severe) were identified, with 1-year graft survival rates of 90.8% ± 0.2%, 89.3% ± 0.3%, 85.0% ± 0.3%, 79.8% ± 0.3%, and 70.3% ± 0.4%, respectively (p < 0.001 across groups). The greatest risk of graft loss was associated with donation after circulatory death (DCD) donors (1-year hazard ratio [HR] 1.61 [95% CI 1.26 to 2.05], p = 0.001), recipients requiring ventilator support (HR 1.32 [95% CI 1.17 to 1.51], p < 0.001), and recipient portal vein thrombosis (HR 1.21 [95% CI 1.03 to 1.42], p = 0.003). Subgroup analysis revealed increased risk of graft loss with graft macrosteatosis ≥ 30% on pre-donation biopsy at 90 days (HR 1.64 [1.33 to 1.99], p < 0.001). CONCLUSIONS: The LTSI-35 identifies risk factors for graft loss in a high-MELD population which, when combined, may portend worse outcomes. The LTSI-35 may be used to inform donor selection and organ allocation and to set expectations for allograft survival.
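The risk-group survival comparison can be sketched with Kaplan-Meier estimates and a log-rank test via lifelines; group labels, columns, and the file name are assumptions.

```python
# Hypothetical Kaplan-Meier comparison across LTSI-35 risk groups.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("meld35_transplants.csv")  # hypothetical UNOS STAR extract

kmf = KaplanMeierFitter()
for group, sub in df.groupby("ltsi_risk_group"):
    kmf.fit(sub["graft_days"], sub["graft_loss"], label=group)
    print(group, float(kmf.predict(365)))  # 1-year graft survival estimate

res = multivariate_logrank_test(df["graft_days"], df["ltsi_risk_group"],
                                df["graft_loss"])
print("log-rank p:", res.p_value)
```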


Subjects
End-Stage Liver Disease/surgery, Graft Survival, Liver Transplantation, Severity of Illness Index, Adult, Donor Selection, Female, Follow-Up Studies, Health Care Rationing, Humans, Male, Middle Aged, Retrospective Studies, Risk Adjustment, Risk Assessment, Risk Factors, Survival Analysis
16.
Transplantation ; 103(1): 122-130, 2019 01.
Article in English | MEDLINE | ID: mdl-30048394

ABSTRACT

BACKGROUND: Obesity, defined as a high body mass index (hBMI) of 30 kg/m² or greater, is a growing epidemic worldwide and is associated with multiple comorbidities. High BMI individuals account for an increasing portion of potential liver donors. Here we evaluate trends in the utilization and outcomes of hBMI donors on a national and regional level and the potential role of liver biopsy in donor evaluation. METHODS: The United Network for Organ Sharing Standard Transplant Analysis and Research database was evaluated for deceased donor liver transplants between 2006 and 2016 across 11 Organ Procurement and Transplantation Network (OPTN) regions. High BMI donors were compared with lower BMI counterparts and evaluated for biopsy rates, utilization rates, and allograft outcomes. Univariate and multivariable analyses were performed. RESULTS: Seventy-seven thousand fifty potential donors were identified and 60,200 transplants were evaluated. Utilization rates for hBMI donors were 66.1% versus 78.1% for lower BMI donors (P < 0.001). Pretransplant biopsy was performed more frequently in hBMI donors (52.1% vs 33.1%, P < 0.001) and macrosteatosis of 30% or greater was identified more often (21.1% vs 12.2%, P < 0.001). Biopsy performance increased the utilization rate of hBMI donors in 7 of 11 OPTN regions. Region 6 showed the highest rate of biopsy performance, a high rate of hBMI donor utilization, and the highest 5-year estimated graft survival rates of all regions. CONCLUSIONS: High BMI donors have not previously been associated with worse graft survival in multivariable analyses; however, they are used much less frequently. Liver biopsy may increase the utilization rate of hBMI donors and improve donor selection. Further evaluation of regions with high rates of utilization and good outcomes is warranted.


Subjects
Body Mass Index, Donor Selection/trends, Fatty Liver/pathology, Healthcare Disparities/trends, Liver Transplantation/trends, Obesity/diagnosis, Tissue Donors/supply & distribution, Allografts, Biopsy/trends, Factual Databases, Fatty Liver/epidemiology, Graft Survival, Humans, Liver Transplantation/adverse effects, Liver Transplantation/methods, Obesity/epidemiology, Predictive Value of Tests, Prevalence, Retrospective Studies, Risk Assessment, Risk Factors, Time Factors, Treatment Outcome, United States/epidemiology
17.
J Surg Educ ; 74(5): 851-856, 2017.
Article in English | MEDLINE | ID: mdl-28347663

ABSTRACT

OBJECTIVE: The objective of the study was to characterize house staff time to response and intervention when notified of a patient care issue by pager vs. smartphone. We hypothesized that smartphones would reduce house staff time to response and intervention. DESIGN: A prospective study of all electronic communications between nurses and house staff was conducted between September 2015 and October 2015. The 4-week study period was randomly divided into two 2-week periods during which all electronic communications between intensive care unit nurses and intensive care unit house staff were exclusively by smartphone or by pager, respectively. Time of communication initiation, time of house staff response, and time from response to clinical intervention were recorded for each communication. Outcomes were time from nurse contact to house staff response and time to intervention. SETTING: Single-center surgical intensive care unit of Cedars-Sinai Medical Center in Los Angeles, California, an academic tertiary care and level I trauma center. PARTICIPANTS: All electronic communications occurring between nurses and house staff in the study unit during the study period were considered. During the study period, 205 nurse-house staff electronic communications occurred: 100 in the phone group and 105 in the pager group. RESULTS: House staff response time was significantly shorter in the phone group (0.5 [interquartile range = 1.7] vs. 2 [3] min, p < 0.001). Time to house staff intervention after response was also significantly shorter in the phone group (0.8 [1.7] vs. 1 [2] min, p = 0.003). CONCLUSIONS: Dedicated clinical smartphones significantly decrease time to house staff response after electronic nursing communications compared with pagers.
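The medians-with-IQR comparison above is consistent with a rank-based test; a sketch using Mann-Whitney U on stand-in data (the test choice and the simulated values are assumptions, not the paper's stated method):

```python
# Illustrative nonparametric comparison of response times (minutes).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
phone = rng.exponential(scale=1.0, size=100)  # stand-in for 100 phone events
pager = rng.exponential(scale=2.5, size=105)  # stand-in for 105 pager events
print(mannwhitneyu(phone, pager, alternative="two-sided"))
```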


Subjects
Communication, Critical Care Nursing/methods, Intensive Care Units, Hospital Medical Staff/organization & administration, Smartphone/statistics & numerical data, Adult, Female, Humans, Male, Middle Aged, Mobile Applications, Patient Care Team/organization & administration, Prospective Studies, Quality Control, Telecommunications/instrumentation, Time Factors
18.
Am Surg ; 83(3): 308-313, 2017 Mar 01.
Article in English | MEDLINE | ID: mdl-28316317

ABSTRACT

We sought to identify a simple bedside method to predict successful extubation outcomes that might be used during rounds. We hypothesized that a direct 2-minute unassisted breathing evaluation (DTUBE) could replace a longer spontaneous breathing trial (SBT). Data were prospectively collected on all patients endotracheally intubated for >48 hours nearing extubation in a tertiary center's mixed trauma/surgical intensive care unit from August 2012 to August 2013. The SBT was performed for at least 30 minutes at 40% FiO2, PEEP 5, and PS 8. The DTUBE was performed by physically disconnecting the intubated patient from the ventilator circuit for a 2-minute period of direct observation on room air. Successful extubation was defined as freedom from the ventilator for greater than 72 hours. Both the SBT and DTUBE were performed 128 times, resulting in 90 extubations. The DTUBE correctly predicted success in 75/79 (94.9%) extubations versus 82/89 (92.1%) via SBT. No adverse effects were directly attributed to the DTUBE. The DTUBE is a rapid method of evaluating patients for extubation with prediction accuracy similar to the SBT.


Subjects
Airway Extubation/methods, Respiratory Insufficiency/physiopathology, Respiratory Insufficiency/therapy, APACHE, Female, Glasgow Coma Scale, Humans, Intensive Care Units, Intratracheal Intubation, Male, Middle Aged, Predictive Value of Tests, Prognosis, Prospective Studies, Respiratory Function Tests, Treatment Outcome
20.
Am Surg ; 82(3): 266-70, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27099064

ABSTRACT

Pleural effusions occur frequently in mechanically ventilated patients, but no consensus exists regarding the clinical benefit of effusion drainage. We sought to determine the impact of thoracentesis on gas exchange in patients with differing severities of acute lung injury (ALI). A retrospective analysis was conducted on therapeutic thoracenteses performed on intubated patients in an adult surgical intensive care unit of a tertiary center. Effusions judged by ultrasound to be 400 mL or larger were drained. Subjects were divided into groups based on their initial P:F ratios: normal >300, ALI 200 to 300, and acute respiratory distress syndrome (ARDS) <200. Baseline characteristics, physiologic variables, arterial blood gases, and ventilator settings before and after the intervention were analyzed. The primary end point was the change in measures of oxygenation. Significant improvements in P:F ratios (mean ± SD) were seen only in patients with ARDS (50.4 ± 38.5, P = 0.001) and ALI (90.6 ± 161.7, P = 0.022). Statistically significant improvement was observed in the pO2 (31.1, P = 0.005) and O2 saturation (4.1, P < 0.001) of the ARDS group. The volume of effusion removed did not correlate with changes in individual patient's oxygenation. These data support the role of therapeutic thoracentesis for intubated patients with abnormal P:F ratios.


Subjects
Acute Lung Injury/surgery, Intratracheal Intubation, Pleural Effusion/surgery, Thoracentesis, Acute Lung Injury/complications, Aged, Female, Humans, Male, Middle Aged, Pleural Effusion/complications, Retrospective Studies