ABSTRACT
OBJECTIVE: Currently, there is no consensus protocol on the initial staging evaluation for Langerhans cell histiocytosis (LCH). Our institutional protocol consists of a skeletal survey and a whole-body positron emission tomography with 2-deoxy-2-[fluorine-18] fluoro-D-glucose integrated with computed tomography (FDG PET/CT) study. The utility of the PET/CT lies in its sensitivity in detecting osseous and extra-osseous lesions, and in determining the baseline metabolic activity of LCH lesions to assess treatment response. However, the added utility of the skeletal survey in staging LCH is unclear. Therefore, this study retrospectively assessed the added diagnostic value of skeletal surveys in patients with baseline PET/CTs for the initial staging of LCH. METHODS: We retrospectively searched the medical records of all patients aged 18 years or younger at a large children's hospital (May 2013 to September 2021). The inclusion criteria were (a) a biopsy-proven diagnosis of LCH and (b) an initial staging PET/CT and skeletal survey performed no more than 1 month apart. One blinded pediatric radiologist reviewed the skeletal surveys and another reviewed the PET/CTs to identify LCH osseous lesions. RESULTS: Our study cohort consisted of 49 children with 86 LCH osseous lesions. In non-extremity locations, PET/CT identified 70/70 (100%) osseous lesions, while skeletal surveys detected 43/70 (61.4%). In the extremities, PET/CT identified 13/16 (81.3%) osseous lesions, while skeletal surveys detected 15/16 (93.8%). CONCLUSION: Skeletal surveys increased the detection rate of osseous lesions in the extremities but added no diagnostic value to the detection of osseous lesions in non-extremity locations. Therefore, we propose abbreviating the skeletal survey to include only extremity radiographs.
Subjects
Fluorodeoxyglucose F18 , Langerhans Cell Histiocytosis , Child , Humans , Positron Emission Tomography Computed Tomography/methods , Retrospective Studies , Positron-Emission Tomography , Langerhans Cell Histiocytosis/therapy , Radiopharmaceuticals , Neoplasm Staging
ABSTRACT
Background: Prenatal alcohol exposure (PAE) has teratogenic effects on numerous body systems, including the heart. However, research magnetic resonance imaging (MRI) studies in humans with PAE have thus far been limited to the brain. This study aims to use MRI to examine heart structure and function, brain volumes, and body composition in children and adolescents with PAE. Methods: Heart, brain, and abdominal 3T MRI of 17 children, adolescents, and young adults with PAE and 53 unexposed controls was acquired to measure: (1) left ventricular ejection fraction, end-diastolic volume, end-systolic volume, stroke volume, cardiac output, longitudinal strain, circumferential strain, and heart mass; (2) total brain, cerebellum, white matter, grey matter, caudate, thalamus, putamen, and globus pallidus volumes; and (3) subcutaneous fat, visceral fat, muscle fat, and muscle (body composition). Results: Cardiac MRI revealed no abnormalities in the PAE group on evaluation by a paediatric cardiologist and by statistical comparison with a control group. Cardiac parameters in both groups were in line with previous reports, including expected sex- and age-related differences. Cerebellum, caudate, and globus pallidus volumes were all smaller in the PAE group. Body mass index and subcutaneous fat percent were higher in females with PAE relative to control females, but lower in males with PAE relative to control males. Conclusions: Children with PAE did not have abnormalities in MRI-derived measures of cardiac structure or function, despite smaller brain volumes and sex-specific differences in body composition relative to healthy controls.
ABSTRACT
OBJECTIVE: This study aimed to evaluate the safety and tolerability of a fixed-dose co-formulation of ciprofloxacin and celecoxib (PrimeC) in patients with amyotrophic lateral sclerosis (ALS), and to examine its effects on disease progression and ALS-related biomarkers. METHODS: In this proof-of-concept, open-label, phase IIa study of PrimeC in 15 patients with ALS, participants were administered PrimeC thrice daily for 12 months. The primary endpoints were safety and tolerability. Exploratory endpoints included disease progression outcomes such as forced vital capacity, revised ALS functional rating scale, and effect on algorithm-predicted survival. In addition, indications of a biological effect were assessed by selected biomarker analyses, including TDP-43 and LC3 levels in neuron-derived exosomes (NDEs), and serum neurofilaments. RESULTS: Four participants experienced adverse events (AEs) related to the study drug. None of these AEs were unexpected, and most were mild or moderate (69%). Additionally, no serious AEs were related to the study drug. One participant tested positive for COVID-19 and recovered without complications, and no other abnormal laboratory investigations were found. Participants' survival compared to their predictions showed no safety concerns. Biomarker analyses demonstrated significant changes associated with PrimeC in neuron-derived exosomal TDP-43 levels and in levels of LC3, a key autophagy marker. INTERPRETATION: This study supports the safety and tolerability of PrimeC in ALS. Biomarker analyses suggest early evidence of a biological effect. A placebo-controlled trial is required to disentangle the biomarker results from natural progression and to evaluate the efficacy of PrimeC for the treatment of ALS.
Summary for social media if published. Twitter handles: @NeurosenseT, @ShiranZimri
• What is the current knowledge on the topic? ALS is a severe neurodegenerative disease, causing death within 2-5 years from diagnosis. To date there is no effective treatment to halt or significantly delay disease progression.
• What question did this study address? This study assessed the safety, tolerability, and exploratory efficacy of PrimeC, a fixed-dose co-formulation of ciprofloxacin and celecoxib, in the ALS population.
• What does this study add to our knowledge? This study supports the safety and tolerability of PrimeC in ALS, and exploratory biomarker analyses suggest early evidence of disease-related alterations.
• How might this potentially impact the practice of neurology? These results set the stage for a larger, placebo-controlled study to examine the efficacy of PrimeC, with the potential to become a new drug candidate for ALS.
Subjects
Amyotrophic Lateral Sclerosis , COVID-19 , Neurodegenerative Diseases , Humans , Amyotrophic Lateral Sclerosis/diagnosis , Amyotrophic Lateral Sclerosis/drug therapy , Biomarkers , Celecoxib/therapeutic use , Disease Progression , DNA-Binding Proteins , Double-Blind Method , Ciprofloxacin/therapeutic use
ABSTRACT
Introduction: The edaravone development program for amyotrophic lateral sclerosis (ALS) included trials MCI186-16 (Study 16) and MCI186-19 (Study 19). A cohort enrichment strategy was based on a Study 16 post hoc analysis and applied to Study 19 to elucidate a treatment effect in that study. To determine whether the Study 19 results could be generalized to a broader ALS population, we used a machine learning (ML) model to create a novel risk-based subgroup analysis tool. Methods: A validated ML model was used to rank order all Study 16 participants by predicted time to 50% expected vital capacity. Subjects were stratified into nearest-neighbor risk-based subgroups that were systematically expanded to include the entire Study 16 population. For each subgroup, a statistical analysis generated heat maps that revealed statistically significant effect sizes. Results: A broad region of the Study 16 heat map with significant effect sizes was identified, including up to 70% of the trial population. Incorporating participants identified in the cohort enrichment strategy yielded a broad group comprising 76% of the original participants with a statistically significant treatment effect. This broad group spanned the full range of the functional score progression observed in Study 16. Conclusions: This analysis, applying predictions derived using an ML model to a novel methodology for subgroup identification, ascertained a statistically significant edaravone treatment effect in a cohort of participants with broader disease characteristics than the Study 19 inclusion criteria. This novel methodology may assist clinical interpretation of study results and potentially inform efficient future clinical trial design strategies.
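The subgroup-identification idea described above, ranking participants by model-predicted risk and then scanning systematically expanded nearest-neighbor windows for a treatment effect, can be sketched in a few lines of Python. The cohort, risk scores, outcome model, and the choice of Cohen's d as the effect size are illustrative assumptions for this sketch, not the study's actual ML model or statistics.

```python
import random

def cohens_d(a, b):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    pooled = ((len(a) - 1) * va + (len(b) - 1) * vb) / (len(a) + len(b) - 2)
    return (ma - mb) / pooled ** 0.5

def subgroup_scan(patients, min_size=20, step=5):
    """Rank patients by predicted risk, then scan expanding windows of
    risk-adjacent neighbors, computing a drug-vs-placebo effect size
    for each window (one cell of the 'heat map' per window)."""
    ranked = sorted(patients, key=lambda p: p["risk"])
    results = []
    for start in range(0, len(ranked) - min_size + 1, step):
        for stop in range(start + min_size, len(ranked) + 1, step):
            window = ranked[start:stop]
            treated = [p["outcome"] for p in window if p["arm"] == "drug"]
            control = [p["outcome"] for p in window if p["arm"] == "placebo"]
            if len(treated) > 1 and len(control) > 1:
                results.append((start, stop, cohens_d(treated, control)))
    return results

# Hypothetical cohort: outcome loosely tied to risk, plus a drug benefit.
random.seed(0)
cohort = [{"risk": random.random(),
           "arm": random.choice(["drug", "placebo"])} for _ in range(120)]
for p in cohort:
    p["outcome"] = -p["risk"] + (0.5 if p["arm"] == "drug" else 0.0) \
                   + random.gauss(0, 0.3)

grid = subgroup_scan(cohort)
print(len(grid), "subgroups scanned")
```

In the actual analysis each cell would carry a significance test rather than a raw effect size; the scan structure is the point here.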
Subjects
Amyotrophic Lateral Sclerosis , Amyotrophic Lateral Sclerosis/drug therapy , Double-Blind Method , Edaravone/therapeutic use , Humans , Machine Learning , Vital Capacity
ABSTRACT
BACKGROUND: We have long been searching for the ideal local anesthetic for outpatient spinal anesthesia. Lidocaine has been associated with a high incidence of transient neurological symptoms, and bupivacaine produces sensory and motor blocks of long duration. Preservative-free 2-chloroprocaine (2-CP), a short-acting agent that has gained popularity in recent years, seems to be a promising alternative. This study was designed to compare 2-CP with bupivacaine for spinal anesthesia in an elective ambulatory setting. METHODS: A total of 106 patients were enrolled in this randomized double-blind study. Spinal anesthesia was achieved with 0.75% hyperbaric bupivacaine 7.5 mg (n = 53) or 2% preservative-free 2-CP 40 mg (n = 53). The primary endpoint for the study was the time until reaching eligibility for discharge. Secondary outcomes included the duration of the sensory and motor blocks, the length of stay in the postanesthesia care unit, the time until ambulation, and the time until micturition. RESULTS: The average time to discharge readiness was 277 min in the 2-CP group and 353 min in the bupivacaine group, a difference of 76 min (95% confidence interval [CI]: 40 to 112 min; P < 0.001). The average time for complete regression of the sensory block was 146 min in the 2-CP group and 329 min in the bupivacaine group, a difference of 185 min (95% CI: 159 to 212 min; P < 0.001). Times to ambulation and micturition were also significantly lower in the 2-CP group. CONCLUSION: Spinal 2-chloroprocaine provides adequate duration and depth of surgical anesthesia for short procedures with the advantages of faster block resolution and earlier hospital discharge compared with spinal bupivacaine. (ClinicalTrials.gov number, NCT00845962).
Subjects
Ambulatory Surgical Procedures , Anesthesia, Spinal/methods , Anesthetics, Local/pharmacology , Bupivacaine/pharmacology , Procaine/analogs & derivatives , Adult , Aged , Bupivacaine/adverse effects , Double-Blind Method , Female , Humans , Male , Middle Aged , Procaine/adverse effects , Procaine/pharmacology , Time Factors
ABSTRACT
Introduction: Vital capacity (VC) is routinely used for ALS clinical trial eligibility determinations, often to exclude patients unlikely to survive the trial duration. However, spirometry has been limited by the COVID-19 pandemic. We developed a machine-learning survival model without the use of baseline VC and asked whether it could stratify clinical trial participants and a wider ALS clinic population. Methods: A gradient boosting machine survival model lacking baseline VC (VC-Free) was trained using the PRO-ACT ALS database and compared to a multivariable model that included VC (VCI) and a univariable baseline %VC model (UNI). Discrimination, calibration-in-the-large, and calibration slope were quantified. Models were validated using 10-fold internal cross-validation, the VITALITY-ALS clinical trial placebo arm, and data from the Emory University tertiary care clinic. Simulations were performed using each model to estimate survival of patients predicted to have a >50% one-year survival probability. Results: The VC-Free model suffered a minor performance decline compared to the VCI model yet retained strong discrimination for stratifying ALS patients. Both models outperformed the UNI model. The proportion of excluded vs. included patients who died through one year was on average 27% vs. 6% (VCI), 31% vs. 7% (VC-Free), and 13% vs. 10% (UNI). Conclusions: The VC-Free model offers an alternative to the use of VC for eligibility determinations during the COVID-19 pandemic. The observation that the VC-Free model outperforms the use of VC in a broad ALS patient population suggests the use of prognostic strata in future, post-pandemic ALS clinical trial eligibility screening determinations.
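As a toy illustration of how model predictions could drive eligibility screening of the kind evaluated above, the sketch below splits hypothetical patients at a >50% predicted one-year survival probability and tabulates observed death rates in the included and excluded groups. All field names and numbers are invented for illustration; this is not the VC-Free model itself.

```python
def eligibility_split(patients, threshold=0.5):
    """Split patients by predicted one-year survival probability:
    those above the threshold would be included in the trial."""
    included = [p for p in patients if p["pred_survival"] > threshold]
    excluded = [p for p in patients if p["pred_survival"] <= threshold]
    return included, excluded

def death_rate(group):
    """Proportion of a group that died within one year of baseline."""
    return sum(p["died_1yr"] for p in group) / len(group) if group else 0.0

# Hypothetical patients with model-predicted survival and observed outcome.
patients = [
    {"pred_survival": 0.9, "died_1yr": 0},
    {"pred_survival": 0.8, "died_1yr": 0},
    {"pred_survival": 0.7, "died_1yr": 1},
    {"pred_survival": 0.6, "died_1yr": 0},
    {"pred_survival": 0.4, "died_1yr": 1},
    {"pred_survival": 0.3, "died_1yr": 1},
    {"pred_survival": 0.2, "died_1yr": 0},
    {"pred_survival": 0.1, "died_1yr": 1},
]
included, excluded = eligibility_split(patients)
print(f"included death rate: {death_rate(included):.2f}")  # 1/4 = 0.25
print(f"excluded death rate: {death_rate(excluded):.2f}")  # 3/4 = 0.75
```

A well-discriminating model concentrates deaths in the excluded group, which is exactly the excluded-vs-included contrast the abstract reports.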
Subjects
Amyotrophic Lateral Sclerosis , COVID-19 , Amyotrophic Lateral Sclerosis/epidemiology , Humans , Machine Learning , Pandemics , SARS-CoV-2 , Vital Capacity
ABSTRACT
PURPOSE: Inkjet printers can be used to fabricate anthropomorphic phantoms by the use of iodine-doped ink. However, challenges persist in implementing this technique. The calibration from grayscale to ink density is complex and time-consuming. The purpose of this work is to develop a printing methodology that requires a simpler calibration and is less dependent on printer characteristics to produce the desired range of x-ray attenuation values. METHODS: Conventional grayscale printing was substituted by single-tone printing; that is, the superposition of pure black layers of iodinated ink. Printing was performed with a consumer-grade inkjet printer using ink made of potassium iodide (KI) dissolved in water at 1 g/ml. A calibration for the attenuation of the ink was measured using a commercial x-ray system at 70 kVp. A neonate radiograph obtained at 70 kVp served as an anatomical model. The attenuation map of the neonate radiograph was processed into a series of single-tone images. Single-tone images were printed, stacked, and imaged at 70 kVp. The phantom was evaluated by comparing attenuation values between the printed phantom and the original radiograph; attenuation maps were compared using the structural similarity index measure (SSIM), while attenuation histograms were compared using the Kullback-Leibler (KL) divergence. A region of interest (ROI)-based analysis was also performed, in which the attenuation distribution within given ROIs was compared between phantom and patient. Phantom sharpness was evaluated in terms of modulation transfer function (MTF) estimates and signal spread profiles of high-spatial-resolution features in the image. RESULTS: The printed phantom required 36 pages. The printing queue was automated, and printing the phantom took about 2 h. The radiograph of the printed phantom demonstrated a close resemblance to the original neonate radiograph. The SSIM of the phantom radiograph with respect to the patient radiograph was 0.53. Both patient and phantom attenuation histograms followed similar distributions, and the KL divergence between the histograms was 0.20. The ROI-based analysis showed that the largest deviations from patient attenuation values were observed at the higher and lower ends of the attenuation range. The limiting resolution of the proposed methodology was about 1 mm. CONCLUSION: A methodology to generate a neonate phantom for 2D imaging applications, using single-tone printing, was developed. The method requires only a single-value calibration and took less than 2 h to print a complete phantom.
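The histogram comparison reported above can be reproduced in outline with a small Kullback-Leibler divergence routine. The bin counts below are invented placeholders, not the study's measured attenuation histograms.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) between two histograms.
    Histograms are normalized to probability distributions first; a
    small epsilon guards against zero bins in Q."""
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]
    q = [max(x / sq, eps) for x in q]
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical attenuation histograms (counts per attenuation bin) for a
# patient radiograph and a printed phantom; the bin values are made up.
patient_hist = [5, 20, 40, 25, 10]
phantom_hist = [8, 18, 38, 26, 10]

print(f"KL divergence: {kl_divergence(patient_hist, phantom_hist):.4f}")
```

KL divergence is zero only for identical distributions, so a value near zero (the study reports 0.20) indicates the phantom reproduces the patient's attenuation distribution closely.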
Subjects
Models, Anatomic , Printing, Three-Dimensional , Calibration , Humans , Infant, Newborn , Phantoms, Imaging , Radiography , X-Rays
ABSTRACT
Background: We wished (1) to determine whether newly available flat panel detector (FPD) c-arms were associated with lower radiation dose during ureteroscopy (URS) than conventional image intensifier (CII) c-arms and (2) to compare fluoroscopic image quality between the units. Materials and Methods: We retrospectively reviewed 44 consecutive patients undergoing URS at a pediatric hospital, with c-arms assigned by availability in the operating room. We performed dosimetry experiments using the same c-arms on standard phantoms. Results: Patient and case characteristics did not differ significantly between the two groups of patients. The median dose in the FPD group was less than a quarter of the dose in the CII group: 0.48 [0.42, 0.97] mGy vs 2.2 [1.1, 3.8] mGy, p < 0.0001. The FPD dose remained less than one-third of the CII dose after accounting for any difference in fluoroscopy time, and the difference remained significant in a multivariate model including fluoroscopy time and patient weight (β = 2.4, p = 0.007). Phantom studies showed higher image quality for FPDs at all simulated patient sizes, even at lower radiation doses. Conclusions: This is the first report comparing radiation dose between CII and FPD c-arms in adults or children. Use of an FPD during URS was associated with a substantially decreased absorbed dose for patients while simultaneously improving image quality.
Subjects
Radiation Exposure , Ureteroscopy , Child , Fluoroscopy , Humans , Phantoms, Imaging , Radiation Dosage , Radiographic Image Enhancement , Retrospective Studies
ABSTRACT
BACKGROUND: In our experience, correction of coagulation defects with plasma transfusion does not decrease the need for intraoperative red blood cell (RBC) transfusions during liver transplantation. On the contrary, it leads to a hypervolemic state that results in increased blood loss. A previous study showed that plasma transfusion is associated with a decreased 1-year survival rate. The aim of this prospective study was to evaluate whether anesthesiologists could reduce RBC transfusion requirements during liver transplantation by eliminating plasma transfusion. METHODS: Two hundred consecutive liver transplantations were prospectively studied over a 3-year period. Patients were divided into two groups: low starting international normalized ratio (INR <1.5) and high INR (≥1.5). Low central venous pressure was maintained in all patients before the anhepatic phase. Coagulation parameters were not corrected preoperatively or intraoperatively in the absence of uncontrollable bleeding. Phlebotomy and autotransfusion of salvaged blood were used according to our protocol. Independent variables were analyzed in both univariate and multivariate fashion to find a link with RBC transfusions or decreased survival rate. RESULTS: The mean number of intraoperative RBC units transfused was 0.3 ± 0.8. Plasma, platelets, albumin, and cryoprecipitate were not transfused. In 81.5% of the patients, no blood product was used during their transplantation. The average final hemoglobin (Hb) value was 91.2 ± 15.0 g/L. There were no differences in transfusion rate, final Hb, or bleeding between the two groups (low or high INR values). The overall 1-year survival rate was 85.6%. Logistic regression showed that avoidance of plasma transfusion, phlebotomy, and starting Hb value were significantly linked to liver transplantation without RBC transfusion. The need for intraoperative RBC transfusion and Pugh's score were linked to the decreased 1-year survival rate.
CONCLUSION: The avoidance of plasma transfusion was associated with a decrease in RBC transfusions during liver transplantation. There was no link between coagulation defects and bleeding or RBC or plasma transfusions. Previous reports indicating that it is neither useful nor necessary to correct coagulation defects with plasma transfusion before liver transplantation seem further corroborated by this study. We believe that this work also supports the practice of lowering central venous pressure with phlebotomy to reduce blood loss during liver dissection, without any deleterious effect.
Subjects
Blood Coagulation Disorders/blood , Blood Component Transfusion/statistics & numerical data , Intraoperative Period , Liver Transplantation/methods , Adult , Aged , Creatinine/blood , Erythrocyte Transfusion/statistics & numerical data , Female , Humans , International Normalized Ratio , Male , Middle Aged , Platelet Count , Reoperation/statistics & numerical data , Retrospective Studies
ABSTRACT
INTRODUCTION: In small trials, randomization can fail, leading to differences in patient characteristics across treatment arms, a risk that can be reduced by stratifying on key confounders. In ALS trials, riluzole use (RU) and bulbar onset (BO) have been used for stratification. We hypothesized that randomization could be improved by using a multifactorial prognostic score of predicted survival as a single stratifier. METHODS: We defined a randomization failure as a significant difference between treatment arms on a characteristic. We compared randomization failure rates when stratifying for RU and BO ("traditional stratification") to failure rates when stratifying for predicted survival using a predictive algorithm. We simulated virtual trials using the PRO-ACT database without application of a treatment effect to assess balance between cohorts. We performed 100 randomizations using each stratification method (traditional and algorithmic). We then applied these stratification schemes to a randomization simulation with a treatment effect, using survival as the endpoint, and evaluated sample size and power. RESULTS: Stratification by predicted survival resulted in fewer failures than traditional stratification. Stratifying predicted survival into tertiles performed best. Stratification by predicted survival was validated with an external dataset, the placebo arm from the BENEFIT-ALS trial. Importantly, we demonstrated a substantial decrease in the sample size required to reach statistical power. CONCLUSIONS: Stratifying randomization based on predicted survival using a machine learning algorithm is more likely to maintain balance between trial arms than traditional stratification methods. The methodology described here can translate to smaller, more efficient clinical trials for numerous neurological diseases.
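A minimal sketch of the tertile-based scheme described above, under invented assumptions: patients carry a hypothetical model-predicted survival value, are ranked and cut into tertiles, and are assigned within each tertile by shuffling a balanced list of arm labels (forced 1:1 allocation, a simplification of the permuted-block randomization a real trial would use).

```python
import random

def stratified_randomization(patients, n_strata=3, seed=0):
    """Stratify patients into tertiles of predicted survival, then
    assign arms within each stratum by shuffling a balanced label list
    (forced 1:1 allocation per stratum)."""
    rng = random.Random(seed)
    ranked = sorted(patients, key=lambda p: p["pred_survival"])
    size = len(ranked) // n_strata
    assignment = {}
    for s in range(n_strata):
        # Last stratum absorbs any remainder.
        stratum = (ranked[s * size:(s + 1) * size] if s < n_strata - 1
                   else ranked[s * size:])
        arms = (["drug", "placebo"] * (len(stratum) // 2 + 1))[:len(stratum)]
        rng.shuffle(arms)
        for p, arm in zip(stratum, arms):
            assignment[p["id"]] = arm
    return assignment

# Hypothetical cohort with algorithm-predicted survival times (months).
cohort = [{"id": i, "pred_survival": random.Random(i).uniform(6, 48)}
          for i in range(60)]
assignment = stratified_randomization(cohort)
n_drug = sum(1 for a in assignment.values() if a == "drug")
print(f"{n_drug} drug / {len(assignment) - n_drug} placebo")
```

Because allocation is balanced within each prognostic tertile, neither arm can end up loaded with fast progressors, which is the failure mode the abstract measures.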
ABSTRACT
OBJECTIVES: Death in amyotrophic lateral sclerosis (ALS) patients is related to respiratory failure, which is assessed in clinical settings by measuring vital capacity. We developed ALS-VC, a modeling tool for longitudinal prediction of vital capacity in ALS patients. METHODS: A gradient boosting machine (GBM) model was trained using the PRO-ACT (Pooled Resource Open-Access ALS Clinical Trials) database of over 10,000 ALS patient records. We hypothesized that a reliable vital capacity predictive model could be developed using PRO-ACT. RESULTS: The model was used to compare FVC predictions made with a 30-day run-in period to predictions made from baseline alone. The internal root mean square deviations (RMSDs) of the run-in and baseline models were 0.534 and 0.539, respectively, across the 7-L FVC range captured in PRO-ACT. The RMSDs of the run-in and baseline models on an unrelated, contemporary external validation dataset (0.553 and 0.538, respectively) were comparable to the internal validation. The model was shown to have similar accuracy for predicting SVC (RMSD = 0.562). The most important features for both run-in and baseline models were "Baseline forced vital capacity" and "Days since baseline." CONCLUSIONS: We developed ALS-VC, a GBM model trained with the PRO-ACT ALS dataset that provides vital capacity predictions generalizable to external datasets. The ALS-VC model could be helpful in advising and counseling patients and, in clinical trials, could be used to generate virtual control arms against which observed outcomes could be compared, or to stratify patients into slowly, average, and rapidly progressing subgroups.
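The RMSD figure of merit used above is straightforward to compute. A sketch follows, with invented predicted and observed FVC values standing in for the model outputs and PRO-ACT measurements.

```python
def rmsd(predicted, observed):
    """Root mean square deviation between predicted and observed values."""
    if len(predicted) != len(observed):
        raise ValueError("series must be the same length")
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed))
            / len(predicted)) ** 0.5

# Hypothetical predicted vs. observed FVC values (litres) at four visits.
predicted_fvc = [3.1, 2.9, 2.6, 2.4]
observed_fvc = [3.0, 2.8, 2.7, 2.2]
print(f"RMSD: {rmsd(predicted_fvc, observed_fvc):.3f} L")  # → RMSD: 0.132 L
```

An RMSD around 0.53 L, as reported, should be read against the roughly 7-L FVC range the model covers.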
Subjects
Amyotrophic Lateral Sclerosis/complications , Respiratory Insufficiency/diagnosis , Respiratory Insufficiency/etiology , Vital Capacity/physiology , Databases, Factual/statistics & numerical data , Disease Progression , Female , Humans , Longitudinal Studies , Male , Models, Statistical , Predictive Value of Tests , Time Factors
ABSTRACT
OBJECTIVE: To test the safety, tolerability, and urate-elevating capability of the urate precursor inosine taken orally or by feeding tube in people with amyotrophic lateral sclerosis (ALS). METHODS: This was a pilot, open-label trial in 25 participants with ALS. Treatment duration was 12 weeks. The dose of inosine was titrated at pre-specified time points to elevate serum urate levels to 7-8 mg/dL. Primary outcomes were safety (as assessed by the occurrence of adverse events [AEs]) and tolerability (defined as the ability to complete the 12-week study on study drug). Secondary outcomes included biomarkers of oxidative stress and damage. As an exploratory analysis, observed outcomes were compared with a virtual control arm built using prediction algorithms to estimate ALSFRS-R scores. RESULTS: Twenty-four out of 25 participants (96%) completed 12 weeks of study drug treatment. One participant was unable to comply with study visits and was lost to follow-up. Serum urate rose to target levels in 6 weeks. No serious AEs attributed to study drug and no AEs of special concern, such as urolithiasis and gout, occurred. Selected biomarkers of oxidative stress and damage had significant changes during the study period. Observed changes in ALSFRS-R did not differ from baseline predictions. INTERPRETATION: Inosine appeared safe, well tolerated, and effective in raising serum urate levels in people with ALS. These findings, together with epidemiological observations and preclinical data supporting a neuroprotective role of urate in ALS models, provide the rationale for larger clinical trials testing inosine as a potential disease-modifying therapy for ALS.
Subjects
Central Venous Pressure/physiology , Intraoperative Complications/mortality , Liver Transplantation/mortality , Blood Transfusion/methods , Blood Transfusion/mortality , Humans , Intraoperative Complications/physiopathology , Intraoperative Complications/prevention & control , Liver Transplantation/adverse effects , Survival Rate
ABSTRACT
BACKGROUND: Orthotopic liver transplantation (OLT) has been associated with major blood loss and the need for blood product transfusions. During the last decade, improved surgical and anesthetic management has reduced intraoperative blood loss and blood product transfusions. A first report from our group published in 2005 described a mean intraoperative transfusion rate of 0.3 red blood cell (RBC) unit per patient for 61 consecutive OLTs. Of these patients, 80.3% did not receive any blood product. The interventions leading to those results were a combination of fluid restriction, phlebotomy, liberal use of vasopressor medications, and avoidance of preemptive transfusions of fresh frozen plasma. This is a follow-up observational study, covering 500 consecutive OLTs. METHODS: Five hundred consecutive OLTs were studied. The transfusion rate of the first 61 OLTs was compared with that of the last 439 OLTs. Furthermore, multivariate logistic regression was used to determine the main predictors of intraoperative blood transfusion. RESULTS: A mean (SD) of 0.5 (1.3) RBC unit was transfused per patient for the 500 OLTs, and 79.6% of them did not receive any blood product. There was no intergroup difference except for the final hemoglobin (Hb) value, which was higher for the last 439 OLTs compared with the previously reported smaller study (94 [20] vs. 87 [20] g/L). Two variables, starting Hb value and phlebotomy, correlated with OLT without transfusion. CONCLUSIONS: In our center, a low intraoperative transfusion rate could be maintained throughout 500 consecutive OLTs. Bleeding did not correlate with the severity of the recipient's disease. The starting Hb value showed the strongest correlation with OLT without RBC transfusion.
Subjects
Blood Component Transfusion/mortality , Blood Loss, Surgical/mortality , Erythrocyte Transfusion/mortality , Liver Transplantation/mortality , Adult , Aged , Female , Follow-Up Studies , Humans , Logistic Models , Male , Middle Aged , Morbidity , Multivariate Analysis , Plasma , Retrospective Studies
ABSTRACT
BACKGROUND: Historically, orthotopic liver transplantation (OLT) has been associated with major blood loss and the need for blood product transfusions. Activation of the fibrinolytic system can contribute significantly to bleeding. Prophylactic administration of antifibrinolytic agents has been found to reduce blood loss. METHODS: The efficacy of two antifibrinolytic compounds, aprotinin (AP) and tranexamic acid (TA), was compared in OLT. Four hundred consecutive OLTs were studied: 300 patients received AP and 100 received TA. Multivariate logistic regression analysis was used to identify independent predictors of intraoperative transfusion requirement and 1-year patient mortality. RESULTS: There was no intergroup difference in intraoperative blood loss (1082 ± 1056 vs. 1007 ± 790 mL), red blood cell transfusion per patient (0.5 ± 1.4 vs. 0.5 ± 1.0 units), final hemoglobin (Hb) concentration (93 ± 20 vs. 95 ± 22 g/L), the percentage of OLT cases requiring no blood product administration (80% vs. 82%), or 1-year survival (85.1% vs. 87.4%). Serum creatinine concentrations were also the same (116 ± 55 vs. 119 ± 36 µmol/L) 1 year after surgery. Two variables, starting Hb and phlebotomy, correlated with the two primary outcome measures (transfusion and 1-year survival). CONCLUSIONS: In our experience, administration of AP was not superior to TA with regard to blood loss and blood product transfusion requirement during OLT. In addition, we found no difference between the groups in the 1-year survival rate and renal function. Furthermore, we suggest that the starting Hb concentration should be considered when prioritizing patients on the waiting list and planning perioperative care for OLT recipients.
Subjects
Antifibrinolytic Agents/therapeutic use , Aprotinin/therapeutic use , Erythrocyte Transfusion , Liver Transplantation/mortality , Tranexamic Acid/therapeutic use , Adult , Aged , Female , Humans , Logistic Models , Male , Middle Aged , Survival Rate
ABSTRACT
BACKGROUND: The optimal modality of pain management after liver resection remains controversial. Epidural analgesia is often avoided because of transient coagulopathy and the associated risk of epidural hematoma. Single-dose intrathecal morphine has been shown to be an effective alternative in open liver resection. The purpose of this trial was to compare the analgesic efficacy of intrathecal morphine and fentanyl versus intrathecal bupivacaine 0.5%, morphine, and fentanyl for patients undergoing laparoscopic liver resection. METHODS: This prospective, randomized, controlled, double-blind trial compared morphine consumption between a control (CTRL) group receiving a spinal injection of fentanyl 15 µg and morphine 0.4 mg and a bupivacaine (BUPI) group receiving the same medications in addition to bupivacaine 0.5% (15 mg). Forty patients scheduled for laparoscopic liver resection were enrolled. The primary outcome was intravenous patient-controlled analgesia morphine consumption measured at 6, 9, 12, 18, 24, 36, and 48 hrs after spinal injection. Secondary outcomes were pain scores at rest and with movement, sedation, nausea, pruritus, and respiratory rate. RESULTS: Cumulative doses of morphine were significantly lower for all time intervals in the BUPI group: 54 (SD 30) versus 94 (SD 47) mg (P = 0.01) at 48 hrs. Morphine consumption was significantly lower for each time interval up to 18 hrs. Pain scores with movement were significantly lower in the BUPI group up to 24 hrs after injection. Pain score at rest was significantly lower in the BUPI group 9 hrs after injection. There were no differences in adverse effects. CONCLUSIONS: The addition of bupivacaine to intrathecal morphine and fentanyl significantly reduced intravenous morphine consumption after laparoscopic liver resection.
Subjects
Analgesics, Opioid/therapeutic use , Anesthetics, Local/therapeutic use , Bupivacaine/therapeutic use , Fentanyl/therapeutic use , Laparoscopy , Liver/surgery , Morphine/therapeutic use , Pain, Postoperative/drug therapy , Adolescent , Adult , Aged , Analgesia, Patient-Controlled , Analgesics, Opioid/administration & dosage , Anesthetics, Local/administration & dosage , Bupivacaine/administration & dosage , Double-Blind Method , Female , Fentanyl/administration & dosage , Humans , Injections, Spinal , Male , Middle Aged , Morphine/administration & dosage , Pain Measurement , Prospective Studies , Young Adult
ABSTRACT
OBJECTIVES: Continuous epidural analgesia may be considered in liver resection but is often avoided because of possible coagulopathies and the risk of epidural hematoma in the postoperative period. On the other hand, there is no coagulation defect during the surgery itself, and effective prevention of postoperative pain may require continuous sensory blockade throughout the surgical procedure. METHODS: A prospective, randomized, double-blind study was conducted to evaluate the efficacy of intraoperative epidural anesthesia on postoperative morphine consumption via patient-controlled analgesia after liver surgery in 2 groups of patients. One group (epidural) received, intraoperatively, a thoracic epidural bupivacaine infusion (0.5% at 3 mL/hr) added to preoperative intrathecal morphine (0.5 mg) and fentanyl (15 µg). The other group (placebo) was administered the same intrathecal narcotics but with a sham epidural. Forty-four patients scheduled for major liver resection (≥2 segments) were recruited. Patient-controlled analgesia morphine consumption, pain at rest and with movement, sedation, nausea, pruritus, and respiratory rate were evaluated at 6, 9, 12, 18, 24, 36, and 48 hrs after intrathecal morphine injection. RESULTS: Patients in the placebo group consumed twice as much morphine during each time interval as patients in the epidural group (at 48 hrs: 123 [SD, 46] vs 59 [SD, 25] mg; P < 0.0001). Pain evaluation on a visual analog scale at rest and on movement was lower in the epidural group (P = 0.017 and P = 0.037). CONCLUSION: Intraoperative thoracic epidural infusion of bupivacaine, added to intrathecal morphine, decreased postoperative morphine consumption with better pain relief compared with placebo.
Subjects
Analgesia, Epidural/methods , Anesthetics, Local , Bupivacaine , Hepatectomy/methods , Pain, Postoperative/prevention & control , Analgesia, Patient-Controlled/methods , Analgesics, Opioid/therapeutic use , Double-Blind Method , Female , Humans , Intraoperative Care , Male , Middle Aged , Morphine/therapeutic use , Pain Measurement , Prospective Studies , Thoracic Vertebrae , Time Factors , Treatment Outcome
ABSTRACT
BACKGROUND: A regimen of fluid restriction, phlebotomy, vasopressors, and strict, protocol-guided product replacement has been associated with low blood product use during orthotopic liver transplantation. However, the physiologic basis of this strategy remains unclear. We hypothesized that a reduction of intravascular volume by phlebotomy would cause a decrease in portal venous pressure (PVP), which would be sustained during subsequent phenylephrine infusion, possibly explaining reduced bleeding. Because phenylephrine may increase central venous pressure (CVP), we questioned the validity of CVP as a correlate of cardiac filling in this context and compared it with other pulmonary artery catheter and transesophageal echocardiography-derived parameters. In particular, because optimal views for echocardiographic estimation of preload and stroke volume are not always applicable during liver transplantation, we evaluated the use of transmitral flow (TMF) early peak (E) velocity as a surrogate. METHODS: In study 1, the changes in directly measured PVP and CVP were recorded before and after phlebotomy and phenylephrine infusion in 10 patients near the end of the dissection phase of liver transplantation. In study 2, transesophageal echocardiography-derived TMF velocity in early diastole was measured in 20 patients, and the changes were compared with changes in CVP, pulmonary artery pressure (PAP), pulmonary capillary wedge pressure (PCWP), cardiac output (CO), and calculated systemic vascular resistance (SVR) at the following times: postinduction, postphlebotomy, preclamping of the inferior vena cava, during clamping, and postunclamping. RESULTS: Phlebotomy decreased PVP along with CO, PAP, PCWP, CVP, and TMF E velocity. Phenylephrine given after phlebotomy increased CVP, SVR, and arterial blood pressure but had no significant effect on CO, PAP, PCWP, or PVP. 
The change in TMF E velocity correlated well with the change in CO (Pearson correlation coefficient 95% confidence interval 0.738-0.917, P ≤ 0.015) but less well with the change in PAP (0.554-0.762, P ≤ 0.012) and PCWP (0.576-0.692, P ≤ 0.008). TMF E velocity did not correlate significantly with CVP or calculated SVR. CONCLUSION: Phlebotomy during the dissection phase of liver transplantation decreased PVP, which was unaffected when phenylephrine infusion was used to restore systemic arterial pressure. This may contribute to a decrease in operative blood loss. CVP often increased in response to phenylephrine infusion and did not seem to reflect cardiac filling. The changes in TMF E velocity correlated well with the changes in CO, PAP, and PCWP during liver transplantation but not with the changes in CVP.
Subjects
Blood Loss, Surgical/prevention & control , Hemodynamics/drug effects , Liver Transplantation/methods , Phenylephrine/administration & dosage , Phlebotomy , Portal Pressure/drug effects , Vasoconstrictor Agents/administration & dosage , Cardiac Output/drug effects , Catheterization, Swan-Ganz , Central Venous Pressure/drug effects , Echocardiography, Doppler, Color , Echocardiography, Transesophageal , Humans , Infusions, Intravenous , Liver Transplantation/adverse effects , Middle Aged , Mitral Valve/diagnostic imaging , Monitoring, Intraoperative/methods , Pilot Projects , Pulmonary Wedge Pressure/drug effects , Vascular Resistance/drug effects
ABSTRACT
OBJECTIVES: Orthotopic liver transplantation (OLT) may be associated with major blood loss and correspondingly large transfusion requirements. We had previously developed a model capable of predicting the probability of packed red blood cell (PRBC) transfusion. We tested the ability of that model to predict the need for PRBC transfusion after converting it into a nomogram, a more user-friendly clinical format. The nomogram was then validated in an independent cohort of 109 prospectively gathered OLTs. MATERIALS AND METHODS: A total of 515 OLTs were performed by a group of 17 anesthesiologists and 7 hepatobiliary surgeons. The initial series of 406 OLTs was used for model development; the remaining 109 OLTs served as an independent validation cohort. Logistic regression analyses addressed the relationship between the three previously identified predictors of the likelihood of PRBC transfusion and the actual rate of PRBC transfusion. The predictors consisted of plasma transfusion status, phlebotomy, and immediate preoperative hemoglobin value. The regression coefficients from the multivariable logistic regression model that included all three predictors were used to develop a nomogram predicting the individual probability of PRBC transfusion. RESULTS: In univariable models, transfusion of plasma (odds ratio [OR] 15.0, P < 0.001) increased the rate of PRBC transfusion. Conversely, phlebotomy (OR 0.06, P < 0.001) and a high starting hemoglobin level (OR 0.95, P < 0.001) had a protective effect. In the multivariable model, all three variables reached independent predictor status (P < 0.001). The bootstrap-adjusted area under the curve (AUC) of the model was 89.8%. CONCLUSION: Our nomogram represents the first model capable of predicting the individual risk of PRBC transfusion at OLT.
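For readers unfamiliar with how a nomogram relates to its underlying logistic model, the general mechanics can be sketched as follows. This is an illustration only, not the study's actual model: the intercept is invented, and the slopes are taken as the logarithms of the univariable odds ratios quoted above rather than the multivariable coefficients that built the published nomogram.

```python
import math

# Illustrative coefficients only. Slopes are the logs of the univariable
# odds ratios reported in the abstract (plasma OR 15.0, phlebotomy OR 0.06,
# Hb OR 0.95 per g/L); the intercept B0 is a hypothetical placeholder.
B0 = 2.0
B_PLASMA = math.log(15.0)   # plasma transfused (1) vs. not (0)
B_PHLEB = math.log(0.06)    # phlebotomy performed (1) vs. not (0)
B_HB = math.log(0.95)       # per g/L of starting hemoglobin

def prbc_probability(plasma: int, phlebotomy: int, hb_g_per_l: float) -> float:
    """Predicted probability of PRBC transfusion from a logistic model."""
    logit = B0 + B_PLASMA * plasma + B_PHLEB * phlebotomy + B_HB * hb_g_per_l
    return 1.0 / (1.0 + math.exp(-logit))
```

A nomogram is simply a graphical rendering of this computation: each predictor's term is mapped onto a points axis, and the summed points are mapped back onto the predicted probability, so the clinician never evaluates the equation directly.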
Subjects
Blood Loss, Surgical/prevention & control , Erythrocyte Transfusion , Liver Transplantation/adverse effects , Models, Biological , Adult , Area Under Curve , Blood Component Transfusion/adverse effects , Hemoglobins/analysis , Humans , Logistic Models , Middle Aged , Nomograms , Odds Ratio , Phlebotomy/adverse effects , Predictive Value of Tests , Prospective Studies , Reproducibility of Results , Retrospective Studies , Risk Assessment , Risk Factors
ABSTRACT
BACKGROUND: Orthotopic liver transplantation has traditionally been associated with major blood loss and the need for allogeneic blood product transfusions. In recent years, improvements in surgical and anesthetic techniques have greatly decreased the amount of blood products transfused; we have published a median of 0 for all intraoperative blood products transfused. Some authors argue that these results might be possible merely because of a relatively healthy cohort in terms of Model for End-Stage Liver Disease (MELD) score. The MELD score can be adjusted for certain conditions (hepatocellular carcinoma, hemodialysis, hepatopulmonary syndrome, and amyloidosis) and was not adjusted in these series. The goal of this work was to verify the MELD score according to US standards and to look for any link between the MELD score and the transfusion rate. METHODS: Three hundred fifty consecutive liver transplantations were studied. The MELD score was adjusted according to US standards. Patients were divided into two groups according to the median MELD score. Blood loss and transfusion rate were determined for the two groups, and logistic regression models were used to identify any link with transfusion of red blood cell (RBC) units. RESULTS: The MELD score was 19 ± 9 before adjustment and 22 ± 10 after. A mean of 0.5 ± 1.3 RBC units/patient was transfused intraoperatively, with 80.6% of cases receiving no blood products. There was no difference in blood loss (999 ± 670 vs. 1017 ± 885 mL) or transfusion rate (0.4 ± 1.2 vs. 0.5 ± 1.4 RBC units/patient) between the two MELD groups (<21 or ≥21) or for any of the score's components (creatinine, bilirubin, and international normalized ratio). The logistic regression analysis found that only two variables were linked to RBC transfusion: starting hemoglobin value and phlebotomy. CONCLUSION: In this series, the MELD score was as high as in US series and did not predict blood loss or blood product requirement during liver transplantation. If the MELD system is to be used to prioritize orthotopic liver transplantation, it should be revisited, and the starting hemoglobin value should be added to the equation.
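The three MELD components named above combine in a fixed logarithmic formula. As a point of reference only (this sketch reflects the standard pre-2016 UNOS calculation, not the study-specific US adjustments for conditions such as hepatocellular carcinoma):

```python
import math

def meld_score(creatinine_mg_dl: float, bilirubin_mg_dl: float, inr: float) -> int:
    """Standard (pre-2016) MELD score from its three laboratory components.

    UNOS convention: each value is floored at 1.0 so the logarithms stay
    non-negative, and creatinine is capped at 4.0 mg/dL.
    """
    creat = min(max(creatinine_mg_dl, 1.0), 4.0)
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    score = (9.57 * math.log(creat)
             + 3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 6.43)
    return round(score)

# With all components at their floor of 1.0, every log term vanishes and
# the score is the rounded intercept: meld_score(1.0, 1.0, 1.0) -> 6
```

Because creatinine is capped at 4.0 mg/dL, further renal deterioration beyond that point no longer raises the score, which is one reason adjustments (e.g., for hemodialysis) exist.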