ABSTRACT
BACKGROUND: Primary non-function (PNF) and early allograft failure (EAF) after liver transplantation (LT) seriously affect patient outcomes. In clinical practice, effective prognostic tools for the early identification of recipients at high risk of PNF and EAF are urgently needed. Recently, the Model for Early Allograft Function (MEAF), the PNF score by King's College (King-PNF), and the Balance-and-Risk-Lactate (BAR-Lac) score were developed to assess the risks of PNF and EAF. This study aimed to externally validate and compare the prognostic performance of these three scores for predicting PNF and EAF. METHODS: This retrospective study included 720 patients who underwent primary LT between January 2015 and December 2020. The MEAF, King-PNF, and BAR-Lac scores were compared using receiver operating characteristic (ROC) curve, net reclassification improvement (NRI), and integrated discrimination improvement (IDI) analyses. RESULTS: Of the 720 patients, 28 (3.9%) developed PNF and 67 (9.3%) developed EAF within 3 months. The overall early allograft dysfunction (EAD) rate was 39.0%. The 3-month patient mortality was 8.6%, and the 1-year graft-failure-free survival was 89.2%. The median MEAF, King-PNF, and BAR-Lac scores were 5.0 (3.5-6.3), -2.1 (-2.6 to -1.2), and 5.0 (2.0-11.0), respectively. For predicting PNF, the MEAF and King-PNF scores had excellent areas under the curve (AUCs) of 0.871 and 0.891, superior to BAR-Lac (AUC = 0.830). The NRI and IDI analyses confirmed that the King-PNF score performed best in predicting PNF, whereas MEAF was a better predictor of EAD. The EAF risk curve and the 1-year graft-failure-free survival curve showed that King-PNF was superior to the MEAF and BAR-Lac scores for stratifying the risk of EAF. CONCLUSIONS: MEAF, King-PNF, and BAR-Lac were validated as practical and effective risk assessment tools for PNF. The King-PNF score outperformed MEAF and BAR-Lac in predicting PNF and EAF within 6 months.
The BAR-Lac score has a distinct advantage in predicting PNF because it requires no post-transplant variables. Proper use of these scores will help identify PNF early, standardize the grading of EAF, and guide the selection of clinical endpoints in related studies.
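The score comparisons above rest on ROC analysis. As a minimal, self-contained sketch (on invented synthetic data, not the study cohort), the AUC of a risk score can be computed from the Mann-Whitney U statistic, i.e. the probability that a randomly chosen PNF case scores higher than a randomly chosen non-PNF case:

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores above a negative one."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Count all positive-vs-negative pairwise comparisons; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic example: a stronger and a weaker predictor of a rare event,
# with an event rate loosely resembling the PNF rate reported above.
rng = np.random.default_rng(0)
labels = np.r_[np.ones(30, dtype=bool), np.zeros(270, dtype=bool)]
score_a = np.r_[rng.normal(2, 1, 30), rng.normal(0, 1, 270)]  # stronger score
score_b = np.r_[rng.normal(1, 1, 30), rng.normal(0, 1, 270)]  # weaker score
print(auc_mann_whitney(score_a, labels), auc_mann_whitney(score_b, labels))
```

In practice, paired AUC comparisons such as those reported above are usually tested with DeLong's method, and NRI/IDI additionally require fitted risk probabilities from each model rather than raw scores.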
ABSTRACT
We investigated the clinical information contained in the beat-to-beat fluctuation of the arterial blood pressure (ABP) waveform morphology. We proposed the Dynamical Diffusion Map (DDMap) algorithm to quantify the variability of morphology. The underlying physiology may reflect compensatory mechanisms involving complex interactions among the various processes that regulate the cardiovascular system. Because liver transplant surgery comprises distinct periods, we investigated the algorithm's clinical behavior in the different surgical steps. Our study used the DDMap algorithm, based on unsupervised manifold learning, to obtain a quantitative index of the beat-to-beat variability of morphology. We examined the correlation between the variability of ABP morphology and disease acuity as indicated by Model for End-Stage Liver Disease (MELD) scores, postoperative laboratory data, and four early allograft failure (EAF) scores. Among the 85 enrolled patients, the variability of morphology obtained during the presurgical phase correlated best with MELD-Na scores. The neohepatic-phase variability of morphology was associated with EAF scores as well as postoperative bilirubin levels, international normalized ratio, aspartate aminotransferase levels, and platelet count. Furthermore, the variability of morphology showed more associations with the above clinical conditions than common BP measures and their BP variability indices. The variability of morphology obtained during the presurgical phase is indicative of patient acuity, whereas that obtained during the neohepatic phase is indicative of short-term surgical outcomes.
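A diffusion-map construction of this general kind can be sketched as follows. This is an illustrative reimplementation under simple assumptions (Gaussian kernel, fixed bandwidth, synthetic sine-shaped pulses, spread in the embedding as the variability index), not the authors' DDMap code:

```python
import numpy as np

def diffusion_map(beats, eps=1.0, n_components=2):
    """Embed beat-to-beat waveform vectors with a basic diffusion map.
    beats: (n_beats, n_samples) array, one row per pulse waveform."""
    d2 = np.square(beats[:, None, :] - beats[None, :, :]).sum(-1)  # pairwise sq. distances
    k = np.exp(-d2 / eps)                    # Gaussian affinity kernel
    p = k / k.sum(axis=1, keepdims=True)     # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(p)
    order = np.argsort(-vals.real)
    idx = order[1:1 + n_components]          # skip the trivial eigenvalue 1
    return vecs.real[:, idx] * vals.real[idx]

def morphology_variability(beats, eps=1.0):
    """Scalar variability index: mean spread of beats in the embedding."""
    emb = diffusion_map(beats, eps)
    return float(np.sqrt(np.square(emb - emb.mean(axis=0)).sum(axis=1)).mean())

# Synthetic pulse waveforms (illustration only): amplitude jitter alters
# beat-to-beat morphology, much more so in the "labile" series.
t = np.linspace(0, 1, 50)
rng = np.random.default_rng(1)
jitter = rng.normal(size=40)
stable = np.array([np.sin(np.pi * t) * (1 + 0.01 * a) for a in jitter])
labile = np.array([np.sin(np.pi * t) * (1 + 0.20 * a) for a in jitter])
print(morphology_variability(stable), morphology_variability(labile))
```

With a fixed kernel bandwidth, near-identical beats yield a nearly uniform Markov matrix (eigenvalues close to zero beyond the trivial one), so the stable series collapses toward a point in the embedding while the labile series spreads out.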
Subjects
End-Stage Liver Disease, Liver Transplantation, Humans, Arterial Pressure, End-Stage Liver Disease/surgery, Bilirubin, Severity of Illness Index, Blood Pressure, Retrospective Studies
ABSTRACT
Background and Aims: The increasing utilization of extended criteria donors has led to a rising rate of early allograft failure after liver transplantation. However, a consensus definition of early allograft failure is lacking. Methods: A retrospective, multicenter study was performed to validate the Liver Graft Assessment Following Transplantation (L-GrAFT) risk model in a Chinese cohort of 942 adult patients undergoing primary liver transplantation at three Chinese centers. L-GrAFT (L-GrAFT7 and L-GrAFT10) was compared with existing models: the Early Allograft Failure Simplified Estimation (EASE) score, the Model for Early Allograft Function (MEAF), and the Early Allograft Dysfunction (EAD) model. Univariate and multivariate logistic regression were used to identify risk factors for the L-GrAFT7 high-risk group. Results: L-GrAFT7 had an area under the curve (AUC) of 0.85 in predicting 90-day graft survival, significantly superior to MEAF (AUC = 0.78, p = 0.044) and EAD (AUC = 0.78, p = 0.006), while there was no statistically significant difference between the predictive abilities of L-GrAFT7 and EASE (AUC = 0.84, p > 0.05). Furthermore, L-GrAFT7 maintained good predictive ability in the subgroup of high donor risk index (DRI) cases (AUC = 0.83; vs. MEAF, p = 0.007; vs. EAD, p = 0.014) and in recipients of donors after cardiac death (AUC = 0.92; vs. EAD, p < 0.001). On multivariate analysis, pretransplant bilirubin level, units of packed red blood cells, and the DRI score were independent risk factors for the L-GrAFT7 high-risk group. Conclusions: The accuracy of L-GrAFT7 in predicting early allograft failure was validated in a Chinese multicenter cohort, indicating its potential to serve as an accurate endpoint in clinical practice and in translational studies of machine perfusion.
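The multivariate step above can be illustrated with a plain logistic regression fit. This is a hypothetical sketch on synthetic data (the covariates and effect sizes are invented, not the study's pretransplant variables), where exponentiated coefficients give the odds ratios used to flag independent risk factors:

```python
import numpy as np

def logistic_fit(X, y, lr=0.1, n_iter=5000):
    """Multivariate logistic regression via batch gradient descent (numpy only).
    Returns [intercept, coef...]; exp(coef) gives per-unit odds ratios."""
    Xb = np.c_[np.ones(len(X)), X]          # prepend an intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))   # predicted probability of high risk
        w -= lr * Xb.T @ (p - y) / len(y)   # gradient of the mean log-loss
    return w

# Synthetic cohort (invented for illustration): the first hypothetical
# covariate truly raises the odds of the high-risk group, the second does not.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 2))
true_logit = 1.5 * X[:, 0] - 0.5
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
coef = logistic_fit(X, y)
odds_ratios = np.exp(coef[1:])              # OR > 1 flags a candidate risk factor
print(odds_ratios)
```

A full analysis would also report confidence intervals and p-values for each odds ratio (e.g. from the Fisher information at the fitted coefficients), which this sketch omits.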