Results 1 - 20 of 55
1.
Clin Transplant ; 38(5): e15319, 2024 May.
Article in English | MEDLINE | ID: mdl-38683684

ABSTRACT

OBJECTIVE: Longer end-stage renal disease time has been associated with inferior kidney transplant outcomes. However, the contribution of transplant evaluation is uncertain. We explored the relationship between time from evaluation to listing (ELT) and transplant outcomes. METHODS: This retrospective study included 2535 adult kidney transplants from 2000 to 2015. Kaplan-Meier survival curves, log-rank tests, and Cox regression models were used to compare transplant outcomes. RESULTS: Patient survival for both deceased donor (DD) recipients (p < .001) and living donor (LD) recipients (p < .0001) was significantly higher when ELT was less than 3 months. The risks of ELT appeared to be mediated by other risks in DD recipients, as adjusted models showed no associated risk of graft loss or death in DD recipients. For LD recipients, ELT remained a risk factor for patient death after covariate adjustment. Each month of ELT was associated with an increased risk of death (HR = 1.021, p = .04) but not graft loss in LD recipients in adjusted models. CONCLUSIONS: Kidney transplant recipients with longer ELT times had higher rates of death after transplant, and ELT was independently associated with an increased risk of death for LD recipients. Investigations on the impact of pretransplant evaluation on post-transplant outcomes can inform transplant policy and practice.
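As a rough illustration of what the reported per-month hazard ratio of 1.021 implies under a proportional-hazards model (illustrative arithmetic only, not study data), the relative risk compounds multiplicatively with each additional month of ELT:

```python
# Illustrative compounding of the per-month hazard ratio reported above
# (HR = 1.021 per month of ELT); arithmetic example only, not study data.
hr_per_month = 1.021
for months in (6, 12, 24):
    print(months, "months:", round(hr_per_month ** months, 2))
# ~1.13 at 6 months, ~1.28 at 12 months, ~1.65 at 24 months
```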


Subject(s)
Graft Survival , Kidney Failure, Chronic , Kidney Transplantation , Waiting Lists , Humans , Kidney Transplantation/mortality , Kidney Transplantation/adverse effects , Female , Male , Retrospective Studies , Middle Aged , Kidney Failure, Chronic/surgery , Follow-Up Studies , Risk Factors , Waiting Lists/mortality , Prognosis , Survival Rate , Adult , Graft Rejection/etiology , Graft Rejection/mortality , Tissue Donors/supply & distribution , Glomerular Filtration Rate , Kidney Function Tests , Living Donors/supply & distribution , Tissue and Organ Procurement , Time Factors , Postoperative Complications
2.
Transplant Direct ; 8(7): e1343, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35747522

ABSTRACT

Recent events of racial injustice prompted us to study the potential impact of removing race from the Kidney Donor Risk Index (KDRI) calculator. Methods: We used Scientific Registry of Transplant Recipients data to analyze outcomes of 66 987 deceased-donor kidney transplants performed in the United States between 2010 and 2016. Graft failure (GF) was defined as death, return to dialysis, or need for repeat transplant. We compared the original KDRI and a race-free KDRI (Black donor coefficient zeroed out in the KDRI formula) with respect to recategorization of perceived GF risk (based on KDPI categories: ≤20, 21-34, 35-85, ≥86), risk discrimination (using the C statistic) and predictive accuracy (using the Brier score), and GF risk prediction (using Cox regression on time-to-GF). We used logistic regression to study the impact of donor race on discard probability. Results: There were 10 949 GFs (16.3% of recipients), and 1893 (17% of GFs) were among recipients of kidneys from Black donors. The use of the race-free KDRI resulted in reclassification of 49% of kidneys from Black donors into lower GF risk categories. The impact on GF risk discrimination was minimal, with a relative decrease in the C statistic of 0.16% and a change in GF predictive accuracy of 0.07%. For a given recipient/donor combination, transplants from Black (compared with non-Black) donors are estimated to decrease predicted graft survival at 1 y by 0.3%-3% and at 5 y by 1%-6%. Kidneys from Black donors are significantly more likely to be discarded (odds ratio adjusted for KDRI except race = 1.24). We estimate that an equal discard probability for Black and non-Black donors would yield 70 additional kidney transplants annually from Black donors. Conclusions: Use of the race-free KDRI did not impact GF risk discrimination or predictive accuracy and may lower discard of kidneys from Black donors. We recommend use of the race-free KDRI calculator, acknowledging the possibility of miscalculating GF risk in a small proportion of kidneys from Black donors.
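A minimal sketch of the race-free KDRI idea described above: the KDRI is the exponential of a weighted sum of donor characteristics, and the race-free variant simply zeroes out the Black-donor term. The coefficient values and field names below are illustrative placeholders, not the published KDRI weights.

```python
import math

# Sketch of a KDRI-style relative risk with and without the Black-donor term.
# Weights are illustrative placeholders, not the published KDRI coefficients.
BETAS = {
    "age_per_year_over_40": 0.013,
    "black_donor": 0.18,      # the term zeroed out in the race-free variant
    "hypertension": 0.13,
    "diabetes": 0.13,
}

def kdri(donor, race_free=False):
    """Relative graft-failure risk versus a reference donor (1.0 = reference)."""
    lp = BETAS["age_per_year_over_40"] * (donor["age"] - 40)
    lp += BETAS["hypertension"] * donor["hypertension"]
    lp += BETAS["diabetes"] * donor["diabetes"]
    if not race_free:
        lp += BETAS["black_donor"] * donor["black"]
    return math.exp(lp)

donor = {"age": 52, "black": 1, "hypertension": 1, "diabetes": 0}
print(round(kdri(donor), 2), round(kdri(donor, race_free=True), 2))
# The race-free value is lower, which is why ~49% of kidneys from Black donors
# were reclassified into lower perceived-risk categories in the study.
```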

3.
Transplantation ; 105(12): 2596-2605, 2021 12 01.
Article in English | MEDLINE | ID: mdl-33950636

ABSTRACT

BACKGROUND: The 125I-iothalamate clearance and 99mTc diethylenetriamine-pentaacetic acid (99mTc-DTPA) split scan nuclear medicine studies are used among living kidney donor candidates to determine measured glomerular filtration rate (mGFR) and split scan ratio (SSR). The computerized tomography-derived cortical volume ratio (CVR) is a novel measurement of split kidney function and can be combined with predonation estimated GFR (eGFR) or mGFR to predict postdonation kidney function. Whether predonation SSR predicts postdonation kidney function better than predonation CVR and whether predonation mGFR provides additional information beyond predonation eGFR are unknown. METHODS: We performed a single-center retrospective analysis of 204 patients who underwent kidney donation between June 2015 and March 2019. The primary outcome was 1-y postdonation eGFR. Model bases were created from a measure of predonation kidney function (mGFR or eGFR) multiplied by the proportion that each nondonated kidney contributed to predonation kidney function (SSR or CVR). Multivariable elastic net regression with 1000 repetitions was used to determine the mean and 95% confidence interval of R2, root mean square error (RMSE), and proportion overprediction ≥15 mL/min/1.73 m2 between models. RESULTS: In validation cohorts, eGFR-CVR models performed best (R2, 0.547; RMSE, 9.2 mL/min/1.73 m2, proportion overprediction 3.1%), whereas mGFR-SSR models performed worst (R2, 0.360; RMSE, 10.9 mL/min/1.73 m2, proportion overprediction 7.2%) (P < 0.001 for all comparisons). CONCLUSIONS: These findings suggest that predonation CVR may serve as an acceptable alternative to SSR during donor evaluation and furthermore, that a model based on CVR and predonation eGFR may be superior to other methods.
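A small numeric sketch of the "model basis" described above: a predonation measure of kidney function multiplied by the fraction of that function contributed by the kidney the donor retains (from the SSR or CVR). The numbers are invented for illustration; in the study this basis then entered an elastic net regression with other covariates to predict 1-year postdonation eGFR.

```python
# Illustrative "model basis" calculation; values are invented, not study data.
predonation_egfr = 95.0            # mL/min/1.73 m2
retained_kidney_fraction = 0.52    # e.g., cortical volume ratio of the kept kidney

egfr_cvr_basis = predonation_egfr * retained_kidney_fraction
print(f"eGFR-CVR model basis: {egfr_cvr_basis:.1f} mL/min/1.73 m2")
# An mGFR-SSR basis would be formed the same way, substituting measured GFR and
# the nuclear medicine split scan ratio.
```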


Subject(s)
Kidney Transplantation , Nuclear Medicine , Glomerular Filtration Rate , Humans , Iodine Radioisotopes , Kidney/diagnostic imaging , Kidney Transplantation/adverse effects , Kidney Transplantation/methods , Living Donors , Retrospective Studies , Tomography, X-Ray Computed
4.
Ann Transplant ; 25: e922178, 2020 Sep 15.
Article in English | MEDLINE | ID: mdl-32929057

ABSTRACT

BACKGROUND Peripheral vascular disease and iliac arterial calcification are prevalent in kidney transplant candidates and jeopardize graft outcomes. We report our experience with computed tomography (CT) screening for iliac arterial calcification. MATERIAL AND METHODS We retrospectively reviewed electronic medical records of 493 renal transplant candidates from protocol initiation in 2014. Non-contrast CT was performed or retrospectively reviewed if any of the following criteria were present: diabetes, ESRD >6 years, 25 pack-years of smoking or current smoker, diagnosis of peripheral vascular disease, parathyroidectomy, and coronary artery disease intervention. Differences in evaluation and transplant outcomes between groups were compared with chi-squared analysis. Multivariate logistic regression identified predictive criteria for presence of iliac arterial calcification. RESULTS Of 493 candidates evaluated, CTs were reviewed in 346 (70.2%). Iliac arterial calcification was identified in 119 screened candidates (34.4%). Of candidates with iliac arterial calcification identified on CT, 16 (13.4%) were excluded for CT findings, and 9 (7.6%) had their surgical management plan changed. Overall, 91 (76.5%) candidates with iliac arterial calcification on CT were approved, compared to 203 (89.4%) without calcification (P<0.001). The percentage of screened patients with iliac arterial calcification on CT increased with increasing age (P<0.0005). Age and diabetes mellitus were predictive of calcification. CONCLUSIONS Many kidney transplant candidates are at risk for iliac arterial calcification, although such calcification does not prevent transplantation for most candidates who have it. Algorithmic pre-operative screening has clinical value in determining transplant candidacy and potentially improving postoperative outcomes in patients requiring kidney transplantation.
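Restated as a simple predicate, the screening protocol above triggers a non-contrast CT (or review of prior imaging) when any listed criterion is present. The field names and the reading of "25 pack-years" as a ≥25 threshold are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical encoding of the CT screening criteria listed in the abstract.
def needs_iliac_ct(candidate: dict) -> bool:
    return any([
        candidate.get("diabetes", False),
        candidate.get("esrd_years", 0) > 6,
        candidate.get("pack_years", 0) >= 25 or candidate.get("current_smoker", False),
        candidate.get("peripheral_vascular_disease", False),
        candidate.get("parathyroidectomy", False),
        candidate.get("cad_intervention", False),
    ])

print(needs_iliac_ct({"esrd_years": 8}))   # True: ESRD > 6 years
print(needs_iliac_ct({"pack_years": 10}))  # False: no criterion met
```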


Subject(s)
Iliac Artery/diagnostic imaging , Kidney Transplantation , Vascular Calcification/diagnostic imaging , Female , Humans , Male , Middle Aged , Retrospective Studies , Tomography, X-Ray Computed
5.
Lifetime Data Anal ; 26(3): 451-470, 2020 07.
Article in English | MEDLINE | ID: mdl-31576491

ABSTRACT

In evaluating the benefit of a treatment on survival, it is often of interest to compare post-treatment survival with the survival function that would have been observed in the absence of treatment. In many practical settings, treatment is time-dependent in the sense that subjects typically begin follow-up untreated, with some going on to receive treatment at some later time point. In observational studies, treatment is not assigned at random and, therefore, may depend on various patient characteristics. We have developed semi-parametric matching methods to estimate the average treatment effect on the treated (ATT) with respect to survival probability and restricted mean survival time. Matching is based on a prognostic score which reflects each patient's death hazard in the absence of treatment. Specifically, each treated patient is matched with multiple as-yet-untreated patients with similar prognostic scores. The matched sets do not need to be of equal size, since each matched control is weighted in order to preserve risk score balancing across treated and untreated groups. After matching, we estimate the ATT non-parametrically by contrasting pre- and post-treatment weighted Nelson-Aalen survival curves. A closed-form variance is proposed and shown to work well in simulation studies. The proposed methods are applied to national organ transplant registry data.
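A rough sketch of the weighted Nelson-Aalen step described above, in which each matched subject carries a weight so that prognostic scores balance across treated and untreated groups; the ATT contrast would then compare such curves for the treated versus matched-control experiences. The data below are synthetic and this is not the authors' code.

```python
import numpy as np

def weighted_nelson_aalen_survival(time, event, weight, eval_times):
    """S(t) = exp(-H(t)), where H is the weighted Nelson-Aalen cumulative hazard."""
    time, event, weight = map(np.asarray, (time, event, weight))
    event_times = np.unique(time[event == 1])
    surv = []
    for t in eval_times:
        H = 0.0
        for ti in event_times[event_times <= t]:
            d_w = weight[(time == ti) & (event == 1)].sum()  # weighted events at ti
            n_w = weight[time >= ti].sum()                   # weighted at-risk total
            H += d_w / n_w
        surv.append(np.exp(-H))
    return np.array(surv)

rng = np.random.default_rng(0)
t = rng.exponential(5.0, size=200)     # synthetic follow-up times
d = rng.integers(0, 2, size=200)       # 1 = death, 0 = censored
w = rng.uniform(0.5, 1.5, size=200)    # matching weights
print(weighted_nelson_aalen_survival(t, d, w, eval_times=[1.0, 3.0, 5.0]))
```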


Subject(s)
Survival Analysis , Treatment Outcome , Computer Simulation , Humans , Prognosis , Statistics, Nonparametric
6.
J Surg Res ; 248: 69-81, 2020 04.
Article in English | MEDLINE | ID: mdl-31865161

ABSTRACT

BACKGROUND: Kidneys from acute renal failure (ARF) donors, expanded criteria donors (ECD), and donation after cardiac death (DCD) donors are often discarded due to concerns for delayed graft function (DGF) and graft failure. Induction immunosuppression may be used to minimize these risks, but practices vary widely. Furthermore, little is known regarding national outcomes of transplant recipients receiving induction immunosuppression for receipt of high-risk kidneys. MATERIALS AND METHODS: Using a center-level retrospective study, deceased donor transplants (115,485) from the Scientific Registry of Transplant Recipients from January 2003 to June 2016 were evaluated. Patients who received induction immunosuppression, including lymphocyte immune globulin, muromonab CD-3, IL-2 receptor antagonist, anti-thymocyte globulin, daclizumab, basiliximab, alemtuzumab, and rituximab, were included. Associations of center-level induction use with acute rejection in the first post-transplant year, graft failure, and patient mortality were evaluated using multivariable Cox and logistic regression. RESULTS: Among all kidneys, an increasing percentage of center-level induction use was associated with lower risk of graft failure, acute rejection, and patient mortality. In recipients of ARF kidneys, the beneficial association of induction with graft failure and acute rejection was greater than in those who received non-ARF kidneys. Marginally greater benefit of induction was seen for acute rejection in ECD compared to standard criteria donor (SCD) recipients and for graft failure in DCD compared to donation after brain death (DBD) recipients. No benefit of induction was detected for patient and graft survival in ECD recipients, acute rejection in DCD recipients, or patient survival in recipients with DGF. No difference in the benefit of induction was detected in any other comparisons. CONCLUSIONS: While seemingly beneficial for recipients of all kidneys, induction has more robust associations with lower graft failure and acute rejection probability for recipients of ARF kidneys. Given the lack of observed benefit for ECD recipients, induction policies should be carefully considered in these patients.


Subject(s)
Death , Immunosuppression Therapy , Kidney Transplantation , Transplantation Immunology , Adult , Allografts , Female , Humans , Male , Middle Aged , Retrospective Studies , Young Adult
8.
Clin Transplant ; 33(6): e13542, 2019 06.
Article in English | MEDLINE | ID: mdl-30887610

ABSTRACT

BACKGROUND: Intraoperative fluid management during laparoscopic donor nephrectomy (LDN) may have a significant effect on donor and recipient outcomes. We sought to quantify variability in fluid management and investigate its impact on donor and recipient outcomes. METHODS: A retrospective review of patients who underwent LDN from July 2011 to January 2016 with paired kidney recipients at a single center was performed. Patients were divided into tertiles of intraoperative fluid management (standard, high, and aggressive). Donor and recipient demographics, intraoperative data, and postoperative outcomes were analyzed. RESULTS: Overall, 413 paired kidney donors and recipients were identified. Intraoperative fluid management (mL/h) was highly variable with no correlation to donor weight (kg) (R = 0.017). The aggressive fluid management group had significantly lower recipient creatinine levels on postoperative day 1. However, no significant differences were noted in creatinine levels out to 6 months between groups. No significant differences were noted in recipient postoperative complications, graft loss, and death. There was a significant increase (P < 0.01) in the number of total donor complications in the aggressive fluid management group. CONCLUSIONS: Aggressive fluid management during LDN does not improve recipient outcomes and may worsen donor outcomes compared to standard fluid management.


Subject(s)
Fluid Therapy/mortality , Intraoperative Care/mortality , Kidney Failure, Chronic/surgery , Kidney Transplantation/mortality , Laparoscopy/mortality , Nephrectomy/mortality , Postoperative Complications/mortality , Adult , Female , Follow-Up Studies , Glomerular Filtration Rate , Humans , Kidney Function Tests , Living Donors , Male , Middle Aged , Prognosis , Retrospective Studies , Risk Factors , Survival Rate , Tissue and Organ Harvesting , Transplant Recipients
10.
Transplantation ; 103(8): 1714-1721, 2019 08.
Article in English | MEDLINE | ID: mdl-30451742

ABSTRACT

BACKGROUND: The Kidney Donor Risk Index (KDRI) is a score applicable to deceased kidney donors that reflects the relative graft failure risk associated with deceased donor characteristics. The KDRI is widely used in kidney transplant outcomes research. Moreover, an abbreviated version of the KDRI is the basis, for allocation purposes, of the "top 20%" designation for deceased donor kidneys. The data upon which the KDRI model was based came from kidney transplants performed between 1995 and 2005. Our purpose in this report was to evaluate the need to update the coefficients in the KDRI formula, with the objective of either (a) proposing new coefficients or (b) endorsing continued use of the existing formula. METHODS: Using data obtained from the Scientific Registry of Transplant Recipients, we analyzed n = 156069 deceased donor adult kidney transplants occurring from 2000 to 2016. Cox regression was used to model the risk of graft failure. We then tested for differences between the original and updated regression coefficients and compared the performance of the original and updated KDRI formulas with respect to discrimination and predictive accuracy. RESULTS: In testing for equality between the original and updated KDRIs, few coefficients were significantly different. Moreover, the original and updated KDRI yielded very similar risk discrimination and predictive accuracy. CONCLUSIONS: Overall, our results indicate that the original KDRI is robust and is not meaningfully improved by an update derived through modeling analogous to that originally employed.
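One common way to test whether a refitted coefficient differs from its original value is a z-test on the difference of estimates using their standard errors; the abstract does not state the exact procedure the authors used, and the numbers below are purely illustrative.

```python
import math

def coef_diff_z(beta_old, se_old, beta_new, se_new):
    """Two-sided z-test for equality of two independent coefficient estimates."""
    z = (beta_new - beta_old) / math.sqrt(se_old ** 2 + se_new ** 2)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value under a standard normal
    return z, p

z, p = coef_diff_z(beta_old=0.18, se_old=0.020, beta_new=0.15, se_new=0.015)
print(round(z, 2), round(p, 3))  # small |z| / large p: no evidence the update differs
```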


Subject(s)
Graft Rejection/epidemiology , Kidney Transplantation/statistics & numerical data , Registries , Risk Assessment/methods , Tissue Donors/statistics & numerical data , Transplant Recipients/statistics & numerical data , Waiting Lists , Adult , Graft Survival , Humans , Incidence , Middle Aged , Retrospective Studies , United States/epidemiology
11.
Am J Surg ; 217(2): 373-381, 2019 02.
Article in English | MEDLINE | ID: mdl-30224072

ABSTRACT

BACKGROUND: The impact of fellowship training on general surgery residency has remained challenging to assess. Surgical resident perceptions of fellow-led and resident-led surgical services have not been well described. METHODS: Retrospective cross-sectional data were collected from residents' service evaluations from 7/2014 through 7/2017. Surgical services were categorized as resident-led or fellow-led. 31 variables were evaluated and collapsed into 7 factors including clinical experience, educational experiences, clinical staff, workload, feedback, treatment of residents, and overall rotation. RESULTS: Among all PGY levels, fellow-led surgical services were rated significantly higher (p < 0.05) regarding clinical experience, clinical staff, treatment of residents, and overall rotation. PGY1-2 residents rated resident-led services significantly higher in the area of educational experiences, while PGY 3 residents rated resident-led services higher in the area of workload. However, PGY4-5 residents rated fellow-led services significantly higher in all 7 categories. Individual fellow-led services were rated significantly higher for various categories at different PGY levels. CONCLUSIONS: Surgical residents appear to value the educational experiences of fellow-led services. Each fellow-led service may ultimately provide unique educational opportunities and resources for different PGY levels.


Subject(s)
Education, Medical, Graduate/methods , Educational Measurement/methods , General Surgery/education , Internship and Residency/methods , Perception , Problem-Based Learning/methods , Surgeons/psychology , Clinical Competence , Cross-Sectional Studies , Humans , Retrospective Studies , Surgeons/education
12.
Exp Clin Transplant ; 17(4): 470-477, 2019 08.
Article in English | MEDLINE | ID: mdl-30381050

ABSTRACT

OBJECTIVES: Long-term outcomes of kidney transplant recipients with postoperative genitourinary tract infections are not well characterized. In this single-center retrospective study, we aimed to investigate the long-term effects of early posttransplant genitourinary infections on graft failure and patient outcomes under a protocol that included 1 month of antibiotic prophylaxis. MATERIALS AND METHODS: Electronic medical records of 1752 recipients of kidney-alone transplant between January 2000 and December 2008 were reviewed. Of these, 344 patients had postoperative genitourinary tract infections within 6 months of transplant. Infections included urinary tract infections, recurrent urinary tract infections, and pyelonephritis. All patients received 1 month of antibiotic prophylaxis for genitourinary infections after graft placement. Kaplan-Meier survival curves and multivariable regression modeling were performed to determine survival outcomes. RESULTS: In the 344 patients with postoperative infections, the most common cause was Escherichia coli (34.9%). Kaplan-Meier graft survival showed no significant difference (P = .08) between those with and those without postoperative urinary tract infections; however, patient survival (P = .01) was significantly different. Multivariate analysis demonstrated no statistically significant association of genitourinary infections with graft failure (hazard ratio: 1.28; 95% confidence interval, 0.95-1.71; P = .09) or patient death (hazard ratio: 1.33; 95% confidence interval, 0.98-1.79; P = .06). The major cause of graft failure was infection in the infection cohort (17.4%). CONCLUSIONS: Kidney transplant recipients who develop urinary tract infections within 6 months of transplant may be at increased risk of graft failure or patient death; however, further studies are needed to elucidate the relationship between posttransplant infections and long-term outcomes.


Subject(s)
Anti-Bacterial Agents/therapeutic use , Kidney Transplantation/adverse effects , Reproductive Tract Infections/drug therapy , Urinary Tract Infections/drug therapy , Adult , Electronic Health Records , Female , Graft Survival , Humans , Kidney Transplantation/mortality , Male , Middle Aged , Reproductive Tract Infections/diagnosis , Reproductive Tract Infections/microbiology , Reproductive Tract Infections/mortality , Retrospective Studies , Risk Assessment , Risk Factors , Time Factors , Treatment Outcome , Urinary Tract Infections/diagnosis , Urinary Tract Infections/microbiology , Urinary Tract Infections/mortality , Young Adult
13.
Transplantation ; 102(10): 1636-1649, 2018 10.
Article in English | MEDLINE | ID: mdl-29847502

ABSTRACT

Since the implementation of the Model for End-stage Liver Disease score-based allocation system, the number of transplant candidates with impaired renal function has increased. The aims of this review are to present new insights into the definitions and predisposing factors that result in acute kidney injury (AKI), and to propose guidelines for the prevention and treatment of post-liver transplantation (LT) AKI. This review is based on both a systematic review of the relevant literature and expert opinion. Pretransplant AKI is associated with posttransplant morbidity, including prolonged post-LT AKI, which then predisposes to posttransplant chronic kidney disease. Prevention of posttransplant AKI is essential in the improvement of long-term outcomes. Accurate assessment of baseline kidney function at evaluation is necessary, taking into account that serum creatinine overestimates glomerular filtration rate. New diagnostic criteria for AKI have been integrated with traditional approaches in patients with cirrhosis to potentially identify AKI earlier and improve outcomes. Delayed introduction or complete elimination of calcineurin inhibitors during the first weeks post-LT in patients with early posttransplant AKI may improve glomerular filtration rate in high-risk patients, but at the cost of higher rates of rejection and more adverse events. Biomarkers may in the future provide diagnostic information, such as the etiology of AKI, and prognostic information on renal recovery post-LT, and may potentially impact the decision for simultaneous liver-kidney transplantation. Overall, more attention should be paid to pretransplant and early posttransplant AKI to reduce the burden of late chronic kidney disease.


Subject(s)
Acute Kidney Injury/diagnosis , End Stage Liver Disease/surgery , Liver Transplantation/adverse effects , Renal Insufficiency, Chronic/prevention & control , Transplant Recipients , Acute Kidney Injury/blood , Acute Kidney Injury/etiology , Acute Kidney Injury/therapy , Biomarkers/blood , Calcineurin Inhibitors/adverse effects , Creatinine/blood , Disease Progression , End Stage Liver Disease/physiopathology , Glomerular Filtration Rate , Graft Rejection/prevention & control , Humans , Kidney/physiopathology , Liver Transplantation/methods , Liver Transplantation/standards , Practice Guidelines as Topic , Prognosis , Renal Insufficiency, Chronic/etiology , Renal Replacement Therapy , Severity of Illness Index , Time Factors
14.
Clin Transplant ; 32(3): e13189, 2018 03.
Article in English | MEDLINE | ID: mdl-29292535

ABSTRACT

OBJECTIVE: Peritoneal dialysis (PD) patients have equivalent or slightly better kidney transplant outcomes when compared to hemodialysis (HD) patients. However, given the risk for postoperative infection, we sought to determine the risk factors for PD catheter-associated infections in patients who do not have the PD catheter removed at the time of engraftment. METHODS: Demographic and outcomes data were collected from 313 sequential PD patients who underwent kidney transplant from 2000 to 2015. Risk factors for postoperative peritonitis were analyzed using logistic regression. RESULTS: Of 329 patients with PD catheters at transplant, 16 had the catheter removed at engraftment. Of the remaining 313 patients, 8.9% suffered post-transplant peritonitis. On univariate analysis, patients with peritonitis were significantly more likely to have used the PD catheter or HD within 6 weeks after transplant. Multivariate analysis showed similar findings, with increased risk for those using the PD catheter after transplant and a trend toward increased risk for those who underwent HD only within 6 weeks of transplant. CONCLUSION: These results suggest that delayed graft function requiring any type of dialysis is associated with increased post-transplant peritonitis risk.


Subject(s)
Catheters, Indwelling/adverse effects , Kidney Failure, Chronic/surgery , Kidney Transplantation/adverse effects , Peritoneal Dialysis/adverse effects , Peritonitis/etiology , Postoperative Complications , Adult , Female , Follow-Up Studies , Glomerular Filtration Rate , Humans , Kidney Function Tests , Male , Middle Aged , Prognosis , Retrospective Studies , Risk Factors
15.
Am J Surg ; 215(1): 144-150, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28882358

ABSTRACT

BACKGROUND: We report our experience with metabolic syndrome screening for obese living kidney donor candidates to mitigate the long-term risk of CKD. METHODS: We retrospectively reviewed 814 obese (BMI ≥30) and 993 nonobese living kidney donor evaluations over 12 years. Using logistic regression, we explored interactions between social/clinical variables and candidate acceptance before and after policy implementation. RESULTS: Obese donor candidate acceptance decreased after metabolic syndrome screening began (56.3% vs 46.3%, p < 0.01), while nonobese candidate acceptance remained similar (59.6% vs 59.2%, p = 0.59). Adjusting for age, gender, race, BMI, and number of prior evaluations, acceptance of obese candidates decreased significantly more than that of nonobese candidates (p = 0.025). In candidates without metabolic syndrome, there was no significant change in how age, sex, race, or BMI affected a donor candidate's probability of acceptance. CONCLUSION: Metabolic syndrome screening is a simple stratification tool for centers with liberal absolute BMI cut-offs to exclude potentially higher-risk obese candidates.


Subject(s)
Donor Selection/methods , Kidney Transplantation , Living Donors , Metabolic Syndrome/diagnosis , Obesity/complications , Adult , Donor Selection/trends , Female , Humans , Living Donors/statistics & numerical data , Logistic Models , Male , Metabolic Syndrome/epidemiology , Metabolic Syndrome/etiology , Practice Guidelines as Topic , Retrospective Studies , Risk Assessment , Risk Factors
16.
J Hepatol ; 67(3): 517-525, 2017 09.
Article in English | MEDLINE | ID: mdl-28483678

ABSTRACT

BACKGROUND & AIM: The goal of organ allocation is to distribute a scarce resource equitably to the sickest patients. In the United States, the Model for End-stage Liver Disease (MELD) is used to allocate livers for transplantation. Patients with greater MELD scores are at greater risk of death on the waitlist and are prioritized for liver transplant (LT). The MELD is capped at 40, however, and patients with calculated MELD scores >40 are not prioritized despite increased mortality. We aimed to evaluate waitlist and post-transplant survival stratified by MELD to determine outcomes in patients with MELD >40. METHODS: Using United Network for Organ Sharing data, we identified patients listed for LT from February 2002 through December 2012. Waitlist candidates with MELD ⩾40 were followed for 30 days or until the earliest occurrence of death or transplant. RESULTS: Of 65,776 waitlisted patients, 3.3% had MELD ⩾40 at registration, and an additional 7.3% had MELD scores increase to ⩾40 after waitlist registration. A total of 30,369 (46.2%) underwent LT, of which 2,615 (8.6%) had MELD ⩾40 at transplant. Compared to MELD 40, the hazard ratio of death within 30 days of registration was 1.4 (95% CI 1.2-1.6) for patients with MELD 41-44, 2.6 (95% CI 2.1-3.1) for MELD 45-49, and 5.0 (95% CI 4.1-6.1) for MELD ⩾50. There was no difference in 1- and 3-year survival for patients transplanted with MELD >40 compared to MELD = 40. A survival benefit associated with LT was seen as MELD increased above 40. CONCLUSIONS: Patients with MELD >40 have significantly greater waitlist mortality but comparable post-transplant outcomes to patients with MELD = 40 and, therefore, should be given priority for LT. Uncapping the MELD will allow more equitable organ distribution aligned with the principle of prioritizing patients most in need. Lay summary: In the United States (US), organs for liver transplantation are allocated by an objective scoring system called the Model for End-stage Liver Disease (MELD), which aims to prioritize the sickest patients for transplant. The greater the MELD score, the greater the mortality without liver transplant. The MELD score, however, is artificially capped at 40 and thus actually disadvantages the sickest patients with end-stage liver disease. Analysis of the data advocates uncapping the MELD score to appropriately prioritize the patients most in need of a liver transplant.
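A sketch of the cap at issue, using the commonly cited original MELD formula (an assumption here; the allocation system applies additional rules, and the lab-value bounds below are simplified). It shows how two candidates with very different calculated scores collapse to the same listed score of 40.

```python
import math

def meld(bilirubin_mg_dl, inr, creatinine_mg_dl, cap=40):
    """Commonly cited MELD formula with a score cap; lab bounds are simplified."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    cr = min(max(creatinine_mg_dl, 1.0), 4.0)
    score = 3.78 * math.log(bili) + 11.2 * math.log(inr) + 9.57 * math.log(cr) + 6.43
    return min(round(score), cap)

# A candidate whose labs imply a calculated score of ~47 is listed at 40 under the
# cap, the situation the abstract argues disadvantages the sickest patients.
print(meld(30.0, 3.5, 4.5))           # capped: 40
print(meld(30.0, 3.5, 4.5, cap=99))   # uncapped calculated score: ~47
```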


Subject(s)
End Stage Liver Disease/surgery , Liver Transplantation , Tissue and Organ Procurement , Adolescent , Adult , Aged , Aged, 80 and over , Female , Humans , Liver Transplantation/mortality , Male , Middle Aged , Waiting Lists , Young Adult
17.
Transplant Direct ; 3(1): e123, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28349123

ABSTRACT

BACKGROUND: Long-term outcomes of kidney transplant recipients with percutaneous ureteral management of transplant ureteral complications are not well characterized. METHODS: Electronic records of 1753 recipients of kidney-alone transplant between January 2000 and December 2008 were reviewed. One hundred thirty-one patients were identified as having undergone percutaneous ureteral management, with placement of a percutaneous nephrostomy tube or additional intervention (nephroureteral stenting and/or balloon dilation). Indications for intervention included transplant ureteral stricture or ureteral leak. Kaplan-Meier survival curves and multivariable regression modeling were performed to determine survival outcomes. RESULTS: Kaplan-Meier graft survival (P = 0.04) was lower in patients with percutaneous ureteral intervention for transplant ureteral complications. Graft survival at 1, 5, and 10 years was 94.3%, 78.3%, and 59.1% for the no-intervention cohort and 97.2%, 72.1%, and 36.2% for the intervention cohort. Patient survival (P = 0.69) was similar between cohorts. Multivariate analysis demonstrated no association with graft failure (hazard ratio, 1.21; 95% confidence interval, 0.67-2.19; P = 0.53) or patient death (hazard ratio, 0.56; 95% confidence interval, 0.22-1.41; P = 0.22) in the intervention group. The major cause of graft failure was infection in the percutaneous ureteral intervention group (20.4%) and chronic rejection in those without intervention (17.3%). CONCLUSIONS: Kidney transplant recipients with percutaneous ureteral interventions for ureteral complications do not have significantly different graft and patient survival outcomes. Therefore, aggressive nonoperative management can be confidently pursued in the appropriate clinical setting.

18.
Hum Immunol ; 78(2): 57-63, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27894836

ABSTRACT

BACKGROUND: The Luminex® single antigen bead assay (SAB) is the method of choice for monitoring the treatment for antibody-mediated rejection (AMR). A ⩾50% reduction of the dominant donor-specific antibody (IgG-DSA) mean fluorescence intensity (MFI) has been associated with improved kidney allograft survival, and C1q-fixing DSA activity is associated with poor outcomes in patients with AMR. We aimed to investigate if C1q-DSA can be used as a reliable predictor of response to therapy and allograft survival in patients with biopsy-proven AMR. METHODS: We tested pre- and post-treatment sera of 30 kidney transplant patients receiving plasmapheresis and low-dose IVIG for biopsy-proven AMR. IgG-DSA and C1q-DSA MFI were measured and correlated with graft loss or survival. Patients were classified as nonresponders (NR) when treatment resulted in <50% reduction in MFI of IgG-DSA and/or C1q-DSA was detectable following therapy. RESULTS: Differences in the percentage of patients deemed NR depended upon the end-point criterion (73% by reduction in IgG-DSA MFI vs. 50% by persistent C1q-DSA activity). None of the seven patients with <50% reduction of IgG-DSA but non-detectable C1q-DSA-fixing activity after therapy experienced graft loss, suggesting that C1q-DSA activity may better correlate with response. Reduction of C1q-DSA activity predicted graft survival better than IgG-DSA in the univariate Cox analysis (20.1% vs. 5.9% in NR; log-rank P-value=0.0147). CONCLUSIONS: A rapid reduction of DSA concentration below the threshold required for complement activation is associated with better graft survival, and C1q-DSA is a better predictor of outcomes than IgG-DSA MFI reduction.
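The nonresponder definition quoted above reduces to a simple rule: less than a 50% reduction in the dominant IgG-DSA MFI and/or detectable C1q-DSA after therapy. A tiny sketch with illustrative values:

```python
# Nonresponder rule from the abstract; MFI values below are illustrative.
def is_nonresponder(igg_mfi_pre, igg_mfi_post, c1q_detectable_post):
    reduction = (igg_mfi_pre - igg_mfi_post) / igg_mfi_pre
    return reduction < 0.5 or c1q_detectable_post

print(is_nonresponder(12000, 7000, False))  # True: only ~42% MFI reduction
print(is_nonresponder(12000, 4000, False))  # False: ~67% reduction and C1q negative
```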


Subject(s)
Complement C1q/metabolism , Graft Rejection/diagnosis , Graft Survival , Isoantibodies/metabolism , Kidney Transplantation , Adult , Antibody-Dependent Cell Cytotoxicity , Complement C1q/immunology , Complement Hemolytic Activity Assay , Female , Follow-Up Studies , Graft Rejection/mortality , Graft Rejection/prevention & control , HLA Antigens/immunology , Humans , Immunoglobulins, Intravenous/therapeutic use , Male , Middle Aged , Prognosis , Survival Analysis , Young Adult
19.
Adv Chronic Kidney Dis ; 23(5): 332-339, 2016 09.
Article in English | MEDLINE | ID: mdl-27742389

ABSTRACT

Transplantation is one of the most highly regulated fields in health care. An important component of transplant oversight is the performance assessment of transplant centers as measured by 1-year patient and graft survival outcomes. The use of the Organ Procurement and Transplantation Network and Scientific Registry of Transplant Recipients flagging mechanism for quality improvement as criteria for Centers for Medicare and Medicaid Services certification has given this assessment greater importance in transplant program operations. Although supporters of this program of encouraging Quality Assurance and Performance Improvement point to improved survival outcomes for more than a decade, others assert that the oversight is unnecessarily punitive, results in tremendous resource utilization, and discourages innovation. Data exist to support an inhibitory effect on national transplant volume. Although survival outcomes are risk-adjusted, limitations on national data collection prevent several important risks from being incorporated into the models. This has led to the consensus that many transplant centers have become increasingly risk-averse in this environment, which may indirectly reduce access to transplant for candidates who could still benefit from transplantation. Recently enacted modifications to performance evaluation by the Centers for Medicare and Medicaid Services and the Organ Procurement and Transplantation Network appear to acknowledge these concerns and have the potential to recalibrate transplant center focus away from first-year outcomes and more toward expanding transplant volume, innovation, and overall improvements in care.


Subject(s)
Federal Government , Kidney Transplantation/legislation & jurisprudence , Quality Improvement/organization & administration , Tissue and Organ Procurement/legislation & jurisprudence , Centers for Medicare and Medicaid Services, U.S. , Humans , United States
20.
Transfusion ; 56(12): 3073-3080, 2016 12.
Article in English | MEDLINE | ID: mdl-27601087

ABSTRACT

BACKGROUND: Therapeutic plasma exchange (TPE) is increasingly used for treatment of antibody-mediated rejection (AMR) after solid organ transplants. There is concern that TPE may increase the risk of bleeding, although data are limited. After TPE, clot-based coagulation tests may not accurately represent the levels of coagulation factors due to the effect of citrate. We investigated fibrinogen protein levels using an antigen detection method (FibAg) and correlated the results with a clot-based fibrinogen activity test (Fib). STUDY DESIGN AND METHODS: Nine kidney transplant recipients who received TPE for AMR were investigated. Fib, FibAg, prothrombin time/international normalized ratio (PT/INR), partial thromboplastin time (PTT), coagulation factor X chromogenic activity (CFX), and ionized calcium (iCa) were measured pre- and post-TPE and at 1, 3, 6, 9, 24, and 48 hours after the first TPE. RESULTS: The mean Fib/FibAg ratio before TPE was 1.08; therefore, all Fib values were normalized (n) by dividing by 1.08. Overall, the mean normalized Fib (nFib)/FibAg ratio post-TPE was 0.89 and returned to close to 1.0 by 6 hours after the first TPE. Decreases in nFib, FibAg, and CFX and increases in PT/INR and PTT post-TPE were observed. The lowest Fib, FibAg, CFX, platelet, and iCa values remained at levels considered sufficient for hemostasis at all time points. CONCLUSION: The mean nFib/FibAg ratio after TPE was 0.89 and normalized within 6 hours, demonstrating a persistent effect of citrate for up to 6 hours. Similarly, clot-based tests such as PT/INR and PTT may be falsely elevated for up to 6 hours after TPE due to the citrate effect.
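A small numeric illustration of the normalization described above (the 1.08 pre-TPE mean ratio is taken from the abstract; the other values are invented for illustration):

```python
PRE_TPE_MEAN_RATIO = 1.08   # mean Fib/FibAg before TPE, per the abstract
fib_activity = 180.0        # clot-based fibrinogen activity, mg/dL (illustrative)
fib_antigen = 225.0         # antigen-detected fibrinogen, mg/dL (illustrative)

n_fib = fib_activity / PRE_TPE_MEAN_RATIO     # normalized Fib (nFib)
print(round(n_fib, 1), round(n_fib / fib_antigen, 2))
# An nFib/FibAg ratio well below 1.0 after TPE suggests the clot-based assay is
# transiently underestimating fibrinogen (the citrate effect noted above).
```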


Subject(s)
Blood Coagulation/drug effects , Citric Acid/pharmacology , Kidney Transplantation/adverse effects , Plasma Exchange/adverse effects , Blood Coagulation Tests/standards , Fibrinogen/analysis , Hemostasis/drug effects , Humans , Time Factors