ABSTRACT
In the US liver allocation system, nonstandardized model for end-stage liver disease (MELD) exceptions (NSEs) increase the waitlist priority of candidates whose MELD scores are felt to underestimate their true medical urgency. We determined whether NSEs accurately reflect pretransplant mortality risk by fitting mixed-effects Cox proportional hazards models and estimating concordance indices. We also studied the change in the frequency of NSEs after the National Liver Review Board's implementation in May 2019. Between June 2016 and April 2022, 60,322 adult candidates were listed, of whom 10,280 (17.0%) received an NSE at least once. The mean allocation MELD was 23.9, an increase of 12.0 points over the mean laboratory MELD of 11.9 (P < .001). A 1-point increase in allocation MELD score due to an NSE was associated with, on average, a 2% reduction in the hazard of pretransplant death (cause-specific hazard ratio: 0.98; 95% CI: 0.96, 1.00; P = .02) compared with candidates at the same laboratory MELD. Laboratory MELD was more accurate than allocation MELD with NSEs in rank-ordering candidates (c-index: 0.889 vs 0.857). The proportion of candidates with NSEs decreased significantly after the National Liver Review Board's implementation, from 21.5% to 12.8% (P < .001). NSEs substantially increase the waitlist priority of candidates with objectively low medical urgency.
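The c-index comparison above can be made concrete with a small sketch. This is a minimal Harrell-style concordance index for right-censored waitlist data, not the mixed-effects models used in the study, and every candidate, time, and score below is invented for illustration.

```python
from itertools import combinations

def concordance_index(times, events, scores):
    """Harrell's c-index for right-censored data: among comparable
    pairs (those where the earlier time is an observed death), the
    fraction in which the subject who died earlier carried the higher
    risk score. Tied scores count as one half."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        # order the pair so subject a has the earlier time
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if times[a] == times[b] or not events[a]:
            continue  # tied times, or earlier subject censored: not comparable
        comparable += 1
        if scores[a] > scores[b]:
            concordant += 1.0
        elif scores[a] == scores[b]:
            concordant += 0.5
    return concordant / comparable

# hypothetical candidates: days to death (event=1) or censoring (0),
# with two competing scores standing in for laboratory vs allocation MELD
times = [30, 60, 90, 120, 150]
events = [1, 1, 0, 1, 0]
lab_meld = [40, 35, 20, 28, 12]    # rank-orders every observed death correctly
alloc_meld = [40, 22, 20, 35, 12]  # mis-ranks one pair of deaths
print(concordance_index(times, events, lab_meld))   # 1.0
print(concordance_index(times, events, alloc_meld))
```

A score that perfectly rank-orders the observed deaths reaches a c-index of 1.0; each mis-ranked comparable pair pulls it toward 0.5, which is the value an uninformative score would achieve.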
ABSTRACT
Donation after circulatory death (DCD) is driving the increase in deceased organ donors in the United States. Normothermic regional perfusion (NRP) and ex situ machine perfusion (es-MP) have been instrumental in improving liver transplant outcomes and graft utilization. This study examines the current landscape of liver utilization from DCD donors whose hearts were also transplanted in the United States. Using the United Network for Organ Sharing Standard Transplant Analysis and Research file, all adult (≥18 years old) DCD donors in the United States from whom the heart was used for transplantation from October 1, 2020, to September 30, 2023, were compared by procurement technique (NRP versus super rapid recovery [SRR]) and storage strategy (es-MP versus static cold storage). Of 309 thoracoabdominal NRP donors, 188 livers were transplanted (61% utilization), versus 305 of 544 (56%) from SRR donors. es-MP was used in 20% (n = 38) of NRP cases versus 32% (n = 98) of SRR cases. Of the liver grafts, 281 (59%) were exposed to NRP, es-MP, or both. While machine perfusion is widely utilized, more research is needed to determine optimal graft management strategies, particularly concerning the use of multiple technologies in complementary ways. More complete data collection is needed at the national level to address these important research questions.
ABSTRACT
RATIONALE & OBJECTIVE: The US Kidney Allocation System (KAS) prioritizes candidates with a ≤20% estimated posttransplant survival (EPTS) score to receive high-longevity kidneys, defined by a ≤20% Kidney Donor Profile Index (KDPI). Use of EPTS in the KAS deprioritizes candidates with older age, diabetes, and longer dialysis durations. We assessed whether this use also disadvantages racial and ethnic minority candidates, who are younger but more likely to have diabetes and longer durations of kidney failure requiring dialysis. STUDY DESIGN: Observational cohort study. SETTING & PARTICIPANTS: Adult candidates for and recipients of kidney transplantation represented in the Scientific Registry of Transplant Recipients from January 2015 through December 2020. EXPOSURE: Race and ethnicity. OUTCOME: Age-adjusted assignment to ≤20% EPTS, transplantation of a ≤20% KDPI kidney, and posttransplant survival in longevity-matched recipients, by race and ethnicity. ANALYTIC APPROACH: Multivariable logistic regression, Fine-Gray competing risks survival analysis, and Kaplan-Meier and Cox proportional hazards methods. RESULTS: The cohort included 199,444 candidates (7% Asian, 29% Black, 19% Hispanic or Latino, and 43% White) listed for deceased donor kidney transplantation. Non-White candidates were younger than White candidates but had significantly higher rates of diabetes and longer dialysis durations. Adjusted for age, Asian, Black, and Hispanic or Latino candidates had significantly lower odds of having an EPTS score of ≤20% (odds ratios, 0.86 [95% CI, 0.81-0.91], 0.52 [95% CI, 0.50-0.54], and 0.49 [95% CI, 0.47-0.51], respectively) and were less likely to receive a ≤20% KDPI kidney (sub-hazard ratios, 0.70 [0.66-0.75], 0.89 [0.87-0.92], and 0.73 [0.71-0.76]) compared with White candidates.
Among recipients with ≤20% EPTS scores transplanted with a ≤20% KDPI deceased donor kidney, Asian and Hispanic recipients had lower posttransplant mortality (HR, 0.45 [0.27-0.77] and 0.63 [0.47-0.86], respectively) and Black recipients had higher but not statistically significant posttransplant mortality (HR, 1.22 [0.99-1.52]) compared with White recipients. LIMITATIONS: Provider-reported race and ethnicity data and a 5-year posttransplant follow-up period. CONCLUSIONS: The US kidney allocation system is less likely to identify racial and ethnic minority candidates as having a ≤20% EPTS score, which triggers allocation of high-longevity deceased donor kidneys. These findings should inform the Organ Procurement and Transplantation Network about how to remedy the race and ethnicity disparities introduced through the KAS's current approach of allocating allografts with longer predicted longevity to recipients with longer estimated posttransplant survival. PLAIN-LANGUAGE SUMMARY: The US Kidney Allocation System prioritizes giving high-longevity, high-quality kidneys to patients on the waiting list who have a high estimated posttransplant survival (EPTS) score. EPTS is calculated from the patient's age, whether the patient has diabetes, whether the patient has a history of organ transplantation, and the number of years spent on dialysis. Our analyses show that Asian, Black or African American, and Hispanic or Latino patients were less likely to receive high-longevity kidneys compared with White patients, despite having similar or better posttransplant survival outcomes.
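The plain-language summary lists the four inputs to EPTS. As a sketch of how they combine, the published OPTN raw EPTS formula can be written as below; the coefficients are reproduced from memory of the OPTN guide and should be verified against current policy before any real use, and converting the raw score to the 0-100% allocation percentile requires the annually updated OPTN mapping table, which is not reproduced here.

```python
import math

def raw_epts(age, diabetic, prior_transplant, years_on_dialysis):
    """Raw EPTS score (coefficients per the published OPTN formula;
    treat as illustrative and verify against current OPTN policy).
    Higher raw scores mean worse estimated posttransplant survival."""
    dm = 1 if diabetic else 0
    tx = 1 if prior_transplant else 0
    no_dialysis = 1 if years_on_dialysis == 0 else 0
    age_term = max(age - 25, 0)
    log_dial = math.log(years_on_dialysis + 1)
    return (0.047 * age_term
            - 0.015 * dm * age_term
            + 0.398 * tx
            - 0.237 * dm * tx
            + 0.315 * log_dial
            - 0.099 * dm * log_dial
            + 0.130 * no_dialysis
            - 0.348 * dm * no_dialysis
            + 1.262 * dm)

# A younger diabetic candidate with a long dialysis history can score
# worse (higher) than an older non-diabetic candidate, which is the
# mechanism behind the disparities described in the abstract.
print(raw_epts(40, True, False, 6))   # higher (worse) raw score
print(raw_epts(55, False, False, 1))  # lower (better) raw score
```

Because diabetes and dialysis duration enter the score directly, candidates who reach the waitlist after long pre-listing dialysis exposure are pushed above the ≤20% EPTS cutoff even at younger ages.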
Subject(s)
Kidney Transplantation; Tissue and Organ Procurement; Humans; Male; Female; Middle Aged; United States/epidemiology; Adult; Cohort Studies; Tissue Donors; Kidney Failure, Chronic/surgery; Kidney Failure, Chronic/ethnology; Kidney Failure, Chronic/mortality; Graft Survival; Aged; Ethnicity; Longevity; Registries; Racial Groups
ABSTRACT
Importance: The US heart allocation system prioritizes medically urgent candidates with a high risk of dying without transplant. The current therapy-based 6-status system is susceptible to manipulation and has limited rank-ordering ability. Objective: To develop and validate a candidate risk score that incorporates current clinical, laboratory, and hemodynamic data. Design, Setting, and Participants: A registry-based observational study of adult heart transplant candidates (aged ≥18 years) in the US heart allocation system listed between January 1, 2019, and December 31, 2022, split by center into training (70%) and test (30%) datasets. Main Outcomes and Measures: A US candidate risk score (US-CRS) model was developed by adding a predefined set of predictors to the current French Candidate Risk Score (French-CRS) model. Sensitivity analyses were performed that included intra-aortic balloon pumps (IABP) and percutaneous ventricular assist devices (VAD) in the definition of short-term mechanical circulatory support (MCS) for the US-CRS. Performance of the US-CRS model, French-CRS model, and 6-status model in the test dataset was evaluated by time-dependent area under the receiver operating characteristic curve (AUC) for death without transplant within 6 weeks and by overall survival concordance (c-index) with integrated AUC. Results: A total of 16,905 adult heart transplant candidates were listed (mean [SD] age, 53 [13] years; 73% male; 58% White); 796 patients (4.7%) died without a transplant. The final US-CRS contained time-varying short-term MCS (venoarterial extracorporeal membrane oxygenation or temporary surgical VAD), the log of bilirubin, estimated glomerular filtration rate, the log of B-type natriuretic peptide, albumin, sodium, and durable left ventricular assist device.
In the test dataset, the AUC for death within 6 weeks of listing was 0.79 (95% CI, 0.75-0.83) for the US-CRS model, 0.72 (95% CI, 0.67-0.76) for the French-CRS model, and 0.68 (95% CI, 0.62-0.73) for the 6-status model. The overall c-index was 0.76 (95% CI, 0.73-0.80) for the US-CRS model, 0.69 (95% CI, 0.65-0.73) for the French-CRS model, and 0.67 (95% CI, 0.63-0.71) for the 6-status model. Classifying IABP and percutaneous VAD as short-term MCS reduced the effect size by 54%. Conclusions and Relevance: In this registry-based study of US heart transplant candidates, a continuous multivariable allocation score outperformed the 6-status system in rank-ordering heart transplant candidates by medical urgency and may be useful for the medical urgency component of heart allocation.
Subject(s)
Heart Failure; Heart Transplantation; Tissue and Organ Procurement; Adult; Female; Humans; Male; Middle Aged; Bilirubin; Clinical Laboratory Services; Heart; Risk Factors; Risk Assessment; Heart Failure/mortality; Heart Failure/surgery; United States; Health Care Rationing/methods; Predictive Value of Tests; Tissue and Organ Procurement/methods; Tissue and Organ Procurement/organization & administration
ABSTRACT
OBJECTIVES: A unilateral do-not-resuscitate (UDNR) order is a do-not-resuscitate order placed using clinician judgment that does not require consent from a patient or surrogate. This study assessed how UDNR orders were used during the COVID-19 pandemic. DESIGN: Retrospective cross-sectional study of UDNR use at two academic medical centers between April 2020 and April 2021. SETTING: Two academic medical centers in the Chicago metropolitan area. PATIENTS: Patients admitted to an ICU between April 2020 and April 2021 who received vasopressor or inotropic medications, to select for patients with high severity of illness. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The 1,473 patients meeting inclusion criteria were 53% male with a median age of 64 (interquartile range, 54-73), and 38% died during admission or were discharged to hospice. Clinicians placed do-not-resuscitate orders for 41% of patients (n = 604/1,473) and UDNR orders for 3% of patients (n = 51/1,473). The absolute rate of UDNR orders was higher for patients who were primary Spanish speaking (10% Spanish vs 3% English; p ≤ 0.0001), were Hispanic or Latinx (7% Hispanic/Latinx vs 3% Black vs 2% White; p = 0.003), were positive for COVID-19 (9% vs 3%; p ≤ 0.0001), or were intubated (5% vs 1%; p = 0.001). In the base multivariable logistic regression model including age, race/ethnicity, primary language spoken, and hospital location, Black race (adjusted odds ratio [aOR], 2.5; 95% CI, 1.3-4.9) and primary Spanish language (aOR, 4.4; 95% CI, 2.1-9.4) were associated with higher odds of a UDNR order. After adjusting the base model for severity of illness, primary Spanish language remained associated with higher odds of a UDNR order (aOR, 2.8; 95% CI, 1.7-4.7). CONCLUSIONS: In this multihospital study, UDNR orders were used more often for primary Spanish-speaking patients during the COVID-19 pandemic, which may be related to the communication barriers Spanish-speaking patients and families experience. Further study is needed to assess UDNR use across hospitals and to develop interventions that address potential disparities.
Subject(s)
COVID-19; Humans; Male; Middle Aged; Female; Resuscitation Orders; Retrospective Studies; Cross-Sectional Studies; Pandemics
ABSTRACT
BACKGROUND: Outcomes for pancreatic ductal adenocarcinoma (PDAC) remain difficult to prognosticate. Multiple models attempt to predict survival following the resection of PDAC, but their utility in the neoadjuvant population is unknown. We aimed to assess their accuracy among patients who received neoadjuvant chemotherapy (NAC). METHODS: We performed a multi-institutional retrospective analysis of patients who received NAC and underwent resection of PDAC. Two prognostic systems were evaluated: the Memorial Sloan Kettering Cancer Center Pancreatic Adenocarcinoma Nomogram (MSKCCPAN) and the American Joint Committee on Cancer (AJCC) staging system. Discrimination between predicted and actual disease-specific survival was assessed using the Uno C-statistic and the Kaplan-Meier method. Calibration of the MSKCCPAN was assessed using the Brier score. RESULTS: A total of 448 patients were included. There were 232 (51.8%) females, and the mean age was 64.1 years (±9.5). Most had AJCC stage I or II disease (77.7%). For the MSKCCPAN, the Uno C-statistic at the 12-, 24-, and 36-month time points was 0.62, 0.63, and 0.62, respectively. The AJCC system demonstrated similarly mediocre discrimination. The Brier score for the MSKCCPAN was 0.15 at 12 months, 0.26 at 24 months, and 0.30 at 36 months, demonstrating modest calibration. CONCLUSIONS: Current survival prediction models and staging systems for patients with PDAC undergoing resection after NAC have limited accuracy.
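The Brier score used for calibration above is simple to compute. This toy sketch ignores censoring (formal survival Brier scores reweight by the inverse probability of censoring), and the predicted probabilities and outcomes are invented for illustration.

```python
def brier_score(predicted_probs, outcomes):
    """Mean squared error between predicted event probabilities and
    observed 0/1 outcomes. 0 is perfect; 0.25 is what an uninformative
    prediction of 0.5 for every patient would score."""
    n = len(predicted_probs)
    return sum((p - y) ** 2 for p, y in zip(predicted_probs, outcomes)) / n

# hypothetical 12-month disease-specific death probabilities vs outcomes
preds = [0.9, 0.8, 0.3, 0.2, 0.1]
died = [1, 1, 0, 0, 0]
print(brier_score(preds, died))  # ≈ 0.038
```

Against the 0.25 coin-flip benchmark, the reported scores of 0.15 at 12 months but 0.26-0.30 at 24-36 months show why the abstract characterizes the nomogram's calibration as only modest at longer horizons.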
Subject(s)
Adenocarcinoma; Carcinoma, Pancreatic Ductal; Pancreatic Neoplasms; Female; Humans; Male; Middle Aged; Adenocarcinoma/surgery; Carcinoma, Pancreatic Ductal/drug therapy; Carcinoma, Pancreatic Ductal/surgery; Neoadjuvant Therapy; Neoplasm Staging; Nomograms; Pancreatic Neoplasms/drug therapy; Pancreatic Neoplasms/surgery; Prognosis; Retrospective Studies
ABSTRACT
The Organ Procurement and Transplantation Network (OPTN) implemented a new heart allocation policy on October 18, 2018. Published estimates of lower posttransplant survival under the new policy, derived from cohorts with limited follow-up, may be biased by informative censoring. Using the Scientific Registry of Transplant Recipients, we used the Kaplan-Meier method to estimate 1-year posttransplant survival for pre-policy (November 1, 2016, to October 31, 2017) and post-policy (November 1, 2018, to October 31, 2019) cohorts with follow-up through March 2, 2021. We adjusted for changes in the recipient population over time with a multivariable Cox proportional hazards model. To demonstrate the effect of inadequate follow-up on post-policy survival estimates, we repeated the analysis including only follow-up through October 31, 2019. Transplant programs transplanted 2594 patients in the pre-policy cohort and 2761 patients in the post-policy cohort. With follow-up through March 2, 2021, unadjusted 1-year posttransplant survival was 90.6% (89.5%-91.8%) in the pre-policy cohort and 90.8% (89.7%-91.9%) in the post-policy cohort (adjusted HR = 0.93 [0.77-1.12]). Ignoring follow-up after October 31, 2019, biased the post-policy estimate downward (1-year: 82.2%). When estimated with adequate follow-up, 1-year posttransplant survival under the new heart allocation policy was not significantly different from survival under the prior policy.
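A minimal Kaplan-Meier estimator shows how the method incorporates partial follow-up, which is central to the abstract's argument: censored patients leave the risk set without counting as deaths, so the estimate is unbiased only when censoring is uninformative. All times below are invented toy data, not registry values.

```python
def kaplan_meier(times, events, horizon):
    """Kaplan-Meier survival estimate at `horizon`. At each observed
    death time the survival curve is multiplied by (1 - deaths/at_risk);
    censored subjects (event == 0) simply drop out of later risk sets."""
    surv = 1.0
    death_times = sorted({t for t, e in zip(times, events) if e and t <= horizon})
    for t in death_times:
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        surv *= 1 - deaths / at_risk
    return surv

# toy cohort: days to death (event=1) or last follow-up (event=0)
times = [100, 200, 200, 300, 365, 400, 500]
events = [1, 0, 1, 0, 1, 0, 0]
print(kaplan_meier(times, events, horizon=365))  # ≈ 0.476
```

If the patients censored early were actually sicker than average (informative censoring, as in a cohort cut off soon after transplant), the method silently treats them like the survivors who remain, which is the downward-bias mechanism the abstract demonstrates.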
Subject(s)
Heart Transplantation; Tissue and Organ Procurement; Humans; Policies; Registries; Tissue Donors; Transplant Recipients
ABSTRACT
OBJECTIVES: Body temperature trajectories of infected patients are associated with specific immune profiles and survival. We determined the association between temperature trajectories and distinct manifestations of coronavirus disease 2019 (COVID-19). DESIGN: Retrospective observational study. SETTING: Four hospitals within an academic healthcare system from March 2020 to February 2021. PATIENTS: All adult patients hospitalized with COVID-19. INTERVENTIONS: Using a validated group-based trajectory model, we classified patients into four previously defined temperature trajectory subphenotypes using oral temperature measurements from the first 72 hours of hospitalization. Clinical characteristics, biomarkers, and outcomes were compared between subphenotypes. MEASUREMENTS AND MAIN RESULTS: The 5,903 hospitalized patients with COVID-19 were classified into four subphenotypes: hyperthermic slow resolvers (n = 1,452; 25%), hyperthermic fast resolvers (n = 1,469; 25%), normothermics (n = 2,126; 36%), and hypothermics (n = 856; 15%). Hypothermics had abnormal coagulation markers, with the highest D-dimer and fibrin monomers (p < 0.001) and the highest prevalence of cerebrovascular accidents (10%; p = 0.001). The prevalence of venous thromboembolism differed significantly between subphenotypes (p = 0.005), with the highest rate in hypothermics (8.5%) and the lowest in hyperthermic slow resolvers (5.1%). Hyperthermic slow resolvers had abnormal inflammatory markers, with the highest C-reactive protein, ferritin, and interleukin-6 (p < 0.001). Hyperthermic slow resolvers had increased odds of mechanical ventilation, vasopressor use, and 30-day inpatient mortality (odds ratio, 1.58; 95% CI, 1.13-2.19) compared with hyperthermic fast resolvers. Over the course of the pandemic, we observed a drastic decrease in the prevalence of hyperthermic slow resolvers, from 53% of admissions in March 2020 to less than 15% by 2021. Dexamethasone use was associated with a significant reduction in the probability of hyperthermic slow resolver membership (27% reduction; 95% CI, 23-31%; p < 0.001). CONCLUSIONS: Hypothermics had abnormal coagulation markers, suggesting a hypercoagulable subphenotype. Hyperthermic slow resolvers had elevated inflammatory markers and the highest odds of mortality, suggesting a hyperinflammatory subphenotype. Future work should investigate whether temperature subphenotypes benefit from targeted antithrombotic and anti-inflammatory strategies.
Subject(s)
Body Temperature; COVID-19/pathology; Hyperthermia/pathology; Hypothermia/pathology; Phenotype; Academic Medical Centers; Aged; Anti-Inflammatory Agents/therapeutic use; Biomarkers/blood; Blood Coagulation; Cohort Studies; Dexamethasone/therapeutic use; Female; Humans; Inflammation; Male; Middle Aged; Organ Dysfunction Scores; Retrospective Studies; SARS-CoV-2
ABSTRACT
RATIONALE: Variation in hospital mortality has been described for coronavirus disease 2019 (COVID-19), but the factors that explain these differences remain unclear. OBJECTIVE: Our objective was to use a large, nationally representative dataset of critically ill adults with COVID-19 to determine which factors explain mortality variability. METHODS: In this multicenter cohort study, we examined adults hospitalized in intensive care units with COVID-19 at 70 United States hospitals between March and June 2020. The primary outcome was 28-day mortality. We examined patient-level and hospital-level variables. Mixed-effects logistic regression was used to identify factors associated with interhospital variation. The median odds ratio (OR) was calculated to compare outcomes at higher- versus lower-mortality hospitals. A gradient-boosted machine algorithm was developed for individual-level mortality models. MEASUREMENTS AND MAIN RESULTS: A total of 4,019 patients were included, 1,537 (38%) of whom died by 28 days. Mortality varied considerably across hospitals (0-82%). After adjustment for patient- and hospital-level domains, interhospital variation was attenuated (median OR declined from 2.06 [95% CI, 1.73-2.37] to 1.22 [95% CI, 1.00-1.38]), with the greatest changes occurring with adjustment for acute physiology, socioeconomic status, and strain. For individual patients, the relative contribution of each domain to mortality risk was: acute physiology (49%), demographics and comorbidities (20%), socioeconomic status (12%), strain (9%), hospital quality (8%), and treatments (3%). CONCLUSION: There is considerable interhospital variation in mortality for critically ill patients with COVID-19, which is mostly explained by hospital-level socioeconomic status, strain, and acute physiologic differences. Individual mortality is driven mostly by patient-level factors.
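The median odds ratio reported above summarizes the random-intercept variance of a mixed-effects logistic model as the median odds ratio between identical patients admitted to a randomly chosen higher- versus lower-risk hospital (Merlo's formulation). The variance value below is back-calculated purely for illustration; it is not taken from the study.

```python
import math
from statistics import NormalDist

def median_odds_ratio(between_hospital_variance):
    """MOR = exp(sqrt(2 * sigma^2) * z_0.75), where sigma^2 is the
    between-hospital (random-intercept) variance on the log-odds scale
    and z_0.75 is the 75th percentile of the standard normal."""
    z75 = NormalDist().inv_cdf(0.75)  # ≈ 0.6745
    return math.exp(math.sqrt(2 * between_hospital_variance) * z75)

print(median_odds_ratio(0.0))   # 1.0: no interhospital variation
# a hypothetical variance of ~0.57 yields an MOR near the unadjusted
# 2.06 reported above
print(median_odds_ratio(0.57))
```

An MOR of 1.0 means hospital assignment carries no excess risk; the drop from roughly 2.06 to 1.22 after adjustment means most of the apparent hospital effect was explained by the measured patient- and hospital-level domains.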
Subject(s)
Algorithms; COVID-19/epidemiology; Critical Illness/therapy; Intensive Care Units/statistics & numerical data; Aged; Comorbidity; Critical Illness/epidemiology; Female; Follow-Up Studies; Hospital Mortality/trends; Humans; Incidence; Male; Middle Aged; Prognosis; Retrospective Studies; Risk Factors; SARS-CoV-2; Survival Rate/trends; United States/epidemiology
ABSTRACT
Extracorporeal membrane oxygenation (ECMO) is a form of life support for cardiac and/or pulmonary failure that poses unique ethical challenges compared with other forms of life support. Ethical challenges with ECMO exist when conventional standards of care apply and are exacerbated during periods of absolute ECMO scarcity, when "crisis standards of care" are instituted. When conventional standards of care apply, we propose that it is ethically permissible to withhold placing patients on ECMO for reasons of technical futility or when patients have terminal, short-term prognoses that are untreatable by ECMO. Under crisis standards of care, it is ethically permissible to broaden exclusionary criteria and also withhold ECMO from patients who have a low likelihood of recovery, to maximize the overall number of lives saved. Unilateral withdrawal of ECMO against a patient's preferences is unethical under conventional standards of care but is ethical under crisis standards of care to increase access to ECMO for others in society. ECMO should only be rationed when true scarcity exists, and allocation protocols should be transparent to the public. When rationing must occur under crisis standards of care, it is imperative that oversight bodies assess the allocation of ECMO for inequities and revise protocols frequently to correct any that are found.
Subject(s)
Extracorporeal Membrane Oxygenation; Humans; Standard of Care
ABSTRACT
Under the new US heart allocation policy, transplant centers listed significantly more candidates at high-priority statuses (Status 1 and 2) with mechanical circulatory support devices than expected. We determined whether this practice change was widespread or concentrated among certain transplant centers. Using data from the Scientific Registry of Transplant Recipients, we used mixed-effects logistic regression to compare the observed listings of adult, heart-alone transplant candidates post-policy (December 2018 to February 2020) with a seasonally matched pre-policy cohort (December 2016 to February 2018). US transplant centers (N = 96) listed a similar number of candidates in each policy period (4472 vs. 4498) but listed significantly more at high-priority status (25.5% vs. 7.0%, p < .001) than expected. Adjusted for candidate characteristics, 91 of 96 (94.8%) centers listed significantly more candidates at high-priority status than expected, with the unexpected increase varying from 4.8% to 50.4% (interquartile range [IQR]: 14.0%-23.3%). Centers in OPOs with the highest Status 1A transplant rates pre-policy were significantly more likely to utilize high-priority statuses under the new policy (OR: 9.73, p = .01). The new heart allocation policy was associated with widespread and significantly variable changes in transplant center practice that may undermine the effectiveness of the new system.
Subject(s)
Heart Transplantation; Tissue and Organ Procurement; Adult; Humans; Policies; Transplant Recipients; Waiting Lists
ABSTRACT
OBJECTIVES: When healthcare systems are overwhelmed, accurate assessments of patients' predicted mortality risks are needed to ensure effective allocation of scarce resources. Organ dysfunction scores can serve this essential role, but their evaluation in this context has so far been limited. In this study, we sought to assess the performance of three organ dysfunction scores in both critically ill adults and children at clinically relevant mortality thresholds and timeframes for resource allocation and to compare it with two published prioritization schemas. DESIGN: Retrospective observational cohort study. SETTING: Three large academic medical centers in the United States. PATIENTS: Critically ill adults and children. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: We calculated the daily Sequential Organ Failure Assessment (SOFA) score in adults and the Pediatric Logistic Organ Dysfunction 2 score and the Pediatric Sequential Organ Failure Assessment (pSOFA) score in children. There were 49,290 adults (11.6% mortality) and 19,983 children (2.5% mortality) included in the analysis. Both the SOFA and pSOFA scores had adequate discrimination across relevant timeframes and adequate distribution across relevant mortality thresholds. Additionally, we found that the only published state prioritization schema that includes pediatric and adult patients had poor alignment of mortality risks, giving adults a systematic advantage over children. CONCLUSIONS: In the largest analysis of organ dysfunction scores in a general population of critically ill adults and children to date, we found that both the SOFA and pSOFA scores had adequate performance across relevant mortality thresholds and timeframes for resource allocation. Published prioritization schemas that include both pediatric and adult patients may put children at a disadvantage. Furthermore, the distribution of patients across mortality-risk strata in the published schemas may not adequately stratify patients for some high-stakes allocation decisions. This information may be useful to bioethicists, healthcare leaders, and policy makers who are developing resource allocation policies for critically ill patients.
Subject(s)
Critical Illness/mortality; Multiple Organ Failure/mortality; Organ Dysfunction Scores; Severity of Illness Index; Adolescent; Adult; Child; Child, Preschool; Cohort Studies; Critical Illness/therapy; Female; Hospital Mortality; Humans; Male; Middle Aged; Multiple Organ Failure/therapy; Outcome Assessment, Health Care; Retrospective Studies; Risk Factors; Time Factors
ABSTRACT
As the COVID-19 pandemic has unfolded across the United States, troubling disparities in mortality have emerged between different racial groups, particularly African Americans and Whites. Media reports, a growing body of COVID-19-related literature, and long-standing knowledge of structural racism and its myriad effects on the African American community provide important lenses for understanding and addressing these disparities. However, troubling gaps in knowledge remain, as does a need to act. Using the best available evidence, we present risk- and place-based recommendations for how to effectively address these disparities in the areas of data collection, COVID-19 exposure and testing, health systems collaboration, human capital repurposing, and scarce resource allocation. Our recommendations are supported by an analysis of relevant bioethical principles and public health practices. Additionally, we provide information on the efforts of Chicago, Illinois' mayoral Racial Equity Rapid Response Team to reduce these disparities in a major urban US setting.
Subject(s)
Black or African American/statistics & numerical data; COVID-19/therapy; Health Status Disparities; Healthcare Disparities/statistics & numerical data; COVID-19/ethnology; Health Services Accessibility/statistics & numerical data; Humans; Quality of Health Care/statistics & numerical data; Racism; Socioeconomic Factors; United States
ABSTRACT
Importance: In the United States, the number of deceased donor hearts available for transplant is limited. As a proxy for medical urgency, the US heart allocation system ranks heart transplant candidates largely according to the supportive therapy prescribed by transplant centers. Objective: To determine if there is a significant association between transplant center and survival benefit in the US heart allocation system. Design, Setting, and Participants: Observational study of 29,199 adult candidates for heart transplant listed on the national transplant registry from January 2006 through December 2015, with follow-up complete through August 2018. Exposures: Transplant center. Main Outcomes and Measures: The survival benefit associated with heart transplant, defined as the difference between survival after heart transplant and waiting list survival without transplant at 5 years. Each transplant center's mean survival benefit was estimated using a mixed-effects proportional hazards model with transplant as a time-dependent covariate, adjusted for year of transplant, donor quality, ischemic time, and candidate status. Results: Of 29,199 candidates (mean age, 52 years; 26% women) on the transplant waiting list at 113 centers, 19,815 (68%) underwent heart transplant. Among heart transplant recipients, 5389 (27%) died or underwent another transplant operation during the study period. Of the 9384 candidates who did not undergo heart transplant, 5669 (60%) died (2644 while on the waiting list and 3025 after being delisted). Estimated 5-year survival was 77% (interquartile range [IQR], 74% to 80%) among transplant recipients and 33% (IQR, 17% to 51%) among those who did not undergo heart transplant, a survival benefit of 44% (IQR, 27% to 59%). Survival benefit ranged from 30% to 55% across centers; 31 centers (27%) had significantly higher survival benefit than the mean and 30 centers (27%) had significantly lower survival benefit than the mean.
Compared with low survival benefit centers, high survival benefit centers performed heart transplant for patients with lower estimated expected waiting list survival without transplant (29% at high survival benefit centers vs 39% at low survival benefit centers; survival difference, -10% [95% CI, -12% to -8.1%]), although the adjusted 5-year survival after transplant was not significantly different between high and low survival benefit centers (77.6% vs 77.1%, respectively; survival difference, 0.5% [95% CI, -1.3% to 2.3%]). Overall, for every 10% decrease in estimated transplant candidate waiting list survival at a given center, there was an increase of 6.2% (95% CI, 5.2% to 7.3%) in the 5-year survival benefit associated with heart transplant. Conclusions and Relevance: In this registry-based study of US heart transplant candidates, transplant center was associated with the survival benefit of transplant. Although the adjusted 5-year survival after transplant was not significantly different between high and low survival benefit centers, compared with centers with survival benefit significantly below the mean, centers with survival benefit significantly above the mean performed heart transplant for recipients who had significantly lower estimated expected 5-year waiting list survival without transplant.
Subject(s)
Heart Transplantation/mortality; Outcome Assessment, Health Care; Adult; Female; Humans; Male; Middle Aged; Patient Acuity; Quality of Health Care; Registries; Resource Allocation; Survival Analysis; United States/epidemiology; Waiting Lists
ABSTRACT
This Viewpoint discusses the unfairness of current CAR T-cell therapy allocation practices and offers alternative methods to more fairly allocate therapy.