ABSTRACT
Rationale: U.S. lung transplant mortality risk models do not account for patients' disease progression as time accrues between mandated clinical parameter updates. Objectives: To investigate the effects of accrued waitlist (WL) time on mortality in lung transplant candidates and recipients beyond those expressed by worsening clinical status and to present a new framework for conceptualizing mortality risk in end-stage lung disease. Methods: Using Scientific Registry of Transplant Recipients data (2015-2020, N = 12,616), we modeled transitions among multiple clinical states over time: WL, posttransplant, and death. We used cause-specific and ordinary Cox regression to estimate trajectories of composite 1-year mortality risk as a function of time from waitlisting to transplantation and quantified the predictive accuracy of these estimates. We compared multistate model-derived candidate rankings against composite allocation score (CAS) rankings. Measurements and Main Results: Predicted 1-year mortality risk increased by >10% by day 30 on the WL for 11.5% of candidates. The multistate model ascribed lower numerical rankings (i.e., higher priority) than CAS for those who died while on the WL (multistate ranking at death: mean, 227; median [interquartile range], 154 [57-334]; CAS ranking at death: mean, 329; median [interquartile range], 162 [11-668]). Patients with interstitial lung disease were more likely than patients with other lung diagnoses to have risk trajectories that increased as a function of time accrued on the WL. Conclusions: Incorporating the effects of time accrued on the WL for lung transplant candidates and recipients into donor lung allocation systems may improve the survival of patients with end-stage lung diseases at the individual and population levels.
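The cause-specific Cox modeling described above can be illustrated with a minimal Python sketch (not the authors' code); the input file and column names (wl_days, died_wl, transplanted, age, o2_lpm) are hypothetical.

```python
# Illustrative sketch only: cause-specific Cox models for the competing waitlist
# outcomes (death on the waitlist vs. transplant). File and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("waitlist_cohort.csv")  # hypothetical registry extract

# Cause-specific hazard of waitlist death: transplant is treated as censoring.
death_df = df.assign(event=(df["died_wl"] == 1).astype(int))
cph_death = CoxPHFitter().fit(
    death_df[["wl_days", "event", "age", "o2_lpm"]],
    duration_col="wl_days", event_col="event")

# Cause-specific hazard of transplant: waitlist death is treated as censoring.
tx_df = df.assign(event=(df["transplanted"] == 1).astype(int))
cph_tx = CoxPHFitter().fit(
    tx_df[["wl_days", "event", "age", "o2_lpm"]],
    duration_col="wl_days", event_col="event")

print(cph_death.summary[["coef", "exp(coef)"]])
print(cph_tx.summary[["coef", "exp(coef)"]])
```

Composite 1-year risk in the multistate framework would additionally combine these cause-specific hazards with a posttransplant survival model, which is omitted here.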
Subjects
Lung Transplantation, Tissue and Organ Procurement, Humans, Waiting Lists, Tissue Donors
ABSTRACT
The importance of waitlist (WL) mortality risk estimates will increase with the adoption of the US Composite Allocation Score (CAS) system. Calibration is rarely assessed in clinical prediction models, yet it is a key factor in determining access to lung transplant. We assessed the calibration of the WL-lung allocation score (LAS)/CAS models and developed alternative models to minimize miscalibration. Scientific Registry of Transplant Recipients data from 2015 to 2020 were used to assess the calibration of the WL model overall and for subgroups (age, sex, diagnosis, and race/ethnicity). Three recalibrated models were developed and compared: (1) a simple recalibration model (SRM), (2) weighted recalibration model 1 (WRM1), and (3) weighted recalibration model 2 (WRM2). The current WL-LAS/CAS model underpredicted risk for 78% of individuals (predicted mortality risk, <42%) and overpredicted risk for 22% of individuals (predicted mortality risk, ≥42%), with divergent results among subgroups. Error measures improved in SRM, WRM1, and WRM2. SRM generally preserved candidate rankings, whereas WRM1 and WRM2 led to changes in ranking by age and diagnosis. Differential miscalibration occurred in the WL-LAS/CAS model and improved with recalibration measures. Further inquiry is needed to develop mortality models in which risk predictions approximate observed data to ensure accurate ranking and timely access to transplant. IMPACT: With changes to the lung transplant allocation system planned for 2023, evaluating the accuracy and precision of the survival models used to rank candidates for lung transplant is important. The WL model underpredicts risk for 78% of US transplant candidates, with an unequal distribution of miscalibration across subgroups that leads to inaccurate ranking of transplant candidates. This work will inform future efforts to improve survival modeling in the US lung transplant allocation system.
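The general idea of simple recalibration can be sketched as a logistic re-fit of intercept and slope on the logit of the original predicted risks; this is an illustrative sketch under assumed inputs, not the SRM/WRM1/WRM2 code.

```python
# Minimal "recalibration in the large and slope" sketch: refit intercept and slope on the
# logit of the original model's predicted 1-year mortality risks. Input files are hypothetical.
import numpy as np
import statsmodels.api as sm

p_orig = np.clip(np.loadtxt("predicted_risk.txt"), 1e-6, 1 - 1e-6)  # original predictions
died_1yr = np.loadtxt("observed_death.txt")                         # observed 0/1 outcomes

logit_p = np.log(p_orig / (1 - p_orig))
X = sm.add_constant(logit_p)
recal = sm.GLM(died_1yr, X, family=sm.families.Binomial()).fit()

p_recal = recal.predict(X)   # recalibrated risks
print(recal.params)          # intercept near 0 and slope near 1 indicate good calibration
```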
Subjects
Lung Transplantation, Tissue and Organ Procurement, Humans, Waiting Lists, Transplant Recipients, Ethnicity, Lung
ABSTRACT
BACKGROUND: We describe and validate a new simulation framework addressing important limitations of the Simulated Allocation Models (SAMs) long used to project population effects of transplant policy changes. METHODS: We developed the Computational Open-source Model for Evaluating Transplantation (COMET), an agent-based model that simulates interactions of individual donors and candidates over time to project population outcomes. COMET functionality is organized into interacting modules. Donors and candidates are synthetically generated using data-driven probability models that can be adapted to account for ongoing or hypothetical donor/candidate population trends and evolving disease management. To validate the first implementation of COMET, COMET-Lung, we attempted to reproduce lung transplant outcomes for U.S. adults in 2018-2019 and in the 6 months following adoption of the Composite Allocation Score (CAS) for lung transplant. RESULTS: Simulated (median [interquartile range, IQR]) vs observed outcomes for 2018-2019 were: 0.162 [0.157, 0.167] vs 0.170 waitlist deaths per waitlist year; 1.25 [1.23, 1.28] vs 1.26 transplants per waitlist year; 0.115 [0.112, 0.118] vs 0.113 post-transplant deaths per patient year; and 202 [102, 377] vs 165 nautical miles travel distance. The model accurately predicted the observed precipitous decrease in transplants received by type O lung candidates in the 6 months following CAS implementation. CONCLUSIONS: COMET-Lung closely reproduced most observed outcomes. The use of synthetic populations in the COMET framework paves the way for examining possible transplant policy and clinical practice changes in populations reflecting realistic future states. Its flexible, modular nature can accelerate development of features to address specific research or policy questions across multiple organs.
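A toy agent-based loop conveys the structure described above (synthetic agents interacting over simulated time); all distributions, scores, and rates below are placeholders rather than COMET-Lung parameters.

```python
# Toy agent-based allocation loop; all rates, scores, and donor arrival counts are
# placeholders, not COMET-Lung parameters.
import random

random.seed(0)
candidates = [{"id": i, "score": random.uniform(0, 100), "alive": True, "transplanted": False}
              for i in range(500)]

waitlist_deaths = transplants = 0
for day in range(365):
    # Daily waitlist mortality, increasing with the placeholder severity score.
    for c in candidates:
        if c["alive"] and not c["transplanted"] and random.random() < 0.0002 * (1 + c["score"] / 50):
            c["alive"] = False
            waitlist_deaths += 1
    # Donors arrive each day; each organ goes to the highest-scoring active candidate.
    for _ in range(random.randint(0, 3)):
        active = [c for c in candidates if c["alive"] and not c["transplanted"]]
        if active:
            best = max(active, key=lambda c: c["score"])
            best["transplanted"] = True
            transplants += 1

print(f"waitlist deaths: {waitlist_deaths}, transplants: {transplants}")
```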
Subjects
Lung Transplantation, Tissue and Organ Procurement, Waiting Lists, Humans, Computer Simulation, United States, Male, Middle Aged, Adult, Tissue Donors, Female
ABSTRACT
BACKGROUND: The lung Composite Allocation Score (CAS) accounts separately for biological disadvantages stemming from candidate blood type and height using consensus-derived heuristics, which do not reflect the true supply of compatible organs available to candidates with specific combinations of blood type and height. Here, we develop an alternative CAS biological-disadvantages subscore using a novel measure of donor supply. METHODS: Using Scientific Registry of Transplant Recipients data from February 19, 2015, to September 1, 2021, we modeled the daily distance-adjusted supply of compatible donors as a function of candidate blood type, height, and diagnosis group using Poisson rate regression and applied the model to create a 10-point supply-based subscore. Substituting this subscore for the 10 total points allocated to blood type and height in CAS created a "Supply-Adjusted CAS." We simulated population outcomes under Supply-Adjusted CAS, original CAS (March 2023), and "ABO-Modified" CAS (September 2023). RESULTS: The supply-based subscore was more responsive to variations in candidate blood type, height, and diagnosis group than the corresponding CAS or ABO-Modified CAS subscores. In simulation, waitlist mortality improved from 13.95 deaths per 100 waitlist years under CAS and 14.12 under ABO-Modified CAS to 13.09 under Supply-Adjusted CAS. Transplant rates improved from 121.6 and 126.2 under CAS and ABO-Modified CAS, respectively, to 128.8 under Supply-Adjusted CAS. Height disparities improved substantially, while blood type disparities grew slightly relative to ABO-Modified CAS. CONCLUSIONS: Supply-Adjusted CAS may improve lung transplant population outcomes overall while providing a more empirically based method to address equity.
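A Poisson rate regression of compatible-donor counts, in the spirit of the Methods above, might be sketched as follows; the column names, exposure offset, and 0-10 rescaling are assumptions for illustration only.

```python
# Sketch of a Poisson rate regression for daily compatible-donor supply; column names,
# the exposure offset, and the rescaling step are assumptions, not the paper's code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("compatible_donor_counts.csv")  # hypothetical aggregated data

model = smf.glm(
    "compatible_donors ~ C(blood_type) + height_cm + C(diag_group)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["days_at_risk"]),
).fit()

# Rescale the predicted daily supply rate onto a 0-10 scale as a stand-in subscore;
# the paper's exact mapping (and its direction) is not reproduced here.
rate = model.predict(df)
df["supply_subscore"] = 10 * (rate - rate.min()) / (rate.max() - rate.min())
print(df[["blood_type", "height_cm", "diag_group", "supply_subscore"]].head())
```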
ABSTRACT
Computer simulation has played a pivotal role in analyzing alternative organ allocation strategies in transplantation. The current approach to producing cohorts of organ donors and candidates for individual-level simulation requires directly re-sampling retrospective data from a transplant registry. These historical data may reflect outmoded policies and practices as well as systemic inequities in candidate listing, limiting the contemporary applicability of simulation results. We describe the development of an alternative approach for generating synthetic donors and candidates using hierarchical Bayesian network probability models. We developed two Bayesian networks to model dependencies among 10 donor and 36 candidate characteristics relevant to waitlist survival, donor-candidate matching, and post-transplant survival. We estimated parameters for each model using Scientific Registry of Transplant Recipients (SRTR) data. Across the 100 synthetic donor populations and 100 synthetic candidate populations generated, the proportion of each categorical donor or candidate attribute fell within one percentage point of the observed value, and the interquartile range (IQR) of each continuous variable contained the corresponding observed SRTR median. Comparisons of synthetic to observed stratified distributions demonstrated the ability of the method to capture complex joint variability among multiple characteristics. We also demonstrated how changing two upstream population parameters can exert cascading effects on multiple relevant clinical variables in a synthetic population. Generating synthetic donor and candidate populations in transplant simulation may help overcome critical limitations related to the re-sampling of historical data, allowing developers and decision makers to customize the parameters of these populations to reflect realistic or hypothetical future states.
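Ancestral (hierarchical) sampling from a Bayesian network, the core of the synthetic-population approach above, can be illustrated with a two-node toy network; the variables and probabilities are invented, whereas the actual models span 10 donor and 36 candidate characteristics estimated from SRTR data.

```python
# Toy ancestral sampling from a two-node Bayesian network; the structure and all
# probabilities are invented for illustration.
import random

random.seed(1)

diag_probs = {"A": 0.30, "B": 0.10, "C": 0.15, "D": 0.45}   # P(diagnosis group)
o2_given_diag = {"A": 0.4, "B": 0.5, "C": 0.6, "D": 0.8}    # P(continuous O2 | diagnosis)

def sample_candidate():
    diag = random.choices(list(diag_probs), weights=list(diag_probs.values()))[0]
    return {"diag_group": diag, "continuous_o2": random.random() < o2_given_diag[diag]}

synthetic = [sample_candidate() for _ in range(10_000)]
frac_d = sum(c["diag_group"] == "D" for c in synthetic) / len(synthetic)
print(f"proportion in diagnosis group D: {frac_d:.3f}")  # should approximate 0.45
```

The same cascading behavior noted in the abstract follows naturally: changing an upstream probability table shifts the downstream distributions of every variable conditioned on it.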
Subjects
Tissue Donors, Tissue and Organ Procurement, Humans, Bayes Theorem, Retrospective Studies, Computer Simulation, Registries, Waiting Lists
ABSTRACT
Importance: Hypertension in middle-aged adults (35-50 years) is associated with poorer health outcomes in late life. Understanding how hypertension varies by race and ethnicity across levels of neighborhood disadvantage may allow for better characterization of persistent disparities. Objective: To evaluate spatial patterns of hypertension diagnosis and treatment by neighborhood socioeconomic position and racial and ethnic composition. Design, Setting, and Participants: In this cross-sectional study of middle-aged adults in Cuyahoga County, Ohio, who had a primary care encounter in 2019, geocoded electronic health record data were linked to the area deprivation index (ADI), a neighborhood disadvantage measure, at the US Census Block Group level (ie, neighborhood). Neighborhoods were stratified by ADI quintiles, with the highest quintile indicating the most disadvantage. Data were analyzed between August 7, 2023, and June 1, 2024. Exposure: Essential hypertension. Main Outcomes and Measures: The primary outcome was a clinician diagnosis of essential hypertension. Spatial analysis was used to characterize neighborhood-level patterns of hypertension prevalence and treatment. Interaction analysis was used to compare hypertension prevalence by racial and ethnic group within similar ADI quintiles. Results: A total of 56,387 adults (median [IQR] age, 43.1 [39.1-46.9] years; 59.8% female; 3.4% Asian, 31.1% Black, 5.5% Hispanic, and 60.0% White) across 1157 neighborhoods were analyzed. A gradient of hypertension prevalence across ADI quintiles was observed, with the highest vs lowest ADI quintile neighborhoods having a higher hypertension rate (50.7% vs 25.5%) and a lower treatment rate (61.3% vs 64.5%). Of the 315 neighborhoods with predominantly Black (>75%) patient populations, 200 (63%) had a hypertension rate greater than 35% combined with a treatment rate of less than 70%; only 31 of the 263 neighborhoods (11.8%) with patient populations 5% or less Black met this criterion. Compared with a spatial model without covariates, inclusion of ADI and the percentage of Black patients accounted for 91% of the variation in hypertension diagnosis prevalence among men and 98% among women. Men had a higher prevalence of hypertension than women across race and ADI quintiles, but the association of ADI with hypertension risk was stronger in women. Sex differences in prevalence were smallest between Black men and women, particularly in the highest ADI quintile (1689 [60.0%] and 2592 [56.0%], respectively). Conclusions and Relevance: These findings show an association between neighborhood deprivation and hypertension prevalence, with disparities observed particularly among Black patients, emphasizing a need for structural interventions to improve community health.
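The interaction analysis described above (hypertension by ADI quintile and race/ethnicity) could, in outline, look like the following; the data file and column names are hypothetical, and the study's spatial models are not reproduced.

```python
# Sketch of an ADI-by-race/ethnicity interaction model for hypertension diagnosis;
# the data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

ehr = pd.read_csv("geocoded_ehr.csv")  # hypothetical geocoded EHR extract

fit = smf.logit(
    "hypertension ~ C(adi_quintile) * C(race_ethnicity) + age + C(sex)",
    data=ehr,
).fit()
print(fit.summary())
```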
Subjects
Health Status Disparities, Healthcare Disparities, Hypertension, Neighborhood Characteristics, Adult, Female, Humans, Male, Middle Aged, Cross-Sectional Studies, Ethnicity, Healthcare Disparities/statistics & numerical data, Healthcare Disparities/ethnology, Hypertension/epidemiology, Hypertension/ethnology, Ohio/epidemiology, Prevalence, Racial Groups
ABSTRACT
BACKGROUND: Predicting the risk of waitlist mortality and the subsequent classification of lung transplant candidates has been difficult because of the interrelatedness of risk factors, differential risk across populations, and changes in these relationships over time. We developed a clinically intuitive indexing system to simplify mortality risk assessment. METHODS: Scientific Registry of Transplant Recipients data from February 19, 2015, to May 26, 2020 (n = 13,726) were used to estimate 3 constructs. Airway and oxygen function indices were estimated using confirmatory factor analysis, and hierarchical clustering was used to derive respiratory support clusters. Cox proportional hazards regression was used to characterize event-free waitlist survival by the 3 constructs, age, sex, and diagnosis group. Model performance was compared with the Lung Allocation Score/Composite Allocation Score (LAS/CAS). RESULTS: The airway and oxygen function indices had substantive factor loadings for forced expiratory volume (0.86), forced vital capacity (0.64), and partial pressure of carbon dioxide (0.56), and for PO2/fraction of inspired oxygen (0.83), partial pressure of oxygen (0.59), and mean pulmonary artery pressure (0.30), respectively. Four respiratory support clusters (C1: as-needed O2; C2: continuous O2; C3: continuous O2/positive pressure ventilation (PPV); C4: PPV + extracorporeal membrane oxygenation) were identified. The constructs were used to identify patient profiles. The model area under the receiver operating characteristic curve at 4 weeks was 0.85 [0.84, 0.87], compared with 0.92 [0.91, 0.94] for the LAS. Risk predictions were relatively insensitive to the airway and oxygen function indices in C1 and C4 but varied across C2 and C3. CONCLUSIONS: Reducing the dimensionality of waitlist mortality risk offers an opportunity to identify clinical phenotypes that are more nuanced and thus more interpretable than the current risk assessment provided by the LAS/CAS models.
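The two model-building steps named above, hierarchical clustering for the respiratory support clusters and factor analysis for the airway and oxygen function indices, can be sketched as follows; note that scikit-learn provides exploratory rather than confirmatory factor analysis, so this is a simplified stand-in with hypothetical column names.

```python
# Sketch: Ward hierarchical clustering for respiratory support clusters, plus an
# exploratory factor analysis as a simplified stand-in for the confirmatory models.
# Column names are hypothetical.
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("candidate_support.csv")  # hypothetical registry extract

support = StandardScaler().fit_transform(df[["o2_flow_lpm", "ppv_hours", "ecmo"]])
df["support_cluster"] = fcluster(linkage(support, method="ward"), t=4, criterion="maxclust")

lung_vars = ["fev1", "fvc", "pco2", "po2_fio2", "pao2", "mpap"]
scores = StandardScaler().fit_transform(df[lung_vars])
fa = FactorAnalysis(n_components=2).fit(scores)
print(pd.DataFrame(fa.components_, columns=lung_vars))  # loadings on the two factors
```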
Subjects
Lung Transplantation, Humans, Prognosis, Retrospective Studies, Lung Transplantation/methods, Risk Factors, Oxygen, Waiting Lists
ABSTRACT
BACKGROUND: As broader geographic sharing is implemented in lung transplant allocation through the Composite Allocation Score (CAS) system, models predicting waitlist and posttransplant (PT) survival will become more important in determining access to organs. RESEARCH QUESTION: How well do CAS survival models perform, and can discrimination performance be improved with alternative statistical models or machine learning approaches? STUDY DESIGN AND METHODS: Scientific Registry of Transplant Recipients (SRTR) data from 2015-2020 were used to build seven waitlist (WL) models, and data from 2010-2020 were used to build similar PT models. The models were (I) the current lung allocation score (LAS)/CAS model; (II) a re-estimated WL-LAS/CAS model; (III) model II incorporating nonlinear relationships; (IV) a random survival forests model; (V) a logistic model; (VI) a linear discriminant analysis model; and (VII) a gradient-boosted tree model. Discrimination performance was evaluated at 1, 3, and 6 months on the waiting list and at 1, 3, and 5 years PT. Area under the curve (AUC) values were estimated across subgroups. RESULTS: WL model performance was similar across models, with the greatest discrimination in the baseline cohort (AUC, 0.93); discrimination declined to 0.87-0.89 for 3-month and 0.84-0.85 for 6-month predictions and diminished further for residual cohorts. Discrimination performance for PT models ranged from AUC 0.58 to 0.61 and remained stable with increasing forecasting times but was slightly worse for residual cohorts. WL and PT variability in AUC was greatest for individuals with Medicaid insurance. INTERPRETATION: Use of alternative modeling strategies and contemporary cohorts did not improve the performance of models determining access to lung transplant.
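A simplified version of the discrimination comparison described above (a regression baseline vs. a gradient-boosted model, compared by AUC) is sketched below; the features and fixed-horizon binary outcome are assumptions, and the study's censoring-aware survival framing is not reproduced.

```python
# Sketch: logistic vs. gradient-boosted classifiers for death within a fixed waitlist
# horizon, compared by ROC AUC. File, features, and outcome column are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("wl_features.csv")  # hypothetical extract with numeric features
X, y = df.drop(columns=["died_90d"]), df["died_90d"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier())]:
    model.fit(X_tr, y_tr)
    print(f"{name}: AUC = {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.3f}")
```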
Subjects
Lung Transplantation, Tissue and Organ Procurement, United States, Humans, Statistical Models, Waiting Lists, Lung, Retrospective Studies
ABSTRACT
Importance: A recent National Academies of Sciences, Engineering, and Medicine study found that transplant outcomes varied greatly based on multiple factors, including race, ethnicity, and geographic location. The report proposed a number of recommendations, including studying opportunities to improve equity in organ allocation. Objective: To evaluate the role of donor and recipient socioeconomic position and region as a mediator of observed racial and ethnic differences in posttransplant survival. Design, Setting, and Participants: This cohort study included lung transplant donors and recipients with race and ethnicity information and a zip code tabulation area-defined area deprivation index (ADI) from September 1, 2011, to September 1, 2021, whose data were in the US transplant registry. Data were analyzed from June to December 2022. Exposures: Race, neighborhood disadvantage, and region of donors and recipients. Main Outcomes and Measures: Univariable and multivariable Cox proportional hazards regression were used to study the association of donor and recipient race and ADI with posttransplant survival. Kaplan-Meier estimation was performed by donor and recipient ADI. Generalized linear models by race were fit, and mediation analysis was performed. Bayesian conditional autoregressive Poisson rate models (1, state-level spatial random effects; 2, model 1 with fixed effects for race and ethnicity; 3, model 2 excluding region; and 4, model 1 with fixed effects for US region) were used to characterize variation in posttransplant mortality and were compared using ratios of mortality rates to the national average. Results: Overall, 19,504 lung transplant donors (median [IQR] age, 33 [23-46] years; 3117 [16.0%] Hispanic individuals, 3667 [18.8%] non-Hispanic Black individuals, and 11,935 [61.2%] non-Hispanic White individuals) and recipients (median [IQR] age, 60 [51-66] years; 1716 [8.8%] Hispanic individuals, 1861 [9.5%] non-Hispanic Black individuals, and 15,375 [78.8%] non-Hispanic White individuals) were included. ADI did not mediate the difference in posttransplant survival between non-Hispanic Black and non-Hispanic White recipients; it mediated only 4.1% of the survival difference between non-Hispanic Black and Hispanic recipients. Spatial analysis revealed that the increased risk of posttransplant death among non-Hispanic Black recipients may be associated with region of residence. Conclusions and Relevance: In this cohort study of lung transplant donors and recipients, socioeconomic position and region of residence did not explain most of the difference in posttransplant outcomes among racial and ethnic groups, which may be due to the highly selected nature of the pretransplant population. Further research should evaluate other potentially mediating effects contributing to inequity in posttransplant survival.
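A highly simplified mediation sketch, using linear probability models rather than the study's Cox and Bayesian spatial models, illustrates the product-of-coefficients logic behind the "proportion mediated" estimates reported above; all variables are hypothetical.

```python
# Simplified product-of-coefficients mediation sketch with linear probability models;
# all variables are hypothetical and the study's Cox/Bayesian spatial models are not used.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("recipients.csv")  # hypothetical extract; 'group' coded 0/1

a_fit = smf.ols("adi ~ group", data=df).fit()              # path a: group -> mediator (ADI)
b_fit = smf.ols("died_1yr ~ group + adi", data=df).fit()   # path b and direct effect

indirect = a_fit.params["group"] * b_fit.params["adi"]
direct = b_fit.params["group"]
print(f"proportion mediated ~ {indirect / (indirect + direct):.1%}")
```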