ABSTRACT
BACKGROUND: Gastrointestinal bleeding (GIB) is a clinical challenge in kidney failure. The INSPIRE group assessed whether machine learning could estimate a hemodialysis (HD) patient's 180-day risk of GIB hospitalization. METHODS: An eXtreme Gradient Boosting (XGBoost) model and a logistic regression model were developed using a United States HD dataset (2017-2020). Patient data were randomly split (50% training, 30% validation, and 20% testing). HD treatments ≤ 180 days before a GIB hospitalization were classified as positive observations; all others were negative. The models considered 1,303 exposures/covariates. Performance was measured on the unseen testing data. RESULTS: The incidence of 180-day GIB hospitalization was 1.18% in the HD population (n = 451,579) and 1.12% in the testing dataset (n = 38,853). XGBoost showed an area under the receiver operating characteristic curve (AUROC) of 0.74 (95% confidence interval (CI) 0.72, 0.76) versus 0.68 (95% CI 0.66, 0.71) for logistic regression. Sensitivity and specificity were 65.3% (60.9, 69.7) and 68.0% (67.6, 68.5) for XGBoost versus 68.9% (64.7, 73.0) and 57.0% (56.5, 57.5) for logistic regression, respectively. Exposure associations were consistent across models for many factors. Both models showed that GIB hospitalization risk was associated with older age, disturbances in anemia/iron indices, recent all-cause hospitalizations, and bone mineral metabolism markers. XGBoost assigned high predictive importance to serum 25-hydroxy (25OH) vitamin D levels, while logistic regression assigned high importance to parathyroid hormone (PTH) levels. CONCLUSIONS: Machine learning can be considered for early detection of GIB event risk in HD. XGBoost outperformed logistic regression, yet both appear suitable. External and prospective validation of these models is needed. The association between bone mineral metabolism markers and GIB events was unexpected and warrants investigation. TRIAL REGISTRATION: This retrospective analysis of real-world data was not a prospective clinical trial and registration is not applicable.
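To make the modeling setup concrete, here is a minimal Python sketch of the pipeline this abstract describes: a 50/30/20 train/validation/test split, XGBoost versus logistic regression, an operating threshold chosen on the validation set, and AUROC with sensitivity/specificity measured on the held-out test set. The data, class balance, and feature count are synthetic stand-ins, not the study's 1,303 real covariates.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for the HD treatment observations (rare positive class).
X, y = make_classification(n_samples=50_000, n_features=50, weights=[0.988],
                           random_state=0)

# 50% training, 30% validation, 20% testing.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.4, stratify=y_rest, random_state=0)

for name, model in [
    ("XGBoost", XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)),
    ("Logistic regression", LogisticRegression(max_iter=1000)),
]:
    model.fit(X_train, y_train)
    # Choose an operating threshold on the validation set (Youden's J).
    fpr, tpr, thr = roc_curve(y_val, model.predict_proba(X_val)[:, 1])
    threshold = thr[np.argmax(tpr - fpr)]
    p_test = model.predict_proba(X_test)[:, 1]
    sens = np.mean(p_test[y_test == 1] >= threshold)
    spec = np.mean(p_test[y_test == 0] < threshold)
    print(f"{name}: AUROC={roc_auc_score(y_test, p_test):.2f} "
          f"sensitivity={sens:.1%} specificity={spec:.1%}")
```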
Subjects
Gastrointestinal Hemorrhage , Hospitalization , Machine Learning , Renal Dialysis , Humans , Renal Dialysis/adverse effects , Gastrointestinal Hemorrhage/epidemiology , Gastrointestinal Hemorrhage/etiology , Gastrointestinal Hemorrhage/blood , Male , Female , Middle Aged , Aged , Risk Assessment/methods , Kidney Failure, Chronic/therapy , Kidney Failure, Chronic/epidemiology , Kidney Failure, Chronic/blood , Logistic Models , Risk Factors , Incidence
ABSTRACT
INTRODUCTION: The management of anemia in chronic kidney disease (CKD-An) presents significant challenges for nephrologists due to variable responsiveness to erythropoietin-stimulating agents (ESAs), hemoglobin (Hb) cycling, and multiple clinical factors affecting erythropoiesis. The Anemia Control Model (ACM) is a decision support system designed to personalize anemia treatment, which has shown improvements in achieving Hb targets, reducing ESA doses, and maintaining Hb stability. This study aimed to evaluate the association of ACM-guided anemia management with hospitalizations and survival in a large cohort of hemodialysis patients. METHODS: This multi-center, retrospective cohort study evaluated adult hemodialysis patients within the European Fresenius Medical Care NephroCare network from 2014 to 2019. Patients treated according to ACM recommendations were compared to those from centers without ACM. Data on demographics, comorbidities, and dialysis treatment were used to compute a propensity score estimating the likelihood of receiving ACM-guided care. The primary endpoint was hospitalizations during follow-up; the secondary endpoint was survival. A 1:1 propensity score-matched design was used to minimize confounding bias. RESULTS: A total of 20,209 eligible patients were considered (reference group: 17,101; ACM adherent group: 3,108). Before matching, the mean age was 65.3 ± 14.5 years, with 59.2% men. Propensity score matching resulted in two groups of 1,950 patients each. Matched ACM adherent and non-ACM patients showed negligible differences in baseline characteristics. Hospitalization rates were lower in the ACM group both before matching (71.3 vs. 82.6 per 100 person-years, p < 0.001) and after matching (74.3 vs. 86.7 per 100 person-years, p < 0.001). During follow-up, 385 patients died, with no significant survival benefit for ACM-guided care (hazard ratio = 0.93; p = 0.51). CONCLUSIONS: ACM-guided anemia management was associated with a significant reduction in hospitalization risk among hemodialysis patients. These results further support the utility of ACM as a decision-support tool enhancing anemia management in clinical practice.
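As an illustration of the matching design, here is a minimal Python sketch of 1:1 propensity score matching: a logistic model estimates each patient's probability of receiving ACM-guided care from baseline covariates, and each treated patient is then greedily paired with the nearest unmatched control. The synthetic data, the covariate set, and the absence of a caliper are simplifying assumptions, not the study's actual specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 5))                              # baseline covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # ACM-guided care flag

# Propensity score: estimated probability of receiving ACM-guided care.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

treated_idx = np.flatnonzero(treated == 1)
available = set(np.flatnonzero(treated == 0))
pairs = []
# Greedy 1:1 nearest-neighbor matching on the propensity score (no caliper).
for i in treated_idx:
    remaining = np.fromiter(available, dtype=int)
    j = remaining[np.argmin(np.abs(ps[remaining] - ps[i]))]
    pairs.append((i, j))
    available.remove(j)

print(f"Matched {len(pairs)} treated/control pairs")
```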
ABSTRACT
COVID-19 has a higher rate of morbidity and mortality among dialysis patients than in the general population. Identifying infected patients early with the support of predictive models helps dialysis centers implement concerted procedures (e.g., temperature screenings, universal masking, isolation treatments) to control the spread of SARS-CoV-2 and mitigate outbreaks. We collected data from multiple sources, including demographics, clinical, treatment, laboratory, vaccination, socioeconomic status, and COVID-19 surveillance. Previous early-prediction models, such as logistic regression, SVM, and XGBoost, required sophisticated feature engineering and left room for improved prediction performance. We created deep learning models, including Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN), to predict SARS-CoV-2 infections during incubation. Our study shows that deep learning models with minimal feature engineering can identify infected patients more accurately than previously built models. Our Long Short-Term Memory (LSTM) model consistently performed well, with an AUC exceeding 0.80, peaking at 0.91 in August 2021. The CNN model also demonstrated strong results, with an AUC above 0.75. Both models outperformed previous best XGBoost models by over 0.10 in AUC. Prediction accuracy declined as the pandemic evolved, dropping to approximately 0.75 between September 2021 and January 2022. At a fixed 20% false-positive rate, our LSTM and CNN models identified 66% and 64% of positive cases, respectively, significantly outperforming XGBoost models at 42%. We also identified key features for dialysis patients by calculating the gradient of the output with respect to the input features. By closely monitoring these factors, dialysis patients can receive earlier diagnoses and care, leading to less severe outcomes. Our research highlights the effectiveness of deep neural networks in analyzing longitudinal data, especially in predicting COVID-19 infections during the crucial incubation period. These deep network approaches surpass traditional methods relying on aggregated variable means, significantly improving the accurate identification of SARS-CoV-2 infections.
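A minimal Keras sketch of the kind of model this abstract describes: an LSTM classifier over per-patient sequences of treatment-level features, with feature importance taken as the gradient of the predicted probability with respect to the inputs. The shapes, training schedule, and random data are illustrative assumptions only, not the study's architecture or features.

```python
import numpy as np
import tensorflow as tf

# Synthetic sequences: one feature vector per treatment day, per patient.
n_patients, n_timesteps, n_features = 1_000, 14, 30
X = np.random.randn(n_patients, n_timesteps, n_features).astype("float32")
y = np.random.binomial(1, 0.1, size=n_patients).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_timesteps, n_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

# Gradient of the predicted probability w.r.t. the inputs, averaged over
# patients and timesteps, as a crude per-feature importance score.
x = tf.constant(X[:256])
with tf.GradientTape() as tape:
    tape.watch(x)
    p = model(x)
grads = tape.gradient(p, x)                    # (batch, time, feature)
importance = tf.reduce_mean(tf.abs(grads), axis=[0, 1])
print(importance.numpy().round(4))
```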
Subjects
COVID-19 , Deep Learning , Neural Networks, Computer , Renal Dialysis , SARS-CoV-2 , Humans , COVID-19/epidemiology , COVID-19/diagnosis , SARS-CoV-2/isolation & purification , Male , Female , Middle Aged , Aged
ABSTRACT
Some patients with COVID-19 show changes in signs and symptoms, such as temperature and oxygen saturation, days before testing positive for SARS-CoV-2, while others remain asymptomatic. It is important to identify these subgroups and to understand what biological and clinical predictors are related to them. This information provides insight into how the immune system may respond differently to infection and can further be used to identify infected individuals. We propose a flexible nonparametric mixed-effects mixture model that identifies risk factors and classifies patients with biological changes. We model the latent probability of biological changes using a logistic regression model and the trajectories within latent groups using smoothing splines. We develop an EM algorithm that maximizes the penalized likelihood to estimate all parameters and mean functions. We evaluate our methods by simulation and apply the proposed model to investigate changes in temperature in a cohort of COVID-19-infected hemodialysis patients.
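A heavily simplified Python sketch of the EM idea: a two-group mixture of temperature trajectories in which the latent group probability follows a logistic model on covariates and each group's mean curve is a B-spline fit by responsibility-weighted least squares. The paper's full model also includes random effects and penalized smoothing; both are omitted here, and all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(1)
n, T = 300, 20
t = np.linspace(0, 1, T)
w = rng.normal(size=(n, 2))                           # baseline covariates
z = rng.binomial(1, 1 / (1 + np.exp(-2 * w[:, 0])))   # true latent group
Y = np.where(z[:, None] == 1, np.sin(3 * t), 0.0) + rng.normal(0, 0.3, (n, T))

# Cubic B-spline design matrix over the time grid.
B = SplineTransformer(degree=3, n_knots=5).fit_transform(t[:, None])
resp = rng.uniform(size=n)                            # initial responsibilities

for _ in range(50):
    coefs, sigmas, lls = [], [], []
    for r in (resp, 1.0 - resp):
        # M-step: responsibility-weighted spline fit of the group mean curve.
        sw = np.sqrt(np.repeat(r, T) + 1e-12)
        Bfull = np.tile(B, (n, 1))
        c = np.linalg.lstsq(Bfull * sw[:, None], Y.ravel() * sw, rcond=None)[0]
        resid = Y - (B @ c)
        sigma = np.sqrt(np.average(resid.ravel() ** 2,
                                   weights=np.repeat(r, T)) + 1e-9)
        coefs.append(c); sigmas.append(sigma)
        lls.append(-0.5 * np.sum((resid / sigma) ** 2, axis=1) - T * np.log(sigma))
    # M-step: weighted logistic fit of the latent group probability.
    lr = LogisticRegression().fit(np.vstack([w, w]),
                                  np.r_[np.ones(n), np.zeros(n)],
                                  sample_weight=np.r_[resp, 1 - resp])
    pi = lr.predict_proba(w)[:, 1]
    # E-step: posterior responsibilities via a stable log-sum-exp.
    m = np.maximum(lls[0], lls[1])
    num = pi * np.exp(lls[0] - m)
    resp = num / (num + (1 - pi) * np.exp(lls[1] - m))

agree = max(np.mean((resp > 0.5) == (z == 1)), np.mean((resp > 0.5) == (z == 0)))
print(f"latent-group recovery (label-switching aware): {agree:.2%}")
```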
ABSTRACT
Importance: The consequences of low levels of environmental lead exposure, as found commonly in US household water, have not been established. Objective: To examine whether commonly encountered levels of lead in household water are associated with hematologic toxicity among individuals with advanced kidney disease, a group known to have disproportionate susceptibility to environmental toxicants. Design, Setting, and Participants: Cross-sectional analysis of household water lead concentrations and hematologic outcomes was performed among patients beginning dialysis at a Fresenius Medical Care outpatient facility between January 1, 2017, and December 20, 2021. Data analysis was performed from April 1 to August 15, 2023. Exposure: Concentrations of lead in household water were examined in categorical proportions of the Environmental Protection Agency's allowable threshold (15 µg/L) and continuously. Main Outcomes and Measures: Hematologic toxic effects were defined by monthly erythropoiesis-stimulating agent (ESA) dosing during the first 90 days of incident kidney failure care and examined as 3 primary outcomes: the proportion receiving maximum or higher dosing; the ESA dose examined continuously; and a resistance index normalizing dose to body weight and hemoglobin concentration. Secondarily, hemoglobin concentrations for patients with data prior to kidney failure onset were examined, overall and among those with concurrent iron deficiency, which is thought to increase gastrointestinal absorption of ingested lead. Results: Among 6404 patients with incident kidney failure (male, 4182 [65%]; mean [SD] age, 57 [14] years) followed up for the first 90 days of dialysis therapy, 12% (n = 742) had measurable lead in household drinking water. A higher category of household lead contamination was associated with a 15% (odds ratio [OR], 1.15 [95% CI, 1.04-1.27]) higher risk of maximum monthly ESA dosing, a 4.5 (95% CI, 0.8-8.2) µg higher monthly ESA dose, and a 0.48% (95% CI, 0.002%-0.96%) higher monthly resistance index. Among patients with pre-kidney failure hemoglobin measures (n = 2648), a higher household lead categorization was associated with a 0.12 (95% CI, -0.23 to -0.002) g/dL lower hemoglobin concentration, particularly among those with concurrent iron deficiency (multiplicative interaction, P = .07), among whom hemoglobin concentrations were 0.25 (95% CI, -0.47 to -0.04) g/dL lower. Conclusion: The findings of this study suggest that levels of lead found commonly in US drinking water may be associated with lead poisoning among susceptible individuals.
Subjects
Lead , Renal Insufficiency, Chronic , Humans , Male , Female , Cross-Sectional Studies , Middle Aged , Lead/blood , Renal Insufficiency, Chronic/epidemiology , Aged , Environmental Exposure/adverse effects , Hematinics/administration & dosage , Hematinics/adverse effects , Water Pollutants, Chemical/analysis , Water Pollutants, Chemical/adverse effects , Renal Dialysis
ABSTRACT
INTRODUCTION: The rapid advancement of artificial intelligence and big data analytics, including descriptive, diagnostic, predictive, and prescriptive analytics, has the potential to revolutionize many areas of medicine, including nephrology and dialysis. Artificial intelligence and big data analytics can be used to analyze large amounts of patient medical records, including laboratory results and imaging studies, to improve the accuracy of diagnosis, enhance early detection, identify patterns and trends, and personalize treatment plans for patients with kidney disease. Additionally, artificial intelligence and big data analytics can be used to identify patients who are not receiving adequate care, highlighting care inefficiencies within the dialysis provider, optimizing patient outcomes, reducing healthcare costs, and consequently creating value for all stakeholders involved. OBJECTIVES: We present the results of a comprehensive survey exploring the attitudes of European physicians from eight countries working within a major hemodialysis network (Fresenius Medical Care NephroCare) toward the application of artificial intelligence in clinical practice. METHODS: An electronic survey on the implementation of artificial intelligence in hemodialysis clinics was distributed to 1,067 physicians. Of those invited, 404 (37.9%) agreed to participate. RESULTS: The survey showed that a substantial proportion of respondents believe that artificial intelligence has the potential to support physicians in reducing medical malpractice or mistakes. CONCLUSION: While artificial intelligence's potential benefits are recognized in reducing medical errors and improving decision-making, concerns about treatment plan consistency, personalization, privacy, and the human aspects of patient care persist. Addressing these concerns will be crucial for successfully integrating artificial intelligence solutions into nephrology practice.
Subjects
Artificial Intelligence , Nephrology , Humans , Nephrologists , Renal Dialysis , Surveys and Questionnaires
ABSTRACT
Background: The coronavirus disease 2019 (COVID-19) pandemic has caused more devastation among dialysis patients than among the general population. Patient-level prediction models for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection are crucial for the early identification of patients, to prevent and mitigate outbreaks within dialysis clinics. As the COVID-19 pandemic evolves, it is unclear whether previously built prediction models remain sufficiently effective. Methods: We developed a machine learning (XGBoost) model to predict, during the incubation period, SARS-CoV-2 infections that are diagnosed 3 or more days later. We used data from multiple sources, including demographic, clinical, treatment, laboratory, and vaccination information from a national network of hemodialysis clinics, socioeconomic information from the Census Bureau, and county-level COVID-19 infection and mortality information from state and local health agencies. We created prediction models and evaluated their performance on a rolling basis to investigate the evolution of prediction power and risk factors. Results: From April 2020 to August 2020, our machine learning model achieved an area under the receiver operating characteristic curve (AUROC) of 0.75, an improvement of over 0.07 on a machine learning model previously published in Kidney360 in 2021. As the pandemic evolved, prediction performance deteriorated and fluctuated more, with the lowest AUROC of 0.6 in December 2021 and January 2022. Over the whole study period (April 2020 to February 2022), with the false-positive rate fixed at 20%, our model detected 40% of positive patients. We found that features derived from local infection information reported by the Centers for Disease Control and Prevention (CDC) were the most important predictors, and vaccination status was a useful predictor as well. Whether a patient lives in a nursing home was an effective predictor before vaccination but became less predictive afterward. Conclusion: The dynamics of the prediction model change frequently as the pandemic evolves. County-level infection information and vaccination information are crucial for the success of early COVID-19 prediction models. Our results show that the proposed model can effectively identify SARS-CoV-2 infections during the incubation period. Prospective studies are warranted to explore the application of such prediction models in daily clinical practice.
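A minimal sketch of the rolling evaluation described here: retrain an XGBoost model each month on all preceding months and score it on the following month, tracking AUROC over time. The data frame layout (a month column plus features and a binary label) and the synthetic data are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
months = pd.period_range("2020-04", "2022-02", freq="M")
df = pd.DataFrame(rng.normal(size=(len(months) * 500, 10)),
                  columns=[f"x{i}" for i in range(10)])
df["month"] = months.repeat(500)               # 500 observations per month
df["label"] = rng.binomial(1, 0.05, size=len(df))

features = [c for c in df.columns if c.startswith("x")]
for i in range(3, len(months)):                # require a few months of history
    train = df[df["month"] < months[i]]
    test = df[df["month"] == months[i]]
    model = XGBClassifier(n_estimators=100, max_depth=4).fit(
        train[features], train["label"])
    auc = roc_auc_score(test["label"], model.predict_proba(test[features])[:, 1])
    print(f"{months[i]}: AUROC = {auc:.2f}")
```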
ABSTRACT
BACKGROUND: Intradialytic hypotension remains one of the most recurrent complications of dialysis sessions. Inadequate management can lead to adverse outcomes, highlighting the need to develop personalized approaches for the prevention of intradialytic hypotension. Here, we sought to develop and validate two AI-based risk models predicting the occurrence of symptomatic intradialytic hypotension at different time horizons. METHODS: Both models were built using the XGBoost algorithm; one predicts the occurrence of intradialytic hypotension in the next dialysis session and the other in the next month. The initial dataset, obtained from routinely collected data in the EuCliD® Database, was split to perform model derivation, training, and validation. Model performance was evaluated by the concordance statistic and calibration charts; the importance of features was assessed with the SHapley Additive exPlanations (SHAP) methodology. RESULTS: The final dataset included 1,249,813 dialysis sessions, and the incidence rate of intradialytic hypotension was 10.07% (95% CI 10.02-10.13). Our models retained good discrimination (AUC around 0.8) and suitable calibration, leading to the selection of three classification thresholds that identify four distinct risk groups. The variables with the greatest impact on risk estimates were blood pressure dynamics and other metrics mirroring hemodynamic instability over time. CONCLUSIONS: Recurrent symptomatic intradialytic hypotension can be reliably and accurately predicted using data routinely collected during dialysis treatment and standard clinical care. Clinical application of these prediction models would allow personalized risk-based interventions for preventing and managing intradialytic hypotension.
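A minimal sketch of the SHAP step, using the shap package's TreeExplainer on a fitted XGBoost classifier and ranking features by mean absolute SHAP value. Feature names are invented placeholders, not EuCliD® variables, and the data are synthetic.

```python
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "pre_dialysis_sbp": rng.normal(135, 20, 5_000),       # hypothetical names
    "sbp_drop_last_session": rng.normal(15, 10, 5_000),
    "ultrafiltration_rate": rng.normal(8, 3, 5_000),
    "age": rng.normal(65, 12, 5_000),
})
y = rng.binomial(1, 0.1, 5_000)

model = XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)

# TreeExplainer gives per-observation, per-feature attributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)                    # (n_samples, n_features)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, val in sorted(zip(X.columns, mean_abs), key=lambda kv: -kv[1]):
    print(f"{name}: {val:.3f}")
```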
Subjects
Hypotension , Kidney Failure, Chronic , Humans , Triage , Hypotension/diagnosis , Hypotension/etiology , Hypotension/prevention & control , Blood Pressure , Renal Dialysis/adverse effects , Renal Dialysis/methods , Artificial Intelligence , Kidney Failure, Chronic/therapy
ABSTRACT
BACKGROUND: In maintenance hemodialysis patients, intradialytic hypotension (IDH) is a frequent complication that has been associated with poor clinical outcomes. Prediction of IDH may facilitate timely interventions and eventually reduce IDH rates. METHODS: We developed a machine learning model to predict IDH in in-center hemodialysis patients 15-75 min in advance. IDH was defined as systolic blood pressure (SBP) <90 mmHg. Demographic, clinical, treatment-related and laboratory data were retrieved from electronic health records and merged with intradialytic machine data that were sent in real-time to the cloud. For model development, dialysis sessions were randomly split into training (80%) and testing (20%) sets. The area under the receiver operating characteristic curve (AUROC) was used as a measure of the model's predictive performance. RESULTS: We utilized data from 693 patients who contributed 42,656 hemodialysis sessions and 355,693 intradialytic SBP measurements. IDH occurred in 16.2% of hemodialysis treatments. Our model predicted IDH 15-75 min in advance with an AUROC of 0.89. Top IDH predictors were the most recent intradialytic SBP and IDH rate, as well as mean nadir SBP of the previous 10 dialysis sessions. CONCLUSIONS: Real-time prediction of IDH during an ongoing hemodialysis session is feasible and has a clinically actionable predictive performance. If and to what degree this predictive information facilitates the timely deployment of preventive interventions and translates into lower IDH rates and improved patient outcomes warrants prospective studies.
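A minimal pandas sketch of the labeling scheme: flag IDH wherever intradialytic SBP < 90 mmHg, then, for each SBP reading, label whether any IDH event occurs 15-75 minutes later in the same session. Column names and the toy readings are assumptions for illustration.

```python
import pandas as pd

readings = pd.DataFrame({
    "session_id": [1, 1, 1, 1, 2, 2, 2],
    "minute":     [0, 30, 60, 90, 0, 45, 80],
    "sbp":        [130, 110, 88, 95, 140, 120, 115],
})
readings["idh"] = readings["sbp"] < 90        # IDH definition: SBP < 90 mmHg

def label_ahead(session: pd.DataFrame, lo: int = 15, hi: int = 75) -> pd.Series:
    """For each reading, True if any IDH occurs lo-hi minutes later."""
    out = []
    for t in session["minute"]:
        window = session[(session["minute"] >= t + lo) &
                         (session["minute"] <= t + hi)]
        out.append(bool(window["idh"].any()))
    return pd.Series(out, index=session.index)

readings["idh_in_15_75_min"] = (
    readings.groupby("session_id", group_keys=False).apply(label_ahead))
print(readings)
```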
Subjects
Hypotension , Kidney Failure, Chronic , Humans , Kidney Failure, Chronic/therapy , Kidney Failure, Chronic/complications , Prospective Studies , Cloud Computing , Hypotension/diagnosis , Hypotension/etiology , Renal Dialysis/adverse effects , Blood Pressure
ABSTRACT
This case study explores patterns of kidney function decline using unsupervised learning methods and then associates those patterns with clinical outcomes using supervised learning methods. Predicting short-term risk of hospitalization and death prior to renal dialysis initiation may help target high-risk patients for more aggressive management. The study combined clinical data from patients presenting for renal dialysis at Fresenius Medical Care with laboratory data from Quest Diagnostics to identify disease trajectory patterns associated with the 90-day risk of hospitalization and death after beginning renal dialysis. Patients were clustered into 4 groups with varying rates of estimated glomerular filtration rate (eGFR) decline during the 2-year period prior to dialysis. Overall rates of hospitalization and death were 24.9% (582/2341) and 4.6% (108/2341), respectively. The groups with the steepest declines had the highest rates of hospitalization and death within 90 days of dialysis initiation. The rate of eGFR decline is a valuable and readily available tool to stratify short-term (90-day) risk of hospitalization and death after the initiation of renal dialysis. More intensive approaches are needed that apply such risk-identifying models to potentially avert or reduce short-term hospitalization and death among patients with severe, rapidly progressive chronic kidney disease.
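A minimal sketch of the unsupervised step: summarize each patient's 2-year pre-dialysis eGFR trajectory by its least-squares slope, cluster patients into 4 groups with k-means, and compare outcome rates across clusters. The data generator, visit schedule, and outcome simulation are illustrative only.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_patients, n_visits = 500, 8
months = np.linspace(-24, 0, n_visits)                 # 2 years before dialysis
true_slopes = rng.choice([-0.2, -0.5, -1.0, -2.0], size=n_patients)
egfr = 60 + true_slopes[:, None] * (months + 24) + rng.normal(0, 2, (n_patients, n_visits))

# Per-patient least-squares slope of eGFR over time (mL/min/1.73m2 per month).
slopes = np.polyfit(months, egfr.T, deg=1)[0]

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    slopes.reshape(-1, 1))

outcome = rng.binomial(1, 0.25, n_patients)            # e.g., 90-day hospitalization
summary = pd.DataFrame({"cluster": clusters, "slope": slopes, "event": outcome})
print(summary.groupby("cluster").agg(mean_slope=("slope", "mean"),
                                     event_rate=("event", "mean")))
```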
Subjects
Renal Dialysis , Renal Insufficiency, Chronic , Humans , Renal Dialysis/adverse effects , Renal Insufficiency, Chronic/diagnosis , Glomerular Filtration Rate , Hospitalization , Kidney
ABSTRACT
INTRODUCTION: Inadequate predialysis care and education impact the selection of a dialysis modality and are associated with adverse clinical outcomes. Transitional care units (TCUs) aim to meet the unmet educational needs of incident dialysis patients, but their impact beyond increasing home dialysis utilization has been incompletely characterized. METHODS: This retrospective study included adults initiating in-center hemodialysis at a TCU, matched (1:4) to controls with no TCU history initiating in-center hemodialysis. Patients were followed for up to 14 months. TCUs are dedicated spaces where staff provide personalized education and as-needed adjustments to dialysis prescriptions. For many patients, therapy was initiated with four to five weekly dialysis sessions, with at least some sessions delivered by home dialysis machines. Outcomes included survival, first hospitalization, transplant waiting-list status, post-TCU dialysis modality, and vascular access type. FINDINGS: The study included 724 patients initiating dialysis across 48 TCUs, with 2,892 well-matched controls. At the end of 14 months, patients initiating dialysis in a TCU were significantly more likely to be referred and/or wait-listed for a kidney transplant than controls (57% vs. 42%; p < 0.0001). Initiation of dialysis at a TCU was also associated with significantly lower rates of receiving in-center hemodialysis at 14 months (74% vs. 90%; p < 0.0001) and higher rates of arteriovenous access (70% vs. 63%; p = 0.003). TCU patients were also more likely to survive and less likely to be hospitalized during follow-up than controls, although these differences were not statistically significant. DISCUSSION: Although TCUs are sometimes viewed only as a means of enhancing utilization of home dialysis, patients attending TCUs exhibited more favorable outcomes across all endpoints. In addition to being 2.5-fold more likely to receive home dialysis, TCU patients were 42% more likely to be referred for transplantation. Our results support expanding utilization of TCUs for patients with inadequate predialysis support.
Subjects
Kidney Failure, Chronic , Transitional Care , Adult , Humans , Renal Dialysis/methods , Propensity Score , Retrospective Studies , Hemodialysis, Home , Kidney Failure, Chronic/therapy
ABSTRACT
Introduction: Inflammation is highly prevalent among patients with end-stage kidney disease and is associated with adverse outcomes. We aimed to investigate longitudinal changes in inflammatory markers in a diverse international incident hemodialysis patient population. Methods: The MONitoring Dialysis Outcomes (MONDO) Consortium encompasses hemodialysis databases from 31 countries in Europe, North America, South America, and Asia. The MONDO database was queried for inflammatory markers (total white blood cell count [WBC], neutrophil count, lymphocyte count, serum albumin, and C-reactive protein [CRP]) and hemoglobin levels in incident hemodialysis patients. Laboratory parameters were measured every month. Patients were stratified by survival time (≤6 months, >6 to 12 months, >12 to 18 months, >18 to 24 months, >24 to 30 months, >30 to 36 months, and >36 months) following dialysis initiation. We used cubic B-spline basis functions to evaluate temporal changes in inflammatory parameters in relation to patient survival. Results: We studied 18,726 incident hemodialysis patients. Their mean age at dialysis initiation was 71.3 ± 11.9 years; 10,802 (58%) were males. Within the first 6 months, 2068 (11%) patients died, and 12,295 patients (67%) survived >36 months (survivor cohort). Hemodialysis patients who died showed a distinct biphasic pattern of change in inflammatory markers, where an initial decline of inflammation was followed by a rapid rise that was consistently evident approximately 6 months before death. This pattern was similar in all patients who died and was consistent across the survival time intervals. In contrast, in the survivor cohort, we observed an initial decline of inflammation followed by sustained low levels of inflammatory biomarkers. Conclusion: Our international study of incident hemodialysis patients highlights a temporal relationship between serial measurements of inflammatory markers and patient survival. This finding may inform the development of prognostic models, such as the integration of dynamic changes in inflammatory markers for individual risk profiling and guiding preventive and therapeutic interventions.
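A minimal sketch of the trajectory modeling: fitting a cubic B-spline basis to monthly CRP measurements as a function of time since dialysis initiation, via scikit-learn's SplineTransformer. The survival-time stratification of the full analysis and any penalization are omitted, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(0)
months = rng.uniform(0, 36, 2_000)                       # time on dialysis
crp = 20 * np.exp(-months / 6) + 5 + rng.normal(0, 3, months.size)

# Unpenalized least-squares fit on a cubic B-spline basis over time.
spline_fit = make_pipeline(
    SplineTransformer(degree=3, n_knots=7),
    LinearRegression(),
).fit(months[:, None], crp)

grid = np.arange(0, 37, 6)[:, None]
for m, yhat in zip(grid.ravel(), spline_fit.predict(grid)):
    print(f"month {m:2.0f}: fitted CRP = {yhat:5.1f} mg/L")
```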
ABSTRACT
INTRODUCTION: Several factors affect the survival of End Stage Kidney Disease (ESKD) patients on dialysis. Machine learning (ML) models may help tackle multivariable and complex, often non-linear predictors of adverse clinical events in ESKD patients. In this study, we used an advanced ML method as well as a traditional statistical method to develop and compare mortality prediction models, and their risk factors, in hemodialysis (HD) patients. MATERIALS AND METHODS: We included HD patients who had data across a baseline period of at least 1 year and 1 day in the internationally representative Monitoring Dialysis Outcomes (MONDO) Initiative dataset. Twenty-three input parameters were chosen a priori. The prediction models used 1 year of baseline data to predict death in the following 3 years. The dataset was randomly split into 80% training data and 20% testing data for model development. Two different modeling techniques were used to build the mortality prediction models. FINDINGS: A total of 95,142 patients were included in the analysis sample. The area under the receiver operating characteristic curve (AUROC) of the XGBoost ML model was 0.84 on the training data and 0.80 on the test data. The AUROC of the logistic regression model was 0.73 on the training data and 0.75 on the test data. Four of the top five predictors were common to both modeling strategies. DISCUSSION: In the internationally representative MONDO data for HD patients, we describe the development of an ML model and a traditional statistical model suitable for classifying a prevalent HD patient's 3-year risk of death. While both models had a reasonably high AUROC, the ML model was able to identify hematocrit (HCT) levels as an important risk factor for mortality. If implemented in clinical practice, such proof-of-concept models could be used to provide pre-emptive care for HD patients.
Subjects
Kidney Failure, Chronic , Renal Dialysis , Humans , Kidney Failure, Chronic/therapy , Risk Factors
ABSTRACT
BACKGROUND: We developed machine learning models to understand the predictors of shorter-, intermediate-, and longer-term mortality among hemodialysis (HD) patients affected by COVID-19 in four countries in the Americas. METHODS: We used data from adult HD patients treated at regional institutions of a global provider in Latin America (LatAm) and North America who contracted COVID-19 in 2020, before SARS-CoV-2 vaccines were available. Using 93 commonly captured variables, we developed machine learning models that predicted the likelihood of death overall, as well as during 0-14, 15-30, and > 30 days after COVID-19 presentation, and identified the importance of predictors. XGBoost models were built in parallel, using the same code and a 60%:20%:20% random split into training, validation, and testing data, for the LatAm (Argentina, Colombia, Ecuador) and North America (United States) datasets. RESULTS: Among HD patients with COVID-19, 28.8% (1,001/3,473) died in LatAm and 20.5% (4,426/21,624) died in North America. Mortality occurred earlier in LatAm than in North America; 15.0% and 7.3% of patients died within 0-14 days, 7.9% and 4.6% within 15-30 days, and 5.9% and 8.6% more than 30 days after COVID-19 presentation, respectively. The area under the curve ranged from 0.73 to 0.83 across prediction models in both regions. Top predictors of death after COVID-19 consistently included older age, longer vintage, markers of poor nutrition, and more inflammation in both regions at all timepoints. Unique patient attributes (higher BMI, male sex) were top predictors of mortality during 0-14 and 15-30 days after COVID-19, yet not of mortality > 30 days after presentation. CONCLUSIONS: Findings showed distinct profiles of mortality in COVID-19 in LatAm and North America throughout 2020. The mortality rate was higher within 0-14 and 15-30 days after COVID-19 in LatAm, while the mortality rate was higher in North America > 30 days after presentation. Nonetheless, a remarkable proportion of HD patients died > 30 days after COVID-19 presentation in both regions. We were able to develop a series of suitable prognostic prediction models and establish the top predictors of death in COVID-19 during shorter-, intermediate-, and longer-term follow-up periods.
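A minimal sketch of the horizon-specific setup: one XGBoost classifier per mortality window (0-14, 15-30, > 30 days), each trained on the same feature matrix with its own binary label under a 60%:20%:20% split. Features and death times are synthetic, not the 93 captured variables.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 93))                     # synthetic feature matrix
# Day of death after COVID-19 presentation; -1 means the patient survived.
days_to_death = np.where(rng.random(10_000) < 0.25,
                         rng.integers(1, 120, 10_000), -1)

windows = {"0-14d": (0, 14), "15-30d": (15, 30), ">30d": (31, 10_000)}
for name, (lo, hi) in windows.items():
    y = ((days_to_death >= lo) & (days_to_death <= hi)).astype(int)
    # 60% training, 20% validation, 20% testing.
    X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
    X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
    m = XGBClassifier(n_estimators=200, max_depth=4).fit(X_tr, y_tr)
    print(f"{name}: AUROC = {roc_auc_score(y_te, m.predict_proba(X_te)[:, 1]):.2f}")
```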
Subjects
COVID-19 , Adult , Humans , Male , COVID-19 Vaccines , Machine Learning , North America/epidemiology , Renal Dialysis , SARS-CoV-2 , Female
ABSTRACT
BACKGROUND: We tested whether fatigue in incident peritoneal dialysis (PD) patients was associated with an increased risk of mortality, independent of main confounders. METHODS: We conducted a side-by-side study of two cohorts of incident PD patients, in Brazil and the United States. We used the same code to independently analyze data from both countries for 2004 to 2011. We included data from adults who completed the KDQOL-SF vitality subscale within 90 days after starting PD. The vitality score was categorized into four groups: >50 (high vitality), ≥40 to ≤50 (moderate vitality), >35 to <40 (moderate fatigue), and ≤35 (high fatigue; reference group). In each country's cohort, we built four distinct models to estimate the associations between vitality (exposure) and all-cause mortality (outcome): (i) a Cox regression model; (ii) a competing risk model accounting for technique failure events; (iii) a multilevel survival model of clinic-level clusters; and (iv) a multivariate regression model with smoothing splines treating vitality as a continuous measure. Analyses were adjusted for age, comorbidities, PD modality, hemoglobin, and albumin. A mixed-effects meta-analysis was used to pool hazard ratios (HRs) from both cohorts to model mortality risk for each 10-unit increase in vitality. RESULTS: We used data from 4,285 PD patients (Brazil n = 1,388 and United States n = 2,897). Model estimates showed that lower vitality levels within 90 days of starting PD were associated with a higher risk of mortality, consistently across the Brazil and United States cohorts. In the multivariate survival model, each 10-unit increase in vitality score was associated with a lower risk of all-cause mortality in both cohorts (Brazil HR = 0.79 [95%CI 0.70 to 0.90], United States HR = 0.90 [95%CI 0.88 to 0.93], pooled HR = 0.86 [95%CI 0.75 to 0.98]). All models provided consistent effect estimates. CONCLUSIONS: Among patients in Brazil and the United States, a lower vitality score in the initial months of PD was independently associated with all-cause mortality.
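A minimal sketch of the primary survival model (model i), using lifelines' CoxPHFitter with vitality rescaled so the hazard ratio is reported per 10-unit increase, adjusted for a couple of confounders. The data generator and covariate set are illustrative assumptions, and the competing-risk, multilevel, and spline variants are not shown.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 3_000
df = pd.DataFrame({
    "vitality": rng.uniform(0, 100, n),
    "age": rng.normal(60, 12, n),
    "albumin": rng.normal(3.8, 0.4, n),
})
# Synthetic generator: higher vitality -> lower hazard.
hazard = np.exp(0.03 * (df["age"] - 60) - 0.015 * df["vitality"])
df["time"] = rng.exponential(1 / hazard) * 365           # follow-up days
df["event"] = rng.binomial(1, 0.7, n)                    # death indicator

# Rescale so exp(coef) is the HR per 10-unit vitality increase.
df["vitality_per10"] = df["vitality"] / 10
cph = CoxPHFitter().fit(
    df[["vitality_per10", "age", "albumin", "time", "event"]],
    duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```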
Subjects
Kidney Failure, Chronic , Peritoneal Dialysis , Adult , Brazil/epidemiology , Fatigue/etiology , Humans , Kidney Failure, Chronic/therapy , Peritoneal Dialysis/adverse effects , Proportional Hazards Models , Retrospective Studies , Risk Factors , United States/epidemiology
ABSTRACT
BACKGROUND: We evaluated restenosis rates at the cephalic arch after percutaneous angioplasty and stenting procedures in patients with brachial artery to cephalic vein arteriovenous fistula (BCAVF) hemodialysis access. METHODS: We used data from adult hemodialysis patients treated at a national network of 44 outpatient interventional facilities between October 2011 and 2015. We included data from patients with a BCAVF who received an exclusive angioplasty, or a stent with angioplasty, for treatment of cephalic arch stenosis and had ≥1 subsequent evaluation of the cephalic arch. The median percent restenosis per month at the cephalic arch and the days between encounters were calculated from the first (index) to the second procedure, and for up to 4 subsequent encounters. Analyses were stratified by intervention and device type. RESULTS: We identified a cohort of 3,301 patients (mean age 62.2 ± 13.9 years, 58.5% male, 33.2% white race) with a BCAVF who had an angioplasty, or stent, at the cephalic arch for an index procedure and ≥1 follow-up procedure. Between the first and second procedures, patients who received an angioplasty (n = 2,663) or stent (n = 933) showed a median decrease of 18.9% and 16.5% in luminal diameter per month and a median time of 93 and 91 days between encounters, respectively. Restenosis rates and days between encounters were similar for standard versus high-pressure angioplasties. Bare metal stents showed a 10.1 percentage point higher restenosis rate compared to stent grafts. Restenosis rates and time to restenosis were relatively consistent across subsequent encounters. CONCLUSIONS: Findings suggest that hemodialysis patients with a BCAVF who require an angioplasty or stent to treat a stenosis at the cephalic arch will have stenosis re-form at a rate of 18.9% and 16.5% per month after the first intervention, respectively, and that patients are at risk of having significant lesions at the cephalic arch within 3 months after the previous intervention.
Subjects
Arteriovenous Shunt, Surgical , Fistula , Adult , Aged , Arteriovenous Shunt, Surgical/adverse effects , Constriction, Pathologic/etiology , Constriction, Pathologic/surgery , Female , Fistula/etiology , Graft Occlusion, Vascular/epidemiology , Graft Occlusion, Vascular/etiology , Humans , Male , Middle Aged , Renal Dialysis , Retrospective Studies , Treatment Outcome , Vascular Patency
ABSTRACT
Introduction: Patients with end-stage kidney disease face a higher risk of severe outcomes from SARS-CoV-2 infection. Moreover, it is not well known to what extent potentially modifiable risk factors contribute to mortality risk. In this historical cohort study, we investigated the incidence of and risk factors for 30-day mortality among hemodialysis patients with SARS-CoV-2 infection treated in the European Fresenius Medical Care NephroCare network, using conventional and machine learning techniques. Methods: We included adult hemodialysis patients with a first documented SARS-CoV-2 infection between February 1, 2020, and March 31, 2021, registered in the clinical database. The index date for the analysis was the first SARS-CoV-2 suspicion date. Patients were followed for up to 30 days, until April 30, 2021. Demographics, comorbidities, and various modifiable risk factors, expressed as continuous parameters and as key performance indicators (KPIs), were considered to capture multiple dimensions, including hemodynamic control, nutritional state, and mineral metabolism, in the 6 months before the index date. We used logistic regression (LR) and XGBoost models to assess risk factors for 30-day mortality. Results: We included 9,211 eligible patients (age 65.4 ± 13.7 years, dialysis vintage 4.2 ± 3.7 years). The 30-day mortality rate was 20.8%. In LR models, several potentially modifiable factors were associated with higher mortality: body mass index (BMI) 30-40 kg/m2 (OR: 1.28, CI: 1.10-1.50), single-pool Kt/V (OR off-target vs on-target: 1.19, CI: 1.02-1.38), overhydration (OR: 1.15, CI: 1.01-1.32), and both low (<2.5 mg/dl) and high (≥5.5 mg/dl) serum phosphate levels (OR: 1.52, CI: 1.07-2.16 and OR: 1.17, CI: 1.01-1.35). On-line hemodiafiltration was protective in the model using KPIs (OR: 0.86, CI: 0.76-0.97). SHapley Additive exPlanations (SHAP) analysis of the XGBoost models likewise showed a high influence on prediction for several modifiable factors, including inflammatory parameters, high BMI, and fluid overload. In both LR and XGBoost models, age, gender, and comorbidities were strongly associated with mortality. Conclusion: Both conventional and machine learning techniques showed that KPIs and modifiable risk factors across several dimensions, ascertained in the 6 months before the COVID-19 suspicion date, were associated with 30-day COVID-19-related mortality. Our results suggest that adequate dialysis and achieving KPI targets remain of major importance during the COVID-19 pandemic.
ABSTRACT
Background: Hemodialysis patients are at high risk of severe SARS-CoV-2 infection but were underrepresented in the randomized controlled trials evaluating the safety and efficacy of COVID-19 vaccines. We estimated the real-world effectiveness of COVID-19 vaccines in a large international cohort of hemodialysis patients. Methods: In this historical, 1:1 matched cohort study, we included adult hemodialysis patients receiving treatment from December 1, 2020, to May 31, 2021. For each vaccinated patient, an unvaccinated control was selected among patients registered in the same country and attending a dialysis session around the first vaccination date. Matching was based on demographics, clinical characteristics, past COVID-19 infections, and a risk score representing the local background risk of infection at the vaccination dates. We estimated the effectiveness of mRNA and viral-carrier COVID-19 vaccines in preventing infection and COVID-19-related death using a time-dependent Cox regression stratified by country. Results: In the effectiveness analysis of mRNA vaccines, we observed 850 SARS-CoV-2 infections and 201 COVID-19-related deaths among the 28,110 patients during a mean follow-up of 44 ± 40 days. In the effectiveness analysis of viral-carrier vaccines, we observed 297 SARS-CoV-2 infections and 64 COVID-19-related deaths among 12,888 patients during a mean follow-up of 48 ± 32 days. We observed 18.5 and 8.5 fewer infections per 100 patient-years, and 5.4 and 5.2 fewer COVID-19-related deaths per 100 patient-years, among patients vaccinated with mRNA and viral-carrier vaccines, respectively, compared to matched unvaccinated controls. Estimated vaccine effectiveness at days 15, 30, 60, and 90 after the first dose of an mRNA vaccine was 41.3%, 54.5%, 72.6%, and 83.5% for infection and 33.1%, 55.4%, 80.1%, and 91.2% for death. Estimated vaccine effectiveness after the first dose of a viral-carrier vaccine was 38.3% for infection, without increasing over time, and 56.6%, 75.3%, 92.0%, and 97.4% for death. Conclusion: In this large, real-world cohort of hemodialysis patients, mRNA and viral-carrier COVID-19 vaccines were associated with reduced COVID-19-related mortality. Additionally, we observed a strong reduction in SARS-CoV-2 infection among hemodialysis patients receiving mRNA vaccines.
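A minimal sketch of a time-dependent Cox model in the spirit of this analysis: vaccination status switches on at each patient's vaccination date, encoded as start/stop intervals, and lifelines' CoxTimeVaryingFitter estimates the associated hazard ratio. The synthetic generator encodes no true vaccine effect, and the matching and country stratification of the real analysis are omitted.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)
rows = []
for pid in range(2_000):
    vax_day = int(rng.integers(1, 120))      # day vaccination takes effect
    end = int(rng.integers(30, 180))         # administrative end of follow-up
    event_day = int(rng.integers(1, 400))    # latent day of infection
    stop = min(end, event_day)
    event = int(event_day <= end)
    if vax_day < stop:                       # unvaccinated, then vaccinated
        rows.append((pid, 0, vax_day, 0, 0))
        rows.append((pid, vax_day, stop, 1, event))
    else:                                    # never vaccinated in follow-up
        rows.append((pid, 0, stop, 0, event))

df = pd.DataFrame(rows, columns=["id", "start", "stop", "vaccinated", "event"])
ctv = CoxTimeVaryingFitter().fit(df, id_col="id", event_col="event",
                                 start_col="start", stop_col="stop")
print(ctv.summary[["exp(coef)", "p"]])       # HR for vaccinated vs. unvaccinated
```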