Results 1 - 20 of 3,544
1.
Article in English | MEDLINE | ID: mdl-39117485

ABSTRACT

BACKGROUND AND AIMS: The triglyceride-glucose (TyG) index, a surrogate measure of insulin resistance, is associated with hypertension-mediated organ damage (HMOD) and cardiovascular disease. This study investigated the association between the TyG index and major adverse cardiovascular events (MACE) and its interaction with traditional risk factors and HMOD. METHODS AND RESULTS: Healthy subjects recruited from the general population were thoroughly examined and followed for MACE using nationwide registries. Cox proportional hazards models were used to estimate the association between the TyG index and MACE occurrence. Models were adjusted for Systematic Coronary Risk Evaluation (SCORE) risk factors, pulse wave velocity, left ventricular mass index, carotid atherosclerotic plaque status, and microalbuminuria. Continuous net reclassification improvement and Harrell's concordance index (C-index) were used to assess the added prognostic value of the TyG index. During a mean follow-up of 15.4 ± 4.7 years, MACE were observed in 332 (17%) of the 1970 included participants. The TyG index was associated with MACE; HR = 1.44 [95% CI: 1.30-1.59] per standard deviation. After adjustment for traditional cardiovascular (CV) risk factors, the HR was 1.16 [95% CI: 1.03-1.31]. The association between the TyG index and MACE remained significant after further adjustment for each HMOD component. However, this finding was evident only in subjects aged 41 or 51 years (HR = 1.39; 95% CI: 1.15-1.69). Including the TyG index in a risk model based on traditional CV risk factors improved the C-index by 0.005 (P = 0.042). CONCLUSION: In this population-based study of healthy middle-aged subjects, the TyG index was associated with MACE independently of traditional CV risk factors and HMOD. The TyG index may have a potential role in future risk prediction systems.
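The abstract does not restate the TyG formula; in the literature it is conventionally computed as ln(fasting triglycerides [mg/dL] × fasting glucose [mg/dL] / 2), and a per-SD hazard ratio scales multiplicatively on the exponent. A minimal sketch (illustrative values, not the study's data):

```python
import math

def tyg_index(triglycerides_mg_dl: float, glucose_mg_dl: float) -> float:
    """TyG = ln(fasting triglycerides [mg/dL] * fasting glucose [mg/dL] / 2)."""
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2)

def relative_hazard(hr_per_sd: float, z_score: float) -> float:
    """Relative hazard implied by a per-SD hazard ratio for a subject z SDs above the mean."""
    return hr_per_sd ** z_score

# Example: TG 150 mg/dL, glucose 100 mg/dL -> TyG ~ 8.92
tyg = tyg_index(150, 100)
```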

2.
JMIR Diabetes ; 9: e53338, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39110490

ABSTRACT

BACKGROUND: Diabetic ketoacidosis (DKA) is the leading cause of morbidity and mortality in pediatric type 1 diabetes (T1D), occurring in approximately 20% of patients, with an economic cost of $5.1 billion/year in the United States. Despite multiple risk factors for postdiagnosis DKA, there is still a need for explainable, clinic-ready models that accurately predict DKA hospitalization in established patients with pediatric T1D. OBJECTIVE: We aimed to develop an interpretable machine learning model to predict the risk of postdiagnosis DKA hospitalization in children with T1D using routinely collected time series of electronic health record (EHR) data. METHODS: We conducted a retrospective case-control study using EHR data from 1787 patients from among 3794 patients with T1D treated at a large tertiary care US pediatric health system from January 2010 to June 2018. We trained a state-of-the-art, explainable, gradient-boosted ensemble (XGBoost) of decision trees with 44 regularly collected EHR features to predict postdiagnosis DKA. We measured the model's predictive performance using the area under the receiver operating characteristic curve, weighted F1-score, weighted precision, and weighted recall in a 5-fold cross-validation setting. We analyzed Shapley values to interpret the learned model and gain insight into its predictions. RESULTS: Our model distinguished the cohort that develops DKA postdiagnosis from the one that does not (P<.001). It predicted postdiagnosis DKA risk with an area under the receiver operating characteristic curve of 0.80 (SD 0.04), a weighted F1-score of 0.78 (SD 0.04), and a weighted precision and recall of 0.83 (SD 0.03) and 0.76 (SD 0.05), respectively, using a relatively short history of data from routine clinic follow-ups postdiagnosis. On analyzing Shapley values of the model output, we identified key risk factors predicting postdiagnosis DKA at both the cohort and individual levels.
We observed sharp changes in postdiagnosis DKA risk with respect to 2 key features (diabetes age and glycated hemoglobin at 12 months), yielding time intervals and glycated hemoglobin cutoffs for potential intervention. By clustering model-generated Shapley values, we automatically stratified the cohort into 3 groups with 5%, 20%, and 48% risk of postdiagnosis DKA. CONCLUSIONS: We have built an explainable, predictive, machine learning model with potential for integration into clinical workflow. The model risk-stratifies patients with pediatric T1D and identifies patients with the highest postdiagnosis DKA risk using limited follow-up data starting from the time of diagnosis. The model identifies key time points and risk factors to direct clinical interventions at both the individual and cohort levels. Further research with data from multiple hospital systems can help us assess how well our model generalizes to other populations. The clinical importance of our work is that the model can predict patients most at risk for postdiagnosis DKA and identify preventive interventions based on mitigation of individualized risk factors.
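The support-weighted precision, recall, and F1 reported above follow the standard "weighted" averaging scheme: per-class metrics weighted by class prevalence. A stdlib-only sketch (the labels below are illustrative, not the study's data):

```python
from collections import Counter

def weighted_prf(y_true, y_pred):
    """Prevalence-weighted precision, recall, and F1 across classes."""
    labels = sorted(set(y_true))
    support = Counter(y_true)
    n = len(y_true)
    w_p = w_r = w_f = 0.0
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        w = support[c] / n  # weight each class by its share of the true labels
        w_p += w * prec
        w_r += w * rec
        w_f += w * f1
    return w_p, w_r, w_f
```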

3.
Heliyon ; 10(13): e33619, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39091940

ABSTRACT

Objectives: Effective exclusion of low-risk symptomatic outpatient cases of colorectal cancer (CRC) remains a diagnostic challenge. We aimed to develop a self-reported symptom-based decision-making model for application in outpatient scenarios. Methods: In total, 8233 symptomatic cases at risk for CRC, as judged by outpatient physicians, were enrolled in this study at seven medical centers. A decision-making model was constructed using 60 self-reported symptom parameters collected from the questionnaire. Internal and external validation cohorts were then built to evaluate the discriminatory power of the CRC model, which was assessed by the C-index and calibration plots. After that, the clinical utility and user experience of the CRC model were evaluated. Results: Nine symptom parameters were identified as valuable predictors for modeling. The internal and external validation cohorts verified the adequate discriminatory power of the CRC model. In the clinical application step, all 17 physicians found the model easy to grasp, and 99.9% of patients were satisfied with the survey form. Application of this model detected all CRC cases. The total consistency ratio of outpatient cases undergoing colonoscopy was 81.4%. None of the low-risk patients defined by the CRC model were diagnosed with CRC. Conclusion: This multicenter study developed and validated a simple and user-friendly decision-making model covering self-reported information. The CRC model performed well in rapid outpatient decision-making scenarios and in clinical utility, particularly because it can better rule out low-risk outpatient cases.

4.
J Liver Cancer ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39099070

ABSTRACT

Chronic hepatitis B (CHB) infection is responsible for 40% of the global burden of hepatocellular carcinoma (HCC), which has a high case fatality rate. The risk of HCC differs among CHB subjects owing to differences in host and viral factors. Modifiable risk factors include viral load, use of antiviral therapy, co-infection with other hepatotropic viruses, concomitant metabolic dysfunction-associated steatotic liver disease or diabetes mellitus, environmental exposure, and medication use. Detecting HCC at an early stage improves survival, and current practice recommends HCC surveillance among individuals with cirrhosis, a family history of HCC, or an age above a cut-off. Ultrasonography with or without serum alpha-fetoprotein (AFP) every 6 months is a widely accepted strategy for HCC surveillance. Novel tumor-specific markers, when combined with AFP, improve diagnostic accuracy over AFP alone for detecting HCC at an early stage. To predict the risk of HCC, a number of clinical risk scores have been developed, but none are clinically implemented or endorsed by clinical practice guidelines. Biomarkers that reflect viral transcriptional activity and the degree of liver fibrosis can potentially stratify the risk of HCC, especially among subjects who are already on antiviral therapy. Ongoing exploration of these novel biomarkers is required to confirm their performance characteristics, replicability, and practicability.

5.
Front Physiol ; 15: 1441107, 2024.
Article in English | MEDLINE | ID: mdl-39105083

ABSTRACT

[This corrects the article DOI: 10.3389/fphys.2023.1174525.].

6.
JMIR Aging ; 7: e54872, 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39087583

ABSTRACT

Background: Myocardial injury after noncardiac surgery (MINS) is an easily overlooked complication that is closely related to postoperative cardiovascular adverse outcomes; early diagnosis and prediction are therefore particularly important. Objective: We aimed to develop and validate an explainable machine learning (ML) model for predicting MINS among older patients undergoing noncardiac surgery. Methods: This retrospective cohort study included older patients who underwent noncardiac surgery at 1 northern center and 1 southern center in China. The data sets from center 1 were divided into a training set and an internal validation set. The data set from center 2 was used as an external validation set. Before modeling, the least absolute shrinkage and selection operator and recursive feature elimination methods were used to reduce the dimensionality of the data and select key features from all variables. Prediction models were developed based on the extracted features using several ML algorithms, including category boosting, random forest, logistic regression, naïve Bayes, light gradient boosting machine, extreme gradient boosting, support vector machine, and decision tree. Prediction performance was assessed by the area under the receiver operating characteristic (AUROC) curve as the main evaluation metric to select the best algorithm. The model performance was verified on the internal and external validation data sets with the best algorithm and compared to the Revised Cardiac Risk Index. The Shapley Additive Explanations (SHAP) method was applied to calculate values for each feature, representing its contribution to the predicted risk of complication, and to generate personalized explanations. Results: A total of 19,463 eligible patients were included; among those, 12,464 patients in center 1 were included as the training set, 4754 patients in center 1 as the internal validation set, and 2245 patients in center 2 as the external validation set.
The best-performing model was the CatBoost algorithm, achieving the highest AUROC of 0.805 (95% CI 0.778-0.831) in the training set, with an AUROC of 0.780 in the internal validation set and 0.70 in the external validation set. Additionally, CatBoost demonstrated superior performance compared to the Revised Cardiac Risk Index (AUROC 0.636; P<.001). The SHAP values ranked the importance of each variable, with preoperative serum creatinine concentration, red blood cell distribution width, and age as the top three. The SHAP method flags predicted events with positive values and nonevents with negative values, providing an explicit explanation of individualized risk predictions. Conclusions: The ML models can provide a personalized and fairly accurate risk prediction of MINS, and the explainable perspective can help identify potentially modifiable sources of risk at the patient level.
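The AUROC values compared throughout these abstracts have a rank-statistic interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one, counting ties as half. A brute-force O(n·m) sketch, fine for illustration:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the probability a positive outranks a negative (ties count 0.5)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# A perfect ranker scores 1.0; a random one tends toward 0.5.
```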


Subjects
Machine Learning , Postoperative Complications , Humans , Retrospective Studies , Female , China/epidemiology , Aged , Male , Postoperative Complications/epidemiology , Postoperative Complications/etiology , Postoperative Complications/diagnosis , Middle Aged , Risk Assessment/methods , Surgical Procedures, Operative/adverse effects
7.
Kidney Dis (Basel) ; 10(4): 274-283, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39131881

ABSTRACT

Introduction: The association between the longitudinal patterns of estimated glomerular filtration rate (eGFR) and risk of atrial fibrillation (AF) in populations with normal or mildly impaired renal function is not well characterized. We sought to explore the eGFR trajectories in populations with normal or mildly impaired renal function and their association with AF. Methods: This prospective cohort study included 62,407 participants who were free of AF, cardiovascular diseases, and moderate to severe renal insufficiency (eGFR <60 mL/min/1.73 m2) before 2010. The eGFR trajectories were developed using latent mixture modeling based on examination data in 2006, 2008, and 2010. Incident AF cases were identified in biennial electrocardiogram assessment and a review of medical insurance data and discharge registers. We used Cox regression models to estimate the hazard ratios and 95% confidence intervals (CIs) for incident AF. Results: According to survey results for the range and changing pattern of eGFR during 2006-2010, four trajectories were identified: high-stable (range, 107.47-110.25 mL/min/1.73 m2; n = 11,719), moderate-increasing (median increase from 83.83 to 100.37 mL/min/1.73 m2; n = 22,634), high-decreasing (median decrease from 101.72 to 89.10 mL/min/1.73 m2; n = 7,943), and low-stable (range, 73.48-76.78 mL/min/1.73 m2; n = 20,111). After an average follow-up of 9.63 years, a total of 485 cases of AF were identified. Compared with the high-stable trajectory, the adjusted hazard ratios of AF were 1.70 (95% CI, 1.09-2.66) for the moderate-increasing trajectory, 1.92 (95% CI, 1.18-3.13) for the high-decreasing trajectory, and 2.28 (95% CI, 1.46-3.56) for the low-stable trajectory. The results remained consistent across a number of sensitivity analyses. Conclusion: The trajectories of eGFR were associated with subsequent AF risk in populations with normal or mildly impaired renal function.
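The abstract does not state which equation produced its eGFR values; as an illustration only (an assumption, not necessarily this study's choice), the widely used 2009 CKD-EPI creatinine equation can be sketched as:

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age: float, female: bool) -> float:
    """2009 CKD-EPI creatinine eGFR in mL/min/1.73 m² (race coefficient omitted)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    return egfr
```

Thresholds like the study's exclusion cut-off (eGFR <60 mL/min/1.73 m²) are applied directly to this output.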


The relation between estimated glomerular filtration rate (eGFR) within the normal or mildly impaired range and the risk of atrial fibrillation (AF) is controversial in previous studies, and data on longitudinal patterns of eGFR are sparse. In this cohort study, we identified 4 trajectories of eGFR in populations with normal or mildly impaired renal function. Relative to populations with a high-stable eGFR pattern, the low-stable, high-decreasing, and moderate-increasing patterns were associated with 128%, 92%, and 70% higher risk of AF, respectively. These findings suggest that monitoring eGFR trajectories is an important approach to AF prediction in populations with normal or mildly impaired renal function. Decreasing and consistently low eGFR trajectories within the currently designated normal or mildly impaired range may still significantly increase the risk of AF.

8.
Risk Manag Healthc Policy ; 17: 1959-1972, 2024.
Article in English | MEDLINE | ID: mdl-39156077

ABSTRACT

Purpose: This study aimed to develop an integrative dynamic nomogram, including N-terminal pro-B-type natriuretic peptide (NT-proBNP) and estimated glomerular filtration rate (eGFR), for predicting the risk of all-cause mortality in HFmrEF patients. Patients and Methods: 790 HFmrEF patients were prospectively enrolled in the development cohort for the model. Least absolute shrinkage and selection operator (LASSO) regression and Random Survival Forest (RSF) were employed to select predictors of all-cause mortality. A nomogram based on the Cox proportional hazards model was developed for predicting long-term mortality (1-, 3-, and 5-year) in HFmrEF. Internal validation was conducted using bootstrapping, and the final model was validated in an external cohort of 338 consecutive adult patients. Discrimination and predictive performance were evaluated by calculating the time-dependent concordance index (C-index), area under the ROC curve (AUC), and calibration curves, with clinical value assessed via decision curve analysis (DCA). Integrated discrimination improvement (IDI) and net reclassification improvement (NRI) were used to assess the contributions of NT-proBNP and eGFR to the nomogram. Finally, a dynamic nomogram was developed using the "DynNom" package. Results: The optimal independent predictors of all-cause mortality (APSELNH: A: angiotensin-converting enzyme inhibitors/angiotensin receptor blockers/angiotensin receptor-neprilysin inhibitor (ACEI/ARB/ARNI), P: percutaneous coronary intervention/coronary artery bypass graft (PCI/CABG), S: stroke, E: eGFR, L: lg of NT-proBNP, N: NYHA, H: healthcare) were incorporated into the dynamic nomogram. The C-indices in the development and validation cohorts were 0.858 and 0.826, respectively, with AUCs exceeding 0.8, indicating good discrimination and predictive ability. DCA and calibration curves demonstrated the clinical applicability and good consistency of the nomogram.
NT-proBNP and eGFR provided significant net benefits to the nomogram. Conclusion: In this study, the dynamic APSELNH nomogram developed serves as an accessible, functional, and effective clinical decision support calculator, offering accurate prognostic assessment for patients with HFmrEF.
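The categorical net reclassification improvement used above to gauge NT-proBNP's and eGFR's contributions sums upward and downward risk-category moves separately in events and nonevents. A sketch of the standard definition (the category arrays below are illustrative):

```python
def net_reclassification_improvement(old_cat, new_cat, event):
    """Categorical NRI: (P(up|event) - P(down|event)) + (P(down|nonevent) - P(up|nonevent))."""
    up_e = down_e = up_ne = down_ne = n_e = n_ne = 0
    for old, new, is_event in zip(old_cat, new_cat, event):
        if is_event:
            n_e += 1
            if new > old:
                up_e += 1      # event moved to a higher risk category: good
            elif new < old:
                down_e += 1    # event moved down: bad
        else:
            n_ne += 1
            if new > old:
                up_ne += 1     # nonevent moved up: bad
            elif new < old:
                down_ne += 1   # nonevent moved down: good
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne
```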

9.
Fundam Res ; 4(4): 752-760, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39156563

ABSTRACT

The potential to identify individuals at high disease risk solely from genotype data has garnered significant interest. Although widely applied, traditional polygenic risk score (PRS) methods fall short, as they are built on additive models that fail to capture the intricate associations among single nucleotide polymorphisms (SNPs). This is a limitation because genetic diseases often arise from complex interactions between multiple SNPs. To address this challenge, we developed DeepRisk, a biological knowledge-driven deep learning method for modeling these complex, nonlinear associations among SNPs, providing a more effective method for scoring the risk of common diseases from genome-wide genotype data. Evaluations demonstrated that DeepRisk outperforms existing PRS-based methods in identifying individuals at high risk for four common diseases: Alzheimer's disease, inflammatory bowel disease, type 2 diabetes, and breast cancer.

10.
Accid Anal Prev ; 207: 107748, 2024 Aug 18.
Article in English | MEDLINE | ID: mdl-39159592

ABSTRACT

Driving risk prediction is emerging as a pivotal technology within the driving safety domain, facilitating the formulation of targeted driving intervention strategies to enhance driving safety. Driving safety continuously evolves in response to the complexities of the traffic environment, representing a dynamic, ongoing serialization process. The evolutionary trend of this sequence offers valuable information for driving safety research. However, existing research on driving risk prediction has primarily concentrated on forecasting a single index, such as the driving safety level or the extreme value within a specified future timeframe. This approach often neglects the intrinsic properties that characterize the temporal evolution of driving safety. Leveraging the highD naturalistic driving dataset, this study employs multi-step time series forecasting to predict the risk evolution sequence throughout the car-following process, elucidates the benefits of the multi-step approach, and contrasts predictive efficacy on driving safety levels across various temporal windows. The empirical findings demonstrate that the time series prediction model proficiently captures essential dynamics such as risk evolution trends, amplitudes, and turning points. Consequently, it provides predictions that are significantly more robust and comprehensive than those obtained from a single risk index. The proposed TsLeNet integrates a 2D convolutional network architecture with a dual attention mechanism, adeptly capturing and synthesizing multiple features across time steps. This integration significantly enhances prediction precision at each temporal interval. Comparative analyses with other mainstream models reveal that TsLeNet achieves the best performance in both prediction accuracy and efficiency.
Concurrently, this research undertakes a comprehensive analysis of the temporal distribution of errors, the impact patterns of features on the risk sequence, and the applicability of interaction features among surrounding vehicles. The multi-step time series forecasting approach not only offers a novel perspective for analyzing and exploring driving safety but also informs the design and development of targeted driving intervention systems.
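One common way to produce a multi-step forecast like the risk sequence above is recursively: fit a one-step model, then feed each prediction back in as the next input. A toy sketch with a least-squares AR(1) model (TsLeNet itself is a far richer convolutional model; this only illustrates the recursive multi-step idea):

```python
def fit_ar1(series):
    """Least-squares fit of x[t+1] ~ a + b * x[t]."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def forecast_multistep(series, horizon):
    """Recursive multi-step forecast: each prediction becomes the next input."""
    a, b = fit_ar1(series)
    out, x = [], series[-1]
    for _ in range(horizon):
        x = a + b * x
        out.append(x)
    return out
```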

11.
Sensors (Basel) ; 24(15)2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39124080

ABSTRACT

Hypertension is a major risk factor for many serious diseases. With the aging population and lifestyle changes, the incidence of hypertension continues to rise, imposing a significant medical cost burden on patients and severely affecting their quality of life. Early intervention can greatly reduce the prevalence of hypertension. Early warning models based on electronic health records (EHRs) are an important and effective approach to achieving early hypertension warning. However, limited by the scarcity and imbalance of multivisit records and the nonstationary characteristics of hypertension features, it is difficult to predict a patient's probability of developing hypertension effectively. Therefore, this study proposes an online hypertension monitoring model (HRP-OG) based on reinforcement learning and generative feature replay. It transforms hypertension prediction into a sequential decision problem, achieving risk prediction for patients using multivisit records. Sensors embedded in medical devices and wearables continuously capture real-time physiological data such as blood pressure, heart rate, and activity levels, which are integrated into the EHR. The fit between generated samples and real visit data is evaluated using maximum likelihood estimation, which reduces the adversarial discrepancy between the hypertension feature space and incoming incremental data, and the model is updated online from real-time data using generative feature replay. The incorporation of sensor data ensures that the model adapts dynamically to changes in a patient's condition, facilitating timely interventions. In this study, the publicly available MIMIC-III data are used for validation, and the experimental results demonstrate that, compared to existing advanced methods, HRP-OG can effectively improve the accuracy of hypertension risk prediction for few-shot multivisit records in nonstationary environments.


Subjects
Hypertension , Humans , Hypertension/epidemiology , Hypertension/diagnosis , Blood Pressure/physiology , Electronic Health Records , Heart Rate/physiology , Risk Factors , Algorithms , Wearable Electronic Devices
12.
J Clin Med ; 13(15)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39124648

ABSTRACT

Background: Cardiovascular disease (CVD) primary prevention guidelines classify people as high risk and recommend pharmacological treatment based on clinical criteria and absolute CVD risk estimation. Despite relying on similar evidence, recommendations vary between international guidelines, which may affect who is recommended to receive treatment for CVD prevention. Objective: To determine the agreement in treatment recommendations between guidelines from Australia, England, and the United States. Methods: Cross-sectional analysis of the National Health and Nutrition Examination Survey (n = 2647). Adults ≥ 40 years were classified as high risk and recommended for treatment according to the Australian, English, and United States CVD prevention guidelines. Agreement in high-risk classification and recommendation for treatment was assessed by the Kappa statistic. Results: Participants were middle-aged, 49% were male, and 38% were white. The proportion recommended for treatment was highest using the United States guidelines (n = 1318, 49.8%), followed by the English guidelines (n = 1276, 48.2%). In comparison, only 26.6% (n = 705) of participants were recommended for treatment according to the Australian guidelines. There was moderate agreement in the recommendation for treatment between the English and United States guidelines (κ = 0.69 [0.64-0.74]). In comparison, agreement was minimal between the Australian and United States guidelines (κ = 0.47 [0.43-0.52]) and weak between the Australian and English guidelines (κ = 0.50 [0.45-0.55]). Conclusions: Despite the similar evidence underpinning them, there is little agreement between guidelines regarding who is recommended to receive treatment for CVD prevention. These findings suggest greater consistency in high-risk classification between CVD prevention guidelines may be required.
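The kappa statistic used above for between-guideline agreement compares observed agreement against the agreement expected by chance from each rater's marginal frequencies. A stdlib sketch of Cohen's kappa (the ratings below are illustrative):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: (p_observed - p_chance) / (1 - p_chance)."""
    n = len(ratings_a)
    p_obs = sum(1 for a, b in zip(ratings_a, ratings_b) if a == b) / n
    categories = set(ratings_a) | set(ratings_b)
    # Chance agreement from the product of each rater's marginal proportions
    p_chance = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                   for c in categories)
    return (p_obs - p_chance) / (1 - p_chance)
```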

13.
JACC Adv ; 3(8): 101095, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39135918

ABSTRACT

Background: Maternal mortality in the United States remains high, with cardiovascular (CV) complications a leading cause. Objectives: The purpose of this paper was to develop the PARCCS (Prediction of Acute Risk for Cardiovascular Complications in the Peripartum Period Score) for acute CV complications during delivery. Methods: Data from the National Inpatient Sample (2016-2020) were used, with International Classification of Diseases, Tenth Revision codes identifying delivery admissions. Acute CV/renal complications were defined as a composite of pre-eclampsia/eclampsia, peripartum cardiomyopathy, renal complications, venous thromboembolism, arrhythmias, and pulmonary edema. A risk prediction model, PARCCS, was developed using machine learning, consisting of 14 variables and scored out of 100 points. Results: Of the 2,371,661 pregnant patients analyzed, 7.0% had acute CV complications during the delivery hospitalization. Patients with CV complications had a higher prevalence of comorbidities and were more likely to be of Black race and lower income. The PARCCS variables included electrolyte imbalances (13 points [p]), age (3p for age <20 years), cesarean delivery (4p), obesity (5p), pre-existing heart failure (28p), multiple gestations (4p), Black race (2p), gestational hypertension (3p), low income (1p), gestational diabetes (2p), chronic diabetes (6p), prior stroke (22p), coagulopathy (5p), and nonelective admission (2p). Using the validation set, the performance of the model was evaluated with an area under the receiver-operating characteristic curve of 0.68 (95% CI: 0.67-0.68). Conclusions: PARCCS has the potential to be an important tool for identifying pregnant individuals at risk of acute peripartum CV complications at the time of delivery. Future studies should further validate this score and determine whether it can improve patient outcomes.
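The 14 published point weights sum exactly to the stated 100-point scale, so a patient's PARCCS is the sum of points for each present risk factor. A sketch (the flag names are illustrative shorthand, not from the paper):

```python
# Point weights transcribed from the abstract; keys are illustrative shorthand.
PARCCS_POINTS = {
    "electrolyte_imbalance": 13, "age_under_20": 3, "cesarean_delivery": 4,
    "obesity": 5, "preexisting_heart_failure": 28, "multiple_gestation": 4,
    "black_race": 2, "gestational_hypertension": 3, "low_income": 1,
    "gestational_diabetes": 2, "chronic_diabetes": 6, "prior_stroke": 22,
    "coagulopathy": 5, "nonelective_admission": 2,
}

def parccs_score(flags):
    """Sum the points for every risk factor flagged as present."""
    return sum(PARCCS_POINTS[name] for name, present in flags.items() if present)
```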

14.
Front Immunol ; 15: 1437980, 2024.
Article in English | MEDLINE | ID: mdl-39136015

ABSTRACT

Background: Sarcopenia is linked to an unfavorable prognosis in individuals with rheumatoid arthritis (RA). Early identification and treatment of sarcopenia are clinically significant. This study aimed to create and validate a nomogram for predicting sarcopenia risk in RA patients, providing clinicians with a reliable tool for the early identification of high-risk patients. Methods: Patients with RA diagnosed between August 2022 and January 2024 were included and randomly split into training and validation sets in a 7:3 ratio. Least absolute shrinkage and selection operator (LASSO) regression and multifactorial logistic regression analysis were used to screen risk variables for RA-associated muscle loss and to create an RA sarcopenia risk score. The predictive performance and clinical utility of the risk model were evaluated by plotting the receiver operating characteristic curve and calculating the area under the curve (AUC), along with the calibration curve and decision curve analysis (DCA). Results: A total of 480 patients with RA were included in the study (90% female, with the largest group, about 50%, aged 45-59 years). Four variables (body mass index, disease duration, hemoglobin, and grip strength) were included to construct the nomogram for predicting RA sarcopenia. The training and validation set AUCs were 0.915 (95% CI: 0.8795-0.9498) and 0.907 (95% CI: 0.8552-0.9597), respectively, showing that the predictive model was well discriminated. The calibration curve showed that the predicted values of the model were largely in line with the actual values, demonstrating good calibration. The DCA indicated that almost the entire range of patients with RA can benefit from this novel prediction model, suggesting good clinical utility. Conclusion: This study developed and validated a nomogram prediction model to predict the risk of sarcopenia in RA patients.
The model can assist clinicians in enhancing their ability to screen for RA sarcopenia, assess patient prognosis, make early decisions, and improve the quality of life for RA patients.
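Decision curve analysis, used above to assess clinical utility, compares strategies by net benefit at each risk threshold: the true-positive rate minus the false-positive rate weighted by the odds of the threshold. A one-function sketch of the standard formula (counts below are illustrative):

```python
def net_benefit(true_positives: int, false_positives: int, n: int,
                threshold: float) -> float:
    """Decision-curve net benefit at a given risk threshold p_t:
    TP/n - FP/n * p_t / (1 - p_t)."""
    return (true_positives / n
            - false_positives / n * threshold / (1 - threshold))

# Example: 10 TP and 20 FP among 100 patients at a 20% threshold.
nb = net_benefit(10, 20, 100, 0.2)
```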


Subjects
Rheumatoid Arthritis , Nomograms , Sarcopenia , Humans , Rheumatoid Arthritis/complications , Sarcopenia/diagnosis , Sarcopenia/etiology , Female , Male , Middle Aged , Aged , Risk Assessment , Risk Factors , Prognosis , Adult , ROC Curve , Reproducibility of Results
15.
Australas Psychiatry ; : 10398562241269171, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39137045

ABSTRACT

OBJECTIVE: To examine the accuracy and likely clinical usefulness of the Psychosis Metabolic Risk Calculator (PsyMetRiC) in predicting up to six-year risk of incident metabolic syndrome in an Australian sample of young people with first-episode psychosis. METHOD: We conducted a retrospective study at a secondary care early psychosis treatment service among people aged 16-35 years, extracting relevant data at the time of antipsychotic commencement and between one and six years later. We assessed algorithm accuracy primarily via discrimination (C-statistic), calibration (calibration plots), and clinical usefulness (decision curve analysis). Model updating and recalibration generated a site-specific (Australian) PsyMetRiC version. RESULTS: We included 116 people with baseline and follow-up data: 73% male, mean age 20.1 years, mean follow-up 2.6 years, metabolic syndrome prevalence 13%. C-statistics for both the partial (C = 0.71, 95% CI 0.64-0.75) and full models (C = 0.72, 95% CI 0.65-0.77) were acceptable; however, calibration plots demonstrated consistent under-prediction of risk. Recalibration and updating led to slightly improved C-statistics, greatly improved agreement between observed and predicted risk, and a significantly improved, though narrow, window of likely clinical usefulness. CONCLUSION: An updated and recalibrated PsyMetRiC model, PsyMetRiC-Australia, shows promise. Validation in a large sample is required to confirm its accuracy and clinical usefulness for the Australian population.
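Recalibration of the kind described, fixing systematic under-prediction while leaving discrimination largely intact, is often done by logistic recalibration: refitting an intercept and slope on the logit of the original predictions. A sketch of that transform (PsyMetRiC-Australia's actual update procedure may differ):

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def expit(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def recalibrate(p: float, intercept: float, slope: float) -> float:
    """Logistic recalibration: shift/scale the logit of an existing prediction.
    A positive intercept raises risks across the board (fixes under-prediction);
    intercept=0, slope=1 leaves predictions unchanged."""
    return expit(intercept + slope * logit(p))
```

In practice the intercept and slope would be estimated by regressing observed outcomes on the logit of the original model's predictions in the local sample.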

16.
Front Oncol ; 14: 1419633, 2024.
Article in English | MEDLINE | ID: mdl-39161387

ABSTRACT

Background: Numerous studies have developed or validated prediction models to estimate the likelihood of postoperative pneumonia (POP) in esophageal cancer (EC) patients. The quality of these models and their applicability to clinical practice and future research remain unknown. This study systematically evaluated the risk of bias and applicability of risk prediction models for POP in patients undergoing esophageal cancer surgery. Methods: PubMed, Embase, Web of Science, Cochrane Library, Cumulative Index to Nursing and Allied Health Literature (CINAHL), China National Knowledge Infrastructure (CNKI), China Science and Technology Journal Database (VIP), WanFang Database, and Chinese Biomedical Literature Database were searched from inception to March 12, 2024. Two investigators independently screened the literature and extracted data. The Prediction Model Risk of Bias Assessment Tool (PROBAST) checklist was employed to evaluate both risk of bias and applicability. Results: A total of 14 studies involving 23 models were included, mainly published between 2014 and 2023. The applicability of all studies was good. However, all studies exhibited a high risk of bias, primarily attributed to inappropriate data sources, insufficient sample size, inappropriate handling of variables and missing data, and lack of model validation. The incidence of POP in patients undergoing esophageal cancer surgery ranged from 14.60% to 39.26%. The most frequently used predictors were smoking, age, chronic obstructive pulmonary disease (COPD), diabetes mellitus, and thoracotomy approach. Inter-model discrimination ranged from 0.627 to 0.850, sensitivity between 60.7% and 84.0%, and specificity from 59.1% to 83.9%. Conclusion: All included studies reported good discrimination for risk prediction models for POP in patients undergoing esophageal cancer surgery, indicating stable model performance.
However, according to the PROBAST checklist, all studies had a high risk of bias. Future studies should use the predictive model assessment tool to improve study design and develop new models with larger samples and multicenter external validation. Systematic review registration: https://www.crd.york.ac.uk/prospero, identifier CRD42024527085.

17.
Food Sci Nutr ; 12(8): 5530-5537, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39139971

ABSTRACT

A healthy diet is a cornerstone of cardiovascular disease (CVD) prevention, and inflammation is pivotal in CVD development. This study aimed to evaluate the association between a pro-inflammatory diet and CVD risk. This cross-sectional study involved 10,138 participants of the Fasa Adult Cohort Study. After excluding participants with missing data, the Energy-Adjusted Dietary Inflammatory Index (E-DII) was calculated from the recorded Food Frequency Questionnaire to assess the inflammatory potential of the diet. The Framingham risk score (FRS) was used to predict the 10-year risk of CVD. The association between E-DII and high CVD risk was investigated using multinomial regression. After exclusion, the mean age of the studied individuals (n = 10,030) was 48.6 ± 9.6 years, including 4522 men. Most participants were at low risk (FRS <10%) for CVD (87.6%), while 2.7% were at high risk (FRS ≥20%). The median FRS was 2.80 (1.70, 6.30). The E-DII ranged from -4.22 to 4.49 (mean E-DII = 0.880 ± 1.127). E-DII was significantly associated with FRS, and this result persisted after adjusting for confounding factors and in both sexes. This study revealed that a pro-inflammatory diet significantly increases CVD risk. Consequently, reducing the inflammatory potential of the diet should be considered an effective dietary intervention for CVD prevention.
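The Dietary Inflammatory Index underlying the E-DII scores each food parameter by standardizing intake against a global reference distribution, mapping the z-score to a centered percentile in [-1, 1], and weighting it by a published inflammatory effect score; the weighted terms are then summed across parameters. A minimal sketch of one parameter's contribution is below; the function name and all numeric values are illustrative, not the published reference values, and the E-DII additionally adjusts each intake per 1000 kcal of total energy:

```python
import math

def dii_contribution(intake, ref_mean, ref_sd, effect_score):
    """One food parameter's contribution to a DII-style score:
    standardize intake against a global reference distribution,
    map the z-score to a centered percentile in [-1, 1] via the
    normal CDF, and weight by the inflammatory effect score."""
    z = (intake - ref_mean) / ref_sd
    percentile = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
    return (2.0 * percentile - 1.0) * effect_score

# Illustrative values only: above-reference intake of a
# pro-inflammatory parameter (positive effect score) raises the score.
example = dii_contribution(intake=35.0, ref_mean=28.6,
                           ref_sd=8.0, effect_score=0.4)
```

Anti-inflammatory parameters carry negative effect scores, so above-reference intake of those pulls the total score down.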

18.
J Med Internet Res ; 26: e48997, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39141914

ABSTRACT

BACKGROUND: Preeclampsia is a potentially fatal complication of pregnancy, characterized by high blood pressure and excess protein in the urine. Due to its complexity, predicting the onset of preeclampsia is often difficult and inaccurate. OBJECTIVE: This study aimed to create quantitative models to predict the gestational age at preeclampsia onset using electronic health records. METHODS: We retrospectively collected 1178 preeclamptic pregnancy records from the University of Michigan Health System as the discovery cohort and 881 records from the University of Florida Health System as the validation cohort. We constructed 2 Cox proportional hazards models: a baseline model using maternal and pregnancy characteristics, and a full model with additional laboratory findings, vitals, and medications. We built the models using 80% of the discovery data, tested them on the remaining 20%, and validated them with the University of Florida data. We further stratified the patients into high- and low-risk groups for preeclampsia onset risk assessment. RESULTS: The baseline model reached concordance indices of 0.64 and 0.61 in the 20% testing data and the validation data, respectively, while the full model increased these concordance indices to 0.69 and 0.61, respectively. For preeclampsia diagnosed at 34 weeks, the baseline and full models had area under the curve (AUC) values of 0.65 and 0.70, and for preeclampsia diagnosed at 37 weeks, AUC values of 0.69 and 0.70, respectively. Both models contain 5 selected features, among which the number of fetuses in the pregnancy, hypertension, and parity are shared between the 2 models with similar hazard ratios and significant P values. In the full model, maximum diastolic blood pressure in early pregnancy was the predominant feature. CONCLUSIONS: Electronic health record data provide useful information for predicting the gestational age of preeclampsia onset. Stratification of the cohorts using these 5-predictor Cox proportional hazards models gives clinicians a convenient tool to assess the onset time of preeclampsia in patients.
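The concordance indices reported for survival models like these can be computed directly from observed times, censoring indicators, and predicted risk scores: every pair in which the subject with the shorter observed time actually had the event is "comparable", and the pair is concordant when that subject also carries the higher predicted risk. A minimal pure-Python sketch of Harrell's C-index (function name and toy data are illustrative):

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.
    A pair (i, j) is comparable when subject i's observed time is
    shorter and subject i actually had the event (events[i] == 1);
    it is concordant when subject i also has the higher predicted
    risk. Ties in risk count as half-concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: risk scores perfectly ordered with event times -> 1.0
c = harrell_c_index(times=[2, 4, 6, 8], events=[1, 1, 0, 1],
                    risk_scores=[0.9, 0.7, 0.4, 0.2])  # -> 1.0
```

A value of 0.5 means the scores rank event times no better than chance, which is why the modest gains from 0.64 to 0.69 in such studies are reported alongside a significance test.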


Subjects
Electronic Health Records , Pre-Eclampsia , Humans , Female , Pregnancy , Electronic Health Records/statistics & numerical data , Adult , Retrospective Studies , Proportional Hazards Models , Gestational Age
19.
J Infect Public Health ; 17(9): 102514, 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39142081

ABSTRACT

BACKGROUND: Public health threats can significantly impact mass gatherings, so enhancing surveillance systems is crucial. Epidemic Intelligence from Open Sources (EIOS) was introduced in Qatar to complement existing surveillance measures in preparation for the FIFA World Cup Qatar 2022 (FWC22). This study estimated the empirical probability of EIOS detecting signals of public health relevance. It also examined the factors associated with classifying a signal as moderate-to-high risk during a mass gathering event. METHODS: This cross-sectional descriptive study used data collected between November 8th and December 25th, 2022, through an EIOS dashboard that filtered open-source articles using specific keywords. Triage criteria and a scoring scheme were developed to capture signals, which were maintained in MS Excel. EIOS' contribution to epidemic intelligence was assessed by estimating the empirical probability of relevant public health signals. Chi-squared tests of independence were performed to check for associations between hazard categories and other independent variables. A multivariate logistic regression evaluated the predictors of moderate-to-high risk signals that required prompt action. RESULTS: The probability of EIOS capturing a signal relevant to public health was estimated at 0.85% (95% confidence interval (CI) [0.82%-0.88%]), with three signals requiring a national response. The hazard category of a signal was significantly associated with the region of occurrence (χ2 (5, N = 2543) = 1021.6, p < .001) and with detection during matchdays of the tournament (χ2 (5, N = 2543) = 11.2, p < .05). The triage criteria discriminated acceptably between low and moderate-to-high risk signals (area under the curve = 0.79). CONCLUSION: EIOS proved useful in the early warning of public health threats.
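The chi-squared test of independence used here compares the observed counts in a hazard-category × region contingency table against the counts expected if the two variables were unrelated. A minimal pure-Python sketch of the Pearson statistic, with an invented 3×3 table (not the study's data):

```python
def chi2_independence(table):
    """Pearson chi-squared statistic and degrees of freedom for a
    two-way contingency table (rows x columns of observed counts).
    Expected counts assume independence: row_total * col_total / N."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (observed - expected) ** 2 / expected
    dof = (len(row_totals) - 1) * (len(col_totals) - 1)
    return chi2, dof

# Illustrative table: hazard category (rows) x region (columns).
chi2, dof = chi2_independence([[120, 30, 15],
                               [40, 80, 25],
                               [10, 20, 60]])
# A large chi2 relative to dof indicates a strong association.
```

In practice the statistic is compared to the chi-squared distribution with the computed degrees of freedom (e.g. via `scipy.stats.chi2_contingency`) to get the p-value the abstract reports.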

20.
Hematol Oncol ; 42(5): e3302, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39096249

ABSTRACT

This study retrospectively analyzed whether the second revision of the International Staging System (R2-ISS) stratified prognosis at treatment initiation in patients with multiple myeloma (MM) receiving anti-CD38 antibody-based triplet treatments. High-risk chromosomal abnormalities were examined from diagnosis to treatment initiation and were considered positive if detected at least once. R2-ISS was recalculated at treatment initiation and defined as "dynamic R2-ISS." Data from 150 patients who underwent the defined treatments were analyzed. The median progression-free survival (PFS) was 19.5 months, and the median overall survival (OS) was 36.5 months. Dynamic R2-ISS significantly stratified prognosis for both PFS and OS. The median PFS for patients with dynamic R2-ISS stage IV was 3.3 months, and the median OS was 11.7 months, indicating extremely poor outcomes. Although the Revised International Staging System (R-ISS) calculated at treatment initiation significantly stratified treatment outcomes, patients classified by R-ISS could be further stratified by R2-ISS to provide better prognostic information. Dynamic R2-ISS thus shows potential as a prognostic tool in patients with MM treated with anti-CD38 antibody-based triplet therapies.
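Median PFS and OS figures like these are conventionally read off a Kaplan-Meier curve: the survival probability is multiplied down at each observed event time, and the median is the earliest time the curve reaches 0.5. A minimal sketch with toy data (real analyses would use a survival library such as lifelines or R's survival package):

```python
def km_median(times, events):
    """Median survival from a Kaplan-Meier estimate, with event=1 and
    censored=0. Returns the earliest time at which the estimated
    survival probability drops to 0.5 or below, or None if the curve
    never reaches 0.5 (e.g. heavy censoring)."""
    data = sorted(zip(times, events))
    n = len(data)
    at_risk, surv, i = n, 1.0, 0
    while i < n:
        t = data[i][0]
        ties = sum(1 for tt, _ in data[i:] if tt == t)   # subjects at time t
        deaths = sum(1 for _, e in data[i:i + ties] if e == 1)
        if deaths:
            surv *= 1.0 - deaths / at_risk               # KM product step
            if surv <= 0.5:
                return t
        at_risk -= ties                                   # drop events + censored
        i += ties
    return None

# Toy example: events at months 1-4, no censoring -> median is month 2.
median = km_median(times=[1, 2, 3, 4], events=[1, 1, 1, 1])  # -> 2
```

Censored subjects reduce the at-risk count without stepping the curve down, which is why medians can differ sharply between subgroups (such as R2-ISS stages) even at similar follow-up lengths.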


Subjects
ADP-ribosyl Cyclase 1 , Multiple Myeloma , Humans , Multiple Myeloma/drug therapy , Multiple Myeloma/mortality , Multiple Myeloma/therapy , Multiple Myeloma/pathology , Male , Female , ADP-ribosyl Cyclase 1/antagonists & inhibitors , Middle Aged , Aged , Prognosis , Retrospective Studies , Adult , Aged, 80 and over , Antineoplastic Combined Chemotherapy Protocols/therapeutic use , Neoplasm Staging , Survival Rate , Membrane Glycoproteins