Results 1 - 16 of 16
1.
Front Med (Lausanne) ; 10: 1146529, 2023.
Article in English | MEDLINE | ID: mdl-37534322

ABSTRACT

Purpose: To explore and validate the utility of machine learning (ML) methods using a limited sample size to predict changes in visual acuity and keratometry 2 years following corneal crosslinking (CXL) for progressive keratoconus. Methods: The study included all consecutive patients with progressive keratoconus who underwent CXL from July 2014 to December 2020, with a 2-year follow-up period completed before July 2022, to develop the model. Variables collected included patient demographics, visual acuity, spherical equivalent, and Pentacam parameters. Available case data were divided into training and testing data sets. Three ML models were evaluated based on their performance in predicting corrected distance visual acuity (CDVA) and maximum keratometry (Kmax) changes compared with actual values, as indicated by average root mean squared error (RMSE) and R-squared (R2) values. Patients followed from July 2022 to December 2022 were included in the validation set. Results: A total of 277 eyes from 195 patients were included in the training and testing sets, and 43 eyes from 35 patients were included in the validation set. Baseline CDVA (26.7%) and the ratio of steep keratometry to flat keratometry (K2/K1; 13.8%) were most closely associated with CDVA changes. The baseline ratio of Kmax to mean keratometry (Kmax/Kmean; 20.9%) was most closely associated with Kmax changes. Using these metrics, the best-performing ML model was XGBoost, which produced predicted values closest to the actual values for both CDVA and Kmax changes in the testing set (R2 = 0.9993 and 0.9888) and the validation set (R2 = 0.8956 and 0.8382). Conclusion: Application of an ML approach using XGBoost, incorporating readily identifiable parameters, considerably improved the accuracy of predicting changes in both CDVA and Kmax 2 years after CXL for treatment of progressive keratoconus.
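A minimal sketch of the regression workflow this abstract describes: training an XGBoost regressor on baseline parameters and reporting RMSE and R² on a held-out test set. The file name and column names (baseline_cdva, k2_k1_ratio, kmax_kmean_ratio, delta_cdva) are illustrative assumptions, not the study's actual variables.

```python
# Sketch of predicting a 2-year change with XGBoost regression, assuming a
# hypothetical CSV of pre-/post-CXL measurements.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from xgboost import XGBRegressor

df = pd.read_csv("cxl_cases.csv")  # hypothetical case table
X = df[["baseline_cdva", "k2_k1_ratio", "kmax_kmean_ratio"]]
y = df["delta_cdva"]  # 2-year change in CDVA

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
print("R2:  ", r2_score(y_test, pred))
```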

2.
Front Public Health ; 11: 1150095, 2023.
Article in English | MEDLINE | ID: mdl-37143970

ABSTRACT

Background: The global COVID-19 pandemic is still ongoing, and cross-country and cross-period variation in COVID-19 age-adjusted case fatality rates (CFRs) has not been clarified. Here, we aimed to identify the country-specific effects of booster vaccination and other features that may affect heterogeneity in age-adjusted CFRs worldwide, and to predict the benefit of increasing the booster vaccination rate on future CFRs. Method: Cross-temporal and cross-country variations in CFR were identified in 32 countries using the latest available database and multiple features (vaccination coverage, demographic characteristics, disease burden, behavioral risks, environmental risks, health services, and trust) with the Extreme Gradient Boosting (XGBoost) algorithm and SHapley Additive exPlanations (SHAP). Country-specific risk features that affect age-adjusted CFRs were then identified. The benefit of boosters on age-adjusted CFRs was simulated by increasing booster vaccination by 1-30% in each country. Results: Overall COVID-19 age-adjusted CFRs across the 32 countries ranged from 110 to 5,112 deaths per 100,000 cases from February 4, 2020 to January 31, 2022. Countries were divided into those with age-adjusted CFRs higher than the crude CFRs (n = 9) and those with age-adjusted CFRs lower than the crude CFRs (n = 23). The effect of booster vaccination on age-adjusted CFRs became more important from the Alpha to the Omicron period (importance scores: 0.03-0.23). The Omicron-period model showed that the key risk factors for countries with age-adjusted CFRs higher than crude CFRs were low GDP per capita and low booster vaccination rates, while the key risk factors for countries with age-adjusted CFRs lower than crude CFRs were high dietary risks and low physical activity. Increasing booster vaccination rates by 7% would reduce CFRs in all countries with age-adjusted CFRs higher than the crude CFRs. Conclusion: Booster vaccination still plays an important role in reducing age-adjusted CFRs; given the multidimensional concurrent risk factors, precise joint intervention strategies and preparations based on country-specific risks are also essential.
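A hedged sketch of the XGBoost-plus-SHAP feature-importance step described above. The country-level feature table and its column names are stand-ins for the multi-dimensional features the study actually used.

```python
# Rank country-level features by mean absolute SHAP value, as an approximation
# of the "importance scores" mentioned in the abstract. Input file and feature
# names are assumptions for illustration.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBRegressor

df = pd.read_csv("country_features.csv")   # hypothetical country-period table
features = ["booster_rate", "gdp_per_capita", "dietary_risk", "physical_activity"]
X, y = df[features], df["age_adjusted_cfr"]

model = XGBRegressor(n_estimators=400, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

importance = pd.Series(np.abs(shap_values).mean(axis=0), index=features)
print(importance.sort_values(ascending=False))
```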


Subjects
COVID-19, Humans, COVID-19/epidemiology, COVID-19/prevention & control, Pandemics, Risk Factors, Cost of Illness, Vaccination
3.
Clin Transl Radiat Oncol ; 39: 100590, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36935854

ABSTRACT

Head and neck radiotherapy induces substantial toxicity, and its efficacy and tolerance vary widely across patients. Advances in radiotherapy delivery techniques, along with the increased quality and frequency of image guidance, offer a unique opportunity to individualize radiotherapy based on imaging biomarkers, with the aim of improving radiation efficacy while reducing its toxicity. Various artificial intelligence models integrating clinical data and radiomics have shown encouraging results for predicting toxicity and cancer-control outcomes in head and neck cancer radiotherapy. Clinical implementation of these models could enable individualized, risk-based therapeutic decision making, but the reliability of the current studies is limited. Understanding, validating, and expanding these models to larger multi-institutional data sets, and testing them in the context of clinical trials, are needed to ensure safe clinical implementation. This review summarizes the current state of the art of machine learning models for predicting head and neck cancer radiotherapy outcomes.

4.
Comput Struct Biotechnol J ; 21: 769-779, 2023.
Article in English | MEDLINE | ID: mdl-36698972

ABSTRACT

Understanding genes and their underlying mechanisms is critical to deciphering how antimicrobial-resistant (AMR) bacteria withstand the detrimental effects of antibiotic drugs. At the same time, the genes related to AMR phenotypes may also serve as biomarkers for predicting whether a microbial strain is resistant to certain antibiotic drugs. We developed a Cross-Validated Feature Selection (CVFS) approach for robustly selecting the most parsimonious gene sets for predicting AMR activities from bacterial pan-genomes. The core idea behind the CVFS approach is interrogating features among non-overlapping sub-parts of the dataset to ensure the representativeness of the features. By randomly splitting the dataset into disjoint sub-parts, conducting feature selection within each sub-part, and intersecting the features shared by all sub-parts, the CVFS approach extracts the most representative features while yielding satisfactory AMR activity prediction accuracy. Testing this idea on bacterial pan-genome datasets, we showed that the approach extracted succinct feature sets that predicted AMR activities very well, indicating the potential of these genes as AMR biomarkers. The functional analysis demonstrated that the CVFS approach extracted both known AMR genes and novel ones, suggesting the capability of the algorithm to select relevant features and highlighting the potential of the novel genes for expanding antimicrobial resistance gene databases.
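The CVFS idea lends itself to a short illustration: split the data into disjoint sub-parts, run a feature selector within each sub-part, and keep only the intersection. The sketch below uses synthetic data and SelectKBest as a placeholder selector; the authors' actual per-fold selection method may differ.

```python
# Minimal sketch of cross-validated feature selection (CVFS): features selected
# independently in disjoint sub-parts, then intersected into a consensus set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=300, n_features=200, n_informative=10, random_state=0)

selected_sets = []
for _, idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Each test fold serves as one disjoint sub-part of the data
    selector = SelectKBest(f_classif, k=30).fit(X[idx], y[idx])
    selected_sets.append(set(np.flatnonzero(selector.get_support())))

# Only features shared by every sub-part are kept as the parsimonious set
consensus = set.intersection(*selected_sets)
print("Consensus features:", sorted(consensus))
```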

6.
JTCVS Open ; 11: 214-228, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36172420

ABSTRACT

Objective: We sought to develop several parsimonious machine learning models to predict resource utilization and clinical outcomes following cardiac operations using only preoperative factors. Methods: All patients undergoing coronary artery bypass grafting and/or valve operations were identified in the 2015-2021 University of California Cardiac Surgery Consortium repository. The primary end point of the study was length of stay (LOS). Secondary end points included 30-day mortality, acute kidney injury, reoperation, postoperative blood transfusion, and duration of intensive care unit admission (ICU LOS). Linear regression, gradient-boosted machine, random forest, and extreme gradient boosting predictive models were developed. The coefficient of determination and the area under the receiver operating characteristic curve (AUC) were used to compare models. Important predictors of increased resource use were identified using SHapley Additive exPlanations (SHAP) summary plots. Results: Compared with all other modeling strategies, gradient-boosted machines demonstrated the greatest performance in the prediction of LOS (coefficient of determination, 0.42), ICU LOS (coefficient of determination, 0.23), and 30-day mortality (AUC, 0.69). Advancing age, reduced hematocrit, and multiple-valve procedures were associated with increased LOS and ICU LOS. Furthermore, the gradient-boosted machine model best predicted acute kidney injury (AUC, 0.76), whereas random forest exhibited the greatest discrimination in the prediction of postoperative transfusion (AUC, 0.73). We observed no difference in performance between modeling strategies for reoperation (AUC, 0.80). Conclusions: Our findings affirm the utility of machine learning in the estimation of resource use and clinical outcomes following cardiac operations. We identified several risk factors associated with increased resource use, which may be used to guide case scheduling in times of limited hospital capacity.
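A brief sketch of the model-comparison step on the LOS end point, ranking regressors by cross-validated R² as the abstract describes. The registry file and preoperative feature names are assumptions for illustration only.

```python
# Compare several regressors on length of stay using 5-fold cross-validated R².
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

df = pd.read_csv("cabg_valve_cases.csv")          # hypothetical registry extract
X = df[["age", "hematocrit", "n_valves", "creatinine"]]
y = df["los_days"]

models = {
    "linear": LinearRegression(),
    "gbm": GradientBoostingRegressor(),
    "random_forest": RandomForestRegressor(),
    "xgboost": XGBRegressor(),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:14s} R2 = {r2:.2f}")
```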

7.
Smart Health (Amst) ; 26: 100323, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36159078

ABSTRACT

The large amount of data generated during the COVID-19 pandemic requires advanced tools for the long-term, high-accuracy prediction of risk factors associated with COVID-19 mortality. Machine learning (ML) methods directly address this topic and are essential tools to guide public health interventions. Here, we used ML to investigate the importance of demographic and clinical variables for COVID-19 mortality. We also analyzed how comorbidity networks are structured according to age group. We conducted a retrospective study of COVID-19 mortality in hospitalized patients from Londrina, Paraná, Brazil, registered in the database for severe acute respiratory infections (SIVEP-Gripe) from January 2021 to February 2022. We tested four ML models to predict the COVID-19 outcome: Logistic Regression, Support Vector Machine, Random Forest, and XGBoost. We also constructed a comorbidity network to investigate the impact of co-occurring comorbidities on COVID-19 mortality. Our study comprised 8358 hospitalized patients, of whom 2792 (33.40%) died. The XGBoost model achieved excellent performance (ROC-AUC = 0.90). Both the permutation method and SHAP values highlighted the importance of age, ventilatory support status, and intensive care unit admission as key features in predicting COVID-19 outcomes. The comorbidity networks of older deceased patients are denser than those of younger patients. In addition, the co-occurrence of heart disease and diabetes may be the most important combination for predicting COVID-19 mortality, regardless of age and sex. This work presents a valuable combination of machine learning and comorbidity network analysis to predict COVID-19 outcomes. Reliable evidence on this topic is crucial for guiding the post-pandemic response and assisting in COVID-19 care planning and provision.
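A minimal sketch of how a comorbidity co-occurrence network like the one described here could be assembled with networkx, using a tiny hand-made patient list; the actual SIVEP-Gripe fields and network construction rules are not reproduced.

```python
# Build a weighted comorbidity co-occurrence network for deceased patients and
# report its density. Records below are illustrative, not real data.
from itertools import combinations
import networkx as nx

patients = [                                     # (comorbidities, died)
    (["heart disease", "diabetes"], True),
    (["diabetes", "obesity"], False),
    (["heart disease", "diabetes", "renal disease"], True),
]

G = nx.Graph()
for comorbidities, died in patients:
    if not died:
        continue
    for a, b in combinations(sorted(set(comorbidities)), 2):
        weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)       # count co-occurrences

print(nx.to_dict_of_dicts(G))
print("Network density:", nx.density(G))
```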

8.
Adv Eng Softw ; 173: 103212, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35936352

ABSTRACT

The establishment of fuzzy relations and the fuzzification of time series are the top priorities of models for forecasting fuzzy time series, and much of the literature has studied these two aspects to improve forecasting capability. In this paper, we propose a new method (FTSOAX) for forecasting fuzzy time series, derived from an improved seagull optimization algorithm (ISOA) and XGBoost. To increase the accuracy of the forecasting model, ISOA is applied to partition the domain of discourse into more suitable intervals. We improved the seagull optimization algorithm (SOA) with the Powell algorithm and a random curve action to give SOA better convergence ability. XGBoost is used to forecast the change in fuzzy membership, overcoming the low accuracy that results from relying on fuzzy relations alone. We used daily confirmed COVID-19 cases in 7 countries as a dataset to demonstrate the performance of FTSOAX. The results show that FTSOAX is superior to other fuzzy forecasting models in predicting daily confirmed COVID-19 cases.

9.
Front Med (Lausanne) ; 9: 837232, 2022.
Article in English | MEDLINE | ID: mdl-35372378

ABSTRACT

Background and Objectives: Chronic kidney disease progression to end-stage kidney disease (ESKD) is associated with a marked increase in mortality and morbidity. Its progression is highly variable and difficult to predict. Methods: This is an observational, retrospective, single-centre study. The cohort comprised patients attending the hospital and nephrology clinic at The Canberra Hospital from September 1996 to March 2018. Demographic data, vital signs, kidney function tests, proteinuria, and serum glucose were extracted. The model was trained on the featurised time-series data with XGBoost. Its performance was compared against six nephrologists and the Kidney Failure Risk Equation (KFRE). Results: A total of 12,371 patients were included, of whom 2,388 had adequate data density (three eGFR data points in the first 2 years) for subsequent analysis. Patients were divided in an 80%/20% ratio into training and testing datasets. The ML model had superior performance to the nephrologists in predicting ESKD within 2 years, with 93.9% accuracy, 60% sensitivity, 97.7% specificity, and 75% positive predictive value. The ML model was also superior in all performance metrics to the KFRE 4- and 8-variable models. eGFR and glucose were found to contribute most to the ESKD prediction performance. Conclusions: The computational predictions had higher accuracy, specificity, and positive predictive value, which indicates the potential for integration into clinical workflows for decision support.
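A rough sketch of the kind of pipeline the abstract outlines: featurising each patient's eGFR time series (first value, last value, slope, number of points) and training an XGBoost classifier for 2-year ESKD. File names, column names, and the specific features are assumptions, not the study's actual featurisation.

```python
# Featurise per-patient eGFR trajectories, then fit an XGBoost classifier.
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

labs = pd.read_csv("egfr_measurements.csv")   # patient_id, days_from_baseline, egfr

def featurise(group):
    days = max(group["days_from_baseline"].iloc[-1], 1)
    slope = (group["egfr"].iloc[-1] - group["egfr"].iloc[0]) / days
    return pd.Series({"egfr_first": group["egfr"].iloc[0],
                      "egfr_last": group["egfr"].iloc[-1],
                      "egfr_slope": slope,
                      "n_points": len(group)})

X = labs.sort_values("days_from_baseline").groupby("patient_id").apply(featurise)
y = pd.read_csv("eskd_labels.csv").set_index("patient_id").loc[X.index, "eskd_within_2y"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
clf = XGBClassifier(n_estimators=300, max_depth=4).fit(X_tr, y_tr)
print("Accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```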

10.
Inform Med Unlocked ; 30: 100908, 2022.
Article in English | MEDLINE | ID: mdl-35280933

ABSTRACT

Introduction: The Coronavirus Disease 2019 (COVID-19) epidemic strained health systems with severe scarcities of hospital resources. In this critical situation, decreasing COVID-19 readmissions could help sustain hospital capacity. This study aimed to select the features that most affect COVID-19 readmission and to compare the ability of machine learning (ML) algorithms to predict COVID-19 readmission based on the selected features. Material and methods: The data of 5,791 hospitalized patients with COVID-19 were retrospectively retrieved from a hospital registry system. The LASSO feature selection algorithm was used to select the most important features related to COVID-19 readmission. HistGradientBoosting (HGB), Bagging, Multi-Layer Perceptron (MLP), Support Vector Machine (SVM, linear kernel), SVM (RBF kernel), and Extreme Gradient Boosting (XGBoost) classifiers were used for prediction. We evaluated the performance of the ML algorithms with 10-fold cross-validation using six performance evaluation metrics. Results: Of the 42 features, 14 were identified as the most relevant predictors. The XGBoost classifier outperformed the other ML models with an average accuracy of 91.7%, specificity of 91.3%, sensitivity of 91.6%, F-measure of 91.8%, and AUC of 0.91. Conclusion: The experimental results show that ML models can satisfactorily predict COVID-19 readmission. Besides considering the risk factors prioritized in this work, categorizing cases with a high risk of reinfection can make patient triage and hospital resource utilization more effective.
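A compact sketch of the two-stage design described above: L1-penalised (LASSO-style) feature selection followed by an XGBoost classifier evaluated with 10-fold cross-validation. The registry file and target column are hypothetical.

```python
# LASSO-style feature selection feeding an XGBoost classifier, scored with
# 10-fold cross-validated ROC-AUC.
import pandas as pd
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

df = pd.read_csv("covid_admissions.csv")          # hypothetical registry export
X, y = df.drop(columns=["readmitted"]), df["readmitted"]

lasso_selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
)
pipeline = make_pipeline(StandardScaler(), lasso_selector, XGBClassifier(n_estimators=200))

auc = cross_val_score(pipeline, X, y, cv=10, scoring="roc_auc").mean()
print("10-fold ROC-AUC:", round(auc, 3))
```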

11.
JACC Asia ; 2(7): 819-828, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36713754

ABSTRACT

Background: Extracorporeal membrane oxygenation (ECMO) has been used for intraoperative hemodynamic support in patients with end-stage lung disease and pulmonary hypertension undergoing lung transplantation (LT). Objectives: The aim of this study was to identify the association between pulmonary artery pressure change during ECMO and post-LT survival. Methods: The study investigators collected and analyzed data from the Chinese Lung Transplantation Registry. Patients with severe pulmonary hypertension who received intraoperative ECMO support were enrolled. Post-LT mortality and morbidity were collected and compared. Results: A total of 208 recipients were included in the study, among whom 53 deaths occurred post-LT. All patients had severe pulmonary hypertension and were supported by intraoperative ECMO. Using an eXtreme Gradient Boosting (XGBoost) model, 20 variables were selected and ranked. The change in mean pulmonary artery pressure between ECMO initiation and ECMO wean-off (ΔmPAP) was related to post-LT survival after adjusting for potential confounders (recipient age, New York Heart Association functional class before LT, body mass index, pre-LT hypertension, pre-LT steroids, and pre-LT ECMO bridging). A nonlinear relationship was detected between ΔmPAP and post-LT survival, with an inflection point of 35 mm Hg. Recipients with ΔmPAP ≤35 mm Hg had a higher mortality rate calculated with the Kaplan-Meier estimator (P = 0.041). Interaction analysis showed that recipients admitted to an LT center with high case volume (≥50 cases/year) and with ΔmPAP >35 mm Hg had better long-term survival; the trend was reversed in recipients admitted to an LT center with low case volume (<50 cases/year). Conclusions: The relationship between ΔmPAP and post-LT survival was nonlinear. An optimal perioperative ECMO management strategy with an experienced team is warranted.
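A small sketch of the Kaplan-Meier comparison around the 35 mm Hg inflection point using the lifelines library; the dataframe columns (delta_mpap, survival_days, died) are illustrative assumptions rather than the registry's actual field names.

```python
# Compare post-LT survival for recipients above vs. at/below the 35 mm Hg
# dmPAP threshold with Kaplan-Meier curves and a log-rank test.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("lt_recipients.csv")           # hypothetical registry extract
low = df[df["delta_mpap"] <= 35]
high = df[df["delta_mpap"] > 35]

kmf = KaplanMeierFitter()
for name, group in [("dmPAP <= 35", low), ("dmPAP > 35", high)]:
    kmf.fit(group["survival_days"], event_observed=group["died"], label=name)
    print(name, "median survival:", kmf.median_survival_time_)

result = logrank_test(low["survival_days"], high["survival_days"],
                      event_observed_A=low["died"], event_observed_B=high["died"])
print("Log-rank p-value:", result.p_value)
```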

12.
Front Artif Intell ; 4: 759022, 2021.
Article in English | MEDLINE | ID: mdl-34589702

ABSTRACT

[This corrects the article DOI: 10.3389/frai.2021.684609.].

13.
Front Artif Intell ; 4: 684609, 2021.
Article in English | MEDLINE | ID: mdl-34179769

ABSTRACT

The worldwide rapid spread of severe acute respiratory syndrome coronavirus 2 has affected millions of individuals and caused unprecedented medical challenges by putting healthcare services under high pressure. Given the global increase in cases and mortalities due to the current COVID-19 pandemic, it is critical to identify predictive features that help identify the individuals most at risk of COVID-19 mortality and thus enable planning for effective use of medical resources. The impact of individual variables in an XGBoost artificial intelligence model, applied to a dataset containing 57,390 individual COVID-19 cases and 2,822 patient deaths in Ontario, is explored with the use of SHapley Additive exPlanations (SHAP) values. The most important variables were found to be age, date of the positive test, sex, income, and dementia, among many others considered. SHAP dependency graphs are used to provide greater interpretability of the black-box XGBoost mortality prediction model, with a focus on non-linear relationships to improve insight. A "test-date dependency" plot indicates that mortality risk dropped substantially over time, likely as a result of improved treatments developed within the medical system. The findings also indicate that people of lower income and people from more ethnically diverse communities face an increased mortality risk due to COVID-19 within Ontario. These findings will help guide clinical decision-making for patients with COVID-19.
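A short sketch of producing a SHAP dependence plot for a test-date feature from a fitted XGBoost mortality model, in the spirit of the "test-date dependency" plot mentioned above. The input file and feature encoding (days_since_first_case, sex_male, income_quintile) are assumptions for illustration.

```python
# Fit an XGBoost mortality classifier and draw a SHAP dependence plot for the
# (illustratively encoded) test-date feature.
import pandas as pd
import shap
from xgboost import XGBClassifier

df = pd.read_csv("ontario_cases.csv")       # hypothetical extract of the case data
features = ["age", "days_since_first_case", "sex_male", "income_quintile"]
X, y = df[features], df["died"]

model = XGBClassifier(n_estimators=300, max_depth=4).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# Non-linear relationship between test date and predicted mortality risk
shap.dependence_plot("days_since_first_case", shap_values, X)
```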

14.
Ann Med Surg (Lond) ; 62: 53-64, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33489117

ABSTRACT

BACKGROUND: Breast cancer is the most common cancer in US women and the second leading cause of cancer death among women. OBJECTIVES: To compare and evaluate the performance and accuracy of key supervised and semi-supervised machine learning algorithms for breast cancer prediction. MATERIALS AND METHODS: We used nine machine learning classification algorithms for supervised learning (SL) and semi-supervised learning (SSL): 1) logistic regression; 2) Gaussian naive Bayes; 3) linear support vector machine; 4) RBF support vector machine; 5) decision tree; 6) random forest; 7) XGBoost; 8) gradient boosting; 9) KNN. The Wisconsin Diagnostic Breast Cancer dataset was used to train and test these models. To ensure the robustness of the models, we applied K-fold cross-validation and optimized hyperparameters. We evaluated and compared the models using accuracy, precision, recall, F1-score, and ROC curves. RESULTS: The results of all models are encouraging for both SL and SSL. SSL achieved high accuracy (90%-98%) with just half of the training data. The KNN model for SL and logistic regression for SSL achieved the highest accuracy of 98%. CONCLUSION: The accuracies of the SSL algorithms are very close to those of the SL algorithms. The accuracies of all models are in the range of 91%-98%. SSL is a promising and competitive approach to this problem. Using a small sample of labeled data and low computational power, SSL is fully capable of replacing SL algorithms in diagnosing tumor type.
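A brief sketch comparing a supervised baseline with one concrete semi-supervised strategy (self-training) on scikit-learn's built-in copy of the Wisconsin diagnostic data, with half of the training labels hidden; the paper's exact SSL setup may differ.

```python
# Supervised vs. semi-supervised (self-training) logistic regression on the
# Wisconsin diagnostic breast cancer data; -1 marks unlabeled samples.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
y_partial = y_tr.copy()
y_partial[rng.random(len(y_partial)) < 0.5] = -1   # hide half the training labels

supervised = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
semi = SelfTrainingClassifier(LogisticRegression(max_iter=5000)).fit(X_tr, y_partial)

print("SL accuracy: ", accuracy_score(y_te, supervised.predict(X_te)))
print("SSL accuracy:", accuracy_score(y_te, semi.predict(X_te)))
```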

15.
Comput Struct Biotechnol J ; 18: 668-675, 2020.
Article in English | MEDLINE | ID: mdl-32257050

ABSTRACT

Microsatellite instability (MSI) is a genomic property of cancers with defective DNA mismatch repair and is a useful marker for cancer diagnosis and treatment in diverse cancer types. In particular, MSI has been associated with response to immune checkpoint blockade therapy in cancer. Most computational methods for predicting MSI are based on DNA sequencing data, and a few are based on mRNA expression data. Using RNA-Seq pan-cancer datasets for three cancer cohorts (colon, gastric, and endometrial cancers) from The Cancer Genome Atlas (TCGA) program, we developed an algorithm (PreMSIm) for predicting MSI from the expression profiling of a 15-gene panel in cancer. We demonstrated that PreMSIm had high prediction performance in most cases using both RNA-Seq and microarray gene expression datasets. Moreover, PreMSIm displayed superior or comparable performance versus other DNA- or mRNA-based methods. We conclude that PreMSIm has the potential to provide an alternative approach for identifying MSI in cancer.

16.
Comput Struct Biotechnol J ; 18: 427-438, 2020.
Article in English | MEDLINE | ID: mdl-32153729

ABSTRACT

Drug combinations are frequently used in the treatment of cancer patients to increase efficacy, decrease adverse side effects, or overcome drug resistance. Given the enormous number of possible drug combinations, it is cost- and time-consuming to screen all drug pairs experimentally. The integration of multiple networks to predict synergistic drug combinations with recently developed deep learning technologies has not yet been fully explored. In this study, we propose a Graph Convolutional Network (GCN) model to predict synergistic drug combinations in particular cancer cell lines. Specifically, the GCN method uses a convolutional neural network to perform heterogeneous graph embedding and thereby solves a link prediction task. The graph in this study was a multimodal graph constructed by integrating the drug-drug combination, drug-protein interaction, and protein-protein interaction networks. We found that the GCN model was able to correctly predict cell line-specific synergistic drug combinations from a large heterogeneous network. The majority (30) of the 39 cell line-specific models show an area under the receiver operating characteristic curve (AUC) larger than 0.80, resulting in a mean AUC of 0.84. Moreover, we conducted an in-depth literature survey of the top predicted drug combinations in specific cancer cell lines and found that many have been shown to exhibit synergistic antitumor activity against the same or other cancers in vitro or in vivo. Taken together, the results indicate that our study provides a promising way to better predict and optimize synergistic drug pairs in silico.
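A minimal sketch of GCN-based link prediction for drug pairs, in the spirit of the approach described above, using PyTorch Geometric. The toy node features, edge list, and dot-product decoder are illustrative assumptions, not the study's architecture.

```python
# Two-layer GCN encoder plus a dot-product decoder that scores candidate
# drug pairs; the tiny hand-built graph stands in for the multimodal network.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class LinkPredictor(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)

    def encode(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

    def decode(self, z, pairs):
        # Score a candidate pair by the dot product of its node embeddings
        return (z[pairs[0]] * z[pairs[1]]).sum(dim=-1)

# Toy graph: 4 nodes (drugs/proteins), undirected edges listed in both directions
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]])
candidate_pairs = torch.tensor([[0], [3]])        # is node 0 synergistic with node 3?

model = LinkPredictor(16, 32)
z = model.encode(x, edge_index)
print(torch.sigmoid(model.decode(z, candidate_pairs)))
```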
