Results 1 - 4 of 4
1.
Anatol J Cardiol ; 27(11): 657-663, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37624075

ABSTRACT

BACKGROUND: The aim of this study was to use explainable machine learning models to evaluate the risk factors for cardiovascular disease and their relative importance. METHODS: In this retrospective study, multiple databases were searched and data on 11 risk factors were obtained for 70,000 patients, covering risk factors highly associated with cardiovascular disease and whether each patient had any cardiovascular disease. Explainable prediction models were constructed with six machine learning algorithms: Random Forest Classifier, Extreme Gradient Boost Classifier, Decision Tree Classifier, KNeighbors Classifier, Support Vector Machine Classifier, and GaussianNB. Receiver operating characteristic curves, Brier scores, and mean accuracy were used to assess model performance, and the interpretability of the predictions was examined with Shapley additive explanations (SHAP) values. RESULTS: The accuracy, area under the curve, and Brier score of the Extreme Gradient Boost model (the best-performing model for cardiovascular disease risk factors) were 0.739, 0.803, and 0.260, respectively. The most important risk factors under both the permutation feature importance method and the explainable artificial intelligence SHAP method were systolic blood pressure (ap_hi) [0.1335 ± 0.0045 w (weight)], cholesterol (0.0341 ± 0.0022 w), and age (0.0211 ± 0.0036 w). CONCLUSION: The resulting explainable machine learning model is a successful clinical model that can predict cardiovascular disease and explain the impact of its risk factors. In the clinical setting in particular, an accurate, explainable, and transparent algorithm of this kind can support early diagnosis of patients with cardiovascular disease and inform discussion of risk factors and possible treatment options.
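
For readers who want to reproduce this kind of analysis, the following is a minimal sketch of the approach the abstract describes: fit a gradient-boosted classifier on tabular risk-factor data, then rank features with permutation importance and SHAP. The file name ("cardio.csv"), label column ("cardio"), and hyperparameters are illustrative assumptions, not details taken from the paper.

import pandas as pd
import shap
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Hypothetical file layout: 11 risk-factor columns plus a binary "cardio" label.
df = pd.read_csv("cardio.csv")
X, y = df.drop(columns=["cardio"]), df["cardio"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(eval_metric="logloss").fit(X_train, y_train)

# Permutation feature importance: mean +/- std score drop when a feature is
# shuffled, matching the "w (weight)" format quoted in the results above.
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(X.columns, perm.importances_mean, perm.importances_std):
    print(f"{name}: {mean:.4f} +/- {std:.4f}")

# SHAP values for the same model, summarizing each feature's global contribution.
shap.summary_plot(shap.TreeExplainer(model).shap_values(X_test), X_test)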


Subjects
Artificial Intelligence, Cardiovascular Diseases, Humans, Adult, Cardiovascular Diseases/epidemiology, Retrospective Studies, Risk Factors, Algorithms
2.
Chem Biol Drug Des ; 102(1): 217-233, 2023 07.
Article in English | MEDLINE | ID: mdl-37105727

ABSTRACT

Recently, artificial intelligence (AI) techniques have been increasingly used to overcome challenges in drug discovery. Although traditional AI techniques generally achieve high accuracy, their decision processes and the patterns they learn can be difficult to explain, which makes the outputs of the algorithms used in drug discovery hard to understand and act on. Explainable Artificial Intelligence (XAI) emerged as a set of processes and methods that make the results and outputs of machine learning (ML) and deep learning (DL) algorithms interpretable, so that the causes and consequences of a decision can be better understood and the right decisions made. Techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) have made the drug-targeting phase clearer and more understandable, and XAI methods are expected to reduce the time and cost of future computational drug discovery studies. This review provides a comprehensive overview of XAI-based drug discovery and development prediction, covers XAI mechanisms for increasing confidence in AI and modeling methods, and discusses the limitations and future directions of XAI in drug discovery.
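
As a concrete illustration of the two techniques named above, the sketch below applies SHAP (global attributions) and LIME (a local explanation for one instance) to a generic random-forest activity classifier on synthetic data. The descriptor names are hypothetical placeholders, not examples drawn from the review.

import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a compound/activity dataset with six descriptors.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
features = ["mol_weight", "logP", "h_donors", "h_acceptors", "tpsa", "rings"]
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: attribution of each descriptor to the predicted class, over the whole set.
shap_values = shap.TreeExplainer(model).shap_values(X)

# LIME: a local, human-readable explanation for a single candidate compound.
explainer = LimeTabularExplainer(X, feature_names=features, mode="classification")
print(explainer.explain_instance(X[0], model.predict_proba, num_features=4).as_list())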


Subjects
Algorithms, Artificial Intelligence, Drug Delivery Systems, Drug Discovery, Machine Learning
3.
Comput Methods Programs Biomed ; 233: 107492, 2023 May.
Article in English | MEDLINE | ID: mdl-36965300

ABSTRACT

BACKGROUND AND PURPOSE: COVID-19, which emerged in Wuhan, China, at the end of 2019, is one of the deadliest and fastest-spreading pandemics; according to the World Health Organization (WHO), there are more than 100 million infection cases worldwide. Research models are therefore crucial for managing the pandemic scenario. However, because the behavior of this epidemic is complex and difficult to understand, an effective model must not only produce accurate predictions but also offer a clear explanation that enables human experts to act proactively. For this reason, this study was designed to evaluate troponin levels during COVID-19 with explainable white-box algorithms. METHODS: Using pandemic data provided by Erzurum Training and Research Hospital (decision number: 2022/13-145), an interpretable explanation of troponin data in COVID-19 was produced with the SHapley Additive exPlanations (SHAP) method. Five machine learning (ML) algorithms were developed, and model performance was assessed with training and test accuracy, precision, F1-score, recall, and area under the curve (AUC). Feature importance was estimated from Shapley values by applying SHAP to the most accurate model. The model, built with Streamlit v.3.9, was integrated into an interface named CVD22. RESULTS: Among the five ML models built on the pandemic data, the best achieved training accuracy, test accuracy, precision, F1-score, recall, and AUC of 1.0, 0.83, 0.86, 0.83, 0.80, and 0.91, respectively. Feature selection and SHAP applied to the XGBoost model showed that mean D-dimer, mortality, CKMB (creatine kinase myocardial band), and glucose were the features with the greatest influence on the model's predictions. CONCLUSIONS: Recent advances in explainable artificial intelligence (XAI) models have made it possible to predict the future from large historical datasets. Throughout the ongoing pandemic, CVD22 (https://cvd22covid.streamlitapp.com/) can therefore be used as a guide to help authorities and medical professionals make the best decisions quickly.
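
As a sketch of how an interface like CVD22 might wrap such a trained model in Streamlit (the model file name, feature names, and input fields below are assumptions for illustration, not the actual application):

import joblib
import pandas as pd
import streamlit as st

# Hypothetical: a previously trained and saved gradient-boosting model.
model = joblib.load("cvd22_xgboost.joblib")

st.title("CVD22 (illustrative sketch)")
ddimer = st.number_input("Mean D-dimer", min_value=0.0)
ckmb = st.number_input("CKMB", min_value=0.0)
glucose = st.number_input("Glucose", min_value=0.0)

if st.button("Predict"):
    # Column names must match those used at training time (assumed here).
    row = pd.DataFrame([{"ddimer_mean": ddimer, "ckmb": ckmb, "glucose": glucose}])
    st.write("Predicted probability:", float(model.predict_proba(row)[0, 1]))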


Subjects
Artificial Intelligence, COVID-19, Humans, Algorithms, Fibrin Fibrinogen Degradation Products
4.
Arab J Sci Eng ; 47(2): 2359-2379, 2022.
Article in English | MEDLINE | ID: mdl-34611504

ABSTRACT

Social media has changed where people get their information. Because most news on social media is not verified by a central authority, it may contain fake news spread for purposes such as advertising and propaganda. Given that an average of 500 million tweets were posted daily on Twitter alone in 2020, each post can realistically be checked only by automated systems. In this study, we use natural language processing methods to detect fake news in Turkish-language posts on selected topics on Twitter. We also examine the follow/follower relations of the users who shared fake or real news on the same subjects, using social network analysis methods and visualization tools. Various supervised and unsupervised learning algorithms were tested with different parameters; the best fake news detection F1 score, 0.90, was obtained with the support vector machine algorithm. Knowing which users share fake versus real news helps separate subgroups in the social network formed by users and their followers, and the results show that fake news propagation networks can exhibit different characteristics by topic in the follow/follower network.
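
A minimal sketch of the classification step only (data collection, Turkish preprocessing, and the network analysis are omitted): a TF-IDF plus linear support vector machine pipeline scored with F1, the study's reported metric. The placeholder texts and labels are illustrative, not from the study's corpus.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus: alternating fake (1) and real (0) example texts.
texts = ["örnek sahte haber metni", "örnek gerçek haber metni"] * 50
labels = [1, 0] * 50

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, stratify=labels, random_state=0)
clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))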
